Too many programs “paved the cow paths”—bolting AI onto processes and systems
never built for agility—so integrations were brittle and technical debt
compounded. Human-Centered Design (HCD) was often a check-the-box exercise focused on known problems rather than a true re-imagination of work with AI.
Teams applied conventional “design-to-build” approaches to a frontier technology. Proofs of Concept (POCs) multiplied, but without staged pathways to production, most remained demos—satisfying curiosity, not operations.
Even where models performed well, data quality and access were uneven; Machine Learning Operations (MLOps)—including Continuous Integration/Delivery/Testing (CI/CD/CT)—was underdeveloped, so reliability, governance, and lifecycle management lagged.
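The CT piece of that pipeline can be sketched as a simple regression gate: before a candidate model ships, its holdout accuracy is compared against the deployed baseline, and the pipeline fails on a regression. The function names, baseline value, and tolerance below are illustrative assumptions, not a specific tool's API.

```python
# Minimal sketch of a continuous-testing (CT) gate: before a model ships,
# compare its holdout accuracy against the currently deployed baseline.
# `ct_gate`, the baseline value, and the data are hypothetical examples.

def accuracy(predictions, labels):
    """Fraction of predictions that match the labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def ct_gate(predictions, labels, baseline_accuracy, tolerance=0.01):
    """Fail the pipeline if the candidate regresses past the tolerance."""
    acc = accuracy(predictions, labels)
    if acc < baseline_accuracy - tolerance:
        raise RuntimeError(
            f"CT gate failed: accuracy {acc:.3f} < baseline {baseline_accuracy:.3f}"
        )
    return acc

# A candidate matching 9 of 10 holdout labels against a 0.88 baseline passes.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
print(ct_gate(preds, labels, baseline_accuracy=0.88))  # 0.9
```

In a real pipeline this check would run automatically on every retraining, which is exactly the lifecycle discipline the paragraph describes as missing.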
“Plug-and-play” promises met the realities of integration and change management; costs and timelines overran estimates, and Return on Investment (ROI) arrived more slowly than expected.
Scarce talent (data science, engineering, product) sat in silos; change management was underfunded; executive sponsorship waxed and waned—keeping AI peripheral to the operating model.
In regulated domains, ambiguity around privacy, fairness, transparency, and accountability slowed deployment. Evaluations (evals) and policy guardrails arrived late, not by design.
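Building guardrails in by design rather than late can be as small as a policy check that runs on every model output before release. The sketch below blocks outputs matching simple PII patterns; the pattern set and blocking policy are illustrative assumptions, not a complete or production-grade filter.

```python
import re

# Minimal sketch of a policy guardrail applied at response time rather than
# bolted on after deployment: block outputs that match simple PII patterns.
# The patterns and policy here are hypothetical and deliberately simplistic.

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def guardrail(text):
    """Return (allowed, violations) for a candidate model output."""
    violations = [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
    return (len(violations) == 0, violations)

print(guardrail("Your order has shipped."))          # (True, [])
print(guardrail("Contact jane@example.com today."))  # (False, ['email'])
```

A check like this gives the privacy and accountability concerns a concrete enforcement point, instead of leaving them to post-hoc review.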
Net effect on the three vectors:
- Velocity (how fast ideas become safe production) stalled.
- Capacity (how much change the org can run in parallel) stayed low.
- Capability (what people know and can do with AI, every day) did not compound.