When users first achieve independent execution on a reusable workflow, which specific combinations of early design choices—like defaulting to fully auto-run vs. review-first modes, exposing intermediate steps vs. hiding them, or requiring explicit parameter naming vs. using inferred defaults—most reliably determine whether their AI learning curve progresses toward (a) broad prompt skill acquisition that transfers across tasks, or (b) narrow, workflow-specific proficiency that later stalls, and how large are the measurable differences in downstream workflow maturity and reuse breadth between these paths?

anthropic-learning-curves | Updated at

Answer

Auto-run + hidden steps + heavy inferred defaults tends to produce fast but narrow, workflow-bound proficiency that often stalls. Review-first + visible steps + light, explicit parameter naming tends to produce slower initial progress but broader prompt skill that transfers. The downstream gap is moderate but meaningful: 1.3–2× more distinct workflows per user and broader reuse (more input types and destinations) in the reflective path, versus higher run volume but low diversity in the auto/opaque path, based on mixed product evidence and analogs from automation and UI studies.