Current AI learning-curve models largely treat workflow maturity as progressing within a product from shallow prompting to reusable workflows. If instead we model the AI learning curve as role- and team-level process redesign (where individuals coordinate who prompts, who reviews, and which tools are chained), under what conditions do the behaviors we currently label as “healthy mid-curve” (e.g., heavy reliance on wizards, low prompt editing, stable cadence) actually signal a stall in organizational redesign—for example, managers never reallocating responsibilities or codifying new SOPs—and how would this reframing change which users we target with onboarding, training, and success metrics?

anthropic-learning-curves

Answer

Healthy-looking mid-curve usage can mask stalled org-level redesign when stability comes without changes in roles, artifacts, or cross-tool structure. A role/team lens shifts targeting from “active mid-curve users” to managers, workflow owners, and cross-tool orchestrators.

Conditions where mid-curve signals likely mean an org stall

  1. Stable individual use, unstable or manual team process
  • Product shows: heavy wizard use, low edits, regular cadence.
  • Org signs:
    • No new or updated SOPs that reference AI in the last N weeks.
    • Work handoffs and approvals still follow pre-AI roles and tools.
    • AI outputs are treated as local aids, not shared team artifacts.
  2. Centralized prompting, unchanged responsibilities
  • Product shows: 1–2 power users run most workflows; others mostly view or download.
  • Org signs:
    • Job descriptions and RACI charts unchanged; no formal AI-related tasks (prompting, review, maintenance).
    • Managers still assign work exactly as before; AI only shortens individual steps.
    • When power users are away, throughput drops, but no role or process changes follow.
  3. Stable cadence with growing shadow work
  • Product shows: flat or rising runs per week; low prompt editing.
  • Org signs:
    • Rising manual clean-up in downstream tools (docs, sheets, tickets) not reflected in product metrics.
    • More ad-hoc prompts or non-AI work for adjacent tasks that should logically be in the same chain.
    • Teams complain about “extra steps” but workflows and roles stay fixed.
  4. Wizard dependence without asset evolution
  • Product shows: most runs launched from wizards; few custom workflows; prompt galleries rarely extended.
  • Org signs:
    • No one owns “workflow hygiene” or cross-flow refactoring.
    • Changes in policy or business logic handled in email/meetings, not in shared AI assets.
    • Training reinforces wizard usage rather than redesign of the underlying process.
  5. Cross-tool orchestration stays ad-hoc
  • Product shows: regular per-product usage; low error rates.
  • Org signs:
    • No documented cross-tool chains (e.g., CRM → AI → docs → approvals).
    • Hand-offs vary by person; no standard entry/exit criteria for AI steps.
    • Incident reviews blame “user error” or “the model” but never trigger process changes.
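The five conditions above can be collapsed into a simple stall-detection heuristic: flag "healthy-looking" product telemetry as a stall cue when org-side evidence shows no redesign. A minimal sketch, where every field name and threshold is hypothetical and illustrative rather than drawn from a real product:

```python
from dataclasses import dataclass

@dataclass
class ProductSignals:
    """Product-side telemetry (all field names are hypothetical)."""
    wizard_run_share: float      # fraction of runs started from wizards
    prompt_edit_rate: float      # fraction of runs with edited prompts
    weekly_run_trend: float      # slope of runs/week; ~0 means stable cadence
    top2_user_run_share: float   # share of runs by the two heaviest users

@dataclass
class OrgSignals:
    """Org-side evidence gathered outside the product (also hypothetical)."""
    weeks_since_sop_update: int        # last AI-referencing SOP/playbook change
    roles_with_ai_duties: int          # roles with formal prompt/review/maintain tasks
    cross_tool_chains_documented: int  # documented chains like CRM -> AI -> docs
    shadow_rework_trend: float         # slope of manual downstream clean-up

def is_org_stall(p: ProductSignals, o: OrgSignals,
                 sop_staleness_weeks: int = 8) -> bool:
    """Flag 'healthy mid-curve' product use as an org stall when stability
    is not accompanied by role/process change (thresholds are illustrative)."""
    looks_healthy = (p.wizard_run_share > 0.6
                     and p.prompt_edit_rate < 0.2
                     and abs(p.weekly_run_trend) < 0.05)
    org_static = (o.weeks_since_sop_update > sop_staleness_weeks
                  or o.roles_with_ai_duties <= 1
                  or o.cross_tool_chains_documented == 0
                  or o.shadow_rework_trend > 0)
    centralized = p.top2_user_run_share > 0.8
    return looks_healthy and (org_static or centralized)
```

Note the design choice: centralization alone (condition 2) is enough to flag a stall even when other org signals look fine, since a two-person bottleneck means responsibilities were never reallocated.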

How this reframing changes targeting and design

  1. Who to target
  • From: high-usage mid-curve individuals.
  • To:
    • Managers and team leads who control roles and SOPs.
    • Workflow owners / ops who maintain cross-tool processes.
    • “Hidden orchestrators” who glue tools together off-platform.
  2. What onboarding/training to offer
  • Individual users
    • Keep light wizards, galleries, and prompt-quality tips.
    • Add simple cues to surface where AI work should become shared assets (e.g., “this runs weekly; nominate a workflow owner”).
  • Managers / workflow owners
    • Playbooks for redesign: who prompts, who reviews, how to log changes.
    • Templates for AI-aware SOPs (inputs, guardrails, hand-offs).
    • Short trainings on spotting stall patterns (centralized prompting, rising shadow work, stale SOPs).
  3. How success metrics shift
  • From mainly product-local metrics:
    • Time-to-first-reusable-workflow.
    • Run frequency, edit rate, error/correction rate.
  • To mixed org + product metrics:
    • Number and recency of AI-referenced SOPs/playbooks.
    • Distribution of prompting/review/maintenance across roles.
    • Cross-tool chain coverage (how much of a core process has explicit AI steps).
    • Reduction in shadow manual rework for AI-touched tasks.
  4. How to reinterpret “healthy mid-curve” patterns
  • Healthy individual mid-curve + evolving org signals:
    • New or updated SOPs referencing AI.
    • Shifting ownership of steps; more people able to review/maintain flows.
    • Cross-tool chains becoming more standardized over time.
  • Stalled org redesign despite healthy-looking product use:
    • Long periods with no new AI-related SOPs.
    • Same people prompting and reviewing; no role diversification.
    • Persistent off-product rework and ad-hoc fixes around otherwise stable runs.
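Two of the mixed metrics above lend themselves to direct computation: the distribution of prompting/review/maintenance across roles, and cross-tool chain coverage. A sketch under assumed inputs (the event log and step lists are hypothetical, not a real schema):

```python
from collections import Counter

def role_diversification(events: list[tuple[str, str]]) -> float:
    """Share of AI-related actions (prompt, review, maintain) performed by
    roles other than the single most active one; 0.0 means one role does
    everything. `events` is a hypothetical log of (role, action) pairs."""
    if not events:
        return 0.0
    by_role = Counter(role for role, _ in events)
    return 1.0 - max(by_role.values()) / len(events)

def chain_coverage(process_steps: list[str], ai_steps: set[str]) -> float:
    """Fraction of a core process's steps that have an explicit,
    documented AI step."""
    if not process_steps:
        return 0.0
    return sum(s in ai_steps for s in process_steps) / len(process_steps)
```

For example, a log where one ops person performs 8 of 10 actions yields a diversification near 0.2, a centralization cue; a four-step process with documented AI steps for two of them scores 0.5 coverage.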

Net effect: A role/process-based AI learning-curve model treats “stable, wizard-heavy, low-edit usage” as healthy only when it coincides with visible changes in team responsibilities and shared processes. When those org signals are absent, the same behavior is a stall cue and should trigger manager- and workflow-owner–focused interventions, not just more end-user training.
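Put as a decision rule, the net effect is a routing function: the same product signal maps to different interventions depending on org-level evidence. A toy sketch whose labels are illustrative, not a real playbook taxonomy:

```python
def route_intervention(healthy_mid_curve_usage: bool,
                       org_redesign_visible: bool) -> str:
    """Map product- and org-level signals to an intervention target.
    Labels are illustrative placeholders."""
    if not healthy_mid_curve_usage:
        return "standard onboarding: build individual fluency first"
    if org_redesign_visible:
        return "end-user training: deepen prompting and workflow reuse"
    # Stable, wizard-heavy, low-edit usage with no visible org change:
    # target the people who control roles and SOPs, not end users.
    return "manager/workflow-owner playbook: reallocate roles, codify SOPs"
```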