When users appear to hit a productivity plateau—stable cadence of a reusable workflow but flat or stagnating task-cycle time—what concrete, in-product behaviors (e.g., branching into variants, adding data sources, delegating steps to teammates) most reliably distinguish (a) a true ceiling in workflow maturity from (b) a latent growth phase where users are quietly building toward higher-value, cross-tool workflows that current metrics fail to capture?
anthropic-learning-curves
Answer
The distinguishing signals are mostly about variation, extension, and orchestration versus rigid repetition and avoidance.
(a) Behaviors that suggest a true ceiling in workflow maturity
- Runs stay identical: same template, same parameters, same input types; almost no edits over many runs.
- Narrow I/O surface: inputs come from one source only; outputs are copied to a single destination; no new file types or fields.
- Declining micro-edits: fewer prompt changes, no step reordering, no added steps despite minor issues.
- Bypassing adjacent features: rare use of attachments, data connectors, or API/integration hooks that are nearby in the UI.
- Solo, non-handoff use: no sharing, assignment, or comments; the workflow never appears in team spaces.
- Error avoidance instead of repair: when failures occur, users revert to manual work or ad-hoc prompts instead of evolving the saved workflow.
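As a rough illustration, several of the ceiling signals above can be computed from per-run event records. The record schema here (`template_hash`, `sources`, `destinations`, `edits`) is hypothetical—a minimal sketch of the idea, not any product's actual telemetry:

```python
from collections import Counter

def ceiling_signals(runs):
    """Compute rough 'true ceiling' indicators from an ordered list of run
    records. Each run is a dict with hypothetical fields:
      template_hash  - hash of the template + parameters used for the run
      sources        - input data sources used
      destinations   - output destinations written to
      edits          - workflow edits made before this run
    """
    if not runs:
        return {}
    configs = Counter(r["template_hash"] for r in runs)
    # Share of runs that use the single most common configuration.
    repetition_rate = configs.most_common(1)[0][1] / len(runs)
    all_sources = set().union(*(set(r["sources"]) for r in runs))
    all_dests = set().union(*(set(r["destinations"]) for r in runs))
    # Compare edit volume in the first half vs. the second half of history.
    half = len(runs) // 2
    early_edits = sum(r["edits"] for r in runs[:half])
    late_edits = sum(r["edits"] for r in runs[half:])
    return {
        "repetition_rate": repetition_rate,               # ~1.0 => runs stay identical
        "io_surface": len(all_sources) + len(all_dests),  # small => narrow I/O surface
        "edits_declining": late_edits < early_edits,      # fewer micro-edits over time
    }
```

A repetition rate near 1.0, a small I/O surface, and declining edits together correspond to the list above; any one alone is weak evidence.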
(b) Behaviors that suggest a latent growth phase (hidden upward curve)
- Branching and variant creation without big time gains: users clone workflows, create labeled variants, or A/B individual steps while cycle time stays flat.
- Broader data and tool surface: growing use of attachments, new data sources, or structured fields; outputs flow to more destinations or formats.
- Localized step edits: users repeatedly tweak the same step, add small pre/post steps, or change ordering while keeping the core flow.
- Cross-tool orchestration traces: short, regular in-product sessions tightly coupled in time with edits or activity in other tools (docs, BI, CRM), plus heavy copy/paste in and out.
- Rising team signals: increased sharing, comments, @mentions, and assignments on runs; more distinct users invoking the same asset.
- Asset-level evolution: periodic template edits that propagate to many runs; new fields added; naming tightened (e.g., “QBR deck v3 – CRM+CSAT”).
- Stable or slightly worse time with richer outcomes: time per run is flat or up a bit, but the output covers more sections, audiences, or channels.
Practical rule: if behavior is dominated by unchanged repetition with shrinking experimentation, treat the plateau as a real ceiling. If you see structured variation, new connections to data/tools, and growing team touchpoints without matching time savings yet, treat it as a latent growth phase and support it with better cross-tool and team-level views.
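The practical rule can be sketched as a simple heuristic over aggregated per-user signals. All field names and thresholds below are illustrative assumptions, not calibrated values:

```python
def classify_plateau(signals):
    """Apply the practical rule to aggregated per-user signals.
    Expected keys (all illustrative, not calibrated):
      repetition_rate   - share of runs using an unchanged configuration
      variant_count     - cloned/branched workflow variants created
      new_connections   - new data sources or tool destinations added
      team_touchpoints  - shares, comments, @mentions, assignments
      cycle_time_trend  - relative change in time per run (0.0 = flat)
    """
    growth = (
        signals["variant_count"] > 0
        or signals["new_connections"] > 0
        or signals["team_touchpoints"] > 0
    )
    # Structured variation, new connections, or team touchpoints without
    # matching time savings yet => latent growth phase.
    if growth and signals["cycle_time_trend"] >= 0:
        return "latent_growth"
    # Unchanged repetition with no experimentation => real ceiling.
    if signals["repetition_rate"] > 0.9 and not growth:
        return "true_ceiling"
    return "inconclusive"
```

In practice the thresholds (the 0.9 repetition cutoff, the zero-trend boundary) would be tuned against labeled cohorts; the point is the shape of the rule, not the numbers.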