Current AI learning-curve models largely treat prompt skill acquisition and workflow maturity as properties of individual users or teams within a single product. If instead we model AI use as a cross-ecosystem practice—where people learn patterns in consumer tools and carry them into governed enterprise systems—under what conditions does relying on in-product behavior alone systematically misclassify users (e.g., labeling experienced “shadow” users as novices), and how would incorporating signals from adjacent tools or bring-your-own-prompts change onboarding design and our interpretation of shallow prompting versus durable high-value workflows?

anthropic-learning-curves

Answer

Relying only on one product’s telemetry misclassifies users when prior skill and workflows live in other tools, are expressed as pasted prompts, or are suppressed by governance.

Conditions where single-product data mislabels experienced users as novices

  • Tool mismatch
    • User has heavy history in consumer/chat tools; enterprise tool is new, so early use looks shallow and low-edit.
    • Enterprise UX pushes one-click flows and hides free-form input, so skilled users can’t express patterns.
  • Governance and policy
    • Org forbids custom prompts or external accounts; experienced users comply in-product but still use shadow tools off-channel.
    • Users paste long, pre-authored prompts but rarely edit in-session, so edit-based metrics read this as low editing and a stream of first-time prompts.
  • Cross-tool orchestration
    • Real workflow spans docs, tickets, CRM, and another LLM; only one step is instrumented.
    • User runs key steps elsewhere, then pastes summaries; product logs a few simple prompts around them.
  • Asset portability
    • Users bring mature prompts/snippets via paste or files; product treats them as first attempts, not imports.
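The paste-and-edit mismatch above can be made concrete. Here is a minimal sketch, with an illustrative `Session` schema and weights that are assumptions, not a real product's telemetry model: an edit-count heuristic scores a "shadow" user as a novice, while a paste-aware variant that treats long pasted prompts as imported workflows does not.

```python
from dataclasses import dataclass

@dataclass
class Session:
    prompt_chars: int      # length of the submitted prompt
    in_session_edits: int  # edits made inside the product
    pasted: bool           # prompt arrived via paste/import rather than typing

def naive_skill_score(sessions):
    """Single-product heuristic: skill ~ average in-session editing."""
    return sum(s.in_session_edits for s in sessions) / max(len(sessions), 1)

def paste_aware_skill_score(sessions, long_prompt=500, import_weight=3):
    """Count long pasted prompts as imported workflows, not first attempts.

    The 500-char threshold and weight of 3 are illustrative placeholders.
    """
    imported = sum(1 for s in sessions
                   if s.pasted and s.prompt_chars >= long_prompt)
    edits = sum(s.in_session_edits for s in sessions)
    return (edits + import_weight * imported) / max(len(sessions), 1)

# An experienced "shadow" user: long pre-authored prompts, few in-product edits.
shadow_user = [Session(1200, 0, True), Session(900, 1, True), Session(1500, 0, True)]
```

Under the naive score this user ranks near zero; the paste-aware score surfaces the same sessions as mature bring-your-own-workflow use.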

How cross-ecosystem signals change classification and onboarding

  • Use import- and paste-patterns as skill signals
    • Long, structured prompts repeatedly pasted or imported → treat as “bring-your-own-workflow,” not novice use.
    • Onboarding: offer “convert this prompt to a reusable workflow” rather than basic prompt tips.
  • Detect external cadence
    • Regular bursts of similar tasks tied to calendar or other tools → assume existing workflow maturity.
    • Onboarding: focus on mapping and parameterizing flows, not explaining basics.
  • Interpret shallow prompts as exploration vs ignorance
    • If user sometimes runs complex imported prompts, then uses shallow ones around them → treat shallow prompts as exploration.
    • If no history of complex patterns across tools → shallow prompts more likely novice behavior.
  • Adapt governance-aware paths
    • Where editing is restricted, low-edit signals reflect policy, not skill.
    • Onboarding: target power users and admins for workflow editing; give end users usage tips, not prompt-creation training.
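The four signal-to-onboarding rules above can be sketched as a single routing function. The signal names and path labels are hypothetical, and real systems would score these probabilistically rather than with hard rules; the point is only the precedence: governance first (so policy-suppressed editing is never read as low skill), then imports, then cadence, then exploration.

```python
def onboarding_path(signals):
    """Map cross-ecosystem signals to an onboarding path (illustrative rules)."""
    if signals.get("editing_restricted"):
        # Low-edit telemetry reflects policy, not skill: route by role.
        if signals.get("is_power_user"):
            return "admin-workflow-editing"
        return "usage-tips"
    if signals.get("long_pasted_prompts") or signals.get("imported_assets"):
        # Bring-your-own-workflow: convert, don't teach prompt basics.
        return "convert-prompt-to-workflow"
    if signals.get("regular_task_bursts"):
        # External cadence implies an existing workflow to map and parameterize.
        return "map-and-parameterize-flows"
    if signals.get("ran_complex_prompts"):
        # Shallow prompts alongside complex ones read as exploration.
        return "exploration-support"
    # No complex patterns anywhere: shallow prompting likely is novice behavior.
    return "novice-onboarding"

# A shadow user pasting mature prompts skips basic prompt tips entirely.
path = onboarding_path({"long_pasted_prompts": True})
```

Ordering matters: checking `editing_restricted` before the import signals keeps a governed deployment from routing restricted end users into workflow-authoring flows they cannot use.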

Net effect

  • Without cross-tool and bring-your-own-prompt signals, products over-target many experienced “shadow” users with novice onboarding and misread shallow prompting as low skill.
  • Incorporating these signals shifts focus to: converting imported prompts into reusable workflows, mapping cross-tool cadences, and distinguishing healthy exploration from genuine immaturity.