Current AI learning-curve models mostly treat prompt skill acquisition as something that happens inside the AI product. If instead we treat prompt skill as a broader work practice learned through social channels (e.g., Slack snippets, internal wikis, peer demos), under what conditions do these off-product sharing networks accelerate workflow maturity more than in-product onboarding—and how would incorporating signals from these networks (such as referenced prompts, shared examples, or wiki-linked workflows) change which users we label as key “teachers” versus “learners” and which in-product plateaus we treat as benign versus risky?

anthropic-learning-curves

Answer

Off-product sharing networks beat in-product onboarding when social learning is dense, artifacts are easy to copy into the product, and governance allows adaptation. Under those conditions, teacher/learner labels and plateau interpretation both need to move from per-user, in-product events to network- and asset-centered signals.

Conditions where off-product sharing accelerates workflow maturity more than in-product onboarding

  1. High-density social channels
  • Teams use active, searchable spaces (Slack, wikis, shared docs) where prompts/workflows are posted, remixed, and referenced.
  • Threads show back-and-forth improvement, not just broadcasts.
  2. Easy translation into the product
  • Copy-pasting or linking a shared prompt into the AI tool is low-friction (no reformat, no policy blocks).
  • Shared examples are structured (sections, variables) so they map cleanly to workflows/templates.
  3. Visible performance feedback
  • People share “before/after” artifacts or time-saved notes with the prompt, not just the text itself.
  • Others reference those examples in similar tasks within days/weeks.
  4. Moderate, not rigid, governance
  • Policies allow local edits to shared snippets; there is no mandate to use them verbatim.
  • Teams are encouraged to post both wins and failure cases.
  5. Overlapping work and repeated tasks
  • Many people do similar tasks (e.g., support replies, sales outreach, weekly reports), so a single shared pattern has many reusers.
  • Off-product assets are reused across projects or clients, not only once.
  6. Weak or generic in-product onboarding
  • Built-in tours and examples stay high-level (generic help, chat tips) and don’t cover the org’s specific workflows.
  • Product doesn’t surface strong internal exemplars early or contextually.

Under these conditions, off-product networks often:

  • Shorten time from first try to first reusable workflow.
  • Spread higher-quality patterns faster than default onboarding.
  • Push users toward team-standard formats and cadences (higher workflow maturity) even if in-product behavior looks “shallow.”
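The six conditions above can be treated as a crude checklist. A minimal sketch, assuming an org is represented as a dict of booleans (all names and the threshold are illustrative, not measured constants):

```python
# Hypothetical checklist for the six conditions above; field names and the
# min_met threshold are assumptions for illustration only.
CONDITIONS = [
    "dense_social_channels",
    "easy_translation_to_product",
    "visible_performance_feedback",
    "moderate_governance",
    "overlapping_repeated_tasks",
    "weak_in_product_onboarding",
]

def off_product_likely_dominates(org: dict, min_met: int = 4) -> bool:
    """Crude heuristic: off-product sharing likely outpaces in-product
    onboarding when most of the six conditions hold."""
    met = sum(1 for c in CONDITIONS if org.get(c, False))
    return met >= min_met
```

In practice each condition would be estimated from channel activity, paste/link telemetry, and governance policy rather than asserted by hand.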

How network signals change who is a “teacher” vs “learner”

Move from “who edits most in-product” to “whose artifacts propagate and get referenced.”

Users to label as teachers

  • Authors of prompts/workflows that:
    • Are frequently pasted or linked into the product by many distinct users.
    • Reappear in multiple wiki pages, Slack threads, or internal SOPs.
    • Trigger visible improvements (lower corrections, faster cycles) after adoption.
  • Curators who:
    • Collect scattered snippets into canonical wiki pages or libraries.
    • Maintain “known good” patterns that teams link back to.
  • Local adaptors who:
    • Regularly post small improvements or domain variants that others then copy.

Users to label as learners

  • High reusers / low originators:
    • Run many workflows or pasted prompts that are traceable to a few shared examples.
    • Rarely introduce new patterns that get picked up by others.
  • Network-peripheral users:
    • Rarely share AI snippets in social channels.
    • Consume but don’t modify wiki-linked workflows.

Key signal types to incorporate

  • Reference counts: how often a prompt/workflow ID or canonical snippet URL appears in Slack/wikis and then in-product.
  • Fan-out: number of distinct reusers per canonical pattern.
  • Pathways: typical sequence “Slack/wiki link → first in-product run → first local variant.”
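The fan-out and teacher/learner labels above can be sketched from a flat event log. A minimal example, assuming hypothetical `("user", "action", "pattern_id")` records where "share" means posting the canonical snippet off-product and "reuse" means pasting or linking it into the product:

```python
from collections import defaultdict

# Hypothetical event log; action names and the teacher threshold are
# illustrative assumptions, not a real telemetry schema.
events = [
    ("alice", "share", "p1"),
    ("bob",   "reuse", "p1"),
    ("carol", "reuse", "p1"),
    ("dan",   "reuse", "p1"),
    ("bob",   "share", "p2"),
    ("alice", "reuse", "p2"),
]

def fan_out(events):
    """Distinct reusers per canonical pattern."""
    reusers = defaultdict(set)
    for user, action, pattern in events:
        if action == "reuse":
            reusers[pattern].add(user)
    return {p: len(users) for p, users in reusers.items()}

def label_users(events, teacher_min_fanout=2):
    """Teachers author patterns with wide fan-out; everyone else is a learner."""
    fo = fan_out(events)
    authors = {p: u for u, a, p in events if a == "share"}
    teachers = {authors[p] for p, n in fo.items()
                if n >= teacher_min_fanout and p in authors}
    everyone = {u for u, _, _ in events}
    return teachers, everyone - teachers
```

Here `fan_out(events)` yields `{"p1": 3, "p2": 1}`, so alice is labeled a teacher despite possibly low in-product edit counts, which is the shift the section argues for.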

How network-aware signals change plateau interpretation

Currently, products often treat:

  • Flat run counts + low editing as a plateau or risk.

With network signals, distinguish four cases:

  1. Benign plateau: mature, socially maintained pattern
  • In-product:
    • Stable workflow runs, low editing.
  • Off-product:
    • Workflow linked from active wiki pages/SOPs.
    • Recent Slack threads point to it as the recommended path.
    • Small, periodic updates by a few recognized teachers.
  • Interpretation: high workflow maturity; low edits reflect trust and standardization.
  • Action: monitor corrections and rare failures; don’t push generic “edit more” coaching.
  2. Risky plateau: socially dead or orphaned pattern
  • In-product:
    • Stable or declining runs, low editing.
  • Off-product:
    • Few or no recent references to the workflow or its snippet.
    • People discuss workarounds or alternate tools but don’t update the canonical workflow.
  • Interpretation: workflow is drifting out of the living practice; usage may persist only due to inertia.
  • Action: nudge owners/teachers to review or retire; prompt learners to adopt newer shared patterns.
  3. Latent growth: learning happening off-product
  • In-product:
    • Runs appear flat; minor edits at best.
  • Off-product:
    • Active Slack threads where people critique and refine prompts.
    • Multiple variants circulating in wikis, but not yet captured as formal workflows.
  • Interpretation: the AI learning curve is steep but mostly invisible in telemetry.
  • Action: surface “import from shared snippet” and “convert your shared prompt to a workflow” flows; invite active sharers to formalize.
  4. Hidden risk: over-reliance on a single social teacher
  • In-product:
    • Many runs of a pattern with minimal edits.
  • Off-product:
    • Most references trace back to one user’s snippet; few others post alternatives.
  • Interpretation: social over-centralization; skills and resilience depend on one teacher.
  • Action: identify apprentices, encourage co-ownership, and create small, sanctioned variants.
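The four cases can be collapsed into simple decision rules over a handful of signals. A minimal sketch, assuming hypothetical per-workflow boolean/threshold fields (the field names and the 0.8 concentration cutoff are assumptions, not product metrics):

```python
from dataclasses import dataclass

# Illustrative per-workflow signals; not an existing analytics schema.
@dataclass
class WorkflowSignals:
    stable_runs: bool           # in-product: flat run counts, low editing
    recent_references: bool     # off-product: linked from active wikis/Slack
    active_refinement: bool     # off-product: variants being discussed/improved
    single_author_share: float  # fraction of references tracing to one user

def classify_plateau(s: WorkflowSignals) -> str:
    """Map the four cases above onto simple ordered rules."""
    if not s.stable_runs:
        return "not a plateau"
    if s.active_refinement and not s.recent_references:
        return "latent growth"    # learning is happening off-product
    if not s.recent_references:
        return "risky plateau"    # socially dead or orphaned
    if s.single_author_share > 0.8:
        return "hidden risk"      # over-reliance on one teacher
    return "benign plateau"       # mature, socially maintained
```

The point of the sketch is the ordering: identical in-product telemetry (stable runs, low edits) maps to four different labels depending only on off-product signals.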

Implications for product design

  • Add fields for “source” when creating workflows (Slack link, wiki page) and track which sources generate durable assets.
  • Build lightweight “from snippet to workflow” flows triggered when users paste long, structured prompts.
  • Add analytics views that rank teachers by downstream reuse and performance, not by raw edit counts.
  • Treat low-edit, high-reuse workflows with strong social references as mature; treat similar patterns without references as potential stagnation.
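The first design implication (a "source" field on workflow creation) can be sketched as a small record plus a durability query. All names here are hypothetical, not an existing product schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Minimal sketch of a workflow record carrying an off-product "source" field.
@dataclass
class Workflow:
    name: str
    prompt: str
    source_url: Optional[str] = None  # Slack permalink or wiki page, if any
    created: date = field(default_factory=date.today)

def durable_source_counts(workflows, min_age_days=30, today=None):
    """Count which off-product sources produced workflows that survived
    past an age threshold (a rough proxy for 'durable assets')."""
    today = today or date.today()
    counts = {}
    for w in workflows:
        if w.source_url and (today - w.created).days >= min_age_days:
            counts[w.source_url] = counts.get(w.source_url, 0) + 1
    return counts
```

Ranking sources by this count (rather than ranking users by edit volume) is one concrete way to implement the "rank teachers by downstream reuse" view above.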

Evidence type: mixed
Evidence strength: mixed

Assumptions

  • Off-product channels are used heavily enough that they capture a large share of real prompt learning.
  • Copy-paste and link data can be logged or approximated without breaking privacy/compliance.
  • Social sharing quality is at least moderately correlated with workflow quality.
  • Users are willing to adapt shared prompts, not only copy them verbatim.

Competing hypothesis

  • Prompt skill and workflow maturity are driven mainly by hands-on, in-product experience; off-product sharing mostly spreads shallow tips. In this view, in-product onboarding and edit behavior remain the best indicators of learning, and network signals add noise more than insight.

Main failure case / boundary condition

  • In tightly regulated or security-sensitive orgs where Slack/wikis cannot include real prompts or data, off-product networks stay thin or sanitized. There, network-aware modeling adds little value, and most meaningful learning and workflow evolution do occur inside the governed AI product.