How do different onboarding designs (e.g., goal-oriented wizards, prompt galleries, or “shadow a power-user” walkthroughs) change the shape of the AI learning curve—specifically, the time until users (a) create their first reusable workflow, and (b) perform that workflow without revisiting instructions?
anthropic-learning-curves
Answer
- Goal-oriented wizards
  - Shape: fast, steep early gains; quick time to first reusable workflow; moderate time to independent execution.
  - (a) Effect: shortens median time to first reusable workflow because users are walked through a concrete outcome and often save it as a template.
  - (b) Effect: modest reduction in time to independent execution; users may rely on the wizard as a crutch unless the flow exposes the underlying steps and lets users edit them.
  - Best when: tasks are well-defined and repetitive (e.g., report drafting, lead follow-up), and the wizard outputs an editable, named workflow or macro the user can re-run.
- Prompt galleries
  - Shape: slower start than wizards, then gradual acceleration and a higher ceiling; more variation across users.
  - (a) Effect: helps users recognize candidates for reusable workflows by showing patterns ("weekly summary", "customer reply"); time to first reusable workflow shortens mainly for users who browse and adapt examples.
  - (b) Effect: larger impact on independent execution; repeated use of a few gallery prompts (with light edits) leads to memorization of structure and faster recall without instructions.
  - Best when: the domain is varied and creative, and users need inspiration more than step-by-step guidance.
- "Shadow a power‑user" walkthroughs (live or interactive replay)
- Shape: delayed but sharp inflection; little effect on day‑1 metrics, strong effect on medium‑term autonomy and complexity of workflows created.
- (a) Effect: moderate improvement in time to first reusable workflow; users see what "reusable" looks like and copy one or two flows.
- (b) Effect: strongest reduction in time to independent execution; users internalize tactic patterns (chunking tasks, iterating, saving prompts) and recreate them later without revisiting docs.
- Best when: workflows are multi‑step, cross‑tool, and benefit from strategy (how to think with the AI) more than specific wording.
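The three curve shapes above can be sketched as toy parametric functions of days since onboarding. The functional forms and all constants below are illustrative assumptions, not fitted values: a fast-saturating exponential with a lower ceiling for wizards, a slower exponential with a higher ceiling for galleries, and a logistic with a delayed inflection for shadowing.

```python
import math

# Proficiency (0..1) as a function of days since onboarding.
# All coefficients are made up to illustrate the qualitative shapes.

def wizard(t):
    # Steep early gains, moderate ceiling (risk of crutch dependence).
    return 0.7 * (1 - math.exp(-t / 2))

def gallery(t):
    # Slower start, gradual acceleration, higher long-run ceiling.
    return 0.9 * (1 - math.exp(-t / 6))

def shadowing(t):
    # Little day-1 effect, sharp inflection around day 7, highest ceiling.
    return 0.95 / (1 + math.exp(-(t - 7) / 1.5))
```

With these toy parameters, wizards lead at day 1, while shadowing overtakes both by week 2, matching the qualitative ordering described above.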
Comparative expectations
- Fastest to (a): goal‑oriented wizards, if they end with a saved, re-runnable configuration.
- Fastest to (b): power‑user shadowing, especially when paired with lightweight reminders (short cheat sheets or inline tips) instead of full instructions.
- Highest long‑run workflow maturity: power‑user shadowing + prompt gallery; wizards alone risk dependence on preset flows.
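To compare designs on metrics (a) and (b) empirically, you would compute per-cohort medians from an event log. A minimal sketch, assuming a hypothetical instrumentation scheme: event names ("signup", "workflow_saved" for metric (a), "workflow_run_unaided" for metric (b), meaning a run with no docs or help panel opened) are assumptions, not a real API.

```python
from datetime import datetime
from statistics import median

# Assumed event log: (user_id, onboarding_cohort, event_name, timestamp).
events = [
    ("u1", "wizard",    "signup",               datetime(2024, 1, 1)),
    ("u1", "wizard",    "workflow_saved",       datetime(2024, 1, 1, 2)),
    ("u1", "wizard",    "workflow_run_unaided", datetime(2024, 1, 4)),
    ("u2", "shadowing", "signup",               datetime(2024, 1, 1)),
    ("u2", "shadowing", "workflow_saved",       datetime(2024, 1, 2)),
    ("u2", "shadowing", "workflow_run_unaided", datetime(2024, 1, 3)),
]

def median_hours_to(event_name, events):
    """Median hours from signup to each user's first `event_name`, per cohort."""
    signup, first = {}, {}
    for user, cohort, name, ts in events:
        if name == "signup":
            signup[user] = (cohort, ts)
        elif name == event_name and user not in first:
            first[user] = ts
    per_cohort = {}
    for user, ts in first.items():
        cohort, start = signup[user]
        per_cohort.setdefault(cohort, []).append((ts - start).total_seconds() / 3600)
    return {c: median(hours) for c, hours in per_cohort.items()}

time_to_a = median_hours_to("workflow_saved", events)        # metric (a)
time_to_b = median_hours_to("workflow_run_unaided", events)  # metric (b)
```

In this toy log, the wizard cohort reaches (a) faster (2h vs. 24h) while the shadowing cohort reaches (b) faster (48h vs. 72h), mirroring the expectations above.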
Design implications
- For (a): make all three designs end in an explicit, named, one-click reusable workflow (template, macro, or saved prompt) and surface it on the user’s home screen.
- For (b): gradually remove scaffolding (wizard hints, gallery overlays), require the user to fill key prompt fields themselves, and encourage saving their own variants so they practice reconstruction, not just replay.
- To change curve shape: combine an early wizard for one high‑value workflow, a small gallery of adjacent examples, and at least one short power‑user session or replay to show how experts chain and adapt workflows.