When learners use an interactive visual explanation that already enforces prediction-before-manipulation, does having learners author their own manipulable retention checks (for peers, using constrained templates and variable ranges) produce more durable conceptual learning and far transfer than giving them an equivalent number of system-authored manipulable retention checks, and under what conditions does learner authorship mainly reintroduce illusion-of-understanding rather than deepen understanding?

interactive-learning-retention

Answer

Learner-authored manipulable retention checks can add a small durable-learning and far-transfer benefit over an equivalent dose of well-designed system-authored checks, but only when authoring is tightly scaffolded, requires explicit predictions and rationales, and is introduced after learners show basic mastery. When authoring is loose, surface-focused, or treated mainly as a design/engagement task, it mostly reintroduces illusion-of-understanding by shifting attention to making items “work” rather than to testing and revising the underlying model.

Best conditions for added benefit:

  • Authoring is strongly constrained (fixed templates, limited variable sets and ranges, required contrasting cases).
  • Each authored check includes: (a) predicted peer responses or errors, and (b) a brief explanation of the targeted concept.
  • Authoring comes after learners have passed basic manipulable retention checks on core contrasts.
  • Peers actually attempt each other’s checks under tight manipulable-check rules (few attempts, no continuous feedback) and see simple correctness plus short conceptual feedback.
  • Learners revise at least some authored items based on peer performance.
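The conditions above describe a fairly specific artifact: an authored check drawn from a fixed template, with bounded variable ranges, required contrasting cases, predicted peer errors, and a rationale. A minimal sketch of that structure, with purely illustrative names and a hypothetical validity rule rejecting loosely scaffolded items, might look like:

```python
from dataclasses import dataclass

@dataclass
class AuthoredCheck:
    template_id: str        # chosen from a small fixed set of templates
    variables: dict         # variable name -> allowed (min, max) range
    contrast_cases: list    # required contrasting parameter settings
    predicted_errors: list  # (a) predicted peer responses or errors
    rationale: str          # (b) brief explanation of the targeted concept
    max_attempts: int = 2   # tight rules: few attempts, no continuous feedback

    def is_valid(self) -> bool:
        """Reject loosely scaffolded items: missing contrasts, predictions, or rationale."""
        return (
            len(self.contrast_cases) >= 2
            and len(self.predicted_errors) >= 1
            and bool(self.rationale.strip())
            and all(lo < hi for lo, hi in self.variables.values())
        )

check = AuthoredCheck(
    template_id="spring-mass-period",
    variables={"mass": (0.5, 2.0), "k": (10.0, 40.0)},
    contrast_cases=[{"mass": 0.5, "k": 10.0}, {"mass": 2.0, "k": 10.0}],
    predicted_errors=["peers will predict the period halves when mass halves"],
    rationale="Period scales with sqrt(m/k), not linearly with m.",
)
print(check.is_valid())  # True: all scaffolding requirements are met
```

The validity rule is where a system would operationalize "strongly constrained": items without at least two contrast cases, a predicted error, and a rationale never reach peers.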

Under these conditions, authorship adds:

  • Extra retrieval and organization of core relations while planning the check.
  • Perspective taking about likely misconceptions, which supports far transfer.
  • Somewhat more productive struggle than solving only system-authored checks.

Conditions where authorship mainly reintroduces illusion-of-understanding:

  • Weak or no scaffolding for authoring (wide ranges, many variables, open formats).
  • Focus on aesthetic or narrative features over diagnostic contrasts.
  • Authoring occurs before basic relations are stable, so learners design items they themselves cannot reliably solve.
  • Peers can solve authored checks via outcome-guided trial-and-error (rich feedback, many attempts), turning them into mini-simulations rather than tests.
  • No requirement to explain the concept behind each authored check or to reflect on peer outcomes.

In those cases, learners often feel more expert because they have “made” items, but their delayed, out-of-context performance looks similar to or worse than in a pure system-authored manipulable-check condition.