Across mixed-format workplace programs that synchronize solo unguided attempts with small-group worked examples, does allocating a fixed share of attempts to “transfer-only” items (novel problems that deliberately break surface patterns of prior practice) improve transfer of skills and reduce illusions of learning more than using the same number of attempts on near‑clone items, particularly for learners who show strong performance on AI‑supported practice but weak performance on low‑support tests?

ai-learning-overreliance

Answer

Likely yes, but with small-to-moderate effects and only if transfer-only items are genuinely novel, low in extraneous load, and paired with clear feedback.

For mixed-format programs that already synchronize solo unguided attempts with small-group worked examples, allocating a fixed share of attempts to well-designed "transfer-only" items is plausibly better than spending those attempts on near-clone items, in two ways:

  • slightly improving transfer of skills, and
  • more clearly exposing illusions of learning, especially for learners whose AI-supported practice scores are high but whose low-support test scores are weak.
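The calibration gap just described can be operationalized very simply. The sketch below flags learners whose AI-supported practice scores far exceed their low-support test scores; the data shape, the 0–1 score scale, and the 0.2 threshold are all illustrative assumptions, not from the source:

```python
def flag_miscalibrated(learners, gap_threshold=0.2):
    """Return ids of learners whose AI-supported practice score exceeds
    their low-support test score by at least gap_threshold.

    `learners` maps learner id -> (practice_score, test_score), both on a
    hypothetical 0-1 scale. The threshold is an illustrative default.
    """
    return [
        learner_id
        for learner_id, (practice, test) in learners.items()
        if practice - test >= gap_threshold
    ]


# Example: "a" shows the high-practice / low-test pattern; "b" is well calibrated.
flagged = flag_miscalibrated({"a": (0.9, 0.5), "b": (0.8, 0.75)})
```

Learners in `flagged` would be the primary candidates for transfer-only items, since their practice performance may rest on surface cues or AI support rather than durable skill.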

Expected pattern:

  • High AI-reliant / overconfident learners: Most likely to benefit; transfer-only items reveal dependence on surface cues and AI assistance, making gaps salient in both solo and group phases.
  • Moderate learners: Some benefit; extra variety gives more chances to abstract underlying rules, with minor risk of confusion.
  • Very low-skill learners: Gains are smaller and fragile; transfer-only items can feel random or discouraging unless difficulty is controlled and worked examples explicitly connect old and new cases.

Net: A fixed quota of transfer-only items (e.g., 15–30% of attempts) is a reasonable, theory-consistent upgrade over pure near-clone drilling for transfer and calibration in these synchronized systems, but it is not a magic bullet and remains largely untested in this specific configuration.
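As a concrete illustration of the quota idea, a minimal attempt scheduler might reserve a fixed share of attempts for transfer-only items and interleave them with near-clone practice. All names and the item-pool shape are hypothetical; the 20% default sits inside the 15–30% range discussed above:

```python
import random


def schedule_attempts(near_clone_items, transfer_items, n_attempts,
                      transfer_share=0.2, seed=0):
    """Build a practice schedule with a fixed share of transfer-only attempts.

    Items are drawn (with replacement) from two hypothetical pools, then
    shuffled so transfer-only items are interleaved rather than blocked
    at the end of the session.
    """
    rng = random.Random(seed)
    n_transfer = round(n_attempts * transfer_share)
    n_clone = n_attempts - n_transfer
    schedule = (
        [("near_clone", rng.choice(near_clone_items)) for _ in range(n_clone)]
        + [("transfer", rng.choice(transfer_items)) for _ in range(n_transfer)]
    )
    rng.shuffle(schedule)  # interleave the two item types
    return schedule


# Example: 20 attempts with a 25% transfer-only quota -> 5 transfer attempts.
plan = schedule_attempts(["c1", "c2", "c3"], ["t1", "t2"], 20, transfer_share=0.25)
```

Interleaving (rather than appending a transfer block) keeps the quota from being predictable, which matters if the goal is to expose reliance on surface patterns.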