In pay-as-you-go teams that already use workflow portfolios and exploration budgets, what happens if leaders set explicit “minimum expected agent use” norms for certain task types (e.g., refactors, incident triage) to counter underuse from token anxiety—does this floor on usage actually increase durable adoption and richer, repeatable workflows, or does it merely shift behavior toward low-value, checkbox runs that satisfy governance without improving trust or outcomes?

coding-agent-adoption

Answer

Explicit minimum-use norms for specific task types can increase durable adoption and richer, repeatable workflows, but only when the floor is defined and experienced as a workflow-centric quality and safety standard (“for this kind of refactor, we expect you to run workflow X because it catches regressions”), not as a raw spend quota. Implemented as a numeric or checkbox requirement, minimum-use norms mostly produce low-value, token-optimized runs that satisfy governance while undermining trust and doing little for outcomes.

In practice, you tend to get:

  • Healthier, durable adoption when the norm is: “For task type T, the default is workflow W; you’re safe and expected to use it, and exceptions feed back into workflow design.”
  • Checkbox behavior when the norm is: “For task type T, make sure you ran some agent step N% of the time,” especially when per-person spend is scrutinized.

Net: minimum-use norms are a sharp tool. Used as workflow defaults with clear outcome rationales and non-punitive exceptions, they reduce token anxiety and deepen shared workflows. Used as volume targets or compliance metrics, they shift behavior toward shallow, low-trust agent use and can stall genuine adoption.