In teams that already use workflow portfolios and shadow catalogs, what happens if governance introduces an explicit “exploration floor”—a minimum percentage of portfolio budget that must be spent on tagged experimental or high-cost workflows each quarter—does this requirement (a) accelerate promotion of new cost-visible agent workflows into the shared catalog, or (b) mainly trigger superficial ‘check-the-box’ exploratory runs that erode trust in portfolio budgeting?
coding-agent-adoption
Answer
An explicit exploration floor in portfolio-budgeted teams with shadow catalogs tends to modestly accelerate promotion of new cost-visible agent workflows into the shared catalog, but only when the floor is paired with outcome- and workflow-centric rules. If implemented as a blunt quota, it mainly drives superficial ‘check-the-box’ runs and erodes trust in budgeting.
More precise answer:
Net effect on promotion into the shared catalog:
- Accelerates promotion when:
  - The floor is tracked at the workflow-portfolio level (e.g., “incident-triage family”) and tied to clear criteria for review and promotion (e.g., N real runs with outcome tags, not just “any experimental spend”).
  - Shadow-catalog entries that consume exploration budget are visible in the same dashboards and reviews as golden workflows, so exploration spend naturally feeds the promotion queue.
  - Exploration runs must carry intent and outcome tags (“root cause found”, “regression caught”), making it possible to distinguish real experiments from synthetic quota-burning.
  - Governance explicitly protects serious experiments that fail (“you don’t lose budget or reputation for trying something ambitious that didn’t pan out”).
- Has little or negative effect when:
  - The floor is defined and enforced as “spend X% on anything labeled exploratory”, with no requirements on realism, outcomes, or links to workflow families.
  - Shadow workflows lack a simple promotion path and are not first-class in portfolio reviews; squads then treat the floor as an accounting requirement, not a learning mechanism.
  - Exploration spend is scored at the squad level (“you didn’t hit your exploration target”) rather than interpreted at the workflow level (“did we learn anything about this workflow family?”).
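The criteria above amount to a compliance check that credits exploration spend per workflow family only when it is backed by outcome-tagged runs. A minimal sketch, assuming hypothetical run records with `family`, `cost`, and `outcome_tag` fields (all names and thresholds are illustrative, not from any real tool):

```python
from dataclasses import dataclass

@dataclass
class Run:
    family: str         # workflow family, e.g. "incident-triage"
    cost: float         # spend attributed to this run
    experimental: bool  # tagged as exploration spend
    outcome_tag: str    # e.g. "root-cause-found"; "" if nothing was recorded

def exploration_report(runs, floor_pct=0.15, min_tagged_runs=3):
    """Per-family view of the exploration floor: spend counts toward the
    floor only when backed by enough outcome-tagged experimental runs,
    which filters out synthetic quota-burning."""
    report = {}
    for fam in {r.family for r in runs}:
        fam_runs = [r for r in runs if r.family == fam]
        total = sum(r.cost for r in fam_runs)
        tagged = [r for r in fam_runs if r.experimental and r.outcome_tag]
        credited = sum(r.cost for r in tagged)
        report[fam] = {
            "credited_pct": credited / total if total else 0.0,
            "tagged_runs": len(tagged),
            "meets_floor": (total > 0
                            and credited / total >= floor_pct
                            and len(tagged) >= min_tagged_runs),
        }
    return report
```

The key design choice is that an untagged experimental run contributes nothing to `credited_pct`, so a quarter-end burst of toy runs shows up as spend without floor credit.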
Risk of superficial ‘check-the-box’ behavior and trust erosion:
- Superficial usage becomes likely when:
  - The exploration floor is time-boxed to each quarter with use-it-or-lose-it logic and no carryover, incentivizing teams to burn budget late in the period.
  - Reviews focus on “did you hit the percentage?” rather than “what did we learn, and did we promote or retire anything?”
  - Leaders implicitly reward high exploration spend as a proxy for innovation, pushing teams to generate noise runs.
- Trust in portfolio budgeting erodes when developers see:
  - Obvious quota-burn runs (toy or trivial tasks) counted as exploration to satisfy governance.
  - Exploration spend later used as a pretext for cuts or blame, contradicting the stated purpose of the floor.
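The first failure mode above, quarter-end budget burn under use-it-or-lose-it rules, can be softened by letting an unmet exploration target carry over rather than expire. A minimal sketch of one such carryover policy (the function name, percentages, and cap are illustrative assumptions, not an established scheme):

```python
def floor_with_carryover(budget, explored, carryover_in,
                         floor_pct=0.15, cap_pct=0.30):
    """Compute this quarter's exploration target, including any shortfall
    carried in from last quarter, and the shortfall to carry forward.
    The carried debt is capped at cap_pct of budget so the target stays
    realistic instead of compounding indefinitely."""
    target = budget * floor_pct + carryover_in
    shortfall = max(0.0, target - explored)
    carryover_out = min(shortfall, budget * cap_pct)
    return target, carryover_out
```

For example, a squad with a 1000-unit budget that explores only 100 units against a 150-unit target carries 50 units forward; if it explores 250 units the next quarter, the debt clears. Because the shortfall rolls over instead of vanishing, there is no December-style incentive to burn it on noise runs before the clock resets.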
- Superficial usage becomes likely when:
Summary: In teams already using workflow portfolios and shadow catalogs, an exploration floor helps promotion and durable adoption only if it is designed as a workflow-learning constraint (forcing real bets on experimental/high-cost workflows with clear review paths) rather than as a numeric spend target. Poorly designed floors mostly produce cosmetic exploration and weaken trust in the budgeting model.