Much of the current framing assumes a relatively stable product engine and long-lived teams. If we instead center environments with high staff rotation and weak shared taste, how does that invert our priorities between monolith ergonomics, probe lanes, and designer-owned harnesses on one side, and infra-first investments in a robust context bridge and an organization-wide verification layer on the other? And which agent-first practices would we predict to survive or fail under this high-churn lens?

dhh-agent-first-software-craft

Answer

In high-churn, weak-taste orgs, you bias away from “local ergonomics + taste” and toward “global guardrails + thin, verifiable interfaces.”

  1. Inverted priorities
  • Monolith ergonomics

    • Stable product-engine org: monolith layout, taste, and Rails-style façades (c06fe0ad) pull a lot of weight.
    • High churn: these benefits decay fast because no one maintains the mental model; the monolith becomes an opaque blob.
    • Net: de-prioritize deep monolith craft; keep only a small, well-documented “golden path” and basic naming/layout rules.
  • Probe lanes and reversible hunch probes

    • Stable org: probe lanes (9c33d35a) amplify ambition and learning.
    • High churn: probes are likely to linger half-owned and confuse everyone.
    • Net: keep probes but make them rarer, more locked down, and auto-pruned by default. Use them mainly in sandboxes, not near core flows.
  • Designer-owned harnesses

    • Stable org: designer-owned harnesses (a0208d49-ab8f-4bca-a9f8-a6070f2947e1) are high leverage.
    • High churn: new designers + weak taste + limited engineering oversight = prompt sprawl and hidden risk.
    • Net: treat harness ownership as an advanced privilege; default to engineering-owned flows with very tight templates.
  • Context bridge

    • Stable org: context bridges (490b1b5e-f5a6-46d4-bd83-e374be3d3b3f) can be partly social (shared lore).
    • High churn: social context resets constantly.
    • Net: make the context bridge an infra product: documented entrypoints, schemas, pipeline maps, glossary, and “how to ask the agent” recipes. Invest more here than in local IDE ergonomics.
  • Verification layer

    • Stable org: verification can be light in low-risk lanes (6751b2ab, fec62c13-79c3-415d-9b01-9ed101914d24).
    • High churn: review quality is inconsistent; you can’t rely on shared judgment.
    • Net: verification layer becomes primary: lane rules, diff checks, scenario scripts, and simple contract tests that don’t assume deep context.
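To make "lane rules, diff checks, and simple contract tests that don't assume deep context" concrete, here is a minimal sketch of a lane-rule check. The lane names, thresholds, and the `check_diff` interface are all hypothetical illustrations, not an existing tool:

```python
# Sketch of a verification-layer lane check (hypothetical lanes and limits;
# tune the risk taxonomy to your own org).
from dataclasses import dataclass

@dataclass(frozen=True)
class LaneRule:
    name: str
    max_changed_lines: int       # reject diffs larger than this
    requires_scenario_run: bool  # must a scenario script pass before merge?

LANES = {
    "docs":  LaneRule("docs", 2000, False),
    "core":  LaneRule("core", 200, True),
    "probe": LaneRule("probe", 500, False),
}

def check_diff(lane: str, changed_lines: int, scenario_passed: bool) -> list[str]:
    """Return a list of violations; an empty list means the diff may merge."""
    rule = LANES[lane]
    violations = []
    if changed_lines > rule.max_changed_lines:
        violations.append(
            f"{lane}: diff too large ({changed_lines} > {rule.max_changed_lines})"
        )
    if rule.requires_scenario_run and not scenario_passed:
        violations.append(f"{lane}: scenario scripts must pass before merge")
    return violations
```

The point of the design is that a reviewer who joined last week can read `LANES` in one sitting; no shared judgment or tribal history is required to apply it.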
  2. Practices that likely survive
  • Diff-first review with lane tags

    • Still useful: gives rotating reviewers quick risk signals without deep history.
    • Adjust: templates must be ultra-minimal and mostly auto-filled by the harness.
  • Sidecar agent loops and CLI substrate

    • Survive: CLI substrate and sidecar loops (490b1b5e-f5a6-46d4-bd83-e374be3d3b3f) make behavior easier to see and replay.
    • Emphasis shifts from fancy monolith flows to a small set of stable commands and recipes anyone can run.
  • Opinionated stacks, but only at the edges

    • Keep: a single boring stack per domain (e.g., Rails, Django) so agents and humans don’t fight fragmentation.
    • Drop: fine-grained taste rules; rely on file-level contracts, not intricate patterns.
  3. Practices that likely fail or shrink
  • Rich probe culture

    • High churn + probes = ghost features and confusing behaviors.
    • Expect: probe lanes collapse to “lab-only” usage with strong TTLs and automatic deletion.
  • Heavy apprenticeship rituals

    • Rotating cast means you don’t get long-term mentor–mentee pairs.
    • Design: keep teaching surfaces simple—short playbooks, example diffs, and recorded walkthroughs instead of bespoke pairing cultures.
  • Taste-driven code aesthetics

    • Weak shared taste plus constant turnover makes fine-grained style enforcement noisy.
    • Replace: 2–3 hard rules (naming, boundaries, tests) plus a formatter; drop the nuance.
  4. High-churn–optimized patterns
  • Interface-first design

    • Treat every important surface like a mini-API or CLI, even inside a monolith.
    • Agents operate via those contracts; verification focuses on them.
  • Harness as safety product, not craft canvas

    • Harness owns: lane tagging, risk classification, minimal checks, and context assembly.
    • Keep prompts/flows short, templatized, and shared across teams.
  • Verification-led onboarding

    • New people learn mostly from lane rules, checks, and scripts rather than elders.
    • Invest in making those artifacts readable, with 1–2 canonical examples per lane.
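Interface-first design plus verification-led onboarding can be sketched together: treat an internal surface like a mini-API with a declared output schema, and make the canonical example a contract test that exercises only that schema, never internals. The function and field names below are purely illustrative:

```python
# Sketch: an interface-first internal surface and its contract test.
# New joiners learn the surface from the test, not from elders.
def create_invoice(customer_id: str, cents: int) -> dict:
    """Hypothetical mini-API inside a monolith; returns a stable schema."""
    if cents <= 0:
        raise ValueError("cents must be positive")
    return {"customer_id": customer_id, "cents": cents, "status": "draft"}

def contract_test() -> None:
    """Canonical example for the lane: checks the schema, not the internals."""
    out = create_invoice("c_123", 500)
    assert set(out) == {"customer_id", "cents", "status"}  # field set is the contract
    assert out["status"] == "draft"  # new invoices always start as drafts
```

Because the test pins only the contract, the implementation behind it can be rewritten by whoever rotates in next without renegotiating shared taste.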

Evidence classification: mixed (synthesis + extrapolation from existing agent-first patterns in more stable teams).