Guy Reams (00:00.782)
This is day 355, the one throat to choke fallacy. The first time I heard the phrase one throat to choke, I felt a strange sense of relief. Finally, a single owner, one name on the line, no confusion, no committees. You could almost hear the gears engage. Decisions clicked. The message stayed consistent. And in a regulated world, the paperwork marched in neat formation.
For a while, it works like magic. Then the work grows teeth. The single owner stops being a clean lane and starts being a narrow bridge. Every priority must cross that span. Every dependency queues up at the toll booth. Vacations become risky. Attrition becomes a threat. Knowledge collects in one place like rainwater pooling in a sag on the roof.
The weight builds quietly until the first drip appears on the ceiling tile. I have lived this in program rooms and late-night war rooms. The system looks orderly from far away. Up close, you see talented teams just waiting around. You see strong component owners who have been told to sit tight, who eventually stop trying to lead because their judgment is always overruled by that central funnel.
You see integration surprises because no one regularly exercises the seams. The seams were smoothed over by that single owner, and now the edges are dull from lack of use. The irony is that one throat to choke starts as a speed play. It is clarity. It is risk management. It is less coordination at the beginning. And all of that is true. It is also a tax you cannot escape as your system grows.
The tax shows up as burnout, as a single point of failure, as local optimization, as hidden integration risk. In the end, the same model that got you moving starts to hold you in place. So what do you do instead? And how do you solve it without complete chaos? Decentralization does not mean 1,000 opinions fired from the hip.
Guy Reams (02:19.203)
Decentralization means clear seams and strong contracts. It means stable interfaces that can evolve on purpose and not by accident. It means component teams that can move in parallel because they can trust the borders that they share. In the program world, it might be versioned APIs with change budgets and honest deprecation windows. Automated contract tests that run all the time, not just before a release.
Shared observability where traces, logs, and metrics paint the same picture for everyone, so issues surface fast and do not hide inside tribal lore. I picture a platform and a product city, if you will. The platform team lays the streets, the curbs, the lighting. They own all the standards, the paved roads, and shared services that make the city safe.
Product and component teams build homes and shops along those roads. They decide their layouts and decorate their storefronts. They move quickly because the ground beneath them is firm; the foundation has been built. When the city must pivot, leadership pairs up across business and technology. Two or three in a box is less about redundancy and more about depth. One sees the customer, the other sees the system.
Working together, they see the risks and trade-offs, and neither must re-centralize everything in order to be heard. Governance in this type of model stays light. Automation does the heavy lifting. Architecture decision records capture why, not just what. Reviews are real conversations, not status performances. The gates live in the
pipeline, so tests and service level objectives do the enforcement without another recurring meeting to drain all of our spirits. And because structure is only a hypothesis, we measure. We resist the urge to worship a dashboard, and we still measure. We keep a few dials we can actually read, the ones that make the most sense. We watch delivery.
Guy Reams (04:42.53)
How long does an idea take to reach production? How often do we deploy? How closely does planned scope match delivered scope? We watch reliability. What is our change failure rate, meaning when we make a change, how often does it fail? How quickly do we restore a service when we do fail? Are we meeting all of our service level objectives for each component that a user can actually feel and interact with?
We watch health across teams. Where are the delays while one team waits on another for delivery? How many incidents cross team boundaries because the contracts are not as solid as we originally thought? How much rework traces back to interface changes that were not understood in the first place? We watch the people. Work in progress per team. On-call load, and
maybe even something silly like pages per engineer. Cycle time variability. All of these things tell us whether the system is thriving or simply surviving. Rollouts that stick do not start with org charts. They start with a baseline: two to four weeks of measurement with no changes and a small set of metrics that match our goals.
We write thresholds as ranges, not as targets, to discourage gaming the system. We make the metrics visible so that the conversation moves away from opinions. Then we tune structure on a steady cadence, not musical chairs, not whiplash reorganizations. We move ownership boundaries with intent, and we let the data lead us and speak for us.
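To make those dials concrete, here is a minimal sketch in Python, assuming a hypothetical list of deployment records with commit, deploy, failure, and restore timestamps. The Deployment record and delivery_dials function are illustrative names, not taken from any particular tool, and the math is only the simple version of lead time, deploy frequency, change failure rate, and time to restore described above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median
from typing import Optional

@dataclass
class Deployment:
    committed_at: datetime                  # when the change was committed
    deployed_at: datetime                   # when it reached production
    failed: bool = False                    # did this change cause a failure?
    restored_at: Optional[datetime] = None  # when service came back, if it failed

def delivery_dials(deployments: list[Deployment]) -> dict[str, float]:
    """A few simple dials from a non-empty list of deployments:
    lead time, deploy frequency, change failure rate, time to restore."""
    lead_hours = [(d.deployed_at - d.committed_at).total_seconds() / 3600
                  for d in deployments]
    span_days = max(1, (max(d.deployed_at for d in deployments)
                        - min(d.deployed_at for d in deployments)).days)
    failures = [d for d in deployments if d.failed]
    restore_hours = [(d.restored_at - d.deployed_at).total_seconds() / 3600
                     for d in failures if d.restored_at]
    return {
        "median_lead_time_hours": median(lead_hours),
        "deploys_per_week": len(deployments) / span_days * 7,
        "change_failure_rate": len(failures) / len(deployments),
        "median_restore_hours": median(restore_hours) if restore_hours else 0.0,
    }

# Toy data: two deployments, one of which failed and was restored two hours later.
now = datetime(2025, 1, 31)
deploys = [
    Deployment(now - timedelta(days=4, hours=6), now - timedelta(days=3)),
    Deployment(now - timedelta(days=1, hours=12), now - timedelta(days=1),
               failed=True, restored_at=now - timedelta(hours=22)),
]
print(delivery_dials(deploys))
```

In practice these numbers would come from the pipeline and incident systems, and each dial would be read against its threshold range rather than a single target.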
When decisions must be made about centralization, before we order up a throat to choke, we ask a few plain questions and we write the answers down. How coupled are these parts? If a change in one area often breaks another, we centralize decisions for a time, we strengthen the contracts, then we loosen the grip again. Do we need to pivot across the system quickly?
Guy Reams (07:00.042)
If yes, we centralize decision rights, not necessarily execution. What is the cost of failure compared to the cost of moving slower? If the failure cost is high, we tighten controls and plan change windows with care. Do we have leaders who can own the seams? If yes, we modularize and give them clear service level objectives. Do we have the tooling to support this?
If not, we centralize until we have the tooling right. We score each answer low, medium, or high, something simple, along with a reason. We decide on purpose. We revisit after two release cycles, and we let the evidence bend the structure as we need. In short, we might keep one accountable owner for outcomes that matter to the business, and we decentralize execution to the teams that own the parts end to end.
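As a sketch of what writing those answers down might look like, here is a toy scoring rubric in Python. The five questions come from above; the low, medium, and high weights and the "lean" heuristic are illustrative assumptions, not a prescribed formula.

```python
from dataclasses import dataclass

# Low / medium / high, something simple, along with a reason.
SCORES = {"low": 1, "medium": 2, "high": 3}

@dataclass
class Answer:
    question: str  # which plain question this answers
    score: str     # "low" | "medium" | "high"
    reason: str    # the written reason we revisit later

def centralization_lean(answers: list[Answer]) -> str:
    """Toy heuristic (an assumption, not from the source): coupling, failure
    cost, and pivot pressure pull decision rights toward the center; strong
    seam owners and good tooling pull execution out to the teams."""
    pull_central = {"coupling", "failure cost", "pivot pressure"}
    total = 0
    for a in answers:
        weight = SCORES[a.score]
        total += weight if a.question in pull_central else -weight
    if total > 1:
        return "centralize decisions for a time, strengthen the contracts"
    if total < -1:
        return "decentralize execution behind clear contracts"
    return "hybrid: one accountable owner, teams own the parts end to end"

answers = [
    Answer("coupling", "high", "changes in billing keep breaking reporting"),
    Answer("pivot pressure", "medium", "one cross-system pivot expected this year"),
    Answer("failure cost", "low", "failures are visible but cheap to roll back"),
    Answer("seam owners", "high", "component leads can own SLOs today"),
    Answer("tooling", "medium", "contract tests exist, observability is partial"),
]
print(centralization_lean(answers))
```

The point is not the arithmetic; it is that each answer carries a written reason that can be revisited after two release cycles.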
That hybrid pattern respects clarity, and it unlocks speed at scale. We trade choke points for contracts, meetings for automation, vibes for metrics. We move slowly toward reliable measures, then we move the organization to match reality, not the other way around. The temptation to find one throat to choke never really leaves. It promises ease in a messy world.
But real progress looks different. It looks like shared responsibility with unmistakable boundaries. It looks like leadership that lifts, not hoards. It looks like learning loops that shorten with every cycle. When we practice that kind of work, we do not need a throat to choke. We have a team to trust.