When Incentives Break Systems

By Xenon T. Voss · 7 min read

Every broken system begins with good intentions and reasonable-sounding metrics. We want to improve educational outcomes, so we measure test scores. We want to increase productivity, so we track output. We want better healthcare, so we incentivize procedure volume. The logic seems airtight: measure what matters, reward improvement, watch the system optimize.

Except that’s not what happens. What happens is Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. The system doesn’t optimize for the outcome you wanted—it optimizes for the metric you chose. And those two things are rarely the same.

Teachers don’t teach better when you measure test scores; they teach to the test. Developers don’t write better code when you measure lines committed; they commit more lines. Doctors don’t improve patient health when you pay per procedure; they perform more procedures. The metric becomes the mission, and the original goal gets left behind.
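The dynamic in these examples can be sketched as a toy optimization. This is a hypothetical model with made-up weights, not a claim about any real measurement system: an agent splits effort between deep learning (which drives the true outcome) and test prep (which the proxy rewards more heavily), and greedily chases the proxy.

```python
# Toy illustration of Goodhart's Law: an agent that optimizes a proxy
# metric drifts away from the true goal. The functions and coefficients
# here are hypothetical, chosen only to make the divergence visible.

def true_outcome(deep_learning, test_prep):
    # Real understanding benefits only from deep learning.
    return deep_learning

def proxy_metric(deep_learning, test_prep):
    # The test score rewards both, but test prep pays off faster.
    return 0.5 * deep_learning + 2.0 * test_prep

def optimize(effort_budget=10.0, steps=100):
    """Greedily allocate small slices of effort to whichever
    activity raises the *proxy* the most."""
    deep, prep = 0.0, 0.0
    slice_ = effort_budget / steps
    for _ in range(steps):
        gain_deep = proxy_metric(deep + slice_, prep) - proxy_metric(deep, prep)
        gain_prep = proxy_metric(deep, prep + slice_) - proxy_metric(deep, prep)
        if gain_prep >= gain_deep:
            prep += slice_
        else:
            deep += slice_
    return deep, prep

deep, prep = optimize()
print(f"proxy score:  {proxy_metric(deep, prep):.1f}")  # 20.0 — dashboard green
print(f"true outcome: {true_outcome(deep, prep):.1f}")  # 0.0 — nothing learned
```

Every slice of effort goes to test prep, because the proxy values it four times as much. The measured score ends up maximal while the outcome the metric was supposed to track stays at zero.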

The really insidious part is that this looks like success at first. Test scores go up. Commit counts increase. Procedure volumes rise. The numbers improve, the dashboards turn green, and everyone congratulates themselves on fixing the problem. It takes years before anyone notices that students can’t think critically, codebases have become unmaintainable nightmares, and patient outcomes haven’t improved at all.

The solution isn’t better metrics. The solution is recognizing that complex systems can’t be reduced to simple measurements without losing something essential. Some outcomes resist quantification. Some improvements can’t be captured in a dashboard. Some kinds of excellence only reveal themselves through sustained observation and qualitative judgment.