Episode 63 — Choose performance metrics that drive the right behavior and accountability (3B3)
In this episode, we are going to tackle a deceptively simple question that shapes how organizations behave every day: what should we measure, and why? New learners often assume metrics are neutral, like a thermometer that simply reports the temperature. In real organizations, metrics are more like steering wheels, because what you measure influences what people prioritize, how they spend their time, and what they believe leadership truly cares about. If a team is told it will be judged mainly on speed, it will optimize for speed even when quality suffers. If it is judged mainly on cost cutting, it will reduce spending even when that increases risk or slows progress later. The goal of governance-aligned metrics is not to collect more numbers; it is to select a small set of measures that encourage the right behavior, create clear accountability, and connect everyday work to enterprise outcomes. By the end, you should be able to explain what makes a metric useful, how bad metrics create bad incentives, and how to build a measurement approach that supports honest decision-making rather than performance theater.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book is about the exam and provides detailed information on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A performance metric is a defined way to measure progress or results, but the most important part is that it must reflect something the organization actually wants. That sounds obvious, yet many metrics exist because they are easy to count, not because they matter. Counting the number of tasks completed, the number of tickets closed, or the number of meetings held might be convenient, but those counts do not automatically represent value. A metric should be tied to a purpose, such as improving reliability, reducing waste, increasing customer satisfaction, or lowering exposure to risk. When metrics are chosen without a clear purpose, teams can hit the numbers while the enterprise still feels stuck. This is why governance emphasizes alignment, because alignment means the measurement reflects enterprise priorities, not just local team convenience. A strong metric answers the question: if this number improves, would the enterprise truly be better off in a way leaders care about?
The next idea is that metrics must be designed with human behavior in mind, because people respond to incentives even when nobody says the word incentive. If a metric becomes a target, people will shape their actions to meet it, sometimes in ways that defeat the original goal. For example, if the main metric is closing support tickets quickly, a team might close tickets prematurely or classify problems in a way that makes them look resolved. If the metric is delivering on time, teams might narrow scope or defer quality work that is harder to measure. This is not always malicious; it is often a natural response to pressure and limited time. Good metric selection anticipates these responses and chooses measures that are harder to game and more closely tied to real outcomes. A useful way to think about this is that a metric should encourage the behavior you want even when nobody is watching closely.
Another common misconception is that the best metrics are the most detailed metrics, as if more granularity automatically means more truth. In practice, too many metrics can create confusion and encourage people to chase numbers instead of improving systems. When every small activity has its own measure, teams spend time reporting rather than learning, and leaders struggle to see what matters most. A better approach is to choose a balanced set, where each metric has a clear role and the set as a whole tells a coherent story. You usually want a small number of outcome metrics that reflect enterprise value, supported by a few operational metrics that help explain how outcomes are being achieved. The outcome metrics help leaders judge whether the investment or service is delivering value, while the operational metrics help teams improve the process that produces the outcomes. This balance keeps measurement meaningful without turning it into bureaucracy.
To choose metrics that drive accountability, you also need to understand what accountability really means in governance. Accountability is not blame, and it is not punishment for bad news. Accountability means there is a clear owner for a result, the result is measured consistently, and the organization expects action when the result is off target. If metrics do not have owners, they become dashboard decorations that nobody feels responsible for improving. If owners do not have authority or influence, the metric becomes unfair, because the person held accountable cannot change the outcome. For beginners, it helps to ask a simple question for each metric: who can influence this number, and what actions would they take if the number moves in the wrong direction? If you cannot answer that, the metric is likely too vague, too distant from decision-making, or owned by nobody. Good governance chooses metrics that can drive action, because measurement without action is just reporting.
A practical way to think about metric quality is to look for clarity, consistency, and comparability. Clarity means everyone understands what the metric measures and what it does not measure. Consistency means it is measured the same way over time, so changes reflect real shifts, not changes in counting method. Comparability means you can use it to compare performance across time periods, across services, or across investments in a way that supports governance decisions. Many measurement problems come from unclear definitions, such as what counts as an incident, what counts as downtime, or what counts as a completed deliverable. If teams define these differently, the organization cannot see patterns or make fair comparisons. Governance encourages standard definitions so measurement becomes a shared language rather than a collection of local dialects. When definitions are stable, leaders can make better tradeoffs because the numbers mean the same thing from one report to the next.
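The ideas of clarity, a stable calculation, and a named owner can be made concrete with a short sketch. This is a minimal illustration, not a real governance tool: the field names, the example availability formula, and the data values are all assumptions made for the sketch, and real metric catalogs would live in a governance platform rather than a script.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class MetricDefinition:
    """A shared, stable metric definition so everyone computes the same number."""
    name: str
    description: str                   # what the metric measures, and what it does not
    unit: str
    owner: str                        # the person who can influence this number
    compute: Callable[[dict], float]  # one agreed calculation, applied consistently

# Example: the definition of "downtime" is baked into one formula, so teams
# cannot count it differently from one reporting period to the next.
availability = MetricDefinition(
    name="Service availability",
    description=("Percentage of scheduled minutes the service was usable; "
                 "planned maintenance is excluded by definition."),
    unit="percent",
    owner="Service leader",
    compute=lambda d: 100.0 * (d["scheduled_minutes"] - d["downtime_minutes"])
                      / d["scheduled_minutes"],
)

# Invented figures for one month of reporting.
march = {"scheduled_minutes": 43_200, "downtime_minutes": 216}
print(f"{availability.name}: {availability.compute(march):.1f}% "
      f"(owner: {availability.owner})")
```

Because the definition and the calculation travel together, the same number means the same thing from one report to the next, and every number arrives with an owner attached.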
Another critical distinction is between leading and lagging metrics, because the timing of a metric affects how it drives behavior. Lagging metrics measure final outcomes, such as revenue growth, reduced customer churn, or lowered breach impact. These metrics matter to leadership, but they often change slowly and are influenced by many factors. Leading metrics measure earlier signals that predict outcomes, such as patch coverage rates, time to detect abnormal activity, adoption of a new process, or reduction in repeat incidents. If you only use lagging metrics, teams might not know whether they are improving until it is too late to adjust. If you only use leading metrics, you can create a false sense of success because you are measuring activity rather than outcomes. Choosing the right combination means your measurement system supports both governance oversight and operational improvement. It also helps reduce stress and blame, because teams can show progress through leading indicators while the enterprise waits for lagging outcomes to appear.
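One lightweight way to keep a metric set balanced between early signals and final outcomes is simply to tag each metric with its timing and check the mix before a reporting cycle begins. A toy sketch, with metric names invented for illustration:

```python
# Tag each metric as "leading" (early signal that predicts outcomes) or
# "lagging" (final outcome that changes slowly). Names are illustrative.
metrics = {
    "patch coverage rate": "leading",
    "time to detect abnormal activity": "leading",
    "customer churn": "lagging",
    "breach impact": "lagging",
}

# A balanced set contains both kinds: leading metrics let teams adjust early,
# lagging metrics let leadership judge whether value actually arrived.
kinds = set(metrics.values())
balanced = {"leading", "lagging"} <= kinds
print("Balanced metric set" if balanced else "One-sided metric set")
```

A set that fails this check is either all activity with no proof of outcomes, or all outcomes with no early warning, which is exactly the imbalance the paragraph above warns against.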
Metric selection must also account for context, because the same metric can mean different things in different environments. For example, a short time to restore service might be excellent for one type of outage but might hide deeper reliability issues if incidents occur too frequently. A low cost per transaction might look good, but it could be achieved by cutting controls that increase risk. This is why good metric sets often include both efficiency and effectiveness measures, and sometimes a risk-related measure that prevents the organization from optimizing in the wrong direction. The goal is to discourage one-dimensional optimization, where teams chase a single number and accidentally harm the enterprise in other ways. A balanced set might include reliability, customer impact, cost, and risk posture, even if each is represented by only one or two key measures. When metrics are balanced, they encourage more thoughtful behavior because teams must improve without breaking something else.
You also want metrics that can be trusted, because trust is what makes measurement useful for governance. If the numbers are frequently disputed, leaders will stop using them to make decisions and will rely on instincts or politics instead. Trust depends on data quality, transparent calculation methods, and the ability to explain where the data comes from. For beginners, it helps to remember that measurement is not only about the number but also about the story of how the number was produced. If the metric is derived from unreliable inputs, people will spend all their time arguing about the metric rather than improving performance. That is a sign the metric is not governance-ready. When a metric is designed well, it becomes a shared reference point that reduces arguments and increases focus on improvement.
A subtle problem in metric design is choosing measures that reflect effort rather than results, because effort can look impressive while outcomes remain unchanged. Measuring hours worked, number of changes made, or volume of activity can reward being busy instead of being effective. Governance wants metrics that reflect value delivered, risk reduced, or capability improved. That does not mean effort metrics are always useless, but they belong in a supporting role, not as the main measures of success. If you measure effort as if it were value, you can create incentives for unnecessary complexity and constant motion. In contrast, if you measure outcomes and customer impact, teams are encouraged to simplify and focus on what matters. This aligns behavior with enterprise outcomes and helps ensure accountability is tied to meaningful results.
To make this concrete, consider a service that supports customer logins for an online platform. A poor metric choice might be the number of deployments per month, which could encourage frequent changes even when stability matters most. A better outcome metric might be successful login rate, with an operational metric like time to restore service after an incident, and a risk-aware metric like number of high severity authentication failures linked to known issues. These measures would push the organization toward reliable access, fast recovery, and reduced exposure, which are behaviors aligned with enterprise trust and revenue. The owner of these metrics might be the service leader who can influence incident response, change control, and reliability engineering practices. Reporting these metrics consistently would allow leadership to see whether investment is improving customer experience, not just whether teams are producing activity. Notice how the metrics encourage accountability without forcing the listener into tool details, because the focus is on outcomes and behaviors.
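The login-service example above can be sketched numerically. All of the figures here are invented for illustration; the point is only to show the three kinds of measures side by side: an outcome metric, an operational metric, and a risk-aware metric.

```python
from datetime import timedelta

# Hypothetical daily figures for the login service (invented for illustration).
login_attempts = 120_000
successful_logins = 118_680

# Outcome metric: successful login rate, the number leaders care about.
login_rate = 100.0 * successful_logins / login_attempts

# Operational metric: mean time to restore service, from incident durations.
restore_times = [timedelta(minutes=18), timedelta(minutes=42), timedelta(minutes=12)]
mean_restore = sum(restore_times, timedelta()) / len(restore_times)

# Risk-aware metric: high-severity authentication failures tied to known issues.
high_sev_known_issue_failures = 2

print(f"Successful login rate: {login_rate:.2f}%")   # outcome
print(f"Mean time to restore: {mean_restore}")        # operational
print(f"High-severity known-issue failures: {high_sev_known_issue_failures}")  # risk
```

Read together, the three numbers tell a coherent story: customers can log in, outages are recovered quickly, and known exposures are being worked down, which is the kind of balanced picture the episode argues for.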
As we conclude, the big takeaway is that performance metrics are powerful because they shape attention, incentives, and accountability across the enterprise. Good governance chooses metrics that align with enterprise outcomes, are clearly defined and consistently measured, and encourage the behaviors leaders actually want. Strong metric sets balance leading and lagging indicators, include enough context to avoid gaming or one-dimensional optimization, and assign ownership to people who can influence results. When metrics are chosen thoughtfully, reporting becomes a tool for learning and decision-making instead of a scoreboard that people fear or manipulate. If you can explain why a metric exists, what behavior it encourages, and who is accountable for responding to it, you are already thinking like a governance professional, which is exactly what this domain is trying to build in you.