Episode 70 — Base improvement initiatives on performance results, not politics or anecdotes (Task 33)

In this episode, we focus on a habit that separates mature governance from noisy governance: choosing what to improve based on evidence, not on who has influence or who tells the most compelling story. Beginners often assume organizations naturally improve the things that matter most, because it seems logical that leaders would follow facts. In reality, organizations are made of people, and people are influenced by urgency, emotion, recent experiences, and the pressure to appear decisive. When performance is not measured consistently, improvement work can be driven by politics, meaning decisions shaped by power, preference, or negotiation rather than results. Improvement work can also be driven by anecdotes, meaning a single vivid incident or complaint becomes the main driver even when it is not representative. Governance aims to protect the enterprise from that drift by using performance results as the primary guide for improvement initiatives. By the end of this lesson, you should be able to explain why evidence-based improvement matters, how politics and anecdotes distort priorities, and how an enterprise can build a practical, fair way to choose improvements using performance data.

Before we continue, a quick note: this audio course is a companion to our two course books. The first book covers the exam and explains in detail how best to pass it. The second is a Kindle-only eBook that contains 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

Performance results are measured indicators of how investments, processes, and services are actually performing over time. Results can include outcome measures like customer satisfaction, cycle time, incident impact, compliance rates, and realized benefits, as well as supporting measures like adoption, defect rates, and capacity constraints. The key point is that results reflect patterns, not just moments. A pattern tells you whether a problem is persistent and whether it is improving or worsening, while a moment might be a one-time event or a rare spike. When improvement initiatives are based on patterns, the enterprise targets the issues that consistently reduce value or increase risk. When improvement initiatives are based on anecdotes, the enterprise often chases symptoms, because an anecdote usually describes a single instance without context. This is why governance emphasizes consistent measurement and regular review, because those practices create the results that make fair prioritization possible. Without results, the enterprise is left with stories, and stories are powerful but not always true.
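For those following along in writing, a minimal Python sketch can make the pattern-versus-moment distinction concrete. The incident counts and the crude trend check below are hypothetical, invented purely for illustration.

import statistics

# Hypothetical monthly incident counts for two services.
persistent = [4, 5, 6, 6, 7, 8]   # steadily worsening pattern
one_off = [2, 2, 9, 2, 2, 2]      # a single vivid spike, otherwise stable

def looks_persistent(series, window=3):
    """Crude pattern check: is the recent average worse than the earlier average?"""
    return statistics.mean(series[-window:]) > statistics.mean(series[:window])

print(looks_persistent(persistent))  # True: a pattern worth an improvement initiative
print(looks_persistent(one_off))     # False: investigate the spike, but it is not a trend

A real enterprise would use sturdier trend analysis than this, but even a check this simple separates a persistent problem from a memorable moment.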

Politics distorts improvement choices in predictable ways, even when nobody intends harm. A leader might push for an initiative that benefits their department’s goals rather than the enterprise’s priorities. A team might resist improvements that would expose weaknesses or require uncomfortable change. Funding might flow toward visible projects that produce applause rather than toward less visible work that reduces risk or improves reliability. Politics can also appear as preference, where a decision maker favors a certain approach because it matches their personal experience. These forces can be subtle, and that is why an evidence-based approach is so valuable, because it gives the enterprise a neutral reference point. Performance results act like a shared scoreboard, not to shame anyone, but to clarify where the enterprise is losing value or taking on unnecessary risk. When results are the foundation, it becomes harder to justify improvement work that has little impact, and easier to defend work that is important but unpopular.

Anecdotes distort improvement choices because they feel real, especially when they involve stress, customer pain, or leadership embarrassment. If a high-profile outage occurs, an organization might rush to fix whatever seems connected, even if the true underlying causes are elsewhere. If a senior executive has a bad experience with a particular service, that experience can dominate the improvement agenda even if most customers are affected by different issues. Anecdotes can also cause overreaction to rare events, leading to investments that reduce a tiny risk while ignoring larger risks that occur more frequently. Evidence-based improvement does not ignore anecdotes, because anecdotes can be early warning signals, but it treats them as prompts to investigate rather than as final proof. The right response to an anecdote is to ask: Is this representative? What do the trends show? What measures can confirm the scope and impact? This turns anecdotes into useful inputs instead of drivers of misaligned priorities.

A practical approach to basing improvements on performance results starts with defining what the enterprise cares about and selecting metrics that reflect those priorities. If the enterprise cares about reliability, then reliability measures must be tracked and reviewed. If it cares about speed of delivery, then cycle time and throughput measures must exist. If it cares about reducing risk exposure, then risk-related indicators must be included in performance management. Once the metrics exist, improvement initiatives can be chosen by identifying gaps between current performance and target performance. This gap-based approach is powerful because it ties improvements to measurable needs rather than to preferences. It also helps the enterprise compare improvement options, because different initiatives can be evaluated by how much they are expected to close the gap and how quickly. Governance then becomes a system for allocating improvement effort where it will have the greatest effect on enterprise outcomes.
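To make the gap-based comparison tangible, here is a minimal Python sketch. Every metric name, current value, target, and initiative estimate in it is a hypothetical assumption, not data from any real enterprise.

# Hypothetical current performance and targets for three enterprise metrics.
current = {"availability_pct": 97.2, "cycle_time_days": 14.0, "csat_score": 3.6}
targets = {"availability_pct": 99.5, "cycle_time_days": 7.0, "csat_score": 4.5}

# Each candidate initiative estimates how much it would change each metric.
initiatives = {
    "targeted capacity upgrade": {"availability_pct": +1.8},
    "deployment pipeline rework": {"cycle_time_days": -5.0},
    "support process redesign": {"csat_score": +0.6, "cycle_time_days": -1.0},
}

def expected_gap_closure(changes):
    """Average fraction of each metric's gap the initiative is expected to close."""
    fractions = []
    for metric in targets:
        gap = targets[metric] - current[metric]   # signed distance to target
        change = changes.get(metric, 0.0)
        fractions.append(min(change / gap, 1.0) if gap else 0.0)
    return sum(fractions) / len(fractions)

for name, changes in sorted(initiatives.items(),
                            key=lambda kv: expected_gap_closure(kv[1]), reverse=True):
    print(f"{name}: closes about {expected_gap_closure(changes):.0%} of the average gap")

Ranking by the fraction of each gap closed, rather than by raw metric movement, keeps metrics with different units comparable in a single view.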

Evidence-based improvement also requires consistent interpretation of results, because numbers alone do not decide priorities. A performance measure might look acceptable on average but hide harmful variation, where customers sometimes experience extreme delays. Another measure might show improvement but still be far from the target needed to achieve enterprise goals. Leaders must learn to ask the right questions, such as whether the trend is stable, whether the measure is reliable, and whether the measure reflects what customers experience. This is why performance management should include both outcome measures and supporting operational measures, because the supporting measures help explain why outcomes are moving. For example, if incident impact is rising, supporting measures might reveal that change quality is declining or that monitoring coverage is insufficient. When leaders interpret results thoughtfully, they can choose improvement initiatives that address root causes rather than surface symptoms. That is the difference between improvement that sticks and improvement that fades.
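A tiny numeric example shows how an acceptable average can hide harmful variation; the response times below are made up for illustration.

import statistics

# Hypothetical response times in seconds: most requests are fast,
# but a few customers experience extreme delays.
response_times = [0.8] * 95 + [30.0] * 5

mean = statistics.mean(response_times)
p95 = statistics.quantiles(response_times, n=20)[-1]  # 95th percentile

print(f"mean: {mean:.2f}s")  # about 2.3s, which may look acceptable
print(f"p95:  {p95:.2f}s")   # about 28.5s, the delays some customers actually see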

Another vital element is establishing clear decision criteria for selecting improvement initiatives, because evidence-based does not mean mechanical or purely mathematical. Criteria might include expected impact on enterprise outcomes, alignment to strategy, risk reduction potential, cost and effort required, dependency complexity, and time to value. The reason criteria matter is that performance gaps can be found in many places at once, and the enterprise cannot improve everything immediately. When criteria are agreed in advance, decisions become more transparent, and it becomes harder for politics to override results. A well-governed environment can still make judgment calls, but those calls are explained in terms of criteria rather than in terms of influence. This builds trust because stakeholders can see why a particular improvement was chosen even if they personally preferred another. Over time, transparency reduces conflict because people understand the rules of prioritization.
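One way an enterprise could encode such criteria is a simple weighted scoring model. The weights, scores, and initiative names below are hypothetical assumptions made only for this example.

# Hypothetical criterion weights, agreed in advance by stakeholders.
weights = {
    "outcome_impact": 0.30,
    "strategic_alignment": 0.20,
    "risk_reduction": 0.20,
    "cost_effort": 0.15,    # scored so that lower cost earns a higher score
    "time_to_value": 0.15,  # scored so that faster payoff earns a higher score
}

# Hypothetical 1-5 scores for two candidate initiatives.
candidates = {
    "monitoring coverage expansion": {
        "outcome_impact": 4, "strategic_alignment": 3, "risk_reduction": 5,
        "cost_effort": 4, "time_to_value": 4,
    },
    "full system replacement": {
        "outcome_impact": 5, "strategic_alignment": 4, "risk_reduction": 3,
        "cost_effort": 1, "time_to_value": 1,
    },
}

def weighted_score(scores):
    """Combine criterion scores using the agreed weights."""
    return sum(weights[criterion] * scores[criterion] for criterion in weights)

for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")

Because the weights are fixed before scoring begins, a stakeholder who disagrees with the outcome must argue about the evidence behind a score rather than lobby quietly for a preference.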

Basing improvements on performance results also means being willing to stop or change improvement initiatives when the results do not support continuing. Organizations sometimes launch an initiative with excitement, then keep funding it out of pride even when it is not producing measurable improvement. Evidence-based governance treats this as a normal part of management, not as a humiliation. If an initiative is not closing the performance gap, leaders should ask whether the initiative’s assumptions were wrong, whether execution is weak, or whether the metric is not capturing the intended outcome. The response might be to adjust the initiative, choose a different approach, or redirect resources to a more effective effort. This flexibility is crucial because it prevents wasted effort and keeps improvement work aligned with real needs. It also signals to teams that measurement is used for learning, which encourages honesty rather than hiding problems.

A common concern is that evidence-based improvement might ignore human realities, such as morale, stakeholder relationships, and customer perception. In reality, a good governance approach uses performance results to inform decisions while still recognizing that people experience problems emotionally. For example, even if an incident is rare, it might damage trust because it was highly visible. Evidence-based improvement can account for that by measuring customer impact and reputational risk, not by ignoring the event. The difference is that the enterprise still grounds the response in measurable impact and trend analysis rather than in panic. This keeps the organization from oscillating wildly based on the latest crisis. Over time, a calm, evidence-based approach improves resilience because the enterprise can respond to issues without sacrificing long-term priorities. Evidence is not cold; it is stabilizing.

To make this concrete, imagine an enterprise that hears frequent complaints that a particular internal system is slow, and a senior leader insists it must be replaced. An anecdote-driven response might immediately fund a replacement, because it sounds decisive. A performance-based approach would first examine service performance results, such as response time trends, error rates, peak usage patterns, and the business processes affected. It might reveal that the slowness occurs only during certain peak times, and that the root cause is a process that triggers unnecessary workload, or a capacity constraint that could be addressed more efficiently than a full replacement. The improvement initiative might then focus on reducing peak load drivers, improving process efficiency, and scaling capacity in a targeted way, with measures to confirm improvement. If performance results then show sustained improvement, the enterprise avoids unnecessary replacement cost while still solving the real problem. That is the practical payoff of using results instead of stories: better outcomes with less wasted effort.
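The first analytical step in that scenario could be as simple as grouping response times by hour of day to see whether the slowness clusters at peaks; the log samples below are hypothetical.

from collections import defaultdict
import statistics

# Hypothetical (hour_of_day, response_seconds) samples from service logs.
samples = [
    (9, 0.9), (9, 1.1), (9, 1.0),
    (12, 0.8), (12, 1.0), (12, 0.9),
    (17, 6.5), (17, 7.2), (17, 5.9),  # slowness clusters at the end-of-day peak
]

by_hour = defaultdict(list)
for hour, seconds in samples:
    by_hour[hour].append(seconds)

# If slowness appears only in certain hours, the cause is likely a peak-load
# driver or a capacity constraint, not the system as a whole.
for hour in sorted(by_hour):
    times = by_hour[hour]
    print(f"{hour:02d}:00  median={statistics.median(times):.1f}s  samples={len(times)}")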

As we conclude, basing improvement initiatives on performance results means using measured reality as the primary guide for where to invest attention, money, and effort. It reduces the influence of politics by making decisions transparent and criteria-driven, and it reduces the influence of anecdotes by focusing on patterns, trends, and measurable impact. This approach requires consistent performance management, clear decision criteria, thoughtful interpretation of metrics, and the willingness to adjust or stop initiatives when results do not support continuing. When an enterprise builds this habit, improvement work becomes less reactive and more strategic, which increases the chance that gains will be real and sustained. Governance becomes credible because it is seen as fair and evidence-based, not as a battleground of preferences. If you remember one guiding principle, let it be that the best improvement decisions are the ones you can explain using performance results, because those explanations keep the enterprise aligned to truth rather than to politics.
