Episode 69 — Establish performance management across investments, processes, and services consistently (Task 32)

In this episode, we are going to take the idea of measurement one step further and talk about performance management, which is what happens when measurement is used to steer real decisions over time. Beginners often hear performance management and imagine a strict system for judging people, like a workplace scorecard. In governance, performance management is primarily about managing outcomes and capabilities, not about punishing individuals. It is a consistent way to define what good performance looks like, measure it in comparable ways, review it regularly, and adjust priorities and actions based on what the evidence shows. The word consistently matters because enterprises often measure different things in different places using different definitions, and then wonder why leadership cannot see the real picture. Establishing performance management across investments, processes, and services means building a shared measurement language and a shared rhythm of review so the enterprise can compare, learn, and improve without constant debate about what the numbers mean.

Before we continue, a quick note: this audio course is a companion to our two books. The first book covers the exam and provides detailed guidance on how best to pass it. The second is a Kindle-only eBook containing 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

Performance management begins with clarity about what is being managed, because investments, processes, and services are related but not identical. Investments are the funded commitments the enterprise makes to create or improve capabilities, and they have expected benefits, timelines, and risk profiles. Processes are the repeatable ways work gets done, and they influence efficiency, quality, and control. Services are the ongoing ways the enterprise delivers value to internal users or external customers, and they have performance patterns over time. A consistent performance management approach ties these together by showing how investments change processes and services, and how process and service performance reveal whether investments are producing value. Without that linkage, investments can be judged by delivery alone, processes can be ignored until they break, and services can be blamed for problems that are actually caused by weak governance choices. Consistency does not mean every area uses the exact same metric, but it does mean the enterprise uses compatible categories, stable definitions, and a consistent way to interpret performance.

A core part of consistency is selecting a small set of standard performance dimensions that apply across the enterprise, even when the specific metrics differ. For example, leaders often care about value delivered, cost efficiency, risk exposure, reliability, customer impact, and time to deliver improvements. One service might measure reliability using availability and incident severity, while another might measure it using error rates and latency, but both can still roll up into a consistent reliability dimension. The same idea applies to processes, where efficiency might be measured as cycle time or rework rate, and to investments, where value might be measured as realized benefits versus planned benefits. Consistent dimensions allow leadership to compare across very different domains without pretending everything is identical. They also make reporting less confusing because stakeholders can learn a stable vocabulary that repeats over time. When everyone uses the same categories, discussions shift from arguing about labels to deciding what to do.
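
To make that roll-up idea a bit more tangible, here is a minimal sketch in Python; the metric names, scoring rules, and numbers are invented assumptions, not a prescribed model. Two services measure reliability with different local metrics, yet both normalize into the same shared reliability dimension so the results stay comparable.

```python
# Minimal sketch (illustrative assumptions only) of rolling different local
# metrics up into shared, enterprise-wide performance dimensions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Metric:
    name: str            # local metric name, e.g. "availability_pct"
    dimension: str       # shared dimension it rolls up into, e.g. "reliability"
    value: float         # latest measured value
    to_score: Callable[[float], float]  # maps the raw value onto a 0-100 scale

def roll_up(metrics: list[Metric]) -> dict[str, float]:
    """Average the normalized scores per shared dimension."""
    buckets: dict[str, list[float]] = {}
    for m in metrics:
        buckets.setdefault(m.dimension, []).append(m.to_score(m.value))
    return {dim: sum(scores) / len(scores) for dim, scores in buckets.items()}

# Two services measure reliability differently, yet both report into the
# same "reliability" dimension, so leadership sees one comparable picture.
service_a = [Metric("availability_pct", "reliability", 99.2, lambda v: v),
             Metric("sev1_incidents", "reliability", 2, lambda v: max(0, 100 - 10 * v))]
service_b = [Metric("error_rate_pct", "reliability", 0.8, lambda v: max(0, 100 - 20 * v)),
             Metric("p95_latency_ms", "reliability", 420, lambda v: max(0, 100 - v / 10))]

print(roll_up(service_a))   # e.g. {'reliability': 89.6}
print(roll_up(service_b))   # e.g. {'reliability': 71.0}
```

The specific scoring functions are not the point, since each enterprise would define its own; what makes the local measures comparable is the shared dimension and the agreed normalization step.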

Consistency also requires standard definitions, because a metric that is defined differently in different places cannot support governance decisions. If one team counts an incident as any user complaint, while another counts an incident only when a system is down, leadership cannot compare incident trends across services. If one investment reports benefits as projected value while another reports benefits as realized value, a portfolio review will be distorted. Standard definitions do not have to be perfect, but they must be shared, documented, and applied in the same way over time. This is where many organizations struggle because definitions feel tedious, yet without them the enterprise lives in measurement confusion. A good performance management system treats definitions as part of governance infrastructure, like a shared map. Once the map is consistent, people can navigate decisions more quickly because they are not constantly re-learning what metrics mean.

Another element of consistent performance management is establishing a review cadence that matches the decision needs of the enterprise. If performance is reviewed too rarely, problems grow unnoticed and opportunities are missed. If performance is reviewed too often, the organization becomes reactive and noisy, constantly chasing small fluctuations. A healthy cadence often includes regular operational reviews for service and process health, and periodic portfolio-level reviews for investments and strategic alignment. The important point is that reviews must be connected to decision rights, meaning the people in the review must have the authority to act on what the evidence shows. If reviews are only presentations with no decisions, they become ritual and people stop taking them seriously. Consistent performance management creates an expectation that each review ends with actions, such as adjusting priorities, assigning improvement work, or changing funding. This is how measurement becomes management rather than reporting.

Consistency across investments also means that investment performance should be tracked beyond delivery, because governance is about outcomes over time. That requires a consistent approach to benefit realization measurement, including baselines, targets, and time frames, and it requires owners who remain accountable after launch. It also requires consistent ways to represent investment health, such as whether delivery is on track, whether benefits are emerging, whether risks are increasing, and whether costs are trending above plan. When investments are tracked consistently, leadership can make portfolio decisions based on evidence, such as rebalancing funding toward investments producing strong value. It also allows the enterprise to learn patterns, such as which types of investments tend to run over budget or which types deliver benefits reliably. Learning is a key output of performance management because it improves future decisions. Without consistent tracking, the enterprise forgets lessons and repeats mistakes.
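
As an illustration only, with made-up field names and a deliberately simple health rule, the following sketch shows one way benefit realization could be tracked consistently: each benefit carries a baseline, a target, a realized value, and a time frame, and health is judged by comparing benefit progress against elapsed time.

```python
# Minimal sketch (assumptions, not a prescribed model) of tracking an
# investment beyond delivery: baseline, target, and realized benefit values
# over a defined time frame, with a simple consistent health summary.

from dataclasses import dataclass

@dataclass
class InvestmentBenefit:
    name: str
    baseline: float        # value before the investment, in consistent units
    target: float          # planned benefit at the end of the time frame
    realized: float        # measured value to date
    months_elapsed: int
    months_planned: int

    def realization_pct(self) -> float:
        """Share of the planned improvement actually achieved so far."""
        planned_gain = self.target - self.baseline
        return 0.0 if planned_gain == 0 else 100 * (self.realized - self.baseline) / planned_gain

    def health(self) -> str:
        """Compare benefit progress against elapsed time (illustrative rule)."""
        expected_pct = 100 * self.months_elapsed / self.months_planned
        achieved = self.realization_pct()
        if achieved >= expected_pct:
            return "on track"
        return "at risk" if achieved >= 0.5 * expected_pct else "off track"

# Example: a customer-experience investment expected to lift a satisfaction
# score from 70 to 80 over 12 months; 6 months in, the score sits at 73.
b = InvestmentBenefit("customer_satisfaction", baseline=70, target=80,
                      realized=73, months_elapsed=6, months_planned=12)
print(b.realization_pct(), b.health())   # 30.0 at risk
```

Because the same structure is applied to every investment, a portfolio review can compare health across very different initiatives without re-debating what the numbers mean.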

Consistency across processes means measuring how work is performed, not just what outputs are produced. Processes shape quality, efficiency, and control, and inconsistent process measurement creates blind spots where inefficiency becomes normal. For example, if one department measures cycle time and rework while another department does not, leaders may not notice where waste is accumulating. A consistent performance management approach identifies core processes that matter to enterprise outcomes and applies stable measures such as cycle time, error rate, compliance rate, or backlog age. It also encourages process owners to respond to measures with improvements, such as simplifying handoffs or clarifying decision points. Importantly, process measures must be used to improve systems, not to shame people for problems that the process design created. When process performance is managed consistently, the enterprise can reduce friction that slows down both technology delivery and business operations.

Consistency across services is about treating services as ongoing commitments with performance expectations, not as incidental byproducts of projects. Services should have clear performance indicators tied to customer impact, reliability, responsiveness, and risk. Consistency means that services are measured using a shared approach so leadership can see which services are healthy and which are becoming fragile. It also means services are reviewed with the same seriousness regardless of which team operates them, because enterprise value depends on the whole service landscape. A common problem is that some services get strong measurement because they are visible, while others remain unmeasured until a major incident forces attention. Consistent service performance management prevents that unevenness by applying baseline expectations and review rhythms to all critical services. This reduces surprises and helps the enterprise allocate improvement resources more rationally.

A major reason consistency is difficult is that organizations often build measurement systems locally, where each team chooses what is easy and what matches their tools or habits. Local measurement is not wrong, but without a governance layer, local choices create a patchwork of incompatible metrics. The role of governance is to set the minimum consistency requirements, such as common definitions, common dimensions, and common reporting formats, while allowing local flexibility for details. The goal is not to eliminate local context; it is to ensure local measures can roll up into enterprise understanding. This is similar to how different classrooms can teach the same curriculum in their own style, but still use the same grading standards so results are comparable. When governance provides that shared structure, teams can still measure what matters for their work, but the enterprise can also see a coherent picture. That coherence is what enables portfolio-level decisions that are fair and evidence-based.

To make this concrete, imagine an enterprise that has several active investments aimed at improving customer experience, along with processes for order fulfillment and services for online account access. A consistent performance management approach would define shared dimensions like customer impact, reliability, cost, and risk, and then select metrics for each area that map to those dimensions. Investments would be tracked on planned versus realized benefits and on whether enabling changes are adopted. Processes would be measured for cycle time and error rate, since those affect customer experience directly. Services would be measured for availability and incident impact, since outages and failures drive customer dissatisfaction. Reviews would occur on a cadence where operational teams can act quickly on service issues, while leadership can make periodic portfolio decisions about where to invest further. Over time, the enterprise would see whether investments are actually improving the processes and services that customers experience, and it could adjust funding based on real performance trends. That is consistent performance management in action: the same language, the same rhythm, and the same expectation of action.
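
To illustrate what such a consistent report might look like, here is a short hypothetical sketch; every score, threshold, and name is an assumption rather than a prescribed format. It places one investment, one process, and one service on the same shared dimensions and flags where a review should end with an assigned action.

```python
# Hypothetical scorecard: one report format covering an investment, a process,
# and a service using the same shared dimensions, so a review can compare them
# and end with actions. All entries and thresholds are illustrative.

shared_dimensions = ["customer_impact", "reliability", "cost", "risk"]

# Normalized 0-100 scores per dimension, produced by local metric roll-ups
# (see the earlier roll-up sketch); higher is better.
scorecard = {
    "investment: CX uplift":      {"customer_impact": 62, "reliability": 70, "cost": 55, "risk": 80},
    "process: order fulfillment": {"customer_impact": 48, "reliability": 85, "cost": 72, "risk": 66},
    "service: online account":    {"customer_impact": 90, "reliability": 58, "cost": 81, "risk": 74},
}

THRESHOLD = 60  # review rule: any dimension below this requires an assigned action

for item, scores in scorecard.items():
    weak = [d for d in shared_dimensions if scores[d] < THRESHOLD]
    status = f"action needed on {', '.join(weak)}" if weak else "healthy"
    print(f"{item:28s} -> {status}")
```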

As we close, establishing performance management across investments, processes, and services consistently means building a shared measurement language and shared review rhythm so the enterprise can learn and steer decisions based on evidence. Consistency comes from shared performance dimensions, standard definitions, comparable reporting, and reviews that lead to real actions. It also requires tracking investments beyond delivery, measuring processes as systems of work, and managing services as ongoing commitments with clear performance expectations. When performance management is consistent, governance becomes more effective because leaders can compare, prioritize, and improve without constantly reinterpreting what metrics mean. The enterprise stops relying on anecdotes and starts relying on patterns, which is exactly how mature governance supports better outcomes. If you remember one idea, let it be that consistent measurement is not about collecting more data, it is about making performance visible in a way that enables fair decisions and sustained improvement.
