Don’t Fixate: Scan All the Metrics
Apr 05, 2026
Written by Ed Cook and Roxanne Brown
A pilot flying on instruments in thick clouds has to rely on the panel of gauges in front of them to know where the airplane is in space. A look outside reveals only an off-white mass with no visible horizon. Airspeed, altitude, vertical speed, heading, attitude, and turn-and-slip make up the “six pack” of gauges that pilots have used for a hundred years. No single instrument gives the full picture. Should the pilot fixate on a single instrument, a phenomenon known in aviation as “channelized attention,” they will miss the larger picture. The altimeter may be perfect while the airspeed decays to something dangerously slow. One instrument is not enough.
There is a common error in change management that operates on the same principle of ‘channelized attention’: if the survey scores are green, the change is going well. Metrics can be comforting. Surveys produce clean numbers. They can be charted, compared across time periods, and presented to executives in tidy dashboards, but a survey is only one instrument. Relying solely on surveys to gauge the health of a change is like flying an aircraft by watching only the altimeter.
To truly understand how a change is progressing, Change Leaders must cross-check as pilots do. We suggest three distinct tiers of metrics for that scan. This three-tiered approach has been woven through every step of our Data-Driven Change process. In this piece, we explain where it came from, why it works, and how to apply it.
Three Tiers, Not Two Types
The conventional framing of data in business is often binary: quantitative versus qualitative. This framing misses two important aspects of Data-Driven Change Management. First, people often conflate qualitative data with subjective data and therefore dismiss it as not useful. Qualitative data is not subjective; it is objective data, just like quantitative data. Second, the framing implies that subjective data is not useful. It is useful; it simply needs to be handled differently. A more useful classification recasts change data into three tiers.
Self-Reported data captures what people say about the change. Surveys, focus groups, interviews, pulse checks: these are the instruments most Change Leaders look at first. Self-Reported data is subjective. That is not a criticism but a description. When a Change Leader asks someone in a survey how they feel, such as, “Are you confident about how to raise risks for this change?”, they are asking for subjective data. The person answering (and only the person answering) can know how they feel. That is the definition of subjective: something that is particular to the subject, the person. Objective data is about the object and is separate from the person. A survey takes the subjective view of the person and converts it into objective data. It is still impossible to know how another person feels, but we can know what they marked on a survey.
That subjectivity is valuable. It is also limited. People may say what they think you want to hear. They may not yet have the language for what they are actually feeling. They may be genuinely optimistic in the survey and genuinely confused in practice. The survey captures perception. Perception is important, but it is not the whole picture.
Observable data captures what others can see. Are leaders showing up to training or delegating it to subordinates? Are teams asking questions in the collaboration channel, or is the channel silent? When the change is discussed in a meeting, do people lean in or lean back? Observable data is qualitative. It deals in degrees of engagement rather than precise counts, but it is still objective because independent observers can record and compare it. If three people in a room independently note that the operations manager has stopped engaging in the weekly change update, that observation does not need an exact count of comments made by the operations manager. The relative change is a behavioral fact that multiple witnesses can confirm.
Existing Company Metrics capture what the business already measures. Help desk call volume. Operational efficiency. Error rates. Attrition. Cycle times. These numbers exist whether or not a change is underway. They are objective and typically quantitative. They are also the metrics that senior executives already watch, which gives them a credibility that survey data and behavioral observations often lack in the rooms where decisions are made.
The power of this three-tiered approach is not in any single tier. It is in the fuller picture that results from combining them. When all three tiers tell the same story, you have high confidence. When they disagree, you have something very valuable: a signal that something important is happening that should be investigated.
Hard-Learned Experience
The three-tiered approach did not emerge from academic theory. It emerged from a specific set of failures. The kind of failures that teach more than a dozen successes.
Well established in her Change career, Roxanne was leading the change management work for a large program. It was complex but not a major enterprise effort, yet it turned out to be politically charged. The Change work was reportedly going well. The surveys said so. The readiness scores were green.
Then an executive called in a panic.
A peer in another department had gone to the CEO and escalated an issue about the program, claiming it would cost millions more to implement than anyone had projected. The accusation was serious enough to threaten the entire effort. Roxanne was blindsided. Nothing in her data had signaled trouble. People reported feeling prepared. The readiness questions showed no red flags. The Self-Reported data told a story of a program on track.
But when Roxanne stepped back and looked at what she could observe, the behaviors, not the survey responses, a different picture emerged. The complaining executive had not been attending the program meetings. He had not been sending his people to the feedback sessions. His department's participation in the change activities had quietly declined over the weeks. The Observable data told the story that the surveys missed. The behavioral signals were there. Nobody had been watching for them. Moreover, this lack of Participation, one of the 10 Dimensions of Joy at Work, negatively impacted the culture. Whether it was a lack of invitation or a lack of willingness, Participation was not present, so Joy at Work was diminished.
The program ultimately succeeded. By most measures, it was a significant achievement. But at the end, when the results were in and the Change had landed, Roxanne encountered a second failure, one that was quieter but no less painful. The senior executives had no idea how much the change management effort had contributed to the program's success. They could see the operational results, but they could not connect those results to the work Roxanne and her team had done. The reason was straightforward: Roxanne had not tied the change effort to the Existing Company Metrics that the executives were already watching. She had measured the change in the typical language of change management (readiness, adoption, engagement), but she had not translated that work into the language the business already spoke.
Two failures from a single program. The first: trusting Self-Reported data as the complete picture and missing the behavioral signals that foretold a crisis. The second: succeeding in the change but failing to connect that success to the metrics the organization valued. The three-tiered framework was born from the need to prevent both types of failures.
The Power of Disagreement
Most people think of data triangulation as a way to confirm a finding. If three sources agree, the finding is solid. That is true, but it is not the most valuable application of the framework. The true insight for a Change Leader often comes when the three tiers disagree.
Consider this scenario. The Self-Reported data shows that people feel ready for the change, survey scores are high, and confidence is expressed in focus groups. But the Observable data indicates that managers in a critical department have stopped attending the training sessions, and the collaboration channel for the new process has gone quiet. Meanwhile, the Existing Company Metrics show something else: error rates are climbing in the pilot group.
Any one of those data points in isolation could be explained away. The surveys are green, so we are fine. The managers are busy, and they will catch up. The error rates are normal for a transition period. But taken together, the three tiers tell a coherent story that no single tier could tell on its own: the Change is in trouble, and the people most affected either cannot or will not say so in a survey.
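The cross-check logic described above can be sketched in a few lines of code. This is a hypothetical illustration, not part of the Data-Driven Change process; the tier names, the simplified "good/bad" readings, and the `triangulate` function are all assumptions made for the sketch. The point it demonstrates is that agreement across tiers yields confidence, while any disagreement is itself the signal worth investigating.

```python
# Hypothetical sketch: cross-checking three tiers of change metrics.
# Each tier's reading is simplified to a direction: "good", "neutral", or "bad".

from dataclasses import dataclass

@dataclass
class TierReadings:
    self_reported: str     # e.g., survey sentiment
    observable: str        # e.g., leader attendance trend
    company_metrics: str   # e.g., error-rate trend in the pilot group

def triangulate(r: TierReadings) -> str:
    readings = [r.self_reported, r.observable, r.company_metrics]
    if all(x == "good" for x in readings):
        return "high confidence: change is on track"
    if all(x == "bad" for x in readings):
        return "high confidence: change is in trouble"
    # Any disagreement is itself a valuable signal, not noise to explain away.
    return "tiers disagree: investigate before trusting any single gauge"

# The scenario above: green surveys, absent managers, climbing error rates.
print(triangulate(TierReadings("good", "bad", "bad")))
```

No single "bad" reading forces a conclusion in this sketch; it is the mismatch between tiers that triggers the investigation.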
There is a human tendency to create a narrative thread through the data that will tell a comforting story. Green survey scores are a comfortable story. They are easy to present and easy to receive. But if the observable behaviors and the operational numbers are pointing in a different direction, the comfortable story is a dangerous one.
The reverse is also instructive. What if the surveys show anxiety, people report feeling unprepared, but the Observable data shows them actively experimenting with the new process, and the Existing Company Metrics show early performance improvements? That disagreement tells you something hopeful. The Change is landing, even though people do not feel confident about it, at least not yet. Their behavior is ahead of their self-assessment. In the context of our Change as Experienced model (Engage > Understand > Test & Learn > Adopt), they may be in the Test & Learn phase, where discomfort and progress coexist. A Change Leader who only looked at the survey would see a problem. A Change Leader who triangulated across all three tiers would see a team in exactly the right place.
Building the Minimum Set Across Three Tiers
In Not All Data Are Created Equal, we introduced the "minimum set" principle: select only the metrics that would drive a decision and discard the rest. That principle applies within each tier and across all three. You do not need a comprehensive survey, a detailed behavioral observation protocol, and a full operational dashboard. You need the fewest metrics from each tier that, taken together, would tell you whether the Change is progressing and where it is not.
For Self-Reported data, one or two well-formed survey questions, tested against the "useful versus interesting" standard, may be sufficient. Recall the test: if the data showed the highest or lowest possible value, would you take a different action? If not, the question is producing interesting but not useful data.
For Observable data, identify the two or three behaviors that would signal genuine adoption or genuine disengagement. Is leadership visibly participating? Are people using the new process when nobody is watching? Are the informal leaders, the hidden influencers you identified through network analysis, advocating or undermining?
For Existing Company Metrics, select the operational indicators that the change was designed to impact. If the change is a new customer service process, the metric might be first-call resolution rates. If it is a new manufacturing workflow, the metric might be defect rates or cycle time. These metrics have the advantage of already being collected and already being trusted by the executives who will make decisions based on your analysis.
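One way to keep the selection honest is to record, for every candidate metric, the decision it would drive; anything without a decision fails the "useful versus interesting" test and is dropped. The sketch below is hypothetical: the metrics, decisions, and the customer-service example are illustrative assumptions, not a prescribed template.

```python
# Hypothetical minimum set for a new customer-service process.
# A metric earns its place only by naming the decision it would drive.

candidate_set = {
    "Self-Reported": [
        {"metric": "confidence raising risks (survey question)",
         "decision": "add risk-escalation coaching if low"},
        {"metric": "general satisfaction score",
         "decision": ""},  # interesting, but drives no decision
    ],
    "Observable": [
        {"metric": "leader attendance at weekly change update",
         "decision": "re-engage sponsors directly if declining"},
        {"metric": "questions posted in the collaboration channel",
         "decision": "investigate silence in the pilot group"},
    ],
    "Existing Company Metrics": [
        {"metric": "first-call resolution rate",
         "decision": "revisit training content if flat after the pilot"},
    ],
}

# The "useful versus interesting" filter: keep only decision-driving metrics.
minimum_set = {tier: [m for m in metrics if m["decision"]]
               for tier, metrics in candidate_set.items()}
```

Writing the decision next to the metric makes the pruning mechanical: the satisfaction score above is discarded not because it is wrong, but because no reading of it would change what the Change Leader does next.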
The discipline is in the selection, not the volume. This selection best occurs through the judgment of the Change Leader. There is no machine-ready algorithm to do this.
What the Machine Cannot Do
A Large Language Model (LLM) can process any of these data streams with remarkable speed and precision. You can upload your survey data and ask it to find patterns, correlations, and outliers. You can give it your observational notes and ask it to categorize the behaviors. You can feed it operational metrics and ask it to identify trends. The analytical capability is there, and as we have argued throughout this series, it is now accessible to anyone willing to ask a plain-language question.
But there is something the LLM cannot do, and it is the act that matters most. It cannot hold all three tiers in its mind simultaneously and triangulate what they mean together in the specific cultural context of your organization. It does not know that the quiet operations manager who stopped attending meetings is the same person who was identified as a hidden influencer in your network analysis. It does not know that the executive who escalated to the CEO has a history of political maneuvering that predates the change. It does not know that the pilot group's rising error rates are not a sign of failure but a predictable consequence of a learning curve that your training plan was designed to address.
That contextual knowledge, the kind that lives in relationships, institutional memory, and professional judgment, is irreplaceably human. The machine does the calculating. The human leader does the thinking. And the thinking, in this case, means holding three different views of the same reality and synthesizing them into a picture that none could produce alone.
This is the human advantage. Not the ability to run a correlation or generate a chart. The ability to look at numbers that disagree and understand why. Getting this right is about more than just completing the Change. It is about the impact on the organization’s culture. Noticing that a group of influencers is not engaging and then making the invitation for them to do so is the organization’s half of the “invitation-willingness” exchange at the heart of growing Joy at Work. It is still up to those influencers to exhibit willingness to belong and willingness to be cohesive, but the Change Leader making the invitation is the first step. The Change Leader can have an enormous impact on the culture of the organization by noticing the subtle inconsistencies that the three-tiered metrics approach for Change can reveal and then acting on them. Even a small gesture can create an outsized opportunity to grow Joy at Work.
One Instrument Is Never Enough
The pilot who successfully navigated using instruments with zero visibility outside the cockpit was not the one with the best single gauge. They cross-checked continuously, and (in the language of pilots) had a good scan. They caught the moments when one instrument told a different story than the rest, and had the discipline to trust the synthesis over any single reading.
The Change Leader who succeeds with data will do the same. Survey scores are one instrument. Observable behaviors are another. Existing Company Metrics are a third. None is sufficient alone. Together, they form a picture of the Change as it actually is, not as the most convenient data source says it is.
Roxanne's experience shows us that the comfortable story is often the incomplete one. Green surveys do not mean the Change is without risk. Operational success that cannot be connected to the Change effort is a missed opportunity to demonstrate value. The three-tiered framework exists to prevent those failures, to ensure that the Change Leader has a good scan, and is not fixating on one gauge.
When the Change Leader presents the three tiers of metrics (Self-Reported, Observable, and Existing Company Metrics) so that they tell a coherent and honest story, they have done more than measure a change. They have given the organization the information it needs to make a decision. And when those leaders act on that information, they send a signal to every person who contributed a survey response, who participated in a training, who tried a new process for the first time: your experience matters, your behavior was noticed, and the results of your effort are visible.
That signal is an invitation by the organization to its employees to Belong and to Trust, two of the 10 Dimensions of Joy at Work. That repeated invitation encourages the willingness to try again when the next change arrives. That is the mechanism that grows Joy at Work.