
The AI Calculator and the Human Choice

analytics · change leader · practitioner · May 10, 2026

Written by Ed Cook and Roxanne Brown

We opened this series with the image of a driver jumping into a car without a destination. Going straight to data analysis without a question or hypothesis wastes time and produces motion but not progress. That was true when the driver was doing the work manually. AI has added a twist. Now, the car drives itself. The steering is precise, the acceleration is smoother, and the dashboard displays are beautiful. But if no one enters a destination, the autonomous car does the same thing the manual car did. It gets you to the wrong place. It just gets you there with less effort.

We established in If You Can Conceive of It, You Can Calculate It that LLMs like Claude and Gemini have eliminated the computational barrier to analytics. You no longer need an MBA in statistics to do sophisticated analysis. What was once a semester of coursework is now a prompt.

But capability without methodology is not magic. It is a pit into which your hard work disappears, because AI makes bad analysis faster.

A bad question, analyzed without AI, might waste a week. The analyst would build a rough spreadsheet, struggle with the formatting, and present something imperfect enough to invite scrutiny. Someone in the room would squint at the sheet and say, "Wait, what were we trying to answer?" The imperfection was a safeguard. It slowed the process enough for the humans to catch up.

A bad question asked through an LLM wastes an afternoon. And it produces a polished, confident-looking deliverable, complete with charts, trend lines, and executive-ready formatting. It has become harder to question precisely because it looks professional. The analysis arrives with the visual authority of competence. The conclusions are stated with certainty. Everything about the output signals that someone thought carefully about this. But no one did. The machine calculated; no one thought.

In Visualizing the Change, we built the visual instruments (heat maps, box-and-whisker charts, combo charts) that guide a leader's eye to the decision. Those instruments only work if the data behind them was chosen, collected, and analyzed with intention. This post is about what happens when the AI makes it easy to skip that intention, and why humans are more important now than ever.

Calculating vs. Thinking

Consider what happens when someone hands an LLM a dataset without a question. The prompt reads: "Here is our change readiness survey data. Analyze it." The LLM will comply. It is, after all, a prediction engine. It predicts the next useful token in a sequence, and the sequence it has been trained on includes thousands of examples of what an "analysis" looks like. So it produces one. It finds correlations. It builds clusters. It generates trend lines. It writes a narrative. The output is structured, articulate, and delivered with the confidence of a consultant who bills by the hour.

Somewhere in the analysis, the LLM finds that employees who attended more than three training sessions report higher change readiness scores. It presents this as an insight. It may even generate a chart with a clean upward slope. A leader looking at that chart sees a clear story: More training produces more readiness. The conclusion practically writes itself: Increase the training sessions.

But no one asked whether attendance was voluntary or mandatory. If voluntary, the finding reflects self-selection. The people who attended more sessions were already more engaged. The training did not cause the readiness. The readiness caused the attendance. If mandatory, the finding might reflect compliance rather than genuine adoption. People checked the box and answered the survey favorably because that is what was expected of them. In both cases, "increase the training sessions" is the wrong prescription. The LLM did not ask these questions because it can’t. It does not know the context. It does not know the organization. It calculates. It does not think.

This is the Interpreter's Advantage we introduced in The Numbers Do Not Speak for Themselves. An LLM can process your three tiers of metrics, Self-Reported, Observable, and Existing Company Metrics, in seconds. It can cross-reference them, flag discrepancies, and produce a summary with recommendations. But it does not know that the quiet manager in operations publicly doubted the change at a town hall last week. It does not know that the high satisfaction scores from the Northeast came during a week when the regional VP personally walked the floor. It does not know that the executive sponsor's enthusiasm for the project has quietly cooled since a budget review in March. The contextual knowledge that lives in institutional memory and professional judgment is irreplaceably human. The machine does the calculating. The human leader must do the thinking.

In that earlier post we explored Taleb's Narrative Fallacy, the human tendency to construct a satisfying story from incomplete evidence. An LLM does not correct for the Narrative Fallacy; it amplifies it. An LLM will produce a coherent, well-structured narrative built from whatever data you provide, regardless of whether that data was the right data, collected at the right time, interpreted with the right context. The Narrative Fallacy used to require human effort to construct. Someone had to sit with a spreadsheet, notice a pattern, and build a story around it. That effort was itself a speed limit. It gave colleagues time to question, challenge, and redirect. Now the narrative can be generated in thirty seconds. The story arrives faster than the scrutiny.

The methodology this series has built (Choose, Collect, Analyze, Present) is the discipline that prevents the machine from building a beautiful story that happens to be wrong. The hypothesis comes first. The minimum set keeps the data focused. The timing keeps it relevant. The interpretation keeps it human. Without these steps, the LLM is the autonomous car without a destination, and the faster it drives, the further it goes from where you ought to be.

Analytics as an Act of Respect

When you Choose the minimum set of metrics, you are deciding that the people you survey deserve to have their time respected. Every unnecessary survey question is a withdrawal from the trust between the organization and the people doing the work. When you Collect the data at the right moment, you are ensuring their input can still influence the outcome rather than merely documenting what already happened. When you Analyze with the Interpreter's Advantage, you are insisting that people are more than their data points, that the number on the spreadsheet represents a human being in a specific situation with a specific history. When you Present through visual wayfinding, you are giving leaders the clarity they need to act rather than defer.

This is one of our Core Four Philosophies in practice: Leading change intentionally is simply a gesture of respect. The methodology is how that respect becomes operational. Every shortcut, every survey sent without a hypothesis, every dashboard built from unexamined data, every AI-generated analysis accepted without interpretation is a small withdrawal from the trust between the organization and its people. 

Consider the origin of the three-tiered metrics framework that has run through this series. Roxanne did not develop Self-Reported, Observable, and Existing Company Metrics because it was academically interesting. She developed it because she lived through the consequences of relying solely on surveys. The readiness scores showed green. The surveys said the change was on track. Then an executive called in a panic because a peer had escalated a multimillion-dollar concern to the CEO, and nothing in the survey data had signaled the trouble. The Observable data (the declining attendance and the quiet withdrawal of participation) told the story that the surveys missed. 

That experience taught Roxanne something that became foundational to our approach. Relying exclusively on surveys was not just analytically incomplete; it was burdening people with questions the organization had no plan to act on. The three-tiered approach was itself a gesture of respect.  It was a way to measure the change without treating every employee as a data extraction point. 

The Invitation to Joy

An LLM can hand you a network map and show you exactly who is isolated on the periphery. It can surface the clusters, calculate the betweenness centrality, and identify the hidden influencers who connect groups that would otherwise be siloed. The computation is remarkable. It would have taken a team of skilled analysts hours to produce what the LLM delivers in minutes.
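The computation described here is worth demystifying. The sketch below implements Brandes' betweenness-centrality algorithm over a toy collaboration network; every name and edge is invented for illustration, not drawn from real survey data.

```python
# A minimal sketch of the network computation described above:
# Brandes' betweenness-centrality algorithm on a toy network.
# All names and edges are illustrative, not real survey data.
from collections import deque

edges = [
    ("Ana", "Ben"), ("Ben", "Cam"), ("Cam", "Ana"),  # one tight cluster
    ("Dee", "Eli"), ("Eli", "Fay"), ("Fay", "Dee"),  # a second cluster
    ("Cam", "Dee"),                                  # Cam and Dee bridge the groups
    ("Gus", "Ana"),                                  # Gus sits on the periphery
]

# Build an undirected adjacency map.
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def betweenness(adj):
    """Brandes' algorithm for unweighted, undirected graphs."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:                                # BFS outward from s
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                # accumulate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: c / 2 for v, c in bc.items()}        # undirected: halve

centrality = betweenness(adj)
bridges = sorted(centrality, key=centrality.get, reverse=True)[:2]
periphery = [v for v, nbrs in adj.items() if len(nbrs) == 1]

print("bridges:", bridges)      # the hidden connectors between clusters
print("periphery:", periphery)  # isolated on the edge of the network
```

On this toy network the computation surfaces Cam and Dee as the bridges and Gus as the peripheral member; the calculation is mechanical, which is exactly the point of the paragraphs above.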

But the AI cannot make the invitation to bring someone in. It cannot extend trust. It cannot offer belonging. It cannot create the conditions where someone feels safe enough to commit to a change they did not choose.

There is a difference between calculation and leadership.

Everything in this series, the hypothesis that gave your inquiry direction, the minimum set that kept you honest, the timing that kept you relevant, the interpretation that kept you human, the visualizations that kept you clear, all of it leads to this moment: the moment when a leader looks at the data, sees a person behind the number, and chooses to act, because the leader understands what the number means and cares about the person it represents.

Joy at Work is a practice and the result of how you lead. Every one of the 10 Dimensions of Joy at Work begins with a willingness, a willingness that requires an invitation. The LLM can show you where the invitation is needed so the leader can make it.

That is what this series has been about: Better decisions, made by better-prepared leaders, in service of the people those decisions affect. For change to happen, the leader must change first. A leader who follows the methodology is equipped to do something no algorithm can: lead change in a way that respects the people going through it, and make work part of a life well-lived. Joy at Work.