When AI Answers the Wrong Question Perfectly

AI is very good at answering questions. That is both its strength and its risk.
When organizations talk about using AI, they often focus on speed, accuracy, and scale. AI can process more data, faster, than any person. It can forecast, rank options, and generate recommendations with impressive precision. All of that assumes something important: that the question being asked is the right one.
AI does not decide what matters. It does not decide which tradeoffs are worth making. It does not decide how the business actually works. It takes the question it is given and answers it as well as it can.
If the question is poorly framed, AI will still answer it. And it will do so confidently.
This is where business acumen becomes more important, not less. Someone still has to decide what problem is being solved. Someone has to understand how a decision connects to revenue, cost, cash, risk, and longer-term consequences. Without that context, AI can deliver very good answers to very bad questions.
This shows up in subtle ways. A system improves margin without noticing the effect on cash. A recommendation improves local efficiency while creating problems elsewhere. A forecast looks right on paper but ignores how customers or suppliers actually behave. These are not failures of analysis. They are failures of framing.
I was reminded of this by an earlier experience with Economic Value Added (EVA). Years ago, I worked with a company that adopted EVA as a decision framework. Managers were taught how to make EVA-positive decisions. What they were not taught, at least not well, was where to find the data that fed those calculations or how to read it once they did.
In one case, decisions began to pile up on a manager's desk because she could not, and would not, make them. The decision rule existed. The understanding did not. That is my recollection, not a formal study, but it stayed with me.
AI can create a different version of the same problem. Instead of too little information, there is too much. Instead of no answers, there are many. But without shared understanding of how the business works, those answers still fail to support confident decisions.
AI assumes clarity. Organizations often do not have it.
In many cases, the hardest part of a decision is not choosing among options. It is deciding what to pay attention to in the first place. That work has always required judgment. AI does not replace it. It exposes it.
As AI systems are embedded more deeply into business processes, the cost of poor framing increases. Decisions move faster. Effects spread further. Local assumptions scale across the organization. When those assumptions are wrong, the impact is larger and harder to unwind.
This does not mean AI is misguided or dangerous. It means it is powerful in a very specific way. AI is strong at execution once direction is set. It is far less helpful at deciding what direction should be.
That responsibility still sits with people. It depends on understanding how the business makes money, where constraints actually lie, and which tradeoffs matter in the current context. That is business acumen. It is not replaced by AI. It becomes more visible because of it.
When AI answers the wrong question perfectly, the problem is not the answer. It is the question. Fixing that problem still requires something AI does not provide: judgment.
