When AI Answers the Wrong Question Perfectly
Eliza Helweg-Larsen

Co-founder, Chief Creative Officer, Andromeda Simulations International

January 9, 2026

AI is very good at answering questions. That is both its strength and its risk.

When organizations talk about using AI, they often focus on speed, accuracy, and scale. AI can process more data, faster, than any person. It can forecast, rank options, and generate recommendations with impressive precision. All of that assumes something important: that the question being asked is the right one.

AI does not decide what matters. It does not decide which tradeoffs are worth making. It does not decide how the business actually works. It takes the question it is given and answers it as well as it can.

If the question is poorly framed, AI will still answer it. And it will do so confidently.

This is where business acumen becomes more important, not less. Someone still has to decide what problem is being solved. Someone has to understand how a decision connects to revenue, cost, cash, risk, and longer-term consequences. Without that context, AI can deliver very good answers to very bad questions.

Example 1: Hiring at Amazon

Amazon experimented with an AI system to help screen job candidates using historical hiring data.

The question AI answered well:
Who looks most like candidates we hired successfully in the past?

The question it did not ask:
Do those past hiring patterns reflect the talent we want going forward?


The system learned past hiring patterns accurately, including a bias against resumes associated with women, and Amazon eventually abandoned it.


This shows up in subtle ways. A system improves margin without noticing the effect on cash. A recommendation improves local efficiency while creating problems elsewhere. A forecast looks right on paper but ignores how customers or suppliers actually behave. These are not failures of analysis. They are failures of framing.

I was reminded of this by an earlier experience with Economic Value Added (EVA). Years ago, I worked with a company that adopted EVA as a decision framework. Managers were taught how to make EVA-positive decisions. What they were not taught, at least not well, was where to find the data that fed those calculations or how to read it once they did.

In one case, decisions began to pile up on a manager's desk because she could not, and would not, make them. The decision rule existed. The understanding did not. That is my recollection, not a formal study, but it stayed with me.

AI can create a different version of the same problem. Instead of too little information, there is too much. Instead of no answers, there are many. But without shared understanding of how the business works, those answers still fail to support confident decisions.

AI assumes clarity. Organizations often do not have it.

In many cases, the hardest part of a decision is not choosing among options. It is deciding what to pay attention to in the first place. That work has always required judgment. AI does not replace it. It exposes it.

Example 2: Pricing and Promotion at Target and Other Large Retailers

Retailers have used AI models to optimize pricing and promotions based on historical sales data.

The question AI answered well:
What pricing patterns maximized sales under previous conditions?

The question it did not ask:
How will customers behave when conditions change?


When supply chains shifted and inflation rose, models built on stable history produced confident but unreliable recommendations.


As AI systems are embedded more deeply into business processes, the cost of poor framing increases. Decisions move faster. Effects spread further. Local assumptions scale across the organization. When those assumptions are wrong, the impact is larger and harder to unwind.

Example 3: Zillow Offers and Automated Pricing

Zillow Offers was the home-buying arm of Zillow, created to buy and resell homes using automated pricing models based on recent market data.

What the models optimized:
Estimated home values using historical sales and trends.

What they did not address:
How pricing errors would compound when applied at scale and under rapidly changing market conditions.


When housing markets shifted, small pricing gaps added up across thousands of purchases. Zillow shut down the Zillow Offers business in 2021 after recording significant losses, while the rest of the company continued to operate.

Based on public reporting about Zillow Offers.


This does not mean AI is misguided or dangerous. It means it is powerful in a very specific way. AI is strong at execution once direction is set. It is far less helpful at deciding what direction should be.

That responsibility still sits with people. It depends on understanding how the business makes money, where constraints actually lie, and which tradeoffs matter in the current context. That is business acumen. It is not replaced by AI. It becomes more visible because of it.

When AI answers the wrong question perfectly, the problem is not the answer. It is the question. Fixing that problem still requires something AI does not provide: judgment.