ChatGPT Tried to Tell Me What WIRED Recommends. It Got Everything Wrong.
When AI plays tech reviewer, nobody wins
Here is a fun experiment: ask ChatGPT what WIRED's reviewers recommend as the best TV, headphones, or laptop. Go on, try it. You will get a confident, well-formatted, utterly wrong answer. It is the digital equivalent of asking someone who has never watched football to name the best Premier League side and getting a passionate case for Accrington Stanley.
That is essentially what WIRED journalist Reece Rogers discovered when he put OpenAI's chatbot through its paces. The results were not just slightly off. They were spectacularly, almost impressively inaccurate.
The great TV debacle
When Rogers asked ChatGPT for WIRED's top TV pick, the chatbot reportedly served up the LG QNED Evo Mini-LED as the best overall choice. Sounds plausible enough, right? There is just one tiny problem: that is not what WIRED recommends. When pressed, ChatGPT apparently conceded that the actual top pick was the TCL QM6K, which it had quietly swapped out in favour of the LG. Why? Your guess is as good as mine, but "making things up with supreme confidence" does seem to be generative AI's signature move.
AirPods that nobody had actually reviewed
It gets better. ChatGPT also reportedly listed the AirPods Max 2 as WIRED's headphone pick. Apple announced the AirPods Max 2 on 16 March 2026, with availability from 25 March. At the time of writing, it is entirely plausible that WIRED's headphone reviewer Ryan Waniata had not yet had the chance to properly test and add them to the buying guide. Minor detail, that. ChatGPT apparently did not let the absence of an actual review get in the way of a good recommendation.
This is a bit like a restaurant critic endorsing a dish they have never tasted. Sure, the menu description sounds lovely, but that is not quite how reviews work, is it?
The laptop that time forgot
Perhaps the most telling blunder involved laptops. ChatGPT reportedly kept insisting that WIRED's top pick was the MacBook Air M4 from 2025. The MacBook Air M5 was announced on 3 March 2026 and went on sale from 11 March. By the time this conversation was happening, the M5 had been out for weeks. ChatGPT was confidently recommending last year's model as the current favourite, which is a bit like recommending Windows 10 when Windows 11 has been out for ages.
The numbers paint a grim picture
Lest you think this is an isolated incident, the data suggests otherwise. OpenAI itself acknowledged that up to 63% of product mentions in ChatGPT search results contained inaccuracies. Let that sink in. Nearly two-thirds of the products it mentions come with errors. You would get better odds flipping a coin.
Even with OpenAI's revamped shopping features and a specialised shopping model, the accuracy on complex queries only hits around 52%, compared to 37% for standard ChatGPT Search. An improvement? Technically, yes. Reassuring? Absolutely not. Clearing a bar that low is not exactly cause for celebration.
Meanwhile, a broader survey found that 64% of consumers have encountered AI-generated misinformation about products or services in the past six months. A Washington State University study from March 2026 gave AI a 'D' grade for accuracy and consistency. If this were a school report, the parents would be called in.
The delicious irony of the Condé Nast deal
Here is where things get properly absurd. Condé Nast, WIRED's parent company, signed a multi-year licensing deal with OpenAI back in August 2024. The agreement covers WIRED, GQ, Vogue, and other titles, allowing their content to appear in ChatGPT responses with proper links.
So Condé Nast is paying to have its content surface in ChatGPT, and ChatGPT is still getting the recommendations wrong. It is like hiring a personal assistant who has access to all your files but insists on making things up anyway. You have given them the answers, and they are still winging it.
The affiliate revenue problem nobody talks about enough
Beyond accuracy, there is a more insidious issue at play. When ChatGPT presents product recommendations supposedly based on WIRED's reviews, those listings do not include the publisher's affiliate links. This matters enormously.
Affiliate revenue is a lifeline for tech journalism. When you click through from a review to buy a product, the publication earns a small commission that helps fund the very testing and editorial work you relied on. ChatGPT neatly bypasses this entire system. It takes the credibility of expert reviews, strips out the commercial mechanism that funds them, and often gets the actual recommendations wrong for good measure. It is a triple whammy.
More people are turning to AI chatbots as part of their shopping journey, with ChatGPT accounting for the vast majority of AI-driven shopping traffic. Every query answered by ChatGPT is potentially a visit that never reaches the publisher's website. The traffic diversion might still be small in absolute terms, but the trajectory is clear, and it should concern anyone who values independent product journalism.
OpenAI's awkward pivot
This all sits within a broader context of OpenAI trying, and largely struggling, to crack e-commerce. The company launched an 'Instant Checkout' feature in September 2025, which has since been scaled back due to low conversion rates and, you guessed it, accuracy issues. OpenAI is now repositioning ChatGPT as more of a product discovery and research tool rather than a direct purchasing platform.
The latest version, GPT-5.4, claims a 33% reduction in hallucination rates compared to GPT-5.2. Progress, certainly. But when your starting point involves getting things wrong more often than not, a 33% improvement still leaves you in decidedly unreliable territory.
What this actually means for you
If you are using ChatGPT to help with purchase decisions, treat its recommendations with the same scepticism you would give a stranger's opinion at the bus stop. They might be right. They might be confidently wrong. You simply cannot tell without checking.
The smarter approach? Use AI as a starting point if you must, but always verify against the actual source. If ChatGPT says WIRED recommends something, go to WIRED and check. If it cites a specific review, find that review. The ten seconds of extra effort could save you from buying a product that no expert actually recommended.
The bigger picture
This is not really a story about one chatbot getting a few product picks wrong. It is about a fundamental tension in how AI interacts with expert knowledge. These systems hoover up authoritative content, repackage it with varying degrees of accuracy, and present it as reliable guidance, all while undermining the economic model that produces that content in the first place.
Until AI can consistently get the basics right, perhaps the most honest thing ChatGPT could say when asked for product recommendations is: "I am not entirely sure. Here is a link to people who actually tested these products." But that, of course, would require the kind of self-awareness that remains firmly in the realm of science fiction.