Artificial intelligence is reshaping how organisations are perceived. Despite its patchy access to primary sources, the opaque way it sifts information and its tendency to fill gaps ‘creatively’, AI is quietly becoming a major influence on how brands are regarded. We spoke to people who know: Andrew Griffith MP, the UK’s shadow Business & Trade minister; Roa Powell from the IPPR; Matt Rogerson, the Financial Times’ head of public policy; and economist Roger Bootle. They tore into the uncomfortable reality that companies and brands are now being defined by systems they neither control nor fully understand.
But there is no going back. More than a quarter of adults now use generative AI every week to seek information, and the figure climbs to 40% among younger adults. Every one of those interactions is a moment where an AI model makes silent judgements about a brand, based on sources that are often narrow, outdated, unexpected or simply wrong. Large Language Models (LLMs) rely heavily on licensed news (where the model’s developer has a commercial agreement to access information), corporate websites and public platforms such as Wikipedia and Reddit.
When AI gets you wrong
But they also surface older, unlicensed or obscure material with surprising ease. One example struck a nerve: when asked about Marks & Spencer, ChatGPT cited a 17‑year‑old Guardian article alongside reporting from the Scottish Sun. Out‑of‑date reporting quietly becomes authoritative fact if it sits in the right corner of the internet. It means that page 196 of a company’s annual report from years ago is surfaced as easily as the Chairman’s recent letter on page 3.
Sources shaping reputations
This opacity makes AI-mediated reputation risky. LLMs work by predicting the most statistically likely next word; accuracy is not a primary objective. If a model draws on old content or an inaccurately edited Wikipedia page, it can present a false version of a company with total confidence. As Anna Fishlock, H/Advisors Head of Digital, noted, “this then becomes the first impression formed by people who matter: journalists on deadline, investors scanning a sector, candidates deciding whether to apply for a job.” And because the systems don’t cite everything clearly, brands may never know which sources are shaping perceptions.
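The mechanism described here can be sketched in a few lines of illustrative code. This is a toy, not a real model: the words and probabilities below are invented, and real LLMs choose from tens of thousands of tokens using learned weights. The point it demonstrates is that the system emits whichever continuation is statistically most likely, whether or not it is true.

```python
# Toy illustration of next-word prediction (hypothetical probabilities,
# not a real LLM). The model has no notion of truth, only of likelihood.
next_word_probs = {
    "profitable": 0.55,   # the most likely continuation wins,
    "insolvent": 0.30,    # regardless of whether it is accurate
    "expanding": 0.15,
}

def next_word(probs):
    # Greedy decoding: return the statistically most likely word.
    return max(probs, key=probs.get)

print("The company is", next_word(next_word_probs))
```

If stale or inaccurate sources have shifted those probabilities, the confident-sounding sentence the model produces is simply wrong, with no signal to the reader that anything was guessed.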
From distortion to invention
The risks are not theoretical. The FT’s Matt Rogerson cited the example of an LLM patching together information from several sources – and its own invention – to generate a share BUY recommendation purporting to come from the Investors Chronicle. The Investors Chronicle never published anything of the sort, but it looked plausible to an uncritical reader. Rogerson frequently sees investment views attributed to real FT journalists that incorrectly meld information and events from published commentary. And LLMs can amplify deepfake scams featuring well‑known columnists.
The regulatory gap
In every case the reputational damage flows not to the AI companies but to the individuals and brands whose names are borrowed. “Whether you like it or not,” Matt Rogerson stated, “you’re in the model.”
When conversation turned to regulation, the immediate verdict was bleak. Former City Minister Andrew Griffith drew comparisons with social media, where lawmakers are still wrestling with issues two decades after the platforms emerged. Given the pace of AI development, he argued, regulators will not be able to offer meaningful day‑to‑day protection for organisations any time soon. The risks will have to be managed by institutions themselves, not waited out in the hope of government intervention. In reality, government intervention will come eventually, but only after major AI‑fuelled crises have affected people.
Dwindling information ecosystem
The fragility of the information ecosystem adds a further complication. Many news organisations, including the BBC, are now blocking AI scrapers because of unresolved questions around payment. As the IPPR’s Roa Powell pointed out, neither Copilot nor Google Gemini draws from the BBC (the Guardian is ChatGPT’s favourite source, by a huge margin). As those walls go up, models rely on fewer sources, often defaulting to whichever outlets have signed licensing deals or are available to scrape. This creates an imbalance in which a handful of publishers become disproportionately influential, while the absence of others leaves AI models blind to whole areas of quality journalism. The vacuum risks being exploited by propagandists who seed narratives designed to appear in AI outputs.
What organisations can do about it
Organisations must understand how they appear in AI systems today. Small inaccuracies, especially on high‑visibility pages like Wikipedia, can snowball into systemic misrepresentation. Keeping corporate websites updated, investing in credible media coverage and ensuring accurate, fresh information is available in structured formats all increase the likelihood that AI tools will surface the right material.
Everyone from bosses to marketing teams must understand how AI sources information, how it distorts it, and how those distortions can be spotted and corrected. As Roger Bootle remarked, the right response is not to try to predict where AI will land, but to develop the capability to monitor, interrogate and adapt to it – to “invest in radars.” It was Bootle who raised a note of optimism about AI: “It is imperfect and will disrupt plenty of areas of employment. But like all other technological advances, it will also create new areas of activity and wealth generation. After all, when the spreadsheet arrived some thought that was the knell for accountants. But in the years after its adoption, the number of accountants in the US surged.”
Andrew Griffith reminded us of the prophets of gloom who believed broadcast news would spiral out of control when ITV was launched in 1955, competing with the hallowed BBC. “In fact, the system adjusted and soon found a way to work, and work well.”
Organisations that invest in understanding how they appear in AI systems today will be better placed than those who hold back. Regulation will come eventually, but those who fare best will not be the ones who waited for it.