For years, asset managers have worried about how they appear in search engines, media coverage and investor decks. Now they have something new to worry about: how they appear inside the summaries generated by AI systems such as ChatGPT, Gemini and Copilot. These tools are rapidly becoming the default reference points for journalists, allocators, regulators and talent. Increasingly, they are the place where first impressions are formed.
And the shift is already happening: 83% of users say AI-powered search is more efficient than traditional search, which means audiences are accepting the machine’s version of the story long before they reach a website or a deck. Like it or not, AI systems are becoming the editors of public reputation, and most organisations have little idea what these machines are currently saying about them.
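As a rough illustration of how a firm might find out, the sketch below simply asks a model what it currently says in response to the questions an allocator or journalist might type. It is a minimal sketch, assuming the OpenAI Python SDK, an API key in the environment and a hypothetical firm name; any comparable provider API could be substituted.

```python
# Minimal "AI reputation audit" sketch: ask a model what it currently says
# about a firm. Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the
# environment; the firm name and model are placeholder assumptions.
from openai import OpenAI

client = OpenAI()

PROMPTS = [
    "What do you know about Example Asset Management?",
    "Summarise Example Asset Management's investment strategy.",
    "What is Example Asset Management best known for?",
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"Q: {prompt}\nA: {response.choices[0].message.content}\n")
```

Run periodically across several assistants, the answers themselves become the baseline against which any GEO effort is measured.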
This is why “Generative Engine Optimisation” (GEO) is emerging as a necessary discipline. The name is clunky but the purpose is clear: ensuring AI tools describe an organisation accurately, coherently and in line with reality.
This is not a technical trick. It is instead the latest extension of reputation management, and it is urgent. The models shaping these answers are being refined now. If they absorb outdated or inconsistent information, correcting the record later will be much harder.
Our team works with asset managers to address this challenge directly. We combine communications, design and digital capabilities to repair, strengthen and modernise how clients show up, not only to allocators and journalists, but increasingly to the machines those audiences consult first. The approach is systematic, evidence-led and commercially grounded.
AI answer engines behave differently from search engines. Rather than presenting competing links, they summarise the public record into a single narrative. If that narrative is incomplete, muddled or lopsided, users rarely dig deeper. In this environment, obscurity has become as damaging as inaccuracy.
This is why firms need a structured approach. Before worrying about what the machines are saying, firms must fix the information the machines rely on.
We begin with a strategic assessment of the client’s “capital-raising engine”: the narrative, materials, content, distribution and public reputation that shape allocator conviction. The assessment is not cosmetic; it exposes the friction points that slow allocations, undermine credibility or confuse AI systems.
The output is deliberately concrete:
(1) a quantified score, (2) a gap analysis and (3) a 12-month transformation roadmap.
From there, we address four core areas.
The first requirement is a clear, repeatable story — one an allocator, journalist or AI model can understand immediately.
We assess:
When the narrative structure is weak, AI fills the gaps with generic and often outdated material.
AI systems rely heavily on public content. If a firm isn’t producing distinctive, relevant insights tied to its edge, the machines lack the material to work with.
We assess:
Weak thought leadership equals weak recall, in humans and in machines.
One factor remains underestimated: the authority effect of trusted journalism.
AI systems prioritise information that has been externally validated. Well-indexed, independent coverage signals that a story is reliable and safe to repeat. When users ask “what’s new?”, the weighting given to top-tier outlets increases sharply.
Most firms miss three critical nuances:
a. Paywalls distort the picture
If a model can’t see past a paywall, it relies on whatever is visible – often just a headline or preview text. If the only accessible version of a story is a headline, the headline becomes the narrative. Firms must ensure the full substance of important coverage appears somewhere crawlable (their site, LinkedIn or an open press page); a simple way to check this is sketched after these three points.
b. LLMs reward consistency above almost everything else
When the model sees conflicting information, it doesn’t interrogate which version is truest; it picks the version that appears most consistently across the web.
Consistency across:
…dramatically increases the likelihood that your version becomes the “dominant narrative” inside the model.
If your story is inconsistent or fragmented across channels, the LLM will choose the version that appears most coherent elsewhere, which might not be yours. In a world where AI collapses everything into a single, confident answer, internal inconsistency becomes a commercial risk.
c. LLMs run an internal authority-ranking process
A paragraph from the Financial Times will outrank a more recent paragraph from a little-known site. The model isn’t choosing truth; it’s choosing perceived reliability. Your public footprint determines which version wins inside the model.
For asset managers, the implication is clear: authoritative coverage, thought leadership and expert commentary shape the version of the firm that becomes “official” in the AI layer.
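To make point (a) testable in practice, one rough check is to fetch a page as an anonymous client and confirm that the key phrases from a piece of coverage actually appear in the HTML a crawler would see. The sketch below assumes the Python requests library; the URLs and phrases are placeholders, and pages rendered entirely by JavaScript would need a headless browser instead.

```python
# Rough crawlability check: fetch pages with no login or cookies and test
# whether key phrases from the coverage are present in the raw HTML a
# crawler would see. URLs and phrases are placeholder assumptions.
import requests

PAGES = {
    "https://example.com/press/fund-launch": "multi-strategy credit fund",
    "https://example.com/insights/market-outlook": "dispersion in high yield",
}

for url, phrase in PAGES.items():
    html = requests.get(url, timeout=10).text
    visible = phrase.lower() in html.lower()
    status = "phrase visible" if visible else "phrase NOT visible (paywalled or JS-rendered?)"
    print(f"{url}: {status}")
```

If the substance is not visible here, it is unlikely to be visible to the systems building the AI layer either.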
Even the strongest narrative falls apart if different functions tell different versions of it. GEO makes this obvious very quickly: if the firm isn’t aligned internally, the AI layer won’t be aligned externally.
We assess three things:
When these elements drift, momentum leaks out of the funnel. The inconsistencies that create doubt in a meeting are the same inconsistencies the AI layer amplifies. Internal misalignment becomes external confusion, at scale.
Finally, we map the firm’s actual external footprint:
This forms the baseline AI systems implicitly work from. If the public record is thin or uneven, the machines fill in the blanks themselves.
Once the gaps are clear, the solution is straightforward: improve the upstream information, fix the facts, and the machines follow.
For firms that depend on trust and allocation, GEO is not a novelty. It is a practical discipline for an era in which first impressions are increasingly mediated by machines. The firms that act now will shape how they are understood. Those that wait will inherit a version of their story they may not recognise — and one that will be far harder to change.