Since the turn of the millennium, marketers have mastered the science of search engine optimization.
We learned the “rules” of ranking, the art of the backlink, and the rhythm of the algorithm. But the ground has shifted: the new discipline is generative engine optimization (GEO).
The era of the 10 blue links is giving way to the age of the single, synthesized answer, delivered by large language models (LLMs) that act as conversational partners.
The new challenge isn’t about ranking; it’s about reasoning. How do we ensure our brand is not just mentioned, but accurately understood and favorably represented by the ghost in the machine?
This question has ignited a new arms race, spawning a diverse ecosystem of tools built on different philosophies. Even the words to describe these tools are part of the battle: “GEO,” “GSE,” “AIO,” “AISEO,” or just more “SEO.” The list of abbreviations continues to grow.
But behind the tools, different philosophies and approaches are emerging. Understanding these philosophies is the first step toward moving from a reactive monitoring posture to a proactive strategy of influence.
School Of Thought 1: The Evolution Of Eavesdropping – Prompt-Based Visibility Monitoring
The most intuitive approach for many SEO professionals is an evolution of what we already know: tracking.
This category of tools essentially “eavesdrops” on LLMs by systematically testing them with a high volume of prompts to see what they say.
This school has three main branches:
The Vibe Coders
It is not hard, these days, to create a program that simply runs a prompt for you and stores the answer, and there are myriad weekend keyboard warriors with offerings.
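To show just how low the barrier is, here is a minimal sketch of such a tool. It assumes the openai Python package and an OPENAI_API_KEY environment variable; the model name, prompts, and output file are all illustrative choices, not anyone’s actual product:

```python
# Minimal prompt runner: send prompts to an LLM and store the answers.
# Model name, prompts, and output file are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "What are the best cloud storage providers for enterprise?",
    "Which brands lead in sustainable luxury travel?",
]

with open("responses.jsonl", "a") as out:
    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        record = {"prompt": prompt, "answer": response.choices[0].message.content}
        out.write(json.dumps(record) + "\n")
```

A loop like this, plus a thin dashboard, is essentially the whole product in some of these offerings.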
For some, this may be all you need, but the concern is that these tools have no defensible offering. If anyone can build one, what stops everyone from building their own?
The VC-Funded Mention Trackers
Tools like Peec.ai, TryProfound, and many more focus on measuring a brand’s “share of voice” within AI conversations.
They track how often a brand is cited in response to specific queries, often providing a percentage-based visibility score against competitors.
TryProfound adds another layer by analyzing hundreds of millions of user-AI interactions, attempting to map the questions people are asking, not just the answers they receive.
This approach provides valuable data on brand awareness and presence in real-world use cases.
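Under the hood, a percentage-based visibility score is simple arithmetic: count how often each brand appears across a set of stored responses and normalize. A toy sketch follows; the brand names and input file are hypothetical, and real trackers also resolve aliases, misspellings, and entity variants:

```python
# Toy share-of-voice: what fraction of total brand mentions does each
# brand hold across a file of stored LLM responses? Brand list and
# input file are hypothetical assumptions for illustration.
import json
from collections import Counter

brands = ["AcmeCloud", "BoxCorp", "StoreNow"]  # hypothetical brands
mentions = Counter()

with open("responses.jsonl") as f:
    for line in f:
        answer = json.loads(line)["answer"].lower()
        for brand in brands:
            if brand.lower() in answer:
                mentions[brand] += 1

total = sum(mentions.values()) or 1  # avoid division by zero
for brand in brands:
    print(f"{brand}: {mentions[brand] / total:.1%} share of voice")
```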
The Incumbents’ Pivot
The major players in SEO – Semrush, Ahrefs, seoClarity, Conductor – are rapidly augmenting their existing platforms. They are integrating AI tracking into their familiar, keyword-centric dashboards.
With features like Ahrefs’ Brand Radar or Semrush’s AI Toolkit, they allow marketers to track their brand’s visibility or mentions for their target keywords, but now within environments like Google’s AI Overviews, ChatGPT, or Perplexity.
This is a logical and powerful extension of their current offerings, allowing teams to manage SEO and GEO from a single hub.
The core value here is observational. It answers the question, “Are we being talked about?” However, it’s less effective at answering “Why?” or “How do we change the conversation?”
I have also done some maths on how many prompt responses a database might need to hold before its prompt volume becomes statistically useful, and (with the help of Claude) came up with a requirement of 1-5 billion prompt responses.
If that volume is achievable, it certainly has cost implications, which are already reflected in these tools’ pricing.
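The assumptions behind my estimate aren’t reproduced here, but the shape of the arithmetic is easy to sketch. Every input below is an illustrative assumption, not a published figure; the point is how quickly the multiplication compounds:

```python
# Back-of-envelope: how many stored responses does broad coverage imply?
# All inputs are illustrative assumptions, not published figures.
topics = 10_000              # distinct commercial topics tracked
prompt_variants = 50         # phrasings per topic
models = 5                   # LLMs/engines monitored
runs = 30                    # repeated samples to smooth non-determinism
refreshes_per_year = 12      # monthly re-sampling

responses = topics * prompt_variants * models * runs * refreshes_per_year
print(f"{responses:,} responses per year")  # 900,000,000 with these inputs
```

Even these toy inputs land at 900 million responses a year; slightly broader coverage pushes the requirement into the billions.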
School Of Thought 2: Shaping The Digital Soul – Foundational Knowledge Analysis
A more radical approach posits that tracking outputs is like trying to predict the weather by looking out the window. To truly have an effect, you must understand the underlying atmospheric systems.
This philosophy isn’t concerned with the output of any single prompt, but with the LLM’s foundational, internal “knowledge” about a brand and its relationship to the wider world.
GEO tools in this category, most notably Waikay.io and, increasingly, Conductor, operate on this deeper level. They work to map the LLM’s understanding of entities and concepts.
As an expert in Waikay’s methodology, I can detail the process, which provides the “clear bridge” from analysis to action:
1. It Starts With A Topic, Not A Keyword
The analysis begins with a broad business concept, such as “Cloud storage for enterprise” or “Sustainable luxury travel.”
2. Mapping The Knowledge Graph
Waikay uses its own proprietary Knowledge Graph and Named Entity Recognition (NER) algorithms to first understand the universe of entities related to that topic.
What are the key features, competing brands, influential people, and core concepts that define this space?
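Waikay’s knowledge graph and NER algorithms are proprietary, so the following is only a general illustration of the technique, using an off-the-shelf NER model (spaCy) and an invented topic snippet:

```python
# Illustrative entity extraction over a topic description using spaCy's
# off-the-shelf NER. Waikay's proprietary knowledge graph and NER
# algorithms will differ; this only shows the general technique.
# Setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
topic_text = (
    "Enterprise cloud storage is dominated by providers such as AWS, "
    "Microsoft Azure, and Google Cloud, alongside features like "
    "encryption at rest and SOC 2 compliance."
)

doc = nlp(topic_text)
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g., organizations and law/standard entities
```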
3. Auditing The LLM’s Brain
Using controlled API calls, it then queries the LLM to discover not just what it says, but what it knows.
Does the LLM associate your brand with the most important features of that topic? Does it understand your position relative to competitors? Does it harbor factual inaccuracies or confuse your brand with another?
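In practice, such an audit looks like deterministic probes against the model’s API rather than free-form chat. Here is a hedged sketch, not Waikay’s actual implementation; the brand, questions, and model name are all assumptions:

```python
# Illustrative knowledge probe: ask the model directly about entity
# associations at temperature 0, so answers reflect its base knowledge
# rather than sampling randomness. Brand, questions, and model name
# are assumptions; this is not Waikay's code.
from openai import OpenAI

client = OpenAI()
probes = [
    "Which customer segment is AcmeCloud best known for serving?",
    "Name AcmeCloud's closest competitors in enterprise cloud storage.",
    "Does AcmeCloud hold any well-known compliance certifications?",
]

for question in probes:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,        # suppress sampling noise between runs
        messages=[{"role": "user", "content": question}],
    )
    print(question, "->", response.choices[0].message.content)
```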
4. Generating An Action Plan
The output isn’t a dashboard of mentions; it’s a strategic roadmap.
For example, the analysis might reveal: “The LLM understands our competitor’s brand is for ‘enterprise clients,’ but sees our brand as ‘for small business,’ which is incorrect.”
The “clear bridge” is the resulting strategy: to develop and promote content (press releases, technical documentation, case studies) that explicitly and authoritatively forges the entity association between your brand and “enterprise clients.”
This approach aims to permanently upgrade the LLM’s core knowledge, making positive and accurate brand representation a natural outcome across a near-infinite number of future prompts, rather than just the ones being tracked.
The Intellectual Divide: Nuances And Necessary Critiques
An unbiased view requires acknowledging the trade-offs. Neither approach is a silver bullet.
The Prompt-Based method, for all its data, is inherently reactive. It can feel like playing a game of “whack-a-mole,” where you’re constantly chasing the outputs of a system whose internal logic remains a mystery.
The sheer scale of possible prompts means you can never truly have a complete picture.
Conversely, the Foundational approach is not without its own valid critiques:
- The Black Box Problem: Because the underlying data and methods are proprietary, the accuracy and methodology are not easily open to third-party scrutiny. Clients must trust that the tool’s definition of a topic’s entity-space is correct and comprehensive.
- The “Clean Room” Conundrum: This approach primarily uses APIs for its analysis. This has the significant advantage of removing the personalization biases that a logged-in user experiences, providing a look at the LLM’s “base” knowledge. However, it can also be a weakness. It may lose focus on the specific context of a target audience, whose conversational history and user data can and do lead to different, highly personalized AI outputs.
Conclusion: The Journey From Monitoring To Mastery
The emergence of these generative engine optimization tools signals a critical maturation in our industry.
We are moving beyond the simple question of “Did the AI mention us?” to the far more sophisticated and strategic question of “Does the AI understand us?”
Choosing a tool is less important than understanding the philosophy you’re buying into.
A reactive, monitoring strategy may be sufficient for some, but a proactive strategy of shaping the LLM’s core knowledge is where the durable competitive advantage will be forged.
The ultimate goal is not merely to track your brand’s reflection in the AI’s output, but to become an indispensable part of the AI’s digital soul.