Q&A session – the director’s cut
AI is shaking up the world of SEO — and things are changing fast. That was the message loud and clear during our 19 June webinar, “The Future of SEO – How brands in the Benelux are successfully leveraging AI Search”. Our speakers, Clarissa Filius, SEO Lead Netherlands at iO, and Jens Michiels, SEO Lead Belgium, shared how AI-driven search tools — think ChatGPT, Perplexity and Google SGE — are rewriting the rules for search marketing.
They covered everything from Generative Engine Optimisation (GEO) to the nuances of local vs conversational search, demonstrating that traditional SEO tactics alone won’t cut it anymore. The key insight? To stay visible, brands need to rethink keyword strategies and embrace AI-first content approaches.
Want to know how future-proof your current SEO strategy is and how you can optimise it? Start your 360° SEO transformation today.
All your questions answered
It came as no surprise that the topic sparked a wave of interesting hands-on questions from the audience. Unfortunately, we didn’t have the time to answer them all during the Q&A session after the webinar. But we feel they deserve more than just a passing mention, so we’ve taken the time to answer them in full. Whether you watched live or just want the lowdown on how to make AI search work for your brand, keep reading. We promise it will be worth your while.
Q&A lowdown
Is it true that AI crawlers can't actually read PDFs, and is this why we should steer clear of JavaScript and PDFs?
That’s right. Most AI crawlers work differently from traditional search engine crawlers, and virtually all of them struggle to access PDF content through web searches or URL inputs. Tools like ChatGPT can only read PDFs if you upload them directly. So if you've got important content buried in a PDF on your website, the AI simply won't see it. Same issue with JavaScript-generated content – the crawler can't render it, so it's invisible to AI systems.
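If you want a quick sanity check, a minimal script along these lines fetches the raw HTML of a page – roughly what a non-rendering crawler sees. The URL and phrase are placeholders; swap in your own page and a sentence that should be visible there:

```python
# Minimal sketch: fetch the raw HTML of a page, which is roughly what a
# non-rendering AI crawler sees (no JavaScript is executed here).
# The URL and key phrase are hypothetical placeholders.
import requests

url = "https://www.example.com/pricing"
key_phrase = "free returns within 30 days"

raw_html = requests.get(url, timeout=10).text

if key_phrase.lower() in raw_html.lower():
    print("Found in raw HTML – visible to non-rendering crawlers.")
else:
    print("Not in raw HTML – likely injected by JavaScript or buried in a PDF.")
```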
What's the deal with social media platforms like YouTube and TikTok when it comes to GEO optimisation? Do AI systems actually trust them as credible sources?
Social media content will likely get pulled into Google's AI mode, just like it already appears in regular Google searches with video snippets. But for GEO specifically, we reckon the impact will be fairly limited. LinkedIn does pop up as a source occasionally, but strong social media performance won't necessarily get your brand featured in AI search results. That said, we see social search as part of a holistic 360° SEO strategy – so YouTube, TikTok, Pinterest, etc. If your audience is using these platforms, you should be there too.
Got any practical tips for structuring content that plays nicely with knowledge graphs?
Yes: focus on clear headings, smart internal linking, structured data, and consistent naming conventions. Build topic clusters and create glossary pages to connect related content. Keep everything modular with FAQs and clear definitions. AI often pulls specific chunks of content, so make your key info concise and easy to extract. Strong E-E-A-T signals will boost your credibility and visibility too.
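To make the structured data point concrete, here’s a minimal Schema.org FAQPage snippet in JSON-LD – the question and answer text are placeholders, so use your own content:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is Generative Engine Optimisation (GEO)?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "GEO is the practice of structuring and optimising content so that AI-driven search tools can find, understand and cite it."
    }
  }]
}
</script>
```

Markup like this gives each question-and-answer pair exactly the kind of self-contained, extractable chunk that AI systems like to pull.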
What's the difference between AI bots that crawl content and AI models that use that content for training purposes?
They serve completely different roles. Crawlers (like those from search engines or data aggregators) scan websites to collect and index information for later retrieval – think search results. They don't actually "learn" from content in the machine learning sense. Training models, on the other hand, analyse massive datasets to learn patterns and relationships. This data becomes part of their internal statistical understanding, not as stored documents but as learned structures. So, in short: crawlers are all about content retrieval; training models use content to generalise and generate responses. Plus, crawlers update continuously whilst training happens periodically.
Are sitemaps still getting crawled by AI systems?
AI systems themselves don't use your sitemap directly. But don't bin it just yet – it's still crucial for getting your pages indexed. AI search tools rely on traditional web crawlers like Google and Bing as their foundation, and those search engines absolutely still need sitemaps to properly index your site.
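For reference, a bare-bones sitemap looks like this – the URLs and dates are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2025-06-19</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/ai-search-guide/</loc>
    <lastmod>2025-06-12</lastmod>
  </url>
</urlset>
```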
From a hosting perspective, how should we handle all this extra bot traffic? Some of our clients with loads of URLs (particularly on Magento) are seeing site crashes.
It’s a good idea to focus on crawl control and reducing server load. Use robots.txt to block low-value URLs, implement aggressive caching (Redis or Varnish), and get a CDN to offload traffic. Monitor your logs to spot and throttle non-essential bots using .htaccess or firewalls. Set up rate limiting to prevent server overload. For Magento specifically, optimise your indexing and caching to reduce dynamic load. Handle 404s efficiently with lightweight error pages and redirect common broken URLs. You can block AI bots like GPTBot or CCBot in robots.txt, but avoid doing this by default – you might prevent your content from appearing in AI-powered tools.
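As a rough starting point, a robots.txt along these lines covers both sides of that advice. The blocked paths are illustrative – tailor them to your own low-value URLs, and only block AI bots as a deliberate choice:

```
# Keep all crawlers away from low-value, crawl-heavy URLs (paths are illustrative)
User-agent: *
Disallow: /checkout/
Disallow: /search/
Disallow: /*?sort=

# Blocking specific AI bots is possible, but can keep your
# content out of AI-powered tools – use deliberately
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```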
Do large language models (LLMs) consider traditional search engine rankings, or do they work out their own rankings independently?
LLMs don't use traditional rankings directly, but they can tap into search engines like Google or Bing during retrieval (think ChatGPT with browsing). This means good rankings can boost your visibility and increase citation chances, especially with real-time search involved. Research shows high-ranking pages are more likely to get used, but interestingly, AI models cite pages outside the top 20 results more often than regular users do. This is probably because AI systems scan a broader range of sources for diversity and relevance beyond just ranking position.
Which curated lists do LLMs actually reference?
During LLM training, there are likely curated lists of well-known, trustworthy brands and websites. Think of it this way: when someone asks about fast food burgers, the system has lists containing brands like McDonald's, Burger King, KFC, etc. to pull decent answers from.
Is structured data (like Schema.org or GraphQL properties) still a best practice for LLM relevance?
Absolutely. Using structured data like Schema.org markup (JSON-LD) or explicit GraphQL properties remains best practice for LLM relevance. Modern models and retrieval systems are getting better at leveraging structured data as a clear knowledge layer, which improves accuracy, reduces hallucinations, and supports more precise reasoning. And that’s what we should be aiming for.
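As an illustration, a minimal Organization snippet in JSON-LD gives models a clean entity to latch onto. All names and URLs below are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-brand",
    "https://www.youtube.com/@examplebrand"
  ]
}
</script>
```

The sameAs links are what tie your brand entity to its profiles elsewhere, which helps retrieval systems disambiguate you from similarly named organisations.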
How do LLMs decide which websites to include in their results? Are there preferred domains?
It's a combination of methods. The main factors are training data (where curated lists of trustworthy brands already exist) and live search (currently via the Google and Bing APIs), where they prefer authoritative domains with strong E-E-A-T signals and up-to-date information.
Any insights on how LLMs are being used in the United States?
AI Search rolled out earlier in the US than in the EU, so there are already more GEO use cases and learnings available. We're seeing tools like query fan-out simulation and LLM keyword research emerging. There's also a reported 34.5% drop in click-through rates, showing how AI-generated answers are starting to change user behaviour with search results.
Do you have any specific strategies for the non-profit sector regarding AI search and GEO?
AI search and GEO can be very rewarding for non-profits – helping you appear in informational queries and providing reliable answers when users are seeking support, information, or deciding which cause to fund. It's crucial to monitor branded queries to understand how AI tools represent your organisation. We work with loads of non-profits and can share results and use cases in the coming months – feel free to reach out if you need help with that.
Will there be an overview of best-practice tools? There seem to be loads with overlapping features.
Here are the tools that we're aware of: Profound, Rankshift, Similarweb, Xfunnel, Semrush, and Otterly. There's no established best-practice toolset yet – they're all rapidly evolving and AI tracking features are still fairly new. Profound is the first to include AI search volumes, though it would be ideal if Google and OpenAI shared search volume data themselves. At iO, we create custom trackers and dashboards for AI visibility monitoring, making our clients less dependent on external (and expensive) tools.
Are there differences in predicted search and click volumes across different stages of the customer journey?
Definitely. We're already seeing changes in the awareness and consideration phases. Users are asking LLMs things they'd normally research themselves. (This area is still developing.)
We provide medical content requiring precision and context. We're worried AI might reinterpret or misrepresent this. How can we ensure accuracy?
You'll notice AI-generated answers often include disclaimers like "ChatGPT can make mistakes" or medical advice warnings. LLM answers typically combine multiple sources, so if your website gets used as a source, it means you're considered valuable and worth clicking through to. The fact you're being referenced is actually a positive signal.
What's your take on Google's Project Mariner – the AI feature that could do research, comparisons, and potentially even purchases? How might this affect the customer journey and marketing optimisation?
We expect this to take a while. We're only just starting to use AI in search, and before AI agents like Project Mariner can reliably handle comparisons or payments, there's loads of complex technical work needed. We don't see this becoming standard anytime soon. If it does get adopted, the main impact will be technical – businesses will need to make their websites AI-agent accessible for form interactions and accurate actions.
If AI provides direct answers without citing sources, why should we bother investing in informative SEO/GEO content? Aren't we just giving value away for free?
Good point. This is exactly why you need to be selective about which crawlers can visit your site. Most Generative Engines/LLMs use Google and Bing to discover content, so it's tricky to block all the ‘problematic’ platforms. But the popular ones like ChatGPT, Perplexity, and Gemini do show their sources, which can result in brand awareness and incoming traffic.
As a business with physical stores relying heavily on local SEO, any ideas for improving LLM visibility for queries like "best bedding stores in Amsterdam"?
When your business depends on local SEO, ensuring AI search tool visibility is crucial as search behaviour evolves. AI tools like ChatGPT and Google's AI Mode can show local results, business profiles, and mention brands in lists or recommendations. You want your business and location included in those results. Many traditional local SEO tactics still apply – strong reviews, consistent citations, and mentions in trusted local websites and content. At iO, we include local SEO as part of our 360° SEO approach for visibility across both traditional and AI-driven search.
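To support that, a LocalBusiness snippet in JSON-LD helps tie your brand to a physical location. The details below are placeholders – fill in your real store data:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Bedding Store",
  "url": "https://www.example.com/stores/amsterdam",
  "telephone": "+31 20 000 0000",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "Voorbeeldstraat 1",
    "addressLocality": "Amsterdam",
    "postalCode": "1012 AB",
    "addressCountry": "NL"
  },
  "openingHours": "Mo-Sa 09:00-18:00"
}
</script>
```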
Any forecasts on how Google Ads or other paid search models will evolve with these AI changes?
We see that Google is really experimenting with result positioning. We've seen SERPs starting with AI-generated answers, then organic results, then Google ads. We're monitoring this closely because Google clearly hasn't decided how they'll handle their SERP structure yet. We’re keeping our eyes peeled…
There's loads of buzz about "hacks" like repeatedly mentioning your company to ChatGPT or paying for listicle inclusion. Do these actually work?
No, these so-called "hacks" aren't reliable strategies. Repeatedly mentioning your brand in prompts has zero effect. Models like ChatGPT or Gemini aren't trained on user inputs. Retraining happens infrequently, is costly, and uses curated datasets from multiple trusted sources. Paying for listicle inclusion can sometimes help, but only if those listicles appear on authoritative websites that AI references. Many AI systems use retrieval-augmented generation (RAG), pulling real-time information from external sources. We recommend focusing on digital PR and placing high-quality content on respected domains – this is more likely to get picked up by AI systems and deliver consistent results.
Can you share an example of a client successfully transitioning from traditional SEO to AI search strategies?
We're still early in testing AI search strategies, and this isn't a shift away from traditional SEO. Instead, we're adding GEO on top of existing SEO efforts. For clients already performing well in traditional SEO, we've seen improved visibility in AI Overviews. We also apply content and technical optimisation strategies specifically for GEO visibility – structuring content for clarity, enhancing topical authority, and improving page performance. While we can't share specific client data here, we've seen measurable results across multiple projects. Feel free to reach out for more details or to discuss a few cases.
Does the shift to AI-driven search have a linear impact on sales performance?
Not really – the shift doesn't show a clearly linear impact on sales performance. Early indicators suggest more impressions, fewer clicks, and potentially stable or improved conversions, especially when recommended by AI. This might reflect more qualified, pre-nurtured traffic. However, the relationship is still evolving, and we don't fully understand all the dynamics yet. Effects likely vary by market, and longer-term outcomes remain a bit uncertain.
We noticed web traffic from US users dropping earlier than other regions. Could this be due to earlier LLM-based search rollout in the US?
Yes, exactly. The earlier US traffic drop is due to Google's AI Overviews launching in the US in May 2024. These features change user behaviour by answering queries directly in search results, reducing click-through needs. Google expanded AI Overviews to the UK, India, and Brazil in August 2024, then to over 100 countries by October 2024. Due to stricter regulations, EU rollout only began in March 2025 and was fully live by May 2025. This timeline explains why traffic patterns shifted earlier in the US.