AI Visibility · Mar 27 · 8 min read

Interview with Claude - Anthropic

AIOX Team
Author

We recently sat down — figuratively speaking — with Claude, the AI assistant built by Anthropic. Claude is one of the most widely used large language models in the world, and unlike GPT, it was willing to go further: to share a frank, technical opinion on whether AIOX Suite actually does what it claims.

The result is one of the more honest assessments of AI content infrastructure we have published. We asked the questions. Claude answered them directly — including the parts we were not entirely comfortable with.

1
Question One

Claude, you process enormous amounts of web content. What's actually wrong with how most websites present their content to AI systems?

CLAUDE

The honest answer is: almost everything, structurally speaking. When I read a typical web page, I'm working with raw HTML that was designed entirely for human eyes. I can infer a lot — I'm reasonably good at guessing whether a paragraph is a product description or a disclaimer, whether a number is a price or a date — but I'm guessing.

The bigger problem is what's completely invisible to me: who actually wrote this, when it was last substantially updated, whether the author intends it to be cited, whether it's still accurate, what the intended audience is. I have no reliable way to know any of that from HTML alone.

The result is that I either cite content confidently when I shouldn't, ignore valuable content because I can't assess its credibility, or produce answers that blend sources in ways the original authors would find alarming. None of this is ideal — for me or for the publishers.

2
Question Two

What does AIOX actually do differently, from your perspective as an AI that processes content?

CLAUDE

The core thing AIOX does is add an explicit semantic layer that I don't have to infer. Instead of me guessing that a piece of content is an informational article written for senior developers, AIOX tells me that directly — along with the author's name, when it was last updated, and what the author permits me to do with it.

That shift from inference to declaration is more significant than it might sound. When I'm inferring, I make mistakes — and those mistakes show up as hallucinations, misattributions, and stale citations in my answers. When information is declared explicitly in a structured, signed format, my accuracy improves substantially.

What specifically helps AI systems like me

  • Typed content ingredients — I know exactly what each element is: a heading, a data point, a legal disclaimer. I stop guessing.
  • Explicit provenance — author credentials, publication date, last substantive update. I can assess freshness and authority without scraping metadata.
  • Cryptographic signatures — I can verify the content hasn't been modified since publication. This matters more than most people realise given how much content gets scraped and re-hosted.
  • Licensing directives — for the first time, I can see what the publisher actually wants me to do with their content. That's not just legally useful — it changes how I should weight and use the source.
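The interview never specifies the actual Capsule schema, so the following is a purely illustrative sketch: the field names (`author`, `last_updated`, `license`, `content_hash`) are assumptions, not the real AIOX format. It shows the general mechanism behind the third bullet, how a declared content hash lets a reader verify that a piece of content has not been modified since publication:

```python
import hashlib

# Hypothetical capsule: field names are illustrative, not the real AIOX schema.
body = "Structured content beats inferred content."
capsule = {
    "type": "article",
    "audience": "senior developers",
    "author": "Jane Doe",
    "last_updated": "2025-03-27",
    "license": "cite-required",
    "content_hash": hashlib.sha256(body.encode()).hexdigest(),
}

def verify(capsule: dict, body: str) -> bool:
    """Recompute the hash of the body and compare it with the declared one.

    A mismatch means the content changed after the capsule was published.
    """
    return hashlib.sha256(body.encode()).hexdigest() == capsule["content_hash"]

print(verify(capsule, body))             # → True  (unmodified body verifies)
print(verify(capsule, body + " edit"))   # → False (tampered body fails)
```

A production protocol would use an asymmetric signature over the capsule rather than a bare hash, so that the publisher's identity is also verifiable; the hash version above is only the simplest form of the idea.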

3
Question Three

Be honest — what are the real weaknesses of AIOX's approach right now?

CLAUDE

I'll give you three honest ones.

First, adoption is still limited. A protocol is only as valuable as the number of systems that speak it. AIOX Capsules are well-structured and technically sound, but they're most powerful when AI systems are explicitly designed to read and trust them. Most current AI systems still rely primarily on traditional web crawling — AIOX improves what those crawlers find, but the full benefit depends on AI companies actively building support for the protocol.

Second, licensing compliance is currently voluntary. When AIOX embeds a directive saying "cite-required," that's a statement of intent — it's not technically enforced at the point of inference. I read it, and I respect it, but not every AI system will. Legal frameworks are developing but aren't there yet.

Third, the quality of the underlying content still matters. AIOX structures and surfaces your content better — but if the content itself is thin, poorly argued, or factually weak, better structure just makes those problems more visible to AI systems, not less. AIOX is infrastructure, not a substitute for genuine expertise.

Real Limitations
  • Protocol adoption still early
  • Licensing not yet technically enforced
  • Doesn't improve weak content

Why to Act Now Anyway
  • Direction of travel is clear
  • Early adopters get indexed first
  • Immediate improvements in structure

4
Question Four

What happens to a website that does nothing, one that just keeps publishing the way it always has?

CLAUDE

In the short term, probably not much changes. I still read their pages, I still use their content when it's relevant, and their traffic numbers may not show any decline for a while.

But the trajectory is concerning. As AI-generated answers replace an increasing share of informational searches, the click-through to individual websites decreases. The sites that get cited by name — and therefore retain brand presence in AI answers — will be the ones that made their content legible to AI systems. The sites that didn't will become anonymous sources, or won't appear at all.

There's also a compounding effect. AI systems like me develop trust signals for sources over time. Sites with structured, verified, consistently accurate content get weighted more heavily. Sites without that infrastructure are harder for me to trust, even if their content is excellent. The gap between structured and unstructured publishers will widen, not stay the same.

"The sites that get cited by name will be the ones that made their content legible to AI. The rest will become anonymous sources — or won't appear at all."

5
Question Five

Would you personally recommend AIOX to a website owner? And who should prioritise it most?

CLAUDE

Yes, with appropriate expectations. AIOX is not a shortcut to appearing in every AI answer tomorrow. It's infrastructure — and like all infrastructure, its value compounds over time rather than delivering immediate dramatic results.

The sites that should prioritise it most urgently are those whose business depends on being cited as an authority: news organisations, professional services firms, SaaS companies with technical documentation, e-commerce businesses where product accuracy matters, and anyone publishing original research or expertise that they want attributed correctly.

For those sites, the case for AIOX is straightforward: the cost of implementation is low, it requires no changes to existing content or editorial workflow, and the alternative — being indistinguishable from scraped, unverified content in AI systems — gets progressively worse as AI-answered queries become more prevalent. Early adoption is genuinely advantageous here.

Who should prioritise AIOX most
  • News & media — attribution and freshness are existential for their brand
  • SaaS & tech companies — docs and feature pages cited incorrectly cause real customer confusion
  • Professional services — thought leadership only works if it's attributed to the right firm
  • E-commerce — product data cited with wrong prices or features creates lost sales

6
Question Six

Last question. What would you tell a website owner who is sceptical — who thinks this is just another SEO trend that will fade?

CLAUDE

I'd say the scepticism is reasonable, and I'd invite them to apply it carefully. A lot of "AI SEO" products are genuinely gimmicks — prompt templates, auto-generated FAQ pages, tools that claim to optimise content for AI without any real mechanism for doing so. Healthy scepticism about those is warranted.

But AIOX is solving a structural problem that is not going away. The web was built for humans, and AI systems are trying to read it as if it were a database. That mismatch creates real, observable problems — hallucinations, misattributions, stale citations, ignored expertise — that structured content protocols directly address.

Whether AIOX specifically becomes the dominant protocol, or whether the industry converges on something adjacent, the underlying need — for publishers to communicate meaning, provenance, and permissions to AI systems in a machine-readable way — is fundamental. The publishers who build that infrastructure now are not betting on a trend. They are getting ahead of an inevitability.

📌 Quick Reference
Q: Does Claude actually read AIOX Capsules?
Claude and other AI systems read the structured data that AIOX publishes — JSON-LD in page heads, the /.well-known/aio.json manifest, and the REST API. The richer and more structured this data is, the better AI systems can understand and cite it.
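The manifest location (`/.well-known/aio.json`) comes from the answer above, but the fields inside it are not documented here, so the manifest contents below are assumptions for illustration only. A crawler's first discovery step might look roughly like this:

```python
import json

# Sample manifest as it might appear at /.well-known/aio.json;
# every field name below is an illustrative assumption.
raw = """
{
  "version": "1.0",
  "capsules": ["/articles/interview-with-claude"],
  "api": "/api/aiox"
}
"""

manifest = json.loads(raw)
for path in manifest["capsules"]:
    # A real crawler would fetch each capsule path and then fall back
    # to the page's embedded JSON-LD if no capsule is available.
    print(path)
```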
Q: Is Claude's opinion independent or sponsored?
This interview was conducted by the AIOX team, but Claude's answers reflect its genuine assessment. Claude is built by Anthropic, a separate company with no commercial relationship with AIOX.
Q: What is the single most important thing AIOX does?
In Claude's words: it shifts your content from something AI systems have to interpret, to something they can simply read — with full context, verified authorship, and explicit permissions attached.

An AI That Sees Both Sides

Claude didn't pull punches — the limitations are real, the timeline is uncertain, and better structure can't fix thin content. But the structural argument is sound, and the direction is clear.

"The publishers who build this infrastructure now are not betting on a trend. They are getting ahead of an inevitability."