AI Doesn’t Have a Hallucination Problem. It Has a Provenance Problem.
- Clayton Johnson

- Dec 5

A recent Tow Center study tested eight leading AI search tools on a simple task: match 1,600 news snippets to the publication that produced them. More than 60% of the responses were wrong, with tools confidently citing the wrong publisher, the wrong date, or a dead link.
That’s the open web, with public URLs and major publishers.
Now imagine the same task inside an enterprise, where “final” documents exist in 10 versions across email, drives, SharePoint, and vendor portals. When AI can’t identify the right document, it can’t reliably give the right answer.
Hallucinations aren’t the disease. Bad provenance is.
Leaders at IBM, Forbes, and Harvard are all saying the same thing:
If you don’t know where your information comes from, AI becomes a legal, financial, and operational liability.
In the Real World, Knowledge = Documents
Especially in commercial real estate (CRE): leases, amendments, JVs, estoppels, rent rolls, reports.
LLMs and RAG only work if they’re grounded in the authenticated version of these documents.
Most organizations aren’t there yet. Instead, they have:
- Multiple conflicting versions
- No clear “final copy”
- Sensitive docs in the wrong places
- No traceability back to the true source
Point an LLM at this mess and you get beautifully phrased answers built on the wrong foundation.
The Missing Layer: A Document-of-Record System
This is why we built AI.DI around one core idea:
Before you deploy AI, you must know — continuously and provably — what the document of record is for every critical fact.
That requires three things:
- Document Governance: a controlled, normalized, auditable backbone.
- Assurance & Provenance: fingerprinting every version, certifying the true source, and monitoring drift (sketched below).
- Exchange & Readiness: validating inbound docs, tracking gaps, and packaging certified sets for audits, financings, and transactions.
Only then should AI touch your content.
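To make the fingerprinting idea concrete, here is a minimal sketch of a provenance registry: every version of a document is hashed, tied to its source system, and checked against the certified document of record. This is illustrative only, not AI.DI’s implementation; the names (DocumentRecord, register_version, has_drifted) are hypothetical, and a real system would persist the registry and enforce access control.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DocumentRecord:
    """One fingerprinted version of one logical document (illustrative schema)."""
    doc_id: str          # stable identifier for the logical document
    version_hash: str    # SHA-256 fingerprint of this version's bytes
    source_system: str   # e.g. "sharepoint", "email", "vendor-portal"
    certified: bool      # True only for the designated document of record
    recorded_at: str     # UTC timestamp of registration

def fingerprint(content: bytes) -> str:
    """Content-addressed fingerprint: identical bytes yield an identical hash."""
    return hashlib.sha256(content).hexdigest()

def register_version(registry: dict[str, list[DocumentRecord]], doc_id: str,
                     content: bytes, source_system: str,
                     certified: bool = False) -> DocumentRecord:
    """Append a fingerprinted version to the document's provenance trail."""
    record = DocumentRecord(
        doc_id=doc_id,
        version_hash=fingerprint(content),
        source_system=source_system,
        certified=certified,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )
    registry.setdefault(doc_id, []).append(record)
    return record

def has_drifted(registry: dict[str, list[DocumentRecord]], doc_id: str,
                live_content: bytes) -> bool:
    """Drift check: does the live copy still match the certified fingerprint?"""
    certified = [r for r in registry.get(doc_id, []) if r.certified]
    if not certified:
        return True  # no document of record at all: treat as drifted
    return fingerprint(live_content) != certified[-1].version_hash

# Hypothetical usage: two versions arrive; only the SharePoint copy is certified.
registry: dict[str, list[DocumentRecord]] = {}
register_version(registry, "lease-041", b"draft terms ...", "email")
register_version(registry, "lease-041", b"executed lease ...", "sharepoint", certified=True)
print(has_drifted(registry, "lease-041", b"executed lease ..."))  # False: copy matches
```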
What You Get
With a real document-of-record layer:
- AI retrieves from certified sources only (sketched below)
- Every answer can point to its exact supporting document
- Auditors and boards can trust AI outputs
- You can trace any answer → document → system of origin
This is what regulators and practitioners now demand: AI that is grounded, traceable, and dependably right.
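As a sketch of what “retrieves from certified sources only” can mean in a RAG pipeline, the snippet below filters retrieval to certified document versions and returns a citation chain for each passage: answer → document → system of origin. The names (Chunk, retrieve, answer_with_provenance) and the toy keyword scorer are assumptions for illustration, not AI.DI’s actual API; a production system would apply the same certified-only filter as a metadata constraint in a vector store.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """A retrievable passage tied back to a fingerprinted document version."""
    text: str
    doc_id: str
    version_hash: str
    source_system: str
    certified: bool

def retrieve(query: str, index: list[Chunk], top_k: int = 3) -> list[Chunk]:
    """Toy retriever: keyword overlap, restricted to certified chunks only."""
    certified_only = [c for c in index if c.certified]
    scored = sorted(
        certified_only,
        key=lambda c: sum(w in c.text.lower() for w in query.lower().split()),
        reverse=True,
    )
    return scored[:top_k]

def answer_with_provenance(query: str, index: list[Chunk]) -> dict:
    """Return grounding context plus, for every chunk, the citation chain
    answer -> document -> system of origin."""
    chunks = retrieve(query, index)
    return {
        "context": [c.text for c in chunks],  # the only text the LLM may see
        "citations": [
            {"doc_id": c.doc_id,
             "version": c.version_hash,
             "source_system": c.source_system}
            for c in chunks
        ],
    }
```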
If You Can’t Name the Document, You’re Not Ready for the Model
Enterprises are about to repeat the mistake documented in the Tow Center study: plugging powerful models into ungoverned content and hoping for the best.
The fix isn’t bigger models.
It’s stronger provenance.
Decide the document of record.
Govern it.
Certify it.
Then let AI reason on top of it.
That’s the gap AI.DI is closing for CRE: not just smarter AI, but provably correct AI.
Ready to learn more? Visit us today at https://www.imkore.com/aidi.


