Gell-Mann Amnesia
Why You Trust AI More Than You Should
Newspapers
My investment career began almost three decades ago at a large public pension fund. Our "sister" fund in the same city was in the middle of a crisis. Constituents were protesting because, for the first time, they would be required to pay part of their healthcare premiums. Up until that point they had paid nothing.
This wasn't unique. Defined benefit plans across the country were reaching similar conclusions. The funds weren't mismanaged. The pension math just didn't work. Benefits had been too generous for too long relative to the returns available on the assets supporting them.
Facts aside, the constituents were angry. Along with the protests, they hurled every kind of accusation at the plan administrators.
I didn't work at that fund, but we shared several board members. I knew many of the people there, and our funds operated almost identically. For all practical purposes, I was an insider.
One morning the local newspaper ran a scathing editorial against the fund. Reading it, I was stunned. Not by the criticism, but by how wrong it was. Critical facts and context were absent. The article didn't seem interested in representing reality.
When I mentioned it to my boss, he simply smiled and said, "You only recognized the absurdity because you're an expert. Remember that the next time you read an article about something you're not."
He didn't know it, but my boss was describing what author Michael Crichton would later name Gell-Mann Amnesia.
The core observation: we catch errors in reporting on subjects we understand, then forget that lesson entirely when we turn the page.
Gell-Mann Amnesia
Crichton coined the term in a 2002 speech called "Why Speculate?" He named it after his friend, Nobel Prize-winning physicist Murray Gell-Mann, mostly as a joke. Attaching a famous name made the idea sound more important than it was.
But the idea itself is serious.
When we encounter reporting on a topic we know well, we see the errors immediately. The missing context, the oversimplifications, the conclusions that don't follow from the facts. We recognize the source is unreliable.
Then we move to the next topic, one we don't know as well, and treat the same source as if it suddenly became credible.
This was a media problem for decades. Now it has a new host.
AI tools have become the default assistant for millions of professionals. And the Gell-Mann pattern is playing out in real time.
Ask a tool to do something in your area of expertise and you'll see what it gets wrong. A financial analyst catches the model that ignores cash flow timing. A lawyer spots the fabricated case law. A developer finds the function that compiles but fails on edge cases.
These professionals walk away skeptical. And they're right.
But then that same professional asks the tool to draft a marketing strategy, summarize a legal filing, or interpret medical test results. The output reads well and sounds confident.
So they accept it. Often without any verification at all.
This is Gell-Mann Amnesia for AI. And unlike the newspaper version, you're not just consuming bad information passively. You're acting on it. Making decisions with it. Sending it to clients. Sharing it with your family.
Avoiding AI Amnesia
What follows is a framework for avoiding the trap: three practices that help you maintain the same standard of skepticism whether you're working inside your expertise or outside of it.
1: Know Where You Stand
The first step to avoiding Gell-Mann Amnesia with AI is being honest about what you actually know.
This sounds obvious. It isn't. When you use AI inside your area of expertise, you have a built-in filter. You don't think of this as a skill because it feels automatic. But that automatic filter is exactly what disappears when you step outside your domain.
Before acting on any AI output, ask yourself one question: Could I have written this myself?
If the answer is yes, you're in a strong position. You can use the output as a starting point, catch errors, and improve on it. AI is saving you time.
If the answer is no, you're in a different situation entirely. You're relying on the tool to be right because you have no way of knowing when it's wrong. That's when you need to slow down.
A simple way to think about it: label every AI interaction by your competence level:
Expert - you can verify the output against your own knowledge
Familiar - you can spot major problems but might miss nuance
Novice - you're flying blind
Most people treat all three the same, even though they shouldn't.
The less you know about a subject, the more careful you need to be with the output.
2: Treat Every Output Like A First Draft
When AI generates something in your area of expertise, you naturally treat it as a first draft. You read it, fix the errors, and move on.
The discipline is applying that same posture to everything else.
This is harder than it sounds because AI output reads well. The grammar is clean, the structure is logical, and the tone is confident. When you don't know enough to evaluate the substance, those surface-level signals trick your brain into thinking the content is accurate. Fluency is not the same as accuracy.
A few ways to maintain the first-draft mindset outside your expertise:
Ask for options instead of answers. When you ask for three approaches with tradeoffs instead of a single recommendation, you're forced to think.
Run it past someone who knows. AI didn't eliminate the need for domain experts. It just made it easier to forget you need them.
Ask the tool to argue against itself. It won't catch everything, but it can surface issues you wouldn't have thought to look for.
If you wouldn't trust a new hire's unreviewed work on something important, don't trust AI's either.
3: Build The Reflex
The first two practices require awareness. This one requires habit.
Gell-Mann Amnesia persists because skepticism is exhausting. It's easier to accept information that sounds right than to verify it. You can't verify everything. But you can build a short checklist that catches the most dangerous mistakes before they become decisions.
Before acting on AI output in an unfamiliar domain, run through these:
Can I explain why this answer is correct in my own words? If I can't, I don't understand it well enough to act on it.
Have I checked at least one independent source? A five-minute search is often enough to confirm or contradict the key claims.
Am I accepting this because it's accurate or because it's convenient? Convenience is the engine of Gell-Mann Amnesia.
Would I trust this same tool's output in my field without editing? This is the Gell-Mann test. If you'd spend twenty minutes correcting AI output on a financial model, assume the output on a legal question needs the same scrutiny. The tool didn't get smarter between prompts.
These take minutes. But turning them into a reflex is the difference between using AI as a tool and being used by it.
The Next Step
The Gell-Mann Amnesia Effect is not new. It has been around as long as newspapers. But AI has made it faster, more personal, and higher-stakes.
The professionals who will use AI most effectively over the next decade won't be the ones who trust it the most. They'll be the ones who've learned to distrust it consistently, including on topics where the errors are invisible to them.
The antidote isn't avoiding AI. It's refusing to give it more credibility on unfamiliar topics than you would on topics you know cold.
Know where you stand. Treat every output like a first draft. Build the reflex to verify before you act.
My boss gave me that advice three decades ago in a different context. It's more relevant now than it's ever been.
My goal with The Leap is to provide you each Saturday with the knowledge, tools, and lessons learned to help you get started and keep going toward building your future.
Whether you are making the leap to startups, solo entrepreneurship, freelancing, side hustles, or other creative ventures, the tools and strategies you need to succeed are similar.