
I knew almost nothing about oceanography until 7 this morning. Then, after a run on the Miami Beach boardwalk, I kicked off my shoes, stepped into the surf, and in a few minutes felt confident I understood the Gulf Stream’s effect on shark migration.
Psychologists call this the illusion of explanatory depth. We skim the surface of a subject just enough to get the gist and come to believe we know more than we actually do about inflation, Ukraine, tariffs, or quantum electrodynamics.
To avoid this error and spot the gaps in our understanding, the physicist Richard Feynman suggested trying to explain the subject to a 10-year-old. A better option, especially when no willing fifth graders are available, is to use AI (I’ll show how in a moment). Of course, depending on how you use it, AI can also make the problem worse.
It doesn’t help that so much in this age of distraction is already designed for skimming. To be read with our thumbs. I take some blame, having led digital newsrooms for over a decade, competing for sloppy milliseconds of attention with better dopamine dealers than the news media.
Journalism has always been an art of compression, a distillation of the day’s absurdities into a few hundred words or less. We optimized information to go down easy. Maybe too easy. One of the chief ways tech and media are deploying AI is, alas, to generate even more summaries, reducing everything to more bullet points.
The trouble is that bullets kill. They kill retention of information. They kill understanding. They kill the discomfort of going deeper.
We deserve an AI for effort. An AI that helps us work harder. An AI that boosts our knowledge, pokes holes in our understanding, challenges assumptions, and grills us to get better.
It will be tempting to do the opposite. To use AI as a GPS for life. At some point we may all have a built-in teleprompter telling us in real time the optimal thing to do or say in every conversation, class assignment, job interview, argument or date. But then everyone loses. Everyone’s skills atrophy.
This will be on the test
Most of what we hear or read soon slides from memory down the “forgetting curve.” We lose something close to half of new information within an hour, and 70% or so after a day, a century of studies suggest. The rest gradually fades over time (with the exception, sadly, of annoying pop song lyrics and my high school locker combination).
It is possible to remember much more of what we read.
Around the same time ChatGPT was released in November 2022, I read the book “Make It Stick,” by Peter Brown and the psychologists Henry Roediger III and Mark McDaniel, who did pioneering work on learning, memory and study habits.
All the research shows that if you want to remember how to get to Carnegie Hall, you need to practice. New information is a use-it-or-lose-it situation.
“A single, simple quiz after reading a text or hearing a lecture produces better learning and remembering than rereading the text or reviewing lecture notes,” the authors explain.
This is called retrieval practice. Digging facts and ideas from our memory helps them take up long-term residence. Friction helps too, the authors say:
Many teachers believe that if they can make learning easier and faster, the learning will be better. Much research turns this belief on its head: when learning is harder, it’s stronger and lasts longer.
Frustrated by how little I remembered of all the information I consume these days, I started asking ChatGPT to quiz me on the books, articles and research I read, on ongoing stories like the war in Ukraine, and topics in economics such as trade policy.
I began with multiple-choice questions. LLMs are pretty good at this, as many teachers have realized. Here’s one based on Wednesday’s story about whether President Trump will fire Fed Chairman Jerome Powell.
Trump floated a possible justification for firing Powell that referenced an ongoing project. What about that project, according to the article, made it especially vulnerable to Trump's attack—and how has it been characterized?
A) It was a classified intelligence program that overran its schedule
B) It was a military procurement scandal with foreign contractors
C) It was a nearly 100-year-old building renovation project with ballooning costs
D) It was a public–private partnership with major donors to the Democratic Party
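If you’d rather script this than paste stories into a chat window, a few lines against an LLM API are enough. Below is a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are illustrative choices, not a recommendation.

```python
# A minimal quiz-generator sketch using the OpenAI Python SDK.
# The model name and prompt wording are illustrative, not prescribed.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

article_text = "..."  # paste in the story you just read

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "You are a quizmaster. Given an article, write one "
                "multiple-choice question with options A-D that tests a "
                "specific detail. Wait for my answer before revealing "
                "the correct choice and explaining why."
            ),
        },
        {"role": "user", "content": article_text},
    ],
)

print(response.choices[0].message.content)
```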
Open-ended questions seem to work better, since they take more effort to answer – what education psychologists call “desirable difficulties” – and require putting ideas into my own words.
For example:
Summarize why removing a Fed chair for policy disagreements is legally difficult in the U.S. system. Why do you think the law was set up that way? Do you see any potential advantages or disadvantages to this rule?
I found that two or three of these open-ended questions a day made details stick. And since LLMs can remember prior conversations, along with the topics and articles I’m trying to understand more deeply, I was able to space out questions on a range of stories to keep me guessing – the sort of irregularly scheduled quizzing the research has found most effective.
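The scheduling idea behind that irregular quizzing is simple enough to sketch. Here is a toy version of expanding-interval spacing; the starting gap and doubling rule are my own illustration (flashcard systems like SM-2 are more elaborate), and the research supports the spacing itself, not these exact numbers.

```python
# A toy sketch of expanding-interval spacing: stretch the gap after each
# correct answer, reset it after a miss. The doubling rule and intervals
# are illustrative, not prescribed by the research.
from datetime import date, timedelta

def next_review(today: date, interval_days: int, correct: bool) -> tuple[date, int]:
    """Return (next quiz date, new interval in days) for one topic."""
    interval_days = interval_days * 2 if correct else 1
    return today + timedelta(days=interval_days), interval_days

# Example: I answered today's question on the Fed story correctly after
# a 3-day gap, so the next quiz lands 6 days out.
when, gap = next_review(date.today(), 3, correct=True)
print(f"Next quiz in {gap} days, on {when}")
```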
Sometimes LLMs hallucinate and get things wrong. But pointing out those mistakes ended up sharpening my memory further.
(Note: Trying to explain the Trump/Powell story to my sons was still a challenge.)
I also started reading a poem a day with ChatGPT to practice close reading and analysis, a daily habit that jumpstarts the heart and mind before the coffee and reality kick in.
My own personal Socrates
Quizzes work. But where LLMs really shine is not as test proctors but as thinking partners and tutors.
Sal Khan, CEO of Khan Academy, wants to give every student their own Aristotle. I know I’m no Alexander the Great – so I chose Socrates instead. I began to prompt AI to play the philosopher after reading law professor Ward Farnsworth’s book, “The Socratic Method.”
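The prompt doesn’t need to be elaborate. Something along these lines is enough – the wording here is one illustration among many; what matters is forbidding answers and demanding questions:

You are Socrates. I will state a belief I hold. Do not lecture or agree. Ask me one probing question at a time: demand my definitions, surface my hidden assumptions, and test my claims against counterexamples. When my answers contradict each other, say so and keep questioning. Never hand me a conclusion; make me arrive at one, or at honest confusion.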
Plato didn’t write the dialogues as a manual for arguing with others but for arguing with oneself. As Farnsworth explains: “You challenge yourself and harass yourself and test what you think and deny what you say.”
Socrates said he was a midwife to wisdom, helping some of his interlocutors birth knowledge and insights, as well as the desire to punch him in the face.
As a writer I live by the mantra that “writing is thinking.” Socrates, alas, wrote what he knew: nothing. He believed talking was thinking, sometimes as an internal dialogue “the mind has with itself about whatever it is investigating,” as he puts it in the Theaetetus.
All knowledge is a conversation—and journalism can help lead it
Unlike Socrates, today we all know everything – or can Google it – and are unafraid to share our opinions. I have found my internal Socratic dialogues with AI help challenge my most stubborn assumptions. I also sometimes want to punch ChatGPT in the face.
Many of Plato’s dialogues never arrive at an answer. They end in a sort of stalemate and confusion – an impasse called aporia. This is uncomfortable and disorienting – but that’s a feature, not a bug. A way to beat back superficial thinking and the illusion of explanatory depth.
I’ve been experimenting for years to find ways to make journalism a better tool for thinking and learning. Great reporting and storytelling remain our strength, but our best opportunity with AI is not summaries or bullets. We need to shake off 19th-century story formats for something that works harder to help readers learn, retain information, and embrace the discomfort.
I know I’ve just dipped my toe in the water – and there’s an ocean I’m clueless about. If you have ideas or want to compare notes, please make the effort to reach out on LinkedIn or by email. Just don’t punch me in the face.