Here is another attempt to think about AI more than a few weeks into the future.

I started thinking about this after reading The MANIAC by Benjamín Labatut. In case you’re unfamiliar, John von Neumann was the man behind an astonishingly long list of contributions to humanity, including game theory and the modern computer.

The book reminded me of a story Edward Teller once told about von Neumann. Teller, a scientific giant of the century in his own right, recalled:

“Von Neumann would carry on a conversation with my 3-year-old son, and the two of them would talk as equals, and I sometimes wondered if he used the same principle when he talked to the rest of us.”

Von Neumann went on to advise the highest levels of government, especially on defense. In many ways, he helped shape the Cold War—a conflict that brought the world to the brink of destruction, yet had no casualties. If I had met him and we disagreed, I would assume that he knew more, considered more, and analyzed it better than I did—that I was only seeing part of the picture. Would you have had the nerve to ignore his advice if you didn’t understand it?

In many ways, we already put a surprising amount of trust in AI without thinking twice. It suggests the next word when we’re typing, sorts our emails, edits our photos, determines our creditworthiness, sets insurance premiums, steers our cars, diagnoses medical conditions, filters job applications, and more. And most of the time, we don’t question it.

For now.

I think we trust it because each time we stop and examine the recommendation, it makes sense to us, so we go with it.

That won’t last. Not at the current pace.


The Move 37 Moment

In 2016, the world watched as an AI named AlphaGo made history by beating Lee Sedol, one of the world’s top Go players. But what really stood out was “Move 37”—a move so unexpected that even experts thought it was a mistake. It didn’t follow traditional strategies, and no human would have played it. But as the game unfolded, it became clear that the move was not just good—it was game-changing.

It was a reminder that AI doesn’t think like us. It doesn’t have to. It can find solutions we’d never consider, ones that seem wrong in the moment but turn out to be brilliant.

That moment wasn’t just about Go. It was a glimpse into a future where AI doesn’t just copy what we do—it surprises us, challenges us, and makes choices we never saw coming. It recommends moves that we can’t help but see as mistakes.

And that raises a big question: What happens when AI makes a Move 37 in other areas of life?


The Move 37s to Come

Soon, we will see Move 37s across every domain where AI is applied, and we are not ready for them.

We will know that it has been right every time before (it beats Magnus Carlsen at chess as easily as he would beat me), and we will have to decide whether or not to accept these Move 37s.

What if an AI recommends a completely unheard-of surgery that seems unnecessarily risky?
Tells you to vote for RFK?
Tells you to fire a beloved CEO (while your competition listens to its AI)?
Tells you not to send your child to school for a week?
Swerves into traffic?
Tells you to break up your relationship?

I don’t know what I’ll do. Do you?