Why seek the Unknown

I don’t post often, but after rambling in my editor and fresh off a rewatch of Interstellar, I felt compelled to share this. This isn’t a film review. It’s a reflection on a question that got stuck in my head as the credits rolled:

Why do people choose to seek the unknown when they already have something to hold on to?

There are many possible answers: love, glory, curiosity, fear, empathy, loneliness.[1] Maybe that impulse to step off the map is precisely why Homo sapiens survived and built so much in so little time.

There’s a lot of noise online about “researchers vs. engineers”,[2] but beneath it all we’re still the same primates with oddly shaped tools, drawn toward black boxes we don’t yet understand. Working in AI, I can’t resist the analogy: sometimes we choose to explore instead of exploit, a little DQN in the bloodstream.
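
(For the curious, here’s the toy version of that coin flip: a minimal epsilon-greedy bandit sketch, not anything from the film or from a particular paper. The three “paths”, their payoff probabilities, and the epsilon value are all made up for illustration.)

```python
import random

# A toy epsilon-greedy bandit: a caricature of the explore/exploit trade-off.
# The paths, their payoff probabilities, and epsilon are invented for this example.

REWARD_PROBS = [0.2, 0.5, 0.8]   # hidden payoff of each path
EPSILON = 0.1                    # how often we step off the map

counts = [0] * len(REWARD_PROBS)
values = [0.0] * len(REWARD_PROBS)

def choose_path():
    # Mostly exploit the path that has looked best so far...
    if random.random() > EPSILON and any(counts):
        return max(range(len(values)), key=lambda i: values[i])
    # ...but every so often, explore at random.
    return random.randrange(len(REWARD_PROBS))

for _ in range(10_000):
    path = choose_path()
    reward = 1.0 if random.random() < REWARD_PROBS[path] else 0.0
    counts[path] += 1
    values[path] += (reward - values[path]) / counts[path]  # running mean of rewards

print(values)  # the estimates drift toward the hidden probabilities
```

Most of the time the agent repeats what has worked; occasionally it tries something it has no reason to trust. That small, irrational-looking epsilon is the whole point of this post.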

Local minima and the itch to look up

We’re all protagonists in our own stories, and it’s easy to get trapped in a local minimum (sorry for the ML reference) and forget the direction we meant to move in. Rationally, exploration is a terrible bet: the probability of failure often dwarfs the chance of success. Most of us will fail. And yet, some still choose to look beyond.

It’s easy to romanticize the leap. What’s harder is not stopping once the loneliness sets in, when you miss love, when you’re just a lost face in the crowd. That’s where the journey ends for most of us. People say, “In the real world, other things matter more.” The irony is that much of what we call “real” is just the majority’s interpretation. The objective world is there either way; our choice is whether to exploit what we’ve got or explore what we might find. Most will exploit, and that’s fine. Societies need stability. But the few who can’t ignore the itch will pull the rest of us forward.

On “real” intelligence

Right now, there’s an AI race and lots of talk about AGI on the horizon. But who gets to certify what counts as “real” intelligence? Perhaps we’re just searching for systems that sometimes choose to explore, driven by math, emotion, or something that looks like both. That blend is hard to design and harder to prove. I think we’ll eventually build systems that earn the label “real” intelligence not because they predict the next token, but because they decide to keep going when prediction is uncertain, and then push the horizon further.

Closing

If this sounds like it was written under the influence of Interstellar and a stack of RL papers, that’s because it was.[3] Maybe it won’t make perfect sense to me later. But today it does, and that’s reason enough to hit “publish.”


Notes & References


  1. These motivations all surface in Interstellar in different forms (love, ambition, fear, isolation), guiding choices across uncertain frontiers. 

  2. The online “researchers vs. engineers” debate, including Elon Musk’s comments. 

  3. Author’s note: written during a phase of deep RL study and a fresh Interstellar rewatch.