The View from Higher Up
Why fear and familiarity shape how we see AI in education
My son and I were watching Alex Honnold's free solo climb of the Taipei 101 skyscraper on Netflix.
Within minutes, my stomach was in knots.
The higher he climbed, the harder it was for me to watch. My palms were sweaty. My jaw was clenched. I found myself looking away, not because I didn’t trust Honnold, but because I couldn’t tolerate the risk.
My son, on the other hand, was completely calm.
No nerves. No flinching. Just curiosity.
He’s a climber. He understands movement, grip, balance, and the hundreds of micro-decisions that happen on a wall. What looked reckless to me looked precise to him. What felt terrifying to me felt familiar.
We weren’t watching the same thing.
And that’s when it clicked.
This is exactly what’s happening right now with AI in education.
Many adults are watching AI the way I watched that climb: focused on what could go wrong rather than on what we might learn by watching more closely. The fear isn't irrational. The risks are real. But fear has a way of narrowing our vision.
Students, meanwhile, are watching something else entirely.
They’re not thinking about existential risk or intellectual decay. They’re noticing speed. Access. Possibility. They’re asking a different set of questions: What can this help me do? Where might this take me?
Neither perspective is wrong. But they are profoundly misaligned.
What made the difference in that moment wasn’t bravery. It was familiarity.
My son didn’t feel safe because the climb was safe.
He felt safe because he understood the terrain.
That’s the part we’re missing in education.
Too often, our response to AI has been to stand at a distance, writing policies, issuing warnings, and drawing lines before we’ve taken the time to ask better questions. We’re trying to manage uncertainty without first building understanding.
And students can feel that.
They know when adults are speaking from experience and when we’re speaking from anxiety.
The skill students most need right now isn’t blind trust in AI or blanket rejection of it. It’s discernment. Judgment. Knowing when a tool sharpens thinking and when it quietly erodes it.
But discernment doesn’t come from prohibition.
It comes from proximity.
You don’t learn how to climb by staying on the ground and pointing out the risks. You learn by getting closer. By understanding the holds. By recognizing which moves are solid and which ones aren’t worth taking.
AI isn’t the cliff face we should be staring at in fear.
It’s the terrain we need to understand well enough to help students navigate thoughtfully, ethically, and humanely.
Maybe the question isn't whether AI is safe or dangerous.
Maybe it's what we're missing because we're afraid to get closer.