In the summer of 1993, a soda tried to redefine what cola could be. Crystal Pepsi hit store shelves with a promise of purity and innovation. It was clear, caffeine-free, and sleekly futuristic—a soda that looked like it belonged in The Jetsons. The marketing was brilliant. “You’ve never seen a taste like this,” they told us, and they were right.
I was a kid at the time, and like so many others, I was captivated by the idea. The notion of drinking a cola that looked like water was strange, exciting, and somehow irresistible. But the moment I took my first sip, something didn’t feel quite right. It wasn’t bad—it just wasn’t what I expected. The magic was in the idea, not the execution.
Years later, when Crystal Pepsi made a brief comeback, I tried it again. This time, I wasn’t expecting much. And yet, I couldn’t help but feel a mix of nostalgia and disappointment. For all its boldness, Crystal Pepsi wasn’t about being better. It was about being different. And as I think about the rise of artificial intelligence in education, I find myself wondering: Are we falling into the same trap?
The Allure of Novelty
There’s a reason people lined up to buy Crystal Pepsi, just as there’s a reason we’re drawn to AI. Novelty excites us. It promises progress and reinvention. But novelty, on its own, is fleeting. Crystal Pepsi didn’t solve any problems. It was different, but it wasn’t better.
AI feels similar in many ways. It dazzles us with its potential: tools that adapt lessons for individual students, grade papers in seconds, and even act as personal tutors with unmatched knowledge. But potential isn’t the same as impact. The real question isn’t whether AI is new or exciting—it’s whether it makes education better. And that answer isn’t always clear.
Sitting on the Fence
Some days, I want to embrace AI wholeheartedly. I see its potential to save teachers time, to personalize learning, and to make education more efficient. On other days, I think about the risks: the loss of human connection, the over-reliance on algorithms, and the possibility of education becoming transactional rather than relational. Will AI enhance what makes teaching so powerful—or will it erode it?
If you feel this way too, you’re not alone. It’s okay not to know. Sitting on the fence isn’t indecision—it’s a willingness to engage critically with change. The challenge isn’t to have all the answers; it’s to ask the right questions.
What Makes AI Meaningful?
The failure of Crystal Pepsi wasn’t that it dared to be different. It failed because it didn’t solve a problem or serve a meaningful purpose. That’s the lesson we need to carry into conversations about AI.
Can AI help students think more critically, learn more deeply, or grow into empathetic citizens? Or will it simply automate education in ways that make it less human? The answer depends not on AI itself, but on how we choose to use it—and whether we’re willing to challenge its role in our classrooms.
What Will AI Become?
Crystal Pepsi reminds us that bold ideas aren’t enough. For innovation to last, it needs purpose. AI isn’t going anywhere, but whether it transforms education or becomes another fleeting trend depends on us.
If you’re on the fence about AI, that’s okay. Uncertainty means you’re thinking critically, not jumping to conclusions. The next time you find yourself wondering about AI’s place in education, ask: What problems does it solve? What risks does it create? And most importantly, how can we ensure that every step we take, whether toward AI or away from it, is guided by what matters most: the humanity at the heart of education?