Given how much my university colleagues talk about how AI affects students, and whether it’s sharpening or dulling their cognitive tools for research, I find it curious how little the students themselves actually use AI, or even talk about it. When I brought the topic up with my freshmen, one of them said, “When you say AI, do you mean TikTok?”
That response startled me, but it didn’t entirely surprise me. I work with students ranging from middle school to college: teens and young adults who are bright, creative, curious, and digitally native. They live online. They edit videos, write fanfiction, build memes, and scroll endlessly. They’ve never known a world without the internet. So I assumed, perhaps naively, that when ChatGPT exploded onto the scene, they’d have thoughts, opinions, even fears.
What I’ve seen instead is something more slippery: a kind of casual indifference. AI is in their world, sure, but it doesn’t seem to register as world-changing, at least not in a way they can name.
Surface-Level Familiarity
Most of the students I work with know about AI in the same way they know about autocorrect or Spotify recommendations: it’s background noise. They joke about using ChatGPT to write essays. They’ve seen their favorite YouTubers feed prompts into image generators. They might even follow meme pages that poke fun at AI’s awkwardness.
When I ask how they feel about it—what it means for their future, for creativity, for work—I get blank stares, or shrugs, or “I don’t know, I guess it’s just part of life now.”
This isn’t ignorance. It’s ambient awareness without urgency. Which, ironically, might be even more dangerous.
Apathy or Adaptation?
There’s a fine line between not caring and not questioning because something feels inevitable.
What I’ve come to believe is that many young people are already adapting to AI, but without the language or guidance to examine what that adaptation means. They are, in a sense, growing up alongside the machine and assuming this is simply how things are. As tech philosopher Douglas Rushkoff puts it, “We are living in a world that is no longer about us. We are living in a world designed for technology” (Rushkoff, Program or Be Programmed, 2010).
To them, AI isn’t a disruption. It’s just Tuesday.
What Schools Aren’t Teaching
One college student told me, “We never really talk about AI in class unless it’s to say don’t cheat with it.” This reflects a larger issue: many schools are still struggling to update their policies on AI use, and even more so when it comes to adapting their teaching methods. Instead of exploring AI as a tool for learning, the focus tends to be on warning students about using it dishonestly.
While some educators are doing meaningful work to incorporate tech conversations, many schools, especially in the humanities and arts, haven’t integrated AI into their curricula at all. When AI is addressed, it’s often treated as a threat: “Don’t use this to plagiarize.” But that’s not education; it’s a warning label.
Topics like algorithmic bias, the ethics of automation, surveillance capitalism, copyright confusion, and the commodification of creativity are rarely discussed, yet these are exactly the areas that today’s students will inherit. The limited discourse tends to be reactive rather than proactive. In many cases, teachers themselves (me included!) are still figuring out what these tools mean.
And there’s a gap here that’s worth naming: students are increasingly using AI informally (for brainstorming, summarizing, solving equations), but they’re not being taught how to assess its limitations, how it was trained, or what implications it carries. Without structured critical thinking exercises or media literacy units built around AI, students are left to sort fact from fiction on their own. Unsurprisingly, many disengage altogether.
Even though organizations like Common Sense Media and UNESCO have called for AI literacy education (UNESCO, Guidance for Generative AI in Education and Research, 2023), most students are still being handed tools without blueprints. They’re digital natives, but that doesn’t mean they’re digitally literate.
In a discussion with my college freshmen about the potential dangers of using AI, one student astutely said, “I don’t fear being repetitive, I fear never being able to say something unique because everything has already been said.” Philosophically, I empathized with her statement. I think in some ways we all feel this. But what struck me most was wondering whether she might be right.
One of my high school students told me that his father works with AI software and let him use it to write an essay for school—not one he actually turned in, but as a means to demonstrate how AI generation works. The student’s final analysis was that it caused him anxiety. He said, “How can I ever write anything that will be truly helpful to the world? I feel like my brain would have to speed up and get to the point more quickly than AI, and I don’t think that’s possible.” Another student responded, “Calm down, bruh. Just keep playing The Last of Us.” The class laughed. I laughed too. But I also felt a sense of foreboding that I didn’t want to introduce into these fifteen- and sixteen-year-olds.
A Creative Way In
What’s worked best in my world isn’t lecturing about AI ethics; it’s storytelling. And more specifically, asking “what if” questions that make the abstract personal.
For example:
- “What if an AI wrote your favorite show, and it was good enough that you didn’t notice?”
- “What if your voice was cloned and used in a YouTube ad you never recorded?”
- “What if your college application essay was flagged because someone assumed AI wrote it?”
- “What if AI generated a fake video of you doing something you didn’t do?”
These questions shift the conversation from distant tech talk to immediate personal stakes. I’ve watched students, middle schoolers even, go from smirking to stunned in a matter of seconds when shown a real deepfake. It’s not just about explaining what generative AI is; it’s about helping them feel the implications of it.
Creative expression helps unlock that shift.
In one class, I asked students to write short monologues from the perspective of someone living in a world where human art is outlawed because AI does it faster. The results were moving. Several wrote about grief. Some wrote about rage. One student wrote about forgetting what real creativity feels like: “I lifted my hand to paint a flower, and the petals reminded me of a flower I saw online. I stopped seeing the real flower and tried to paint the one I remembered instead.”
I don’t know about you, but that still gives me goosebumps.
This kind of imaginative work invites empathy, agency, and reflection—all of which are in short supply when the conversation stays stuck at “AI is just a tool.”
Art-based learning has always been a mirror to society. When we let students look into that mirror through theatre, creative writing, or design, they begin to see their own digital landscape more clearly.
The Urgency of AI Awareness
Middle schoolers, high schoolers, and college students are not just future workers in an AI-saturated economy. They are future parents, pastors, teachers, lawmakers, and ethicists. If they are passive now, the consequences will compound later.
And here’s the thing: they don’t need to become experts. They don’t even need to have polished positions. But they do need space to ask questions, and adults who are willing to ask those questions with them.
The rise of AI in their lives is not a looming threat on the horizon. It’s already here, shaping how they search, think, interact, and create. If we want them to be active participants in this moment rather than silent subjects of it, we would do well to begin where they are: with curiosity, with context, and with imagination.
The future of AI won’t be written by algorithms. It will be written by the choices we make and by whether we prepare students to shape what comes next.
