Generative AI is sliding into almost every corner of daily life, from work and entertainment to the way we plan meals and shop. With all that convenience, it is easy to assume everyone is thrilled about the future. Yet a growing number of younger people are watching the rise of AI with a particular kind of unease. Their worry is not only about what machines can do, but also about what constant machine help might do to them.
Scott Anthony, a former McKinsey analyst who now teaches at Dartmouth, says he keeps running into the same reaction among students. Instead of excitement, he sees a sense of dread when the conversation turns to large language models. What surprises him most is how hesitant many students are to use these tools at all. For them, the fear goes beyond the usual academic concerns like cheating and shortcuts, and lands on something more personal.
In Anthony’s view, many students are anxious about handing over their thinking to a system that always has an answer ready. They worry that leaning on AI for writing, problem solving, and even brainstorming could slowly dull their ability to judge, question, and connect ideas on their own. He describes them as genuinely scared, and he suggests the discomfort is tied to a deeper fear of losing something human if reliance becomes automatic. Big technological shifts, he adds, tend to feel messy and chaotic while they are happening.
That anxiety looks different on the other side of the classroom. Anthony contrasts his students’ caution with the enthusiasm he often sees among tenured professors who are eager to test the newest tools. The split is not hard to understand. Established academics usually have stable careers at prestigious institutions, while students are stepping into a job market that feels more uncertain as AI capabilities expand.
Research also hints that the students’ instincts are not random. A study out of MIT divided participants into three groups for an essay-writing task. One group used large language models, another used standard search engines, and a third wrote without outside tools, essentially a brain-only approach. The AI group found it easier to produce the essay, but the convenience came with a trade-off: researchers noted a reduced tendency to critically evaluate the system’s output.
Meanwhile, the group that wrote without AI reported greater satisfaction with their work and showed stronger brain connectivity than the others. Taken together, the findings suggest that effortless assistance can tempt people into accepting information rather than actively wrestling with it. Gen Z may be picking up on that risk early, and they are not wrong to want guardrails.
What have you noticed in your own life when you use AI for everyday tasks, and where do you draw the line between helpful support and mental autopilot? Share your thoughts in the comments.