AI and Higher Ed Part 1: Advice for Undergrads
In this short piece, I’m going to share some advice on how undergrads should engage with AI. By now, many people have talked about how AI is transforming — or is on the verge of transforming — almost every aspect of our lives. I’ll skip over that big-picture stuff and get straight to the crux of my message.
I’ll start with a couple of general points that I think everyone in higher education needs to accept about AI, and then I’ll move on to my advice specifically for undergrads. Before I continue, let me note that these are very complex issues, and I’m maybe only about 65% confident that everything I say here is solid. So, please take my suggestions with a pinch of salt.
Some starting points everyone in higher ed should make peace with
I want to begin by highlighting two fundamental premises about AI in higher education that, in my view, everyone just has to accept.
Premise 1: The most cutting-edge AI models are in fact very good
I’m not entirely sure what most knowledgeable people working in higher ed think about this point. My observations suggest that quite a number of the higher-ed aristocracy still don’t believe AI is all that great or useful, so I think it’s important to start with this premise.
I’ve been testing out OpenAI’s Deep Research (which I believe is the most cutting-edge research model available) for around 2 months now, and I’m pretty darn impressed. Yes, it still has its flaws (as I’ll explain in the next point). But in the hands of the ‘right’ person, I think it can considerably reduce the amount of time it would otherwise take to produce a high-quality paper. If someone were to say that base models like ChatGPT or Claude aren’t particularly useful when you’re trying to produce stellar work, I would largely agree with that perspective based on my experiences.
However, I think it would be unreasonable, maybe even dishonest, for anyone to make the same claim about more advanced models like Deep Research. I’ve found Deep Research very useful for doing basic literature reviews, for identifying weaknesses in my work, and for generally sparring over all sorts of questions that would previously slow me down for days (e.g. debating the best terminology to use for an AI that’s close to human-level but not quite there). I’m quite certain that any student would find it just as useful in their work.
In the weeks to come, I intend to write a longer post about my experiences with Deep Research. But for now, I’ll move on from this point and leave you with one screenshot I came across on X (Twitter) that really drives it home:
Premise 2: The most cutting-edge AI models remain fallible
It seems to me that this second point is more broadly accepted, except perhaps by the AI developers who have a vested interest in their models being perceived to be the best. Even so, I think it’s important to hammer it home, because it’s really easy for anyone to start fully trusting an AI model that’s correct about 90% of the time. To illustrate the danger of not double-checking an AI’s output, let me share a couple of my own experiences.
In one case, I asked Deep Research to find useful literature about the international governance of AI. Out of habit, I scrutinized each of the sources the model had cited — and alas, I found one that was completely made up! As you might imagine, when I tried to follow up on that bogus citation, the AI dodged my questions expertly. (See screenshot below.)
This kind of thing has happened to me a handful of times now. In another recent example, my research partners Jean and Grace and I asked Deep Research to find evidence on whether leading AI companies believe U.S. antitrust law hamstrings their ability to collaborate on AI safety. One section of the resulting report claimed that U.S. policymakers were considering an idea known as a “safety haven” as a policy tool for addressing this issue. We combed through all the sources it cited, and none of them actually said that.
Apart from errors like these, I’ve also found that the output can be quite broad or generic (even after I ask very specific questions), especially when my request calls for high-level knowledge synthesis or truly original thinking. The point is, these models are fallible, sometimes in pretty serious (even fatal) ways.
Advice for Undergrads
Given the above realities about AI, how should you, as an undergraduate student, approach the use of AI tools in your education? Let me share three pieces of advice.
Advice 1: Prioritize learning the basic knowledge and skills you’ll need to scrutinize AI output
First, you need to prioritize learning the basic knowledge and skills required to assess any AI output. What does this mean exactly? It means developing a good hunch for when an AI’s answer is questionable or needs more investigation. You can’t have that kind of instinct if you lack certain fundamental knowledge and skills. The basic skills really boil down to strong critical thinking, while the exact “basic knowledge” you’ll need depends on the field your question and the output belong to. For example, if you’re trying to figure out how AI liability should be decided in court, you’d probably need at least a basic grasp of how liability law works: its general principles, structure, and policy considerations.
To be clear, I’m especially talking about high-stakes situations related to important work in your field of study (assuming you actually care about learning it) or a field you want to work in down the line. Obviously, you can’t possibly have background knowledge on every field related to every random question you might ask an AI, and I’m not suggesting you try.
Overall, it’s very risky to rely purely on AI (i) for important work or (ii) as a substitute for actually learning and refining the skills you’ll need to evaluate AI output in the future. If you do that, you could end up with no useful skills of your own, meaning you’d be exactly the kind of person that the more advanced AI on the way could replace first. So you need to be thoughtful and responsible about knowing when to take the long road (doing the work yourself) and when it’s okay to use AI as a shortcut. For instance, if you’re assigned an important piece of work that’s designed to teach you the very knowledge or skills I’ve been talking about, try not to rely on AI for it. Otherwise, you won’t learn what you need to learn, and you’ll have basically zero ability to tell when AI isn’t giving you the best possible answer.
But how do you know if a particular assignment or task will really help you build those fundamental skills and knowledge? Isn’t all the work you’re assigned in university meant to do just that? These are tough questions, and only you can answer them for yourself. However, I’d suggest thinking about it in terms of a few key questions:
- What exactly might I learn by taking the long road on this assignment (doing it myself) versus leaning heavily on AI?
- Do I already mostly have those skills or knowledge, and have I been using them frequently?
- If I let the AI handle this, will I be skipping a valuable learning experience that I’ll need later on?
After reflecting on these points, if you realize that doing the work yourself will help you gain or sharpen an important skill you’re currently missing, then grit your teeth and do it without heavy AI assistance. On the other hand, if you’re confident that you’ve already mastered what you’d learn from it, then using AI as a helper is less of a problem.
Advice 2: Watch out for High AI Dependency
Skills can become rusty. If you haven’t done any academic research in 10 years, chances are you won’t be as good at it as someone who’s been doing it non-stop that whole time. In the same vein, you need to use the skills and knowledge you acquire regularly, or else they’ll fade away. Sure, there’s some information you don’t need to memorize because you can always look it up, but there are certain skills you really do need to keep sharp in order to critically scrutinize AI output, now and in the future.
When you’re a student, you often want to get through your readings and assignments as quickly as possible. That simple fact lays the groundwork for what I call High AI Dependency — a situation where you use AI to do any and all of the cognitive work that you really ought to be doing yourself. Once you start down this path, you’re setting yourself up to become over-reliant on AI, without having built the underlying skills to use the AI appropriately.
And don’t think this couldn’t happen to you. There’s already research showing that in several real-world cases involving AI assistance (for example, the COMPAS recidivism algorithm used in criminal justice), the human decision-makers ended up over-relying on and over-trusting the AI’s outputs.
Anthropic’s recent study of how university students in the U.S. are using its Claude AI model should make everyone sit up and pay attention. It appears that a very high percentage of students are outsourcing higher-order cognitive tasks to AI. (See the chart below.)
And the situation doesn’t look any better when you examine how students interact with AI. The study’s results suggest that a pretty large fraction of students are essentially handing over all the mental work to the AI. (See another chart below.)
Given what AI can already do well, I think students should be spending more time building their own higher-order thinking skills — since those skills are critical to developing the kind of hunch I described in Advice 1. Unfortunately, Anthropic’s data suggests that it’s exactly those skills that students are outsourcing the most to AI. Don’t be one of these students; it’s a high-risk move that could easily lead to overdependence and eventually make you irrelevant.
*Important side note: Anthropic did mention several caveats to their study, and similarly there are plenty of caveats to my use of their findings here. Still, the results align with what I’ve observed and heard anecdotally about how students are using AI.
In the interest of balance, I should also mention that the Financial Times recently reported more reassuring results from a study of how UK undergraduates are using AI. (See below.)
However, the researchers of that study didn’t specify whether those students were using AI in a collaborative (assistant-like) way or in a more direct, do-it-for-me way. And regardless, the trend they reported is still pretty concerning. Their data showed that between November 2023 and November 2024, there was a big jump in the number of UK undergrads using AI for all sorts of tasks. If this trend continues, High AI Dependency may soon become a widespread reality for a lot of those students.
Advice 3: Use AI as an “always on” advisor that you can consult
I truly believe AI can be extremely useful to an undergrad if you treat it like a relatively good professor who knows a bit about a lot of subjects and is always in the mood to help. Let’s call this imaginary person Professor T. Professor T isn’t some legendary genius, but she’s quite good at what she does, very well-read, and always eager to help.
Now, if you had access to such a professor in real life, how would you interact with them (assuming you’re being reasonable)? You’d probably go to them with all kinds of questions and requests, but you’d also stay somewhat skeptical about their answers (after all, they’re not all-knowing, and nobody is perfect). And there are definitely certain requests you just wouldn’t make of them — because c’mon, who would ask a professor to do that?!
For example, it’s perfectly sensible to ask Professor T for general direction on a topic or to explain something you haven’t understood properly. Given that Professor T is unusually kind and helpful, you might even ask her to review your work for substantive or stylistic weaknesses and to advise you on how to fix those issues. But you wouldn’t ask Professor T to write an entire essay for you, or to read a journal article and summarize it for you. So, you should treat AI the same way.
If you’re trying to be a good learner, you’ll often want far more guidance than your university’s staff have time to provide. AI can go a long way toward filling that gap. One of the most fundamental ways OpenAI’s Deep Research model has transformed my own research life is by acting as a smart sparring partner that I can bounce all my doubts off. Now I can immediately ask the AI for its perspective on issues that used to preoccupy me for ages (for example, “Is this really the best phrase to use here? Will the rest of the field easily recognize it?”). Getting instant feedback like that has given me the confidence to move forward with my work much more quickly.
I’ll add that this point is especially important for students in the Global South, where high-quality professor guidance is often in very short supply. Having gone to school and worked with many students from the Global South, I know how hard it is to find after-class help from lecturers and professors. I genuinely think cutting-edge AI could improve that situation by at least 30–40%. So, dear struggling Global South student, make the most of AI to improve your skills and the quality of your output.
In Conclusion
AI is going to be like mobile phones for undergrads: an everyday tool that can help but can also do harm. Not so long ago, students got by just fine without smartphones, but today a student without one could be at a real disadvantage. At the same time, there’s now fairly persuasive evidence that smartphones have stunted university students’ ability to learn. Pretty soon, not using AI in your coursework might feel as impractical as trying to do research without the internet. Ignoring or avoiding it isn’t really an option. Instead, you have to learn how to integrate it into your academic life in a smart way. Used wisely, AI can hugely enhance your learning and productivity; used carelessly, it can become a crutch that damages your growth. The bottom line: embrace AI as the powerful assistant it is, but make sure you’re still doing the learning, thinking, and skill-building you’ll need to stay competitive. If you strike that balance, you’ll be just fine (and probably way ahead of the curve) in this new era of higher education.