Embrace It, Don't Shame It: Using AI to Enhance Student Learning and Problem Solving
Universities panicked when ChatGPT arrived. Bans, detection tools, fear of cheating. But the real risk was never the tools — it was failing to teach students how to use them well. Here's how we went from cautious observers to building an entire course around AI-assisted learning.

When ChatGPT launched in late 2022, universities around the world had essentially the same reaction: panic.
Within weeks, institutions were issuing emergency guidance. Some banned AI tools outright. Others rushed to adopt detection software — Turnitin added an “AI writing detector,” GPTZero appeared overnight, and suddenly every assignment submission was suspect. The fear was visceral and widespread: students would cheat on an industrial scale, critical thinking would collapse, and the entire foundation of academic assessment would crumble.
We understood the concern. We shared some of it. As academics who’d spent decades in research and teaching, we could see how a tool that generates fluent text on demand could be misused. And of course some students would take shortcuts — that’s been true of every tool from calculators to Wikipedia to Google. The question was never whether AI could be misused. The question was what to do about it.
Most institutions chose restriction. Ban the tools. Detect the cheaters. Return to handwritten exams. Treat AI use as a form of academic dishonesty and build your assessment strategy around catching it.
We went the other direction.
From Caution to Conviction
We didn’t jump straight to embracing AI in the classroom. Like most academics, we spent 2023 watching, experimenting, and thinking carefully about what these tools meant for teaching and research. We used them in our own work — writing, coding, data analysis — and it became obvious very quickly that they were genuinely transformative when used well. Not as answer machines, but as thinking tools. As collaborators that could help us iterate faster, catch blind spots, and get past mechanical bottlenecks to focus on the work that actually mattered. We wrote about this shift in our own workflow in 95% of My Work Happens in VS Code — the same AI-assisted approach we now teach our students.
But we also saw the other side. Students submitting AI-generated essays with no understanding of the content. Colleagues spending more time policing AI use than teaching their subject. Detection tools flagging non-native English speakers as “AI-written” while missing actual AI output. An arms race that nobody could win, and that was making everyone — instructors and students alike — anxious, adversarial, and dishonest.
By 2024, we’d started integrating AI tools more deliberately into our teaching. Not just allowing them, but actively showing students how to use them — how to prompt effectively, how to verify output, how to maintain their own voice and judgement while working with an AI assistant. The results were striking. Students who learned to use these tools well didn’t become lazier thinkers. They became better ones.
Now, in 2026, we’ve built an entire course around this philosophy. The course is called Practical AI for Behavioural Science, and it doesn’t just permit AI use — it requires it. Every student uses ChatGPT, Claude, GitHub Copilot, and Gemini throughout the semester. They submit their complete, unedited chat histories alongside their written assignments — because the point was never to produce AI output. The point was to develop genuine understanding, critical thinking, and problem-solving skills. The AI is how they get there faster.
The Stigma That Remains
Here’s what frustrates us. In research, the tide has turned. Academics across disciplines are increasingly using LLMs for literature review, data analysis, writing, and coding. Funding bodies are starting to acknowledge AI-assisted workflows. Journals are developing disclosure frameworks. The conversation has moved from “should we use these tools?” to “how should we use them responsibly?”
But in teaching, the stigma persists. Many institutions still treat AI use in student work as something to be prevented, detected, and punished. Even where policies have softened from outright bans to “permitted with disclosure,” the underlying message is often the same: AI use is suspicious. It’s a shortcut. It’s probably cheating, even if we can’t prove it.
This needs to change. Not because AI tools are perfect — they’re not. Not because misuse doesn’t happen — it does. But because the cost of treating these tools as threats is far greater than the cost of teaching students to use them well. Every semester spent banning AI is a semester where students don’t learn the skills they’ll need in every job they’ll ever have. Every hour spent on detection is an hour not spent on pedagogy.
The title of this post isn’t clever branding. It’s a genuine plea. Embrace these tools. Teach students to use them. Stop shaming them for doing what every working professional is already doing.
AI as a Thinking Partner, Not an Answer Machine
The biggest misconception driving the fear of AI in education is that it gives students the answers. It can — if you let it. But that’s not a problem with the tool. It’s a problem with how students are taught to use it.
When a student types “write me an essay about cognitive dissonance” into ChatGPT, they learn nothing. When a student types “I’m arguing that cognitive dissonance theory underestimates the role of social context — what are the three strongest counterarguments to my position, and which papers support them?” — they’re doing real intellectual work. They’re stress-testing their own thinking. They’re using the AI as a sparring partner, not a ghostwriter.
This is the shift that matters. The AI isn’t doing the thinking for them — it’s creating an environment where they think more, and better, than they would have on their own. A student working with an AI assistant asks more questions, considers more alternatives, gets unstuck faster, and spends their time on the hard parts — interpretation, evaluation, judgement — instead of getting bogged down in mechanics.
The key is teaching them how. Left to their own devices, most students default to “give me the answer.” With a framework and practice, they learn to use AI the way a good researcher uses a knowledgeable colleague: to pressure-test ideas, catch blind spots, generate alternatives, and iterate toward something better than either could produce alone. We’ve written more about this in our prompt engineering guide — the techniques apply equally to students and researchers.
Like any tool, AI can be used well or poorly. A calculator didn’t destroy mathematical thinking — it freed students to tackle harder problems. AI tools are the same, but the stakes are higher and the possibilities are broader. The institutions that figure this out first will produce the most capable graduates.
Critical Thinking Gets Stronger, Not Weaker
The original fear was that AI would erode critical thinking. We’ve seen the opposite.
When students are required to verify AI output — check whether the citations actually exist, confirm the statistics make sense, evaluate whether the reasoning holds up — they develop verification habits they never had before. Pre-AI, a student could copy a claim from a textbook and never question it. Now, because they know the AI might be wrong, they check. They learn to ask: Is this actually true? Where’s the evidence? Does this make sense given what I know about the domain?
This is the verification mindset, and it transfers far beyond AI interactions. Students who learn to critically evaluate LLM output become better at critically evaluating all sources — papers, textbooks, news articles, their own assumptions. The irony is that AI’s imperfections make it a better teaching tool than a textbook in some ways: it forces students to think critically because they can’t trust it blindly.
The framework we use to structure this is the LLM Problem-Solving Loop — two nested loops that keep the human in the driver’s seat.
The outer loop is the thinking process you’d follow regardless of whether AI existed:
- Plan — What are you trying to achieve? What’s the research question? What does a good answer look like? Define your objectives and required outputs before touching any tool.
- Execute — Do the work. This is where the inner loop comes in — the AI-assisted part of the process.
- Evaluate — Does the result actually answer your question? Is it correct? Does it make domain sense? Apply your own knowledge and judgement.
- Document — What did you do, what worked, what did you learn? Record your methods and reasoning — the same discipline you’d apply to any research process.
The inner loop is how you work with the AI:
- Engineer — Give it context: your data structure, your goals, your constraints, what you’ve already tried, and what went wrong last time. The more specific the input, the more useful the output. This is prompt engineering and context engineering in practice — and it’s really just the skill of articulating your problem clearly enough that someone (or something) else can help you solve it. Crucially, ask the AI for a plan — tell it what you want to achieve and ask it to outline an approach before it generates anything.
- Plan — Review the AI’s proposed approach before any code is written or output is generated. Does the plan make sense? Is it using the right methods, the right libraries, the right steps? This is where your domain knowledge matters most. A few minutes reviewing a plan can save you from going down entirely the wrong path — and it’s a skill that transfers directly to research: evaluating an approach before committing to it. If the plan isn’t right, redirect now.
- Generate — Once you’re satisfied with the plan, ask the AI to execute it. This might mean generating code, writing text, producing a visualisation, or building an analysis pipeline. The key is that generation follows a reviewed plan, not a blind prompt. The difference is enormous — both in the quality of the output and in how much the student learns from the process.
- Verify — Read what comes back critically. Don’t just copy and paste — look at what it’s doing. Run the code. Check the output against what you know. Do the numbers make sense? Do the citations exist? Does the logic hold up? This is where critical thinking lives.
- Refine — If it’s not right, figure out what went wrong and at which level. Sometimes the output is wrong because the plan was wrong — go back to Plan. Sometimes the plan was fine but the AI made an implementation mistake — go back to Generate with a correction. Each refinement is a learning opportunity — you’re developing your understanding of the problem as you iterate.
The inner loop runs two to five times per task. That’s not failure — that’s the process. Teaching students that iteration is normal, and that refining a prompt based on a bad result is a skill, is one of the most important things you can do.
The critical rule: Never use LLM output without verification. You are the researcher. The AI is a tool.
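What verification looks like in practice can be made concrete. In the sketch below, the function stands in for AI-generated code (a hypothetical example, not course material), and the checks are the student’s own — running it on cases where the answer is already known, then asking whether the result makes domain sense:

```python
# Suppose the AI generated this function for us (the Generate step).
# Hypothetical stand-in for AI output, not actual course code.
def mean_reaction_time(times_ms):
    """Average reaction time in milliseconds."""
    return sum(times_ms) / len(times_ms)

# Verify step: run it on cases where we already know the answer.
assert mean_reaction_time([300, 500]) == 400   # simple known case
assert mean_reaction_time([250]) == 250        # single trial

# Domain sanity check: simple human reaction times are roughly
# 150-1500 ms, so a result far outside that range signals a problem
# with the code or the data.
result = mean_reaction_time([320, 410, 385, 290])
assert 150 <= result <= 1500
print(result)
```

The checks take seconds to write, but they force exactly the habit the loop is designed to build: never accept output you haven’t tested against something you already know.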
Here’s the deeper point. Large language models were trained on the accumulated output of human intelligence — billions of pages of text, code, research, and reasoning. What they produce is, by definition, output. But learning doesn’t happen in the output. Learning happens in the process — in the crafting of context, the evaluation of a plan, the verification of results, the decision about what to try next. The loop is designed so that every step of the process requires the student to think: to articulate what they want, to judge whether an approach makes sense, to check whether the result is correct, to decide how to improve it. The AI generates outputs. The student owns the process. And it’s the process — not the output — that builds understanding, develops critical thinking, and teaches genuine problem-solving skills.
Learning to Ask Better Questions
One of the most underappreciated effects of working with AI is that it forces students to articulate what they actually want. A vague prompt gets a vague answer. To get something useful, you have to be specific about your question, your context, your constraints, and your criteria for a good response.
This is prompt engineering — and it’s really just structured thinking with a feedback loop. When a student learns to write a good prompt, they’re learning to:
- Define their problem precisely
- Identify what information is relevant and what isn’t
- State their assumptions explicitly
- Specify what “good” looks like before they start
These are exactly the skills we try to teach in research methods courses, seminar discussions, and thesis supervision — except now there’s an immediate, tangible feedback loop. Write a bad prompt, get a bad result, figure out why it was bad, improve it, see the result improve. The learning cycle is fast and concrete in a way that traditional academic feedback rarely achieves. We explore this idea more in Prompt Engineering Is the Skill Nobody Teaches — the same principles apply whether you’re a student or a senior researcher.
Students also learn to give the AI rich context — their data descriptions, their prior attempts, the specific errors they’re encountering, the domain knowledge that matters. This is context engineering, and it maps directly onto the skill of writing a good methods section, briefing a collaborator, or explaining your work to a supervisor. If you can’t tell the AI what you’re doing and why, you probably don’t understand it well enough yourself. The AI responds to exactly what you give it — it doesn’t know what you’ve been working on, what matters to you, or what “good” looks like in your field. You have to provide that context, and learning to do so is itself a form of deeper engagement with your own work.
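What “rich context” means can be made tangible with a template. The fields below are illustrative (a hypothetical sketch, not the course’s actual materials) — the point is that each one forces the student to articulate something about their own work before asking for help:

```python
# Hypothetical context-engineering template: each field makes the
# student state their goal, data, attempts, and constraints explicitly
# before the AI is asked for anything.
def build_prompt(goal, data_description, attempted, error, constraints):
    return (
        f"Goal: {goal}\n"
        f"Data: {data_description}\n"
        f"What I tried: {attempted}\n"
        f"What went wrong: {error}\n"
        f"Constraints: {constraints}\n"
        "Before generating anything, outline your proposed approach."
    )

prompt = build_prompt(
    goal="Compare reaction times across two conditions",
    data_description="CSV, 40 participants, columns: id, condition, rt_ms",
    attempted="A t-test in Python, but I'm unsure the assumptions hold",
    error="Residuals look heavily right-skewed",
    constraints="Must justify the test choice in the write-up",
)
print(prompt)
```

Notice that filling in the template is itself the deeper engagement described above: a student who can’t complete the “what went wrong” field doesn’t yet understand their own problem.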
Removing Bottlenecks, Not Removing Thinking
In our course, psychology students with no coding background are building machine learning pipelines within weeks. Not because AI writes the code for them — but because AI coding assistants break down the technical barriers that would otherwise make this impossible.
Previously, teaching ML to non-coders meant spending most of the semester on programming fundamentals before you could get to the interesting part — the research questions, the model evaluation, the interpretation. Students spent so much time fighting syntax errors that they never developed intuition for the science.
Now, the AI handles the syntax. Students focus on the questions that actually matter: Is this the right model for this question? Is the data appropriate? What does this result mean? What are the limitations? How would I explain this to someone in my field? The coding assistant gets them past the technical scaffolding and straight to the problem solving and critical thinking that the course is actually about.
This doesn’t mean they don’t learn to code. They do — through exposure, through reading what the AI generates, through modifying it, through debugging it when it doesn’t work. But the coding was never the point. The thinking was the point. The AI let us get to the thinking faster.
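To make the division of labour concrete, here is a toy example of the kind of thing an AI assistant can generate in seconds (a hypothetical sketch, not course code — the data and model are invented for illustration). The syntax is the AI’s contribution; the questions in the comments are the student’s:

```python
import math

# A toy 1-nearest-neighbour classifier, the sort of code an AI
# assistant produces instantly. The student's work is the questions:
# Is 1-NN the right model here? Is this data appropriate? What would
# this prediction mean, and would it survive real, messier data?

def predict(train, label_of, point):
    """Return the label of the nearest training point (1-NN)."""
    nearest = min(train, key=lambda p: math.dist(p, point))
    return label_of[nearest]

# Invented toy data: (hours_studied, hours_slept) -> exam outcome.
train = [(1, 4), (2, 5), (8, 7), (9, 8)]
label_of = {(1, 4): "fail", (2, 5): "fail", (8, 7): "pass", (9, 8): "pass"}

# Interpretation is the student's job: the nearest neighbour to
# (7, 6) is (8, 7), so the model predicts "pass" -- but does a
# four-point training set justify any confidence in that?
print(predict(train, label_of, (7, 6)))
```

The code is trivial; the evaluation is not. That asymmetry is exactly why removing the syntax bottleneck shifts student effort toward the science.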
This principle applies far beyond coding. In any discipline, AI can remove mechanical bottlenecks — formatting, literature searching, drafting initial structures, generating examples — so students can spend more time on the intellectual work that actually develops expertise. We’ve seen the same effect in our own work: once we moved everything into VS Code with AI assistants, the time we spent on overhead collapsed and the time we spent on actual thinking expanded. The same shift applies to students. The question isn’t “can students do this without AI?” It’s “what can students learn to think about when the mechanical overhead is reduced?”
The institutions that are still focused on preventing AI use are, whether they realise it or not, choosing to keep those bottlenecks in place. They’re protecting the overhead, not the learning.
Transparency Over Surveillance
The stigma around AI use in education is often reinforced by how institutions frame it: as something to be monitored, detected, and controlled. Even well-intentioned policies carry an undercurrent of suspicion. “You may use AI, but…” — and the “but” is always about limits, not about learning.
We take the opposite approach. If we want students to be honest about their AI use, we have to go first. The course materials themselves were designed and developed with Claude, ChatGPT, GitHub Copilot, and Gemini, and this is stated openly. We use these tools for virtually all aspects of our work — research, writing, coding, data analysis, course development — and we tell our students that. We even build all our lecture slides in HTML with AI assistants rather than using PowerPoint. Requiring students to disclose their AI use while pretending we don’t use the same tools is hypocritical. Students see through it immediately.
In our course, AI disclosure isn’t a confession — it’s a professional practice. Students specify which tools they used, what tasks the AI performed, what they verified and how, and what they contributed beyond what the AI generated. This is the same kind of transparency we expect in research methodology sections. It’s good scientific practice, and it normalises honest engagement with these tools rather than driving it underground.
For the written assignment, students submit their complete, unedited chat histories alongside their work. Not as a surveillance mechanism — but because the process is part of the assessment. Forty percent of the rubric grades the quality of the AI interaction: how well they prompted, whether they iterated, whether they pushed back when the AI was wrong, whether they verified claims. A student who copies and pastes LLM output with no thought demonstrates no skill. A student who engages critically, iterates thoughtfully, and produces something genuinely theirs — with AI assistance visible throughout — demonstrates exactly the skills the course aims to develop.
A Policy Isn’t a Pedagogy
Many institutions are responding to AI by writing policies: “You may use AI tools, but the work must be your own.” This sounds reasonable. In practice, it’s almost useless.
Students have no framework for what “the work must be your own” means when an AI helped produce it. How much editing makes it “yours”? Is using AI for research okay but not for writing? What about using it to check your grammar? The ambiguity creates anxiety, inconsistency, and a lot of secret use that nobody talks about. The stigma isn’t removed — it’s just made vague.
The alternative is to teach AI use as a skill:
- Give students a structured framework (like the LLM Problem-Solving Loop)
- Show them what good AI interaction looks like and what bad AI interaction looks like
- Grade the process, not just the product
- Create opportunities for students to demonstrate genuine understanding
You don’t need to teach a whole course on AI to do this. The framework can be introduced in a single lecture and applied to any discipline. The principle of assessing how students work with AI — not just what they produce — works for essays, lab reports, design projects, case studies, anything.
The choice facing educators isn’t between embracing AI and maintaining standards. It’s between teaching students to use these tools well and pretending they don’t exist. One of those paths produces graduates who can think critically, verify information, and work effectively with AI. The other produces graduates who learned to hide their AI use from detection software.
This Is Just the Beginning
This is the first time we’re running this course in its current form. Semester 1, 2026, started this past week. Everything we’ve described is the design — not the results.
We’re planning two follow-up posts: one mid-semester, once we’ve seen how students actually engage with the framework, and one at the end, with reflections on what worked, what didn’t, and what we’d change. We’ll share how students learn to work with AI and whether the students who engaged most deeply with the LLM Problem-Solving Loop are the ones who performed best overall.
The course repository is open-source on GitHub. We’re releasing materials week by week as the semester progresses — the full set of lectures, labs, assessments, rubrics, and guides will be available by June. If you’re an educator thinking about how to handle AI in your teaching, follow along and take what’s useful.
The stigma around AI in education served a purpose in the early days — it bought institutions time to think. But we’ve had that time now. The tools are here, the students are using them, and the evidence is mounting that teaching AI skills produces better outcomes than banning them. It’s time to stop shaming and start embracing.
The students started this past week. Let’s see how it goes.
Michael Richardson, Professor, School of Psychological Sciences, Faculty of Medicine, Health and Human Sciences, Macquarie University
Rachel W. Kallen, Professor, School of Psychological Sciences, Faculty of Medicine, Health and Human Sciences, Macquarie University
Dr Ayeh Alhasan, School of Psychological Sciences, Faculty of Medicine, Health and Human Sciences, Macquarie University
AI Disclosure: This article was written with the assistance of AI tools, including Claude. The ideas, opinions, experiences, and course design described are entirely our own — the AI helped with drafting, editing, and structuring the text. We use AI tools extensively and openly in our research, teaching, and writing, and we encourage others to do the same. Using AI well is a skill worth developing, not something to hide or be ashamed of.
It’s also worth acknowledging that the AI models used here — and all current LLMs — were trained on vast quantities of text written by others, largely without explicit consent. The ideas and language of countless researchers, educators, and writers are embedded in every output these models produce. Their collective intellectual labour makes tools like this possible, and that contribution deserves recognition even when it can’t be individually attributed.