How are medical students actually using AI in their research projects?


Artificial intelligence (AI) tools are now widely used by students across higher education. The pace of development has made it challenging for both students and educators to keep up with how these tools are actually being used in learning and assessment.

In this post, we share what we have observed as medical educators in the University of Sydney Doctor of Medicine (MD) program after introducing a structured process for students to declare their use of generative AI tools in their research projects. Understanding how students are actually using these tools is a critical first step for educators looking to guide AI use effectively and responsibly in research training.

The MD research project

All students in the University of Sydney MD program complete a mandatory individual research project. The project runs over a 14-week full-time block and is designed to develop students’ research literacy and evidence-based practice skills.

Students work with an academic supervisor to complete a research project. These projects vary widely in design and may include clinical audits, systematic reviews, laboratory studies, educational research, or small clinical or public health studies. Students are required to produce a written research report and present their work in an oral presentation. While these projects are not higher degree by research programs, they are comparable in scope to Honours-level research projects and capstone research experiences embedded in other health professional degrees.

Allowing AI use — with transparency

From 2024, the MD program allowed students to use generative AI tools in their research projects, provided that two conditions were met:

First, students had to comply with the University’s AI guidance and academic integrity policies. In practical terms, this means students must remain responsible for the accuracy and originality of their work, must not submit AI-generated content as their own without acknowledgement, and must ensure that AI tools are not used in ways that breach confidentiality, privacy, or research ethics.

Second, students were required to declare each use of AI in a mandatory online reporting form. Students logged the tool they used and briefly described what they used it for during their project.
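To make the idea concrete, each declaration can be thought of as a small structured record: which tool, when, and for what. The Python sketch below is a hypothetical schema only; the field names are ours and do not reflect the actual University reporting form.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUseDeclaration:
    """One self-reported instance of generative AI use (illustrative schema)."""
    student_id: str    # de-identified student code (hypothetical field)
    declared_on: date  # date the use was logged
    tool: str          # name of the AI tool used
    task: str          # brief free-text description of what it was used for

# An entry of the kind a student might log:
entry = AIUseDeclaration(
    student_id="MD-0042",
    declared_on=date(2025, 3, 14),
    tool="general-purpose LLM",
    task="Summarised three background papers for the literature review",
)
print(entry.tool, "-", entry.task)
```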

This approach enabled AI use while also providing transparency into how students engaged with these technologies.

How many students reported using AI?

Across two consecutive cohorts, we observed a rapid increase in reported AI use.

In 2024, 145 of 255 students (57%) reported using at least one generative AI tool during their research project.

In 2025, this increased to 217 of 275 students (79%).

Students also reported multiple instances of use during their project. Among those who used AI tools, the number of reported uses varied widely:

  • 1–27 uses per student in 2024

  • 1–35 uses per student in 2025

These reports capture individual instances where a student used an AI tool for a specific task (for example, summarising a paper, checking statistical approaches, or editing text). Because the data rely on self-reported declarations, actual use is likely to be higher.
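For readers who want to check the arithmetic, the short Python sketch below reproduces the adoption percentages from the cohort counts reported above (rounded to whole percentages).

```python
# Headline adoption figures from the two cohorts reported above.
cohorts = {
    2024: (145, 255),  # (students who reported AI use, cohort size)
    2025: (217, 275),
}

for year, (used_ai, total) in cohorts.items():
    print(f"{year}: {used_ai}/{total} = {used_ai / total:.0%}")

# Prints:
# 2024: 145/255 = 57%
# 2025: 217/275 = 79%
```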

What AI tools were students using?

In both years, most reported AI use involved large language models (LLMs).

LLMs accounted for:

  • 84% of reported tool uses in 2024

  • 90% of reported tool uses in 2025

Students most commonly reported using general-purpose LLMs. By 2025, we also observed increasing diversity in the tools used. Across both cohorts, students reported using AI-enabled tools for several research tasks:

  • Literature search and referencing
  • Writing and editing
  • Visualisation and design
  • Transcription and translation

What were students using AI for?

To better understand how students used AI, we developed a simple framework that maps AI use across the phases of the research project and two broad types of interaction with AI tools: 1) asking the tool to do a task or 2) using it to help understand something.

Examples of how students reported using AI during their research projects:

| Research project phase | AI used to DO tasks (task execution/automation) | AI used to HELP understanding (information and guidance) |
| --- | --- | --- |
| Project conception | | Research background literature; refine research questions |
| Project planning | Generate timelines; organise focus groups | Explore ethical considerations |
| Literature review | Translate papers; summarise articles; generate reference lists | Identify search terms; locate papers; learn reference management tools |
| Methodology and data collection | | Explore research methods and statistical approaches |
| Data analysis | Build tables; perform calculations; transcribe interviews; assist coding | Understand statistical methods; learn software (e.g. Excel, statistical packages) |
| Writing and presentation | Edit text; generate figures or images | Generate outlines; check clarity and structure |
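To illustrate how declarations can be mapped onto this framework, here is a minimal Python sketch that tallies coded declarations into the phase-by-interaction grid. The example records are invented, and the sketch assumes each free-text declaration has already been coded by phase and interaction type; it is not the analysis pipeline we used.

```python
from collections import Counter

# Phases and interaction types from the framework above.
PHASES = [
    "Project conception", "Project planning", "Literature review",
    "Methodology and data collection", "Data analysis",
    "Writing and presentation",
]
TYPES = ["DO", "HELP"]  # task execution vs. information and guidance

# Invented example records: each declaration coded as (phase, type).
coded = [
    ("Literature review", "DO"),         # e.g. "summarised articles"
    ("Literature review", "HELP"),       # e.g. "identified search terms"
    ("Data analysis", "HELP"),           # e.g. "understood statistical methods"
    ("Writing and presentation", "DO"),  # e.g. "edited text"
]

tally = Counter(coded)
for phase in PHASES:
    counts = {t: tally[(phase, t)] for t in TYPES}
    print(f"{phase:<32} DO={counts['DO']}  HELP={counts['HELP']}")
```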

Why does this matter?

These findings are based on mandatory self-reported declarations and, therefore, likely underestimate actual use. Even so, they provide a useful snapshot of how medical students are integrating AI tools into research training.

Two patterns stand out:

  1. AI adoption among students is rapid and widespread. Within a single year, we saw both a large increase in the proportion of students using AI and a growing diversity of tools being used.
  2. Students appear to use AI tools not only to generate outputs but also as informal learning assistants, particularly when trying to understand research methods, statistics, or unfamiliar software.

This aligns with emerging literature suggesting that students often use generative AI as a form of on-demand academic support or “cognitive partner” when navigating complex learning tasks, and that adoption of these tools among university students has been rapid and widespread (Baek et al. 2024; Boscardin et al. 2024). Adoption of AI tools by researchers is also widespread (Salman et al. 2025).

For educators, this raises important questions. If students are already integrating AI into research workflows, ignoring this behaviour may increase risks around inappropriate use, research integrity, and misunderstanding of methods. Structured guidance and transparency processes are imperative.

What we have learned so far

Several insights emerged from tracking student AI use in this way.

  • AI use in student research is already widespread and increasing rapidly.

  • Large language models dominate, but students also use a growing ecosystem of specialised AI tools for literature searching, writing support, and visualisation.

  • Students often choose tools independently, including older or externally sourced tools rather than university-supported platforms.

  • Many uses involve guidance rather than automation, suggesting students are using AI to help them understand research processes.

What this could mean for your teaching

Although this project focused on medical students, the patterns are likely relevant across many disciplines that involve student research or project work.

Some practical approaches educators might consider include:

  • Introduce AI transparency processes: Simple reporting mechanisms, such as short AI use declarations, can provide valuable insights into how students are using these tools while encouraging responsible use.
  • Teach students how to use AI critically: Many students used AI to understand research methods or statistics. This suggests a need to teach students how to evaluate AI explanations and verify outputs rather than accepting them uncritically.
  • Provide explicit guidance on ethical and responsible use: Research projects often involve sensitive data, human participants, or ethical approval processes. Clear guidance on where AI tools can and cannot be used is essential.
  • Recognise AI as part of the research workflow: Rather than treating AI use purely as a risk, educators may wish to explicitly address how AI tools fit into research processes such as literature exploration, data organisation, and drafting.

Acknowledgements

This work was conducted by the MD research education team, including Kellie Charles, Rajneesh Kaur, Richmond Jeremey, Rebecca Reynolds and Sally Middleton. The project was approved by the University of Sydney Human Research Ethics Committee (HREC #2023-373).
