Adapting HDR supervision practice at the speed of trust with generative AI


Higher Degree by Research (HDR) journeys are often framed as a sort of personalised apprenticeship in disciplined inquiry: sifting texts manually, wrestling with raw data, and drafting and redrafting arguments at 2am until the logic clicks. Or until Reviewer 2 relents.

Since late 2022, generative AI has begun to reshape that apprenticeship.

In conversations across Film, Media, Social Work, Engineering and Education at the University of Sydney, supervisors and candidates perceive two co‑existing worlds:

  • Pre‑ChatGPT cohorts – who honed their craft through slow transcription, manual coding, and longform writing, developing a visceral sense of data provenance and scholarly voice.
  • Post‑ChatGPT cohorts – who may see AI tools as default partners, or even digital mentors, for brainstorming, code generation, language refinement and mental health support.

The resulting clash of research practices is giving rise to new conversations about trust, boundaries, meta-learning and generational splits, in supervision and in research practice itself. Rather than casting AI as saviour or saboteur, many supervisors and candidates approach it as a tool whose advantages arrive twinned with new obligations. Each impressive shortcut can bring an equally pressing need for verification, attribution, rebalancing for equity, and meaning-making.

How, then, do we set every gain beside a companion discipline that keeps research both personally meaningful and authentic in a fast-changing future?

Two sides of a coin: efficiency opens the door to meta-learning

When AI shaves minutes off a task, the time gained is not a licence to race ahead.

Each task noted below opens a door, and the paired supervisory strategy pushes candidates through it into deep meta‑learning. That is easier said than done, so we present each as two faces of the same coin: when they are balanced, every AI boost has a counter‑move that pauses just long enough to keep scholarship reflective and inclusive.

  • Apprenticeship accelerator ⇄ Conceptual depth keeper: A dummy dataset dropped into ChatGPT returns a runnable Python/MATLAB script in seconds. The supervisor slows the moment, walking line by line with the student: What assumption hides here? Which alternative test could we run? What if we change this part of the script? (A minimal sketch of this kind of walkthrough follows this list.) Speed translates into a deeper statistics lesson, preserving disciplinary rigour while embracing efficiency.
  • Cognitive sparring partner ⇄ Provenance guardian: Candidates outline materials, methods and constraints so the LLM can promptly propose experimental angles or writing scaffolds, developing self-management skills in the process. Some candidates use tools such as NotebookLM’s podcast feature to digest complex literature surveys, turning dense academic topics into conversational overviews before deep dives. But every prompt also creates an invisible (and perhaps unreliable) co‑author. Supervisors facilitate the logging of prompts and decisions in a shared “AI appendix” to keep the ideation playground open while leaving a clear audit trail of intellectual originality.
  • Language polisher ⇄ Authorial‑voice cultivator: Starting with a blank page is hard. We have all seen how quickly LLMs articulate scattered ideas, structure topics, reframe paragraphs and summarise readings, which can free candidates for higher‑order argumentation, especially in complex projects. Yet when AI provides the first words, it can also narrow the field of possibilities, subtly shaping the candidate’s conceptual boundaries before their own ideas have taken form. To counter this, supervisors can run a routine “writing re-calibration” during progress catch-ups, reviewing key paragraphs with candidates to compare the AI draft and the candidate’s revision. The exercise helps students discern a voice that aligns with their scholarly identity (and their participants’) while developing attribution practices and ensuring academic integrity alongside language fluency.
  • Efficiency leveller ⇄ Equity monitor: Digitally fluent candidates often glide through foundational tasks, while peers still finding their footing with generative tools risk feeling behind. In some respects, we’re also seeing this gap close quickly as AI capabilities improve and familiarity grows. Supervisors can accelerate this by setting up reverse‑mentoring pairs: an “AI‑heavy” and an “AI‑light” candidate swap strategies, share dilemmas, and reflect openly. Often, the initially reluctant student becomes the most enthusiastic advocate once they experience genuine acceleration in their work. The exchange maintains equity while normalising tool use.
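To make the first pairing concrete, here is a minimal, hypothetical sketch (in Python, with invented dummy data and an illustrative choice of test) of the kind of script a chatbot might hand back, annotated with the questions a supervisor could pause on:

```python
# Hypothetical sketch of the kind of AI-generated script the first pairing describes.
# The dummy data, variable names and choice of test are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Dummy dataset: two groups of simulated measurements
control = rng.normal(loc=5.0, scale=1.2, size=30)
treatment = rng.normal(loc=5.8, scale=1.2, size=30)

# Hidden assumption worth pausing on: an independent t-test presumes roughly
# normal, equal-variance groups. Does that hold for the candidate's real data?
t_stat, p_value = stats.ttest_ind(control, treatment)
print(f"t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# One alternative test to discuss: Mann-Whitney U drops the normality assumption.
u_stat, p_value_mw = stats.mannwhitneyu(control, treatment)
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {p_value_mw:.4f}")
```

Walking through it line by line, a supervisor might ask why an independent t-test was chosen at all, whether its assumptions hold for the real data, and what changes when the alternative test at the end is run instead.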

Charting new territories together at the “speed of trust”

Supervision literature has long framed the relationship as dialogic and negotiated; AI pushes us to sharpen those principles. Echoing the National AI Centre’s reminder that progress must move at the “speed of trust” (TEQSA’s June webinar, “Education to industry: how gen AI is shaping tomorrow”), we are seeing several HDR mentor–mentee teams now:

  • Open with provenance: As early as possible, jointly map where AI may enter the workflow and how each instance may be logged, carefully guided by reliable and relevant resources. Recent frameworks for AI-integrated assessment offer practical approaches where students submit both their work and annotated AI dialogues, making the invisible process of AI-assisted learning visible and assessable.
  • Surface power gaps: Recognise who feels fluent and who feels wary; organise reverse‑mentoring pairs so confidence travels both ways.
  • Protect data dignity: Use placeholder or synthetic datasets when testing public models, then swap real data back offline (see the sketch after this list).
  • Keep agreements alive: As guidelines and updates unfold, several recommendations point to living agreements rather than one‑off rules. Revisit the shared “AI appendix” at every milestone, and jointly re-examine shared practice as tools and attitudes keep shifting.
  • Stay field-relevant: With roughly 10% of global R&D investment now flowing into AI development, most research and job fields are seeing AI applications emerge as legitimate research focus areas themselves. Students encounter AI-driven studies at conferences and in high-impact journals, making AI literacy increasingly a disciplinary requirement. Supervisors need to help candidates distinguish between AI as a research accelerator and AI as a research subject, often both simultaneously.
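As one hypothetical illustration of the “protect data dignity” point, a placeholder dataset can mirror the structure of the real one (column names and types) without carrying any of its content; the columns and values below are invented for the example:

```python
# Hypothetical sketch: a synthetic placeholder that mirrors only the structure
# (column names and types) of a real dataset. Columns and values are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

synthetic = pd.DataFrame({
    "participant_id": [f"P{i:03d}" for i in range(1, 21)],       # fake IDs
    "age": rng.integers(18, 65, size=20),                         # plausible range only
    "interview_excerpt": ["Lorem ipsum placeholder text."] * 20,  # no real quotes
})

# This placeholder is what gets shared with a public model to trial a prompt or
# script; the real data is swapped back in only in the offline environment.
print(synthetic.head())
```

The placeholder is what gets pasted into a public model to test a prompt or script; the real dataset is swapped back in only within the offline analysis environment.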

Leading new territories with timeless values

Two terms that seem to dominate recent AI talk are prompt engineering and AI critique. They sound novel, yet both extend values that scholarship has always prized: asking precise questions and reading answers with healthy scepticism.

Take prompting. We often assume our research framing is crystal-clear, until an audience member, or now an AI agent, interprets it differently. The key lesson isn’t about mastering ‘prompt engineering’ but about recognising the gap between intention and expression: what we think we write versus what others actually receive. AI makes this discrepancy immediately visible, offering a low-stakes rehearsal space for scholarly communication, from framing a research question and briefing a supervisor to presenting findings to non-expert audiences.

AI critique extends another timeless scholarly habit: evidencing claims and recognising one’s own voice in the telling. A candidate may feed a rough paragraph into an LLM for paraphrasing and citation suggestions, then cross‑check references. But that check is only the first layer. Supervisors need to deliberately push further: Where does this paraphrased argument breathe? Whose perspective is foregrounded? Who are we behind these words?

This iterative process of questioning and evaluating AI outputs has been found to help students better grasp what these tools can and cannot do, which can prove invaluable for developing robust research practices. More importantly, students need to have agency in deciding when and how to engage with AI, taking genuine ownership of their research decisions rather than defaulting to automated assistance.

This way, critiquing AI outputs means more than fact‑checking; it requires authors to scrutinise whether the language used authentically represents their professional identity (and their participants’), and why. Here, we found the generational split stark. Pre‑ChatGPT researchers, brought up on slow drafting, feel the “grain” of their prose and resist the weight of ethical dilemmas. Post‑ChatGPT cohorts may view research language as interchangeable, grateful for anything concise and grammatical.

This divergence is precisely why the lone‑wolf model of HDR mentorship is increasingly insufficient. When a literature review can be drafted overnight, or a dataset analysed in minutes, candidates need a communal space to interrogate how the shortcut was taken and what it altered. A few supervisors are beginning to convene small “AI studios”: mixed cohorts bring their own perspectives, lived experiences, prompts, outputs and rewrites to a shared round‑table. A pre‑ChatGPT scholar might mark up a hand‑polished paragraph with colour markers, while a post‑ChatGPT peer presents the LLM version, each with their respective priorities in mind. Artefacts from each session become an annotated “AI footprint” that everyone can reuse or challenge.

Final words

Within our respective contexts, we’ve found that the key is not simply to move forward, but to discern what’s worth carrying forward and what we may need to leave behind. AI is not the sole cause of, but one contributor to, a broader shift away from siloed practices and towards more communal, dialogic forms of research at the speed of trust: balancing both sides of the coin, with colleagues and students who “get” us and those who challenge us to find new meanings.

Special thanks to Susan Potter for sharing her supervision and HDR experiences, some of which are reflected in this article.
