In years gone by, at the end of each semester, we would open an envelope containing hand-written student evaluations with a mix of expectation and trepidation. Had the students appreciated the interactive learning experiences, the problem-based pedagogies, or the optional assignment topics we had introduced?
Receiving an email with student evaluations, in the form of the Unit of Study Survey (USS) at the University of Sydney, is no less significant years later. That familiar knot in your stomach when the notification arrives, the careful parsing of every comment, the tendency to focus on the one negative response rather than ten positive ones – these reactions are remarkably consistent across generations of educators and institutions.
As with any professional activity, feedback is integral to maturing our practices and continually improving our students’ experiences. Yet if so many of us still approach Student Evaluations of Teaching (SET) with the same anxiety we felt as early career academics, perhaps this tells us something important about how these feedback loops currently work – and how they could work better.
What if student feedback could become a source of professional confidence, recognition and growth rather than stress and defensiveness? Here’s what the evidence tells us about creating student evaluation of teaching systems that truly serve both educators and learners.
The policy landscape: why student evaluation of teaching matters more than ever
The Australian higher education policy context consistently emphasises the centrality of teaching quality for enhanced student learning. The Australian Universities Accord Final Report (2024) sets an ambitious vision: “Australia should aspire to be a world leader in delivering innovative, best practice learning and teaching – not only maintaining but improving higher education student experience and outcomes as the system grows” (p. 149).
This concern with teaching quality was reinforced by the Australian Government Productivity Commission’s report “From learning to growth” (2023), which encourages professionalising the tertiary workforce’s teaching knowledge and systematically assuring teaching quality through feedback measures.
The message from TEQSA is equally direct. The Higher Education Standards Framework states that universities must ensure that “all students have the opportunity to provide feedback on their educational experiences and student feedback informs institutional monitoring, review and improvement activities.” All teachers must also have opportunities to review feedback on their teaching and be supported in enhancing their teaching.
There is no mistaking the federal government’s commitment to teaching quality, achieved in part through systematic student evaluation of teaching. For practising educators, this means student evaluation of teaching remains an imperative – and that our task is to make it work better for everyone involved.
Rather than viewing student evaluation of teaching as an external burden, the policy emphasis on “improvement activities” suggests we should consider it as one tool in our professional development toolkit, focused on driving meaningful change in how we teach and how students learn.
Contested perspectives in the literature
There is a significant body of literature reviewing questions of efficacy, bias, representation, and impact of student evaluation of teaching surveys. The literature reveals what is, frankly, a bit of a battlefield between competing perspectives on student feedback systems.
The research highlights several key tensions:
- The challenges are well-documented. Studies consistently show concerns about inappropriate survey designs, low response rates, and the care required in interpreting student feedback (Graf, 2024; Uttl, 2024). Many studies report that student evaluation of teaching responses can be influenced by factors unrelated to teaching quality – from gender, age, and bias against teachers from minority groups (Gelber et al., 2022; Heffernan, 2022; Heffernan, 2023; Hutchinson et al., 2024) to the time of day classes are held – though some studies have reported no significant patterns (O’Donovan, 2024).
- But students can provide valuable insights. The literature also demonstrates that valid survey questions can be asked of students when there is clarity about the purposes of student evaluation of teaching and survey questions align with those purposes (Boring et al., 2016; Crimmins et al., 2023; Stein et al., 2020). Students can articulate what matters to them when they understand why their feedback matters and see evidence that their input leads to change (Bradley et al., 2015; Sullivan et al., 2023).
- The key insight from research: Student evaluation of teaching works best as part of a broader feedback ecosystem – a range of student feedback sources should be valued, and student evaluation of teaching surveys should not be the sole source of evidence (Ashwin, 2020; Crimmins et al., 2023). Think of it as one voice in a conversation about your teaching, alongside peer review for teaching, self-reflection, and student outcomes.
Rather than dismissing student evaluation of teaching due to its limitations or accepting it uncritically, the research suggests we can thoughtfully design and use this data. The goal isn’t perfect evaluation – it’s actionable insight that helps us grow as educators, while also recognising and celebrating outstanding practice.
What other Australian universities are doing
A review of several Australian universities (Adelaide, Deakin, Queensland, Monash and UNSW) indicates that while all acknowledge the limitations of institutional student surveys, each mandates student feedback on courses/units and teaching each time they are offered, though exemptions from survey release may apply.
What these institutions have in common:
- Short, focused surveys combining quantitative ratings with targeted qualitative questions.
- Screening for constructive feedback. Qualitative responses are filtered to remove inappropriate comments before reports are shared with educators – not to hide criticism, but to ensure feedback serves a constructive purpose.
- Relatively wide release of results. Reports are shared broadly within institutions, though with appropriate recognition of confidentiality.
Even within institutional constraints, there are opportunities to influence how student evaluation of teaching works in practice – through how we introduce surveys to students, explain their purpose, and respond to feedback – opportunities that can lead to more meaningful student engagement and more useful feedback for teaching improvement.
Designing student evaluation of teaching that works: evidence-based principles
The research literature, comparable universities’ practices and, importantly, a comprehensive review by Crimmins et al. (2023) through the Council of Australasian University Leaders in Learning and Teaching all point to the same conclusion: purpose and design matter. Effective student feedback begins with clarity of purpose – surveys work best when questions (i) align with clear learning goals and (ii) sit within a broader suite of feedback options. The research also demonstrates that instruments (i.e., the surveys themselves) should be relatively short, meaningful and engaging, combining quantitative and qualitative questions.
These evidence-based principles are helping to guide the University of Sydney’s rethinking of student feedback systems. Your role lies in implementation: while survey design may be largely determined institutionally, there is a significant opportunity to influence how student evaluation of teaching works in practice.
Making student evaluation of teaching work at Sydney
The University of Sydney’s students are articulate, engaged, and eager to contribute to teaching excellence. Our challenge is creating systems that harness their insights effectively – systems that respect both student voices and educator expertise while driving genuine improvement in learning experiences.
The University’s student evaluation of teaching refresh is guided by principles of being evidence-based, aligned with our values, collaborative in design, respectful of all participants, and focused on actionable outcomes. When student feedback works well, it creates a positive cycle: students feel heard and engaged, educators receive useful insights for improvement, and teaching quality genuinely improves and is celebrated.
Have your say, get involved
Over the coming months, the University will be undertaking two interrelated projects: (i) piloting a refreshed student evaluation of teaching survey and (ii) implementing a new survey platform.
Have your say by emailing [email protected] with your thoughts on:
- The questions you would value in a new survey instrument, and
- The outcomes you would value from a refreshed SET protocol.
Get involved:
- The University has invested in a new survey platform, Explorance Blue. View the Blue demonstration here (click “Get Your Demo”) to see what’s coming in 2026.
Other options for feedback – Peer Review for Teaching
Looking for additional evidence of your teaching effectiveness beyond student surveys? Peer Review for Teaching offers expert colleague feedback on your teaching.
Read more and sign up:
- ‘Peer Review for Teaching for Participants 2025’ flyer: https://bit.ly/2025-PRT-1
- Receive personalised expert review and support for your teaching