{"id":25896,"date":"2026-04-29T11:16:27","date_gmt":"2026-04-29T01:16:27","guid":{"rendered":"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/?p=25896"},"modified":"2026-04-29T12:07:04","modified_gmt":"2026-04-29T02:07:04","slug":"why-feedback-matters-and-matters-more-for-open-assessments","status":"publish","type":"post","link":"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/why-feedback-matters-and-matters-more-for-open-assessments\/","title":{"rendered":"Why feedback matters (and matters more) for open assessments"},"content":{"rendered":"<p>The capability and availability of generative AI tools &#8212; which can complete many of our assessments, including quizzes, assignments, reports, reflections and presentations, potentially with minimal student thinking or understanding &#8212; mean that these assessments can no longer be treated as reliable <em><strong>evidence<\/strong> <\/em>of students\u2019 capabilities when completed in unsupervised environments. These tasks and products are, though, still valuable as ways to <em><strong>develop<\/strong> <\/em>students\u2019 thinking and capabilities, especially when supported by contemporary technologies as they are used in the discipline. For this to happen, students also need feedback to guide them and help them improve. 
Increasingly, this will include feedback on the quality of their judgements, choices, and thoughtful use of tools such as generative AI alongside traditional academic skills.<\/p>\n<p>The distinction between <a href=\"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/the-sydney-assessment-framework\/\" target=\"_blank\" rel=\"noopener\">secure assessments (Lane 1)<\/a>, which provide trustworthy assurance <strong><em>of<\/em> <\/strong>learning, and <a href=\"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/the-sydney-assessment-framework\/\" target=\"_blank\" rel=\"noopener\">open assessments (Lane 2)<\/a>, which are designed primarily <strong><em>for<\/em> <\/strong>learning, is a clarification of purpose and ensures we can maintain and improve standards. <strong>As a result, this categorisation fundamentally reshapes how we should approach and think about marking and its purpose.<\/strong> The shift does not ask markers of open assessment to make weaker judgements, but different ones, because the assurance function has been deliberately relocated to secure assessments elsewhere in the program.<\/p>\n<p>The <a href=\"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/the-sydney-assessment-framework\/\" target=\"_blank\" rel=\"noopener\">two lane approach<\/a> is not an interim response to AI. It recognises that we can only reliably control assessment conditions in supervised environments, and that outside those settings we must design for learning rather than the policing of how AI has been used.<\/p>\n<h2>The assessment product is no longer a reliable proxy<\/h2>\n<p>This article builds on a recent <em>Needed Now in Learning and Teaching<\/em> post, <a href=\"https:\/\/needednowlt.substack.com\/p\/what-are-we-going-to-do-about-marking\" target=\"_blank\" rel=\"noopener\">What are we going to do about marking now?<\/a>, which included the statement <em>\u201cthe artefact is no longer a reliable witness\u201d<\/em>. 
The vast majority (~75%) of our assessments at Sydney are categorised as open, as they should be. Just under half (~48%) of all marks are given to assessments in the open \u201cproduction and creation\u201d category \u2013 written work, presentations, portfolios, journals, creative work and performance. <strong>It is pleasing and clear that we value these open assessment types.<\/strong> They also play an important role in developing many of the skills valued by employers, including creativity, critical thinking, and reflection. However, those employers are increasingly expecting our graduates to work effectively with AI tools to generate options, interpret and critique perspectives, iterate purpose and content, and produce polished content quickly. To help our students develop these capabilities, we need to concentrate on the processes they use, including which tools they choose and how they use them, not just on the final product.<\/p>\n<p>We\u2019ve long known that we never assess learning directly. We make professional judgements, sometimes based on direct observation but often on proxies, drawing on available evidence of understanding and quality (<a href=\"https:\/\/doi.org\/10.1080\/0969594X.2012.714742\" target=\"_blank\" rel=\"noopener\">Sadler, 2013<\/a>). As Boud and colleagues note (<a href=\"https:\/\/www.routledge.com\/Developing-Evaluative-Judgement-in-Higher-Education-Assessment-for-Knowing-and-Producing-Quality-Work\/Boud-Ajjawi-Dawson-Tai\/p\/book\/9781138089358\" target=\"_blank\" rel=\"noopener\">2018<\/a>), we often rely on contextual signals that <em>stand in<\/em> for learning rather than observing learning itself. What AI has done is expose how fragile those inferences can be when persuasive features such as fluency, structure, and disciplinary language are easy to generate on demand without the student necessarily exercising disciplinary judgement. 
The \u201c4P framework\u201d (<a href=\"https:\/\/doi.org\/10.1080\/02602938.2026.2620053\" target=\"_blank\" rel=\"noopener\">Fawns et al. 2026<\/a>) focusses on product, process, performance, and practice in the design of balanced assessment.<\/p>\n<p>The risk is turning marking into an exercise in AI detection and suspicion. The two lane model deliberately avoids this trap. As AI tools are very good at producing polished content, marking needs to focus on how well ideas are connected and claims are justified, rather than on surface features of production. In open assessments, markers are not being asked to reconstruct how learning occurred. Their task is to judge the quality of reasoning, decisions, and judgement evident in the work itself and, importantly, to help students see how to improve. It is not their task to infer that a student has or has not learned based on how fluent, generic, or \u2018AI\u2011sounding\u2019 a submission appears.<\/p>\n<h2>Open assessments as assessment for learning<\/h2>\n<p>Under the Sydney Assessment Framework, open (lane 2) assessments are not designed to independently assure learning outcomes. That responsibility sits with secure (lane 1) assessments positioned at key points across the program. This frees us to concentrate completely on learning and development when we design and mark open assessments. 
These exist to:<\/p>\n<ul>\n<li>surface students\u2019 developing understanding,<\/li>\n<li>normalise the use of tools (including AI) as part of learning and disciplinary practice,<\/li>\n<li>provide students with actionable feedback on their reasoning, judgement, and use of tools,<\/li>\n<li>motivate engagement with disciplinary concepts, skills, and dispositions,<\/li>\n<li>signal where work is important by being graded and counting towards the final mark, and<\/li>\n<li>prepare students for future secure assessments and life after the university.<\/li>\n<\/ul>\n<p>Once this purpose is clear, marking shifts to thinking about how marks and feedback help students develop a more accurate internal sense of quality, their learning progress, and the gaps in their capabilities. This shift is critical. <strong>It reframes setting, marking and providing feedback on open assessments as a developmental act.<\/strong><\/p>\n<h2>The validity of open assessment lies in the feedback we give<\/h2>\n<p>Feedback is not an incidental secondary feature of open assessments. It is the primary mechanism through which they work as assessments for learning. The much-cited work by Sadler (<a href=\"https:\/\/doi.org\/10.1007\/BF00117714\" target=\"_blank\" rel=\"noopener\">1989<\/a>, <a href=\"https:\/\/doi.org\/10.1080\/02602930903541015\" target=\"_blank\" rel=\"noopener\">2010<\/a>) shows that feedback supports improvement only when students can:<\/p>\n<ul>\n<li>recognise what quality looks like,<\/li>\n<li>compare their work against that standard, and<\/li>\n<li>take action to close the gap.<\/li>\n<\/ul>\n<p>In the context of generative AI, this includes learning to judge when, how, and to what extent AI use meets disciplinary expectations. Open assessments create the conditions for this iterative cycle. Feedback arrives early enough to be used rather than after the learning opportunity has passed. 
<strong>Only secure assessments carry the burden of assurance, meaning that open assessments can function as spaces where feedback has time to do its work.<\/strong><\/p>\n<p><span data-teams=\"true\">Sometimes, this will feel like \u201cmarking the AI\u201d and this may be understandably uncomfortable (or even depressing). It is important to mark the quality of the work received against the disciplinary standards &#8211; and not to try to detect or judge its provenance. We cannot reliably know whether a student has used AI verbatim, used AI for help, completed the work without AI, or anything in between. If the student has just uncritically pasted straight from their AI tool and it is poor quality, then the feedback can help them learn what the disciplinary standards are so that they can develop an eye for quality. If the work is completely their own or even a result of an attempt to use AI effectively, the feedback can similarly help them improve. The focus and feedback remain on the quality of reasoning and judgement visible in the submission, not on the provenance of the text.<\/span><\/p>\n<div class=\"sc-box bg-red content-black opacity-on\"><div class=\"inner\" style=\"padding-top:30px;padding-bottom:30px;\"><h2>When it feels like \"marking the AI\"<\/h2><div class=\"sep\"><\/div><span>This discomfort is understandable. In open assessments, the task is to judge the quality of work against disciplinary standards, not to infer how it was produced. 
Feedback still helps students learn.<\/span><\/div><\/div>\n<h2>Feedback literacy and learning how to improve<\/h2>\n<p>Open assessments are powerful in helping students develop their capacity to interpret feedback, evaluate quality, and use standards independently \u2014 that is, to develop what <a href=\"https:\/\/doi.org\/10.1080\/02602938.2018.1463354\" target=\"_blank\" rel=\"noopener\">Carless &amp; Boud (2018)<\/a>, <a href=\"https:\/\/doi.org\/10.1080\/02602938.2019.1667955\" target=\"_blank\" rel=\"noopener\">Molloy et al. (2020)<\/a> and others term feedback literacy. Ideally, they:<\/p>\n<ul>\n<li>expose students repeatedly to criteria and standards, including through rubrics,<\/li>\n<li>encourage self\u2011evaluation and comparison to these, and<\/li>\n<li>support students\u2019 capacities to judge quality themselves rather than just complete tasks and chase marks.<\/li>\n<\/ul>\n<p>Importantly, these purposes also include learning to critically engage with AI, rather than treating its use as inherently good or bad. From this perspective, marking in lane 2 is not just about the overall quality of a piece of work, but about changing how students understand and judge quality itself. As they enter workplaces where AI use is increasingly expected, these capacities are central graduate outcomes.<\/p>\n<h2>Evaluative judgement and learning to judge quality yourself<\/h2>\n<p>Tiffin\u2019s recent <em>Needed Now<\/em> article places strong emphasis on evaluative judgement &#8211; the capacity to make informed decisions about the quality of work, one\u2019s own and others\u2019 (<a href=\"https:\/\/www.routledge.com\/Developing-Evaluative-Judgement-in-Higher-Education-Assessment-for-Knowing-and-Producing-Quality-Work\/Boud-Ajjawi-Dawson-Tai\/p\/book\/9781138089358\" target=\"_blank\" rel=\"noopener\">Boud et al., 2018<\/a>; <a href=\"https:\/\/doi.org\/10.1007\/s10734-017-0220-3\" target=\"_blank\" rel=\"noopener\">Tai et al., 2018<\/a>). 
Evaluative judgement does not emerge automatically. It must be developed deliberately through:<\/p>\n<ul>\n<li>exposure to exemplars,<\/li>\n<li>shared language about standards,<\/li>\n<li>dialogue around criteria, and<\/li>\n<li>high-quality feedback (<a href=\"https:\/\/doi.org\/10.1080\/02602938.2019.1599815\" target=\"_blank\" rel=\"noopener\">Henderson, Ryan &amp; Phillips, 2019<\/a>).<\/li>\n<\/ul>\n<p>Open assessments provide the mechanism for students to build this judgement. Marking and feedback, in turn, contribute directly to students\u2019 ability to produce work and to recognise and critique its quality. For subjects where students\u2019 capabilities are going to be assured through a secure (lane 1) assessment, providing opportunities to practise and test their understanding through open assessments is key to developing evaluative judgement \u2013 that is, students taking responsibility for knowing how they are going. For subjects where production and creation dominate, students need to be able to judge how good their own work is and how well it reflects their own standards. This will help students develop the ability to judge quality, an important capability when working with AI, so that they are not seduced by, or content simply to accept, its output.<\/p>\n<p>If you are a marker or interested in more concrete guidance on how to apply these principles in practice, the companion guide outlines how to approach marking, rubrics, and feedback in open assessments.<\/p>\n<h2>Marking and feedback for open assessments in practice<\/h2>\n<p>Marking open assessments involves a shift in emphasis, not a loss of rigour. Importantly, all assessments, including their marking and feedback, need to be aligned with the learning outcomes. Assessment criteria, and hence feedback, should concentrate on what students are expected to be able to do and on the development and help they need to get there. There is no burden of assuring that learning &#8211; or of policing the use of AI. 
In open assessments, if we build in strong constructive alignment and the right opportunities for feedback, we incentivise students to develop the responsibility for learning that will be assured in subsequent secure assessments. These connections should be deliberate and clearly signposted to students.<\/p>\n<p>Marking and feedback for open assessments therefore involve:<\/p>\n<ul>\n<li><strong>Looking beyond polish.<\/strong> AI\u2011supported fluency should increasingly be expected, so the focus should shift to conceptual connection, reasoning, choices made, and responsiveness to context (<a href=\"https:\/\/www.hup.harvard.edu\/books\/9780674710016\" target=\"_blank\" rel=\"noopener\">Bruner, 1976<\/a>; <a href=\"https:\/\/doi.org\/10.1111\/j.2044-8279.1976.tb02980.x\" target=\"_blank\" rel=\"noopener\">Marton &amp; S\u00e4lj\u00f6, 1976<\/a>), including how students shape, adapt, and evaluate AI\u2011generated material.<\/li>\n<li><strong>Using rubric criteria as learning tools. <\/strong>Criteria should make standards visible and discussable, not just operationalise grading (<a href=\"https:\/\/doi.org\/10.1080\/0969594X.2012.714742\" target=\"_blank\" rel=\"noopener\">Sadler, 2013<\/a>). Where appropriate, criteria can also signal expectations about judicious and critical use of AI within the discipline.<\/li>\n<li><strong>Focusing on growth and development. <\/strong>Open assessments are \u2018assessments <em>for<\/em> learning\u2019, so try to gauge and encourage the process of becoming a disciplinary expert. 
Design rubrics to reward reflection, iteration, thinking, processing, and growth, as aligned to the learning outcomes of the unit and program.<\/li>\n<li><strong>Treating feedback as preparatory.<\/strong> Feedback should deliberately point forward to future tasks, including secure assessments, rather than act as a summative verdict.<\/li>\n<li><strong>Keeping the program view in mind.<\/strong> Open assessments do not need to do everything. Their strength lies in what they contribute <em>before<\/em> assurance is required. They should, however, be constructively aligned with the program (i.e. course or major) to maximise relevance and motivation. This should include alignment with how AI is used in disciplinary and professional practice. For example, feedback might explicitly comment on how effectively a student has prioritised ideas, justified decisions, or adapted AI\u2011generated material to the task context.<\/li>\n<li><strong>Helping students see the alignment.<\/strong> Beyond assuring ourselves that our programs are aligned, we also need to help our students see that effort in open assessments will be rewarded in meeting the program learning outcomes and in performing well in secure assessments.<\/li>\n<\/ul>\n<p>This shift in emphasis for marking also places greater importance on clear, judgement\u2011focused rubric criteria and opportunities for marker calibration, so that expectations about quality are shared and visible.<\/p>\n<h2>A cultural shift<\/h2>\n<p>Generative AI has not broken assessment, but <strong>it has forced us to confront assumptions we have long tolerated about what products and artefacts can legitimately tell us<\/strong>. 
The two lane model responds by shifting responsibility for assurance across programs and by re\u2011positioning feedback and the development of evaluative judgement as the mechanisms of learning within open assessments.<\/p>\n<p>In turn, we need to help students navigate this approach so that they maximise opportunities for learning, feedback and improvement before they meet assurance of learning tasks.<\/p>\n<h2>Related resource<\/h2>\n<ul>\n<li><a href=\"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/guide-for-markers-of-open-assessments\/\" target=\"_blank\" rel=\"noopener\"><em>Guide for Markers of Open Assessments<\/em><\/a><\/li>\n<\/ul>\n<h2>References<\/h2>\n<p>Boud, D., Ajjawi, R., Dawson, P., &amp; Tai, J. (Eds.). (2018). <em>Developing evaluative judgement in higher education: Assessment for knowing and producing quality work<\/em>. Routledge.<\/p>\n<p>Bruner, J. S. (1976). <em>The process of education<\/em> (Revised ed.). Harvard University Press.<\/p>\n<p>Carless, D., &amp; Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. <em>Assessment &amp; Evaluation in Higher Education<\/em>, 43(8), 1315\u20131325. <a href=\"https:\/\/doi.org\/10.1080\/02602938.2018.1463354\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1080\/02602938.2018.1463354<\/a><\/p>\n<p>Fawns, T., Boud, D., &amp; Dawson, P. (2026). Identifying what our students have learned: a framework for practical assessment validation.\u00a0<i>Assessment &amp; Evaluation in Higher Education<\/i>, 1\u201317. <a href=\"https:\/\/doi.org\/10.1080\/02602938.2026.2620053\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1080\/02602938.2026.2620053<\/a><\/p>\n<p>Henderson, M., Ryan, T., &amp; Phillips, M. (2019). The challenges of feedback in higher education. <em>Assessment &amp; Evaluation in Higher Education<\/em>, 44(8), 1237\u20131252. 
<a href=\"https:\/\/doi.org\/10.1080\/02602938.2019.1599815\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1080\/02602938.2019.1599815<\/a><\/p>\n<p>Marton, F., &amp; S\u00e4lj\u00f6, R. (1976). On qualitative differences in learning: I\u2014Outcome and process. <em>British Journal of Educational Psychology<\/em>, 46(1), 4\u201311. <a href=\"https:\/\/doi.org\/10.1111\/j.2044-8279.1976.tb02980.x\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1111\/j.2044-8279.1976.tb02980.x<\/a><\/p>\n<p>Molloy, E., Boud, D., &amp; Henderson, M. (2020). Developing a learning centred framework for feedback literacy. <em>Assessment &amp; Evaluation in Higher Education<\/em>, 45(4), 527\u2013540. <a href=\"https:\/\/doi.org\/10.1080\/02602938.2019.1667955\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1080\/02602938.2019.1667955<\/a><\/p>\n<p>Sadler, D. R. (1989). Formative assessment and the design of instructional systems. <em>Instructional Science<\/em>, 18(2), 119\u2013144. <a href=\"https:\/\/doi.org\/10.1007\/BF00117714\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1007\/BF00117714<\/a><\/p>\n<p>Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. <em>Assessment &amp; Evaluation in Higher Education<\/em>, 35(5), 535\u2013550. <a href=\"https:\/\/doi.org\/10.1080\/02602930903541015\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1080\/02602930903541015<\/a><\/p>\n<p>Sadler, D. R. (2013). Assuring academic achievement standards: from moderation to calibration.\u00a0<i>Assessment in Education: Principles, Policy &amp; Practice<\/i>,\u00a0<i>20<\/i>(1), 5\u201319. <a href=\"https:\/\/doi.org\/10.1080\/0969594X.2012.714742\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1080\/0969594X.2012.714742<\/a><\/p>\n<p>Tai, J., Ajjawi, R., Boud, D., Dawson, P., &amp; Panadero, E. (2018). Developing evaluative judgement: Enabling students to make decisions about the quality of work. 
<em>Higher Education<\/em>, 76(3), 467\u2013481. <a href=\"https:\/\/doi.org\/10.1007\/s10734-017-0220-3\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.1007\/s10734-017-0220-3<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The ability and availability of generative AI tools &#8212; which can complete many of our assessments, including quizzes, assignments, reports, reflections and presentations, potentially&#8230;<\/p>\n","protected":false},"author":18,"featured_media":25988,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[56],"tags":[],"coauthors":[462,463,573],"class_list":["post-25896","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-events","post-item","post-even"],"_links":{"self":[{"href":"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/wp-json\/wp\/v2\/posts\/25896","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/wp-json\/wp\/v2\/users\/18"}],"replies":[{"embeddable":true,"href":"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/wp-json\/wp\/v2\/comments?post=25896"}],"version-history":[{"count":17,"href":"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/wp-json\/wp\/v2\/posts\/25896\/revisions"}],"predecessor-version":[{"id":25987,"href":"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/wp-json\/wp\/v2\/posts\/25896\/revisions\/25987"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/wp-json\/wp\/v2\/media\/25988"}],"wp:attachment":[{"href":"https:\/\/educational-inn
ovation.sydney.edu.au\/teaching@sydney\/wp-json\/wp\/v2\/media?parent=25896"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/wp-json\/wp\/v2\/categories?post=25896"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/wp-json\/wp\/v2\/tags?post=25896"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/educational-innovation.sydney.edu.au\/teaching@sydney\/wp-json\/wp\/v2\/coauthors?post=25896"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}