The University’s ‘two-lane approach’ to assessment has been well received at the institution, nationally, and internationally. It is a simple model that recognises the need for assessment within universities to simultaneously assure the attainment of learning outcomes, and help students learn in a world where generative AI is quickly becoming ubiquitous. Here’s a quick summary:
| | Lane 1 | Lane 2 |
| --- | --- | --- |
| Role of assessment | Assessment of learning | Assessment for and as learning |
| Level of operation | Mainly at program level | Mainly at unit level |
| Assessment security | Secured, in person | ‘Open’ / unsecured |
| Role of generative AI | May or may not be allowed by examiner | As relevant, use of AI scaffolded & supported |
| TEQSA alignment | Principle 2 – forming trustworthy judgements of student learning | Principle 1 – equip students to participate ethically and actively in a society pervaded with AI |
| Examples | In-person interactive oral assessments; viva voces; contemporaneous in-class assessments and skill development; tests and exams | AI to provoke reflection, suggest structure, brainstorm ideas, summarise literature, make content, suggest counterarguments, improve clarity, provide formative feedback, etc. |
Over the last year, we’ve been speaking with educators here and across the educational community about assessment, and how ‘lane 1’ and ‘lane 2’ assessments might work. This living FAQ is a compilation of many of the insightful questions that people have asked.
Clarifications on lane 1 and lane 2
‘Full’ lane 2 assessments are those that embrace principle 1 of the TEQSA guidelines for assessment reform, namely assessments that thoughtfully incorporate AI to prepare students for a world where this technology is ubiquitous. Lane 2 is where learning will be developed in ways authentic to the discipline and, where relevant, contemporary workplaces. Just as we conduct our teaching, research and other work with access to tools that assist us, lane 2 assessments develop and drive learning that is authentic. Lane 2 assessments emphasise that the majority of assessment should be for and as learning, undertaken by students whenever and wherever they are studying.
Acknowledging that an assessment is ‘open’ or ‘not secured’ (i.e. not lane 1) is not fully merging into lane 2, although it is a step in the right direction. In these situations, we may want to call this ‘towards lane 2’.
Assessment of learning (lane 1) is necessary but is often undertaken in less authentic contexts such as exam halls or in front of a panel of teachers. The power of generative AI tools makes securing assessments even harder, and more intrusive and artificial.
Lane 2 is primarily about learning the ‘stuff, skills, and soul’ of the unit, program, and discipline. As the TEQSA guidelines point out though, in a world where AI is ubiquitous this necessarily involves the use of AI. However, learning about and how to use AI is not the primary goal of lane 2 assessments.
Fundamentally, lane 2 is assessment for and as learning. Our role as educators has always been to steer and support students in their approaches, their choice of tools, and the processes they use to understand material, ideas, and theories, develop skills, and create knowledge. Earlier technological advances such as the written word, printing press, calculator, and internet have changed the way students learn and educators teach. The focus on generative AI for lane 2 assessment reflects the newness and power of these tools rather than any change in the lane’s purpose. Learning how to use AI productively and responsibly should be part of your course and assessment design simply because these tools now form part of the ways in which each discipline operates. Where the tools are not relevant or prevent learning, our role as educators is to help students recognise that avoiding them is the better choice.
The emergence of generative AI changes the ways in which students learn disciplinary knowledge and skills, just as earlier technological advances have done. We will continue to set tasks to help students learn – from fundamental facts, skills and concepts to critical thinking processes and knowledge creation. Just as before, we need to securely evaluate that students have learnt how to do this at key points in their program, via lane 1 assessments.
As covered in the previous answer, generative AI has already changed, and will continue to change, the ways in which we work and the nature of disciplinary skills and knowledge. As with any new technology, there is a period of adjustment, and educators have a role in informing and helping students use it. However, as generative AI becomes more embedded in our everyday tools and in the ways in which we research and create knowledge, the need to narrowly help students use AI will give way to helping them select and use the appropriate tools for our discipline. Evaluative judgement and an understanding of how our discipline generates, tests, and creates new knowledge with all of the tools at their disposal remain the most important attributes for our graduates.
Not quite. Lane 1 is where students’ attainment of learning outcomes is checked in a secure environment. This may or may not involve the use of AI. In some situations, it will be perfectly legitimate and realistic that program learning outcomes need to include the productive and responsible use of AI. An example we use often is architecture: image-based generative AI tools are already used in the industry to enable more productive ideation, so it follows that architecture programs should help students engage with these tools, and therefore learning outcomes would be aligned to this. A lane 1 assessment (for example, an interactive oral assessment) where students are in an authentic 1:1 situation (perhaps a mock client-architect interaction) could be perfectly valid. Many lane 1 assessments, though, are likely to prohibit the use of AI in secured environments.
Assessment and curriculum design
To avoid confusion, the term “programmatic assessment” is deliberately not used in the assessment principles in the Coursework Policy or Assessment Procedures, as it is associated with existing and extensive usage, in medical education in particular, and with a detailed area of scholarship. For our programs which are competency based, such as dentistry, medicine and veterinary science, a fully programmatic approach may be warranted. In most of our degrees, though, it is not. Program-level assessment and programmatic assessment should be viewed as mutually inclusive and beneficial. As detailed by Baartman et al. (2022), assessment design at the program level requires educators to make “diverse design choices … fitting their own context”.
The assessment principles in our Coursework Policy follow the TEQSA ‘Assessment reform for the age of artificial intelligence’ guidelines and the Group of Eight principles on the use of generative artificial intelligence in referring instead to program-level assessment. These in turn follow from the Higher Education Standards Framework and from accrediting bodies that require demonstration of learning across a course.
Assessment principle 5 in the Coursework Policy states that “Assessment practices must be integrated into program design”. This principle requires that:
- (a) assessment and feedback are integrated to support learning across units of study, courses, course components and year groups;
- (b) assessment and feedback are designed to support each student’s development of knowledge, skills and qualities from enrolment to graduation;
- (c) students’ learning attainment can be validated across their program at relevant progression points and before graduation.
Program-level assessment design thus requires alignment of assessment across units (a), with deliberate consideration of each student’s development (b) and validation at relevant points in the course as well as before graduation (c).
Because of the effect of generative AI on many existing assessment types, Principle 6 also requires that this is done in a “trustworthy way” in which “supervised assessments are designed to assure learning in a program”.
“Program” here includes both courses and their components, such as majors and specialisations. The requirements of Principle 5, though, mean that validation at regular points must occur. In degrees like the BSc, BLAS, BA and BCom, this could mean validating learning once a student has completed the degree core, the first year, or the second year. The points at which it is most useful to validate learning, through lane 1 assessment, are best decided by the course and component coordinators.
It is recommended that secured assessments be distributed throughout a student’s journey, not just at the end before graduation. If the structure of the course or the component is suitable, this could integrate knowledge across the program, as required by many program level learning outcomes. Equally it could do so by assessing the program level outcomes applied in different contexts in more than one unit.
However, as noted above, assessment principle 5 in the Coursework Policy requires consideration of each student’s development and validation at relevant progression points. One model could be a lane 1 hurdle task at the end of each year, progressing from an oral discussion to a final exam, with proper support for students in terms of practice, access to multiple attempts, adjustments, etc.
Over time, the assessment principles will drive the integration and alignment of unit outcomes and assessment more deliberately. This will ensure students build contemporary capabilities and disciplinary knowledge in structured ways and assurance/assessment of learning does not displace assessment for learning.
No. Even in the short time since the release of ChatGPT (originally running on GPT-3.5, which has now been superseded by a number of newer AI models), it is already not possible to restrict AI use in assessments which are not secured in face-to-face environments. Nor is it possible to reliably or equitably detect that AI has been used, whether because a student has access to the latest technology, knows how to change the raw output, or can afford to pay someone to do this for them. Any unenforceable restriction damages assessment validity, so a scale or traffic-light approach of telling students that they can only use AI for certain purposes, or only use certain AI tools, is untenable. A clearer and more realistic approach is to consider the use of AI in lane 2 assessments as a menu, where students can pick anything and it is our role as educators to guide them towards the options that are more delectable (better for learning).
We do, though, need to assure that our students meet the learning outcomes of their programs, and we need to do that through appropriately secured lane 1 assessments.
Selecting which tool to use for a task, using it well to perform that task, and evaluating the final output requires skill, disciplinary knowledge, and evaluative judgement. These skills are core to program-level outcomes and must be developed in the units of study that form the program’s curriculum. Educators have always given students a choice of resources and tools, and explained their strengths and weaknesses. In recent years, students may have become used to thinking of units as separate entities whose role in developing program-level outcomes is unclear. It is important that students realise, through our conversations with them and through our program and assessment design, that their skills and knowledge will ultimately be tested with controlled, or no, access to generative AI in lane 1 assessments.
Crudely – if students choose to bypass learning in their assessment for learning, they will fail the assessments of that learning.
The answer to this question is in part pragmatic and in part about the purpose of assessment in education.
Generative AI is an intimate part of our productivity and communication tools, and its use cannot be reliably detected. It is already being added to ‘wearables’ such as glasses, pins and hearing assistance aids. It is not only sci-fi writers who are beginning to think of a future where AI implants are available. Genuinely securing assessment is already expensive, intrusive, and inconvenient for both the assessed and the assessor. Crudely – there is a high workload associated with lane 1 assessments and this will only grow over time.
Assessment drives and controls behaviours that are critical to the process of learning. Each of our programs needs to include an appropriate mix of assessment for learning and assessment of learning, noting that the former includes using generative AI productively and responsibly in the context of the discipline. For programs thought of as a series of core, elective, and capstone units of study, this may mean that (i) some units have only lane 1 assessments, (ii) others have only lane 2 assessments, and (iii) some have a mixture.
As a University and sector, we need to discuss the details of how the two-lane approach will apply to each program. We may decide on policies and approaches that apply to all, or we may be required to do this by external accrediting agencies. Early discussions suggest that some disciplines are considering whether grading should only apply to lane 1 assessments, which may become ‘hurdle’ (must pass) tasks, whilst lane 2 assessments are pass/fail.
As we wrestle with and adapt to a new approach to assessment, one heuristic for weighting may be the amount of effort that students need to apply to particular assessment tasks. Tasks that demand meaningful effort lead to deeper learning, i.e. assessment for and as learning. It follows that you likely want to weight lane 2 assessments more heavily.
As discussed in the previous answer, within a program: (i) some units have only lane 1 assessments, (ii) others have only lane 2 assessments, and (iii) some have a mixture.
The Open Learning Environment, for example, contributes to broadening the knowledge and skills of students in our liberal studies degree but does not have specified learning outcomes. Conversations are underway about whether these 2 credit point units should be pass/fail with only lane 2 assessments.
In other components of degrees, such as minors and majors, program leaders are thinking about the most appropriate mix. It might be that, for example, the learning outcomes for a major should all be assessed through lane 1 assessments in a capstone, or through assessments that sit in addition to the contributing units of study.
In the context of the TEQSA guidelines for assessment reform, it’s important to remember that “trustworthy judgements about student learning in a time of AI requires multiple, inclusive, and contextualised approaches to assessment” (emphasis added).
Lane 1 assessments assure the learning outcomes of the program (e.g. degree, major, specialisation or perhaps even level of study) rather than of the individual units, following the Higher Education Standards Framework (Threshold Standards) 2021 legislation which emphasises outcomes at a course level. These learning outcomes are available for most degree components and published in the Handbook. Many programs, particularly those with external accreditation, also map and align the outcomes of their units to these. The move to the two-lane approach to assessment may require some rethinking of how our curriculum fits together, rewriting of these program level outcomes, and even reconsideration of whether we only assess in individual units of study.
We know that marks drive student behaviour and promote effort. If we want students to engage effortfully in lane 2 assessments, we should recognise this by providing marks for these assessment activities. Since assessment can engage the process of learning, aligning learning activities to lane 2 assessment is crucial, even to the extent that the two may become effectively the same in our heads and in those of our students. It may be appropriate that the assessments, and even the units, become about satisfying requirements rather than earning marks (i.e. pass/fail).
Units with only lane 2 assessments contribute to the learning outcomes of the program. Lane 2 assessments for learning drive and control the processes by which students develop disciplinary and generic knowledge and skills, including but not limited to the ability to use generative AI productively and responsibly. Units with only lane 2 assessments may have specific roles in a program – e.g. developing particular skills or dispositions – which students also use later or elsewhere in units with lane 1 assessments. In a STEM program, for example, a series of units might develop experimental and analytical skills through lane 2 assessments which are then assured through a capstone research project unit with an oral exam and supervised skills assessments, rather like Honours projects and the PhD. In a humanities program, students might be exposed to different methods for critical analysis of texts in lane 2-only units, with a later capstone unit requiring a writing task assured through a series of supervised short tasks and an interactive oral assessment.
Large cohort units tend to occur in the initial years of our programs, when students need to acquire foundational knowledge and skills. The change to the two-lane approach to assessment is a good opportunity to think about how these units contribute to the outcomes of the programs in which they sit. For example, knowledge of the fundamental physical and biological sciences and experimental approaches is important across many STEM disciplines, but these could be assessed without a requirement for grades.
At the moment, many of these units already have lane 1 assessments in the form of highly weighted exams. As the foundational knowledge and skills in these programs also include how to select and use appropriate learning tools and resources, these units will also need to include lane 2 assessments. Accordingly, it will be important for unit and program coordinators to consider reducing the weighting of lane 1 assessments in these units and, for example, using a year-level lane 1 assessment instead of many lane 1 assessments within each unit.
The emergence of powerful generative AI tools has already started to change the ways in which we work and in which disciplines create and extend knowledge. It makes sense that program level outcomes, which describe what our graduates know and can do, are likely to change. Law practices, media companies, banks, and other industries are already requiring graduates to be skilled in the effective use of generative AI tools. Our PhD students will similarly need to be equipped to perform research and create knowledge using current tools and to adapt quickly to new ones.
Alongside rethinking program level outcomes to reflect these changes in technology, the two-lane approach requires detailed reconsideration of how unit-level outcomes are written, aligned, and mapped to the higher level ones.
Additionally, the rapidly expanding abilities of AI are an important moment for higher education to reconsider what it is that we want students to learn. For a long time, higher education has focused on the acquisition and reproduction of content knowledge, only more recently reprioritising towards skills. It’s possible that a more drastic reconsideration of learning outcomes is in order, in a world with this capable co-intelligence.
Sydney currently has relatively few pass/fail units, but there is some evidence from here and elsewhere that this grading scheme is beneficial, particularly where learning outcomes are predominantly focused on skills and competencies. Lane 2 assessments focus attention on assessment for learning, including acquiring skills and critical thinking processes, rather than grading the results of such tasks. It may make sense for such assessments to be pass/fail only, with no other mark applied.
We need to discuss, as a University and sector, the details of how the two-lane approach will apply to each program. Early discussions suggest that some disciplines are considering whether grading should only apply to lane 1 assessments, which may become ‘hurdle’ (must pass) tasks, whilst lane 2 assessments are pass/fail.
Assessment validity
For semester 2 2024, our Academic Integrity Policy has not been changed, so coordinators still need to state their permission for students to use generative AI. By January 2025, the University aims to shift its policy setting to assume the use of AI in ‘open’ or ‘unsecured’ assessment – meaning that coordinators will not be able to prohibit its use in such assessment.
As noted above under ‘Assessment and curriculum design’, it is already not possible to restrict AI use in assessments which are not secured by face-to-face supervision, nor to reliably or equitably detect that it has been used. Any unenforceable restriction, such as stating that AI is not allowed, damages assessment validity and is untenable.
With AI tools becoming part of everyday productivity tools we all use, including the availability of Microsoft’s Copilot for Web as part of the software package that all enrolled students receive for free, it is important for coordinators to discuss their expectations with students.
As noted above under ‘Assessment and curriculum design’, lane 2 assessments can powerfully engage students in the process of developing disciplinary and generic knowledge and skills, including but not limited to the ability to select and use generative AI tools productively and responsibly. Whilst applying marks to such assessments is motivating for students, the focus of marking should be overwhelmingly on providing students with feedback on both the product of the task and the process they used.
As noted above, it may make sense for such assessments to be pass/fail only with no other mark applied.
Plugins exist for all of the common browsers which give students undetectable tools that leverage generative AI to answer multiple choice and short answer questions in Canvas and other assessment systems. Testing internationally suggests that these generative AI tools are capable of at least passing such assessments, often obtaining high marks and outscoring humans even on sophisticated tests.
Any assessment which is not supervised in a secure face-to-face environment is ‘open’ or ‘unsecured’ and hence a de facto (unscaffolded) “towards lane 2” assessment. This includes online quizzes, take-home assignments, and orals or presentations held using web-conferencing software.
Because collusion and plagiarism are not graduate qualities we wish to develop. The productive and responsible use of generative AI will be a key set of knowledge, skills, and dispositions that our graduates will need, much like our graduates need the knowledge, skills, and dispositions to use the internet, to work with others, to influence, and to lead.
Supporting students and staff
The shift to the two-lane approach will require rethinking of both assessment and curriculum. It will take at least a couple of years for us to embed the approach in all programs, and longer still to perfect it. However, the approach is designed to be future-proof: unlike approaches that suggest how to ‘AI-proof’ or ‘de-risk’ ‘open’ or ‘unsecured’ assessments, it does not depend on the capabilities of the technology. As the technology improves, lane 2 assessments will need to be updated and securing lane 1 assessments will become more difficult, but the roles of the two lanes will be maintained.
As noted in each section above, the two-lane approach needs to be thought of at the program level. If done well, it will reduce staff and student workload through a reduction in the volume of assessment and marking, with most assessments in lane 2: fewer exam papers and take-home assignments to prepare and mark, and more in-class assessment. The focus on lane 2 will also reduce the need for special consideration, simple extensions, and academic adjustments through the universal design of assessments and a reduced emphasis on take-home assignments and grading.
In 2024, many of the Strategic Education Grants were awarded to projects focussed on assessments aligned to the two-lane approach, including a substantial project on future models for creative writing assignments. We intend to expand this scheme in 2025, with assessment redesign being the highest priority across the institution.
Through events such as the Sydney Teaching Symposium and the AI in Higher Education Symposium, and our national and international networks, we will continue to highlight exemplars from across the disciplines. Professional learning, workshops, resources, and consultations are available and will continue to develop through Educational Innovation and faculty teams.
We have worked with student partners to develop the AI in Education resource (publicly available at https://bit.ly/students-ai) that provides advice, guidance, and practical examples of how students might use generative AI to improve learning, engage with assessments, and progress in their careers. Staff have also found this site informative.
The AI × Assessment menu is the preferred approach to scaffolding student use of AI in lane 2 assessments. The menu approach emphasises that there are many generative AI tools, and many ways to use these tools, that can support student learning in the process of completing assessments. The AI in Education resource linked above is being updated so that its advice aligns with the assessment menu.
We are working to curate examples of effective lane 2 assessments across the University to share more broadly to support colleagues in developing their own assessment designs. There is also a University-wide AI in Education community of practice that meets regularly to share updates and practices collegially. You can also stay up to date with key AI developments in our Teaching@Sydney coverage.
A true lane 2 assessment will involve students using AI to support their learning and develop critical AI literacies. Fundamentally, a lane 2 assessment should aim to develop and assess disciplinary knowledge, skills, and dispositions, as any good assessment would. You may wish to focus lane 2 assessment rubrics on higher-order skills such as application, analysis, evaluation, and creation. You may also wish to include how students are productively and responsibly engaging with generative AI in a disciplinary-relevant manner. We have rubric suggestions in appendices 2 and 3 of our advice guide (which is dated 2023 but remains current).
State-of-the-art generative AI tools are not cheap, and most are only available on paid subscription plans that many students will not be able to afford. This access gap, coupled with the variable engagement with generative AI between independent and government high schools, means that students have varied access to and abilities with AI. The AI in Education site (publicly available at https://bit.ly/students-ai) aims to help address this divide, as does the institutional access to powerful generative AI tools such as Copilot for Web and Cogniti.