Frequently asked questions about the two-lane approach to assessment in the age of AI


The University’s ‘two-lane approach’ to assessment has been well received at the institution, nationally, and internationally. It is a simple model that recognises the need for assessment within universities to simultaneously assure the attainment of learning outcomes, and help students learn in a world where generative AI is quickly becoming ubiquitous.

Over the last year, we’ve been speaking with educators here and across the educational community about assessment, and how ‘lane 1’ and ‘lane 2’ assessments might work. This living FAQ is a compilation of many of the insightful questions that people have asked.

Clarifications on lane 1 and lane 2

Isn't 'lane 2' just 'unsecured'? Why not call it that?

‘Full’ lane 2 assessments are those that embrace principle 1 of the TEQSA guidelines for assessment reform, namely assessments that thoughtfully incorporate AI to prepare students for a world where this technology is ubiquitous. Lane 2 is where learning will be developed in ways authentic to the discipline and, where relevant, contemporary workplaces. Just as we conduct our teaching, research and other work with access to the tools to assist us, lane 2 assessments develop and drive learning that is authentic. Lane 2 assessments emphasise that the majority of assessment should be for and as learning, undertaken by students whenever and wherever they are studying.

Acknowledging that an assessment is not secured (i.e. not lane 1) is not fully merging into lane 2, although it is a step in the right direction. In these situations, we may want to call this ‘towards lane 2’.

Assessment of learning (lane 1) is necessary but is often undertaken in less authentic contexts such as exam halls or in front of a panel of teachers. The power of generative AI tools makes securing assessments even harder, and more intrusive and artificial.

Is lane 2 just about learning how to use AI? How do students learn disciplinary knowledge and skills?

Lane 2 is primarily about learning the ‘stuff, skills, and soul’ of the unit, program, and discipline. As the TEQSA guidelines point out, though, in a world where AI is ubiquitous this necessarily involves the use of AI. However, learning about AI and how to use it is not the primary goal of lane 2 assessments.

Fundamentally, lane 2 is assessment for and as learning. Our role as educators has always been to steer and support students in their approaches, their choice of tools, and the processes they use to understand material, ideas, and theories, develop skills, and create knowledge. Earlier technological advances such as the written word, printing press, calculator, and internet have changed the way students learn and educators teach. The focus on generative AI in lane 2 assessment reflects the newness and power of these tools rather than the purpose of the lane. Learning how to use AI productively and responsibly should be part of your course and assessment design simply because these tools now form part of the ways in which each discipline operates. Where the tools are not relevant or prevent learning, our role as educators is to help students recognise that setting them aside is the better choice.

The emergence of generative AI changes the ways in which students learn disciplinary knowledge and skills, just as earlier technological advances have done. We will continue to set tasks to help students learn – from fundamental facts, skills and concepts to critical thinking processes and knowledge creation. Just as before, we need to securely evaluate that students have learnt how to do this at key points in their program, via lane 1 assessments.

If lane 2 is where we support and scaffold the use of AI, aren't we just assessing students' use of AI and not disciplinary skills/knowledge?

As covered in the previous answer, generative AI has already changed, and will continue to change, the ways in which we work and the nature of disciplinary skills and knowledge. As with any new technology, there is a period of adjustment, and educators have a role in informing and helping students use it. However, as generative AI becomes more embedded in our everyday tools and in the ways in which we research and create knowledge, the narrow need to help students use AI will be replaced by the broader need to help them select and use the tools appropriate to our discipline. Evaluative judgement, and an understanding of how our discipline generates, tests, and creates new knowledge with all of the tools at its disposal, are still the most important attributes for our graduates.

Assessment and curriculum design

Can I apply a scale or traffic lights to how AI is used in lane 2?

No. Even in the short time since the release of ChatGPT (originally running on GPT-3.5, which has now been superseded by a number of newer AI models), it is already not possible to restrict AI use in assessments which are not secured in face-to-face environments. Nor is it possible to reliably or equitably detect that AI has been used, whether because a student has access to the latest technology, knows how to rework the raw output, or can afford to pay someone to do this for them. Any unenforceable restriction damages assessment validity, so a scale or traffic light approach that tells students they can only use AI for certain purposes, or only use certain AI tools, is untenable. A clearer and more realistic approach is to treat the use of AI in lane 2 assessments as a menu from which students can pick anything, with our role as educators being to guide them towards the options that are more delectable (better for learning).

We do, though, need to assure that our students meet the learning outcomes of their programs, and we need to do that through appropriately secured lane 1 assessments.

It's really important for students to use their brains in this assessment. How does this work with lane 2?

Selecting which tool to use for a task, using it well to perform that task, and evaluating the final output require skill, disciplinary knowledge, and evaluative judgement. These skills are core to program-level outcomes and must be developed in the units of study that form the program’s curriculum. Educators have always given students a choice of resources and tools, and explained their strengths and weaknesses. In recent years, students may have become used to thinking of units as separate entities whose role in developing program-level outcomes is unclear. It is important that students realise, through our conversations with them and through our program and assessment design, that their skills and knowledge will ultimately be tested with controlled, or no, access to generative AI in lane 1 assessments.

Crudely – if students choose to bypass learning in their assessment for learning, they will fail the assessments of that learning.

Should I make all my assessments 'lane 1'?

The answer to this question is in part pragmatic and in part about the purpose of assessment in education.

Generative AI is now an integral part of our productivity and communication tools, and its use cannot be reliably detected. Generative AI is already being added to ‘wearables’ such as glasses, pins and hearing assistance aids. It is not only sci-fi writers who are beginning to think of a future where AI implants are available. Genuinely securing assessment is already expensive, intrusive, and inconvenient for both the assessed and the assessor. Crudely – there is a high workload associated with lane 1 assessments, and this will only grow over time.

Assessment drives and controls behaviours that are critical to the process of learning. Each of our programs needs to include an appropriate mix of assessment for learning and assessment of learning, noting that the former includes using generative AI productively and responsibly in the context of the discipline. For programs thought of as a series of core, elective, and capstone units of study, this may mean that (i) some units have only lane 1 assessments, (ii) others have only lane 2 assessments, and (iii) some have a mixture.

What weighting should we give lane 1 and lane 2 assessments?

As a University and sector, we need to discuss the details of how the two-lane approach will apply to each program. We may decide on policies and approaches that apply to all, or we may be required to do this by external accrediting agencies. Early discussions suggest that some disciplines are considering whether grading should apply only to lane 1 assessments, which may become ‘hurdle’ (must pass) tasks, whilst lane 2 assessments are pass/fail.

As we wrestle with and adapt to a new approach to assessment, one heuristic may be the amount of effort that students need to apply to particular assessment tasks. Presumably these tasks will be meaningful and lead to deeper learning, i.e. assessment for and as learning. It follows that you will likely want to weight lane 2 assessments more heavily.

Should each unit have at least one lane 1 assessment?

As discussed in the previous answer, within a program: (i) some units have only lane 1 assessments, (ii) others have only lane 2 assessments, and (iii) some have a mixture.

The Open Learning Environment, for example, contributes to broadening the knowledge and skills of students in our liberal studies degree but does not have specified learning outcomes. Conversations are underway about whether the 2 credit point units should be pass/fail with only lane 2 assessments.

In other components of degrees, such as minors and majors, program leaders are thinking about the most appropriate mix. It might be, for example, that the learning outcomes for a major should all be assessed through lane 1 assessments in a capstone, or through assessments that sit alongside the contributing units of study.

In the context of the TEQSA guidelines for assessment reform, it’s important to remember that forming these “trustworthy judgements about student learning in a time of AI requires multiple, inclusive, and contextualised approaches to assessment” (emphasis added).

How can a few lane 1 assessments possibly assess all the learning outcomes across many units?

Lane 1 assessments assure the learning outcomes of the program (e.g. degree, major, specialisation or perhaps even level of study) rather than of the individual units, following the Higher Education Standards Framework (Threshold Standards) 2021 legislation, which emphasises outcomes at a course level. These learning outcomes are available for most degree components and published in the Handbook. Many programs, particularly those with external accreditation, also map and align the outcomes of their units to these. The move to the two-lane approach to assessment may require some rethinking of how our curriculum fits together, rewriting of these program-level outcomes, and even reconsideration of whether we assess only in individual units of study.

Why bother having lane 2 assessments at all? Shouldn't they just be learning activities that don't count for marks?

We know that marks drive student behaviour and promote effort. If we want students to engage effortfully in lane 2 assessments, we should recognise this by awarding marks for these assessment activities. Since assessment can engage the process of learning, aligning learning activities with lane 2 assessments is crucial, even to the extent that the two may become effectively the same in our minds and in those of our students. It may be appropriate that the assessments, and even the units, become about satisfying requirements rather than earning marks (i.e. pass/fail).

What is the point of units where there are only lane 2 assessments?

Units with only lane 2 assessments contribute to the learning outcomes of the program. Lane 2 assessments for learning drive and control the processes by which students develop disciplinary and generic knowledge and skills, including but not limited to the ability to use generative AI productively and responsibly. Units with only lane 2 assessments may have specific roles in a program – e.g. developing particular skills or dispositions – which students also use later or elsewhere in units with lane 1 assessments. In a STEM program, for example, a series of units might develop experimental and analytical skills through lane 2 assessments, which are then assured through a capstone research project unit with an oral exam and supervised skills assessments, rather like Honours projects and the PhD. In a humanities program, students might be exposed to different methods for critical analysis of texts in lane 2-only units, with a later capstone unit requiring a writing task assured through a series of supervised short tasks and an interactive oral assessment.

How do we run lane 1 assessments for large cohort units?

Large cohort units tend to occur in the initial years of our programs, when students need to acquire foundational knowledge and skills. The change to the two-lane approach to assessment is a good opportunity to think about how these units contribute to the outcomes of the programs in which they sit. For example, knowledge of the fundamental physical and biological sciences and of experimental approaches is important across many disciplines in STEM, but this knowledge could be assessed without a requirement for grades.

At the moment, many of these units already have lane 1 assessments in the form of highly weighted exams. As the foundational knowledge and skills in these programs also include how to select and use appropriate learning tools and resources, these units will also need to include lane 2 assessments. Accordingly, it will be important for unit and program coordinators to consider reducing the weighting of lane 1 assessments in these units and, for example, using a single year-level lane 1 assessment instead of many lane 1 assessments within each unit.

Should we be rethinking our learning outcomes?

The emergence of powerful generative AI tools has already started to change the ways in which we work and in which disciplines create and extend knowledge. It makes sense that program level outcomes, which describe what our graduates know and can do, are likely to change. Law practices, media companies, banks, and other industries are already requiring graduates to be skilled in the effective use of generative AI tools. Our PhD students will similarly need to be equipped to perform research and create knowledge using current tools and to adapt quickly to new ones.

Alongside rethinking program level outcomes to reflect these changes in technology, the two-lane approach requires detailed reconsideration of how unit-level outcomes are written, aligned, and mapped to the higher level ones.

Additionally, the rapidly expanding abilities of AI present an important moment for higher education to reconsider what it is that we want students to learn. For a long time, higher education has focused on the acquisition and reproduction of content knowledge, only more recently reprioritising towards skills. It’s possible that a more drastic reconsideration of learning outcomes is in order in a world with this capable co-intelligence.

Will there be more pass/fail units and a refocus on competencies as opposed to grades?

Sydney currently has relatively few pass/fail units, but there is some evidence from here and elsewhere that this grading scheme is beneficial, particularly where learning outcomes are predominantly focused on skills and competencies. Lane 2 assessments focus attention on assessment for learning, including acquiring skills and critical thinking processes, rather than on grading the result of such tasks. It may make sense for such assessments to be pass/fail only, with no other mark applied.

We need to discuss, as a University and sector, the details of how the two-lane approach will apply to each program. Early discussions suggest that some disciplines are considering whether grading should apply only to lane 1 assessments, which may become ‘hurdle’ (must pass) tasks, whilst lane 2 assessments are pass/fail.

Assessment validity

Can I still ban AI in lane 2 assessments?

For Semester 2, 2024, our Academic Integrity Policy has not changed, so coordinators still need to state whether students are permitted to use generative AI. By January 2025, the University aims to shift its policy setting to assume the use of AI in unsecured assessment – meaning that coordinators will not be able to prohibit its use in such assessments.

As noted above under ‘Assessment and curriculum design’, it is already not possible to restrict AI use in assessments which are not secured by face-to-face supervision, nor to reliably or equitably detect that it has been used. Any unenforceable restriction, such as stating that AI is not allowed, damages assessment validity and is untenable.

With AI tools becoming part of everyday productivity tools we all use, including the availability of Microsoft’s Copilot for Web as part of the software package that all enrolled students receive for free, it is important for coordinators to discuss their expectations with students.

Is there any point in marking lane 2 assessments if students just use AI to make them?

As noted above under ‘Assessment and curriculum design’, lane 2 assessments can powerfully engage students in the process of developing disciplinary and generic knowledge and skills, including but not limited to the ability to select and use generative AI tools productively and responsibly. Whilst applying marks to such assessments is motivating for students, the focus of marking should be overwhelmingly on providing students with feedback on both the product of the task and the process which they used.

As noted above, it may make sense for such assessments to be pass/fail only with no other mark applied.

What do we do about online quizzes? Are these lane 2 assessments?

Plugins exist for all of the common browsers which give students undetectable tools that leverage generative AI to answer multiple choice and short answer questions in Canvas and other assessment systems. Testing internationally suggests that these generative AI tools are capable of at least passing such assessments, often obtaining high marks and outscoring humans even on sophisticated tests.

Any assessment which is not supervised in a secure face-to-face environment is unsecured and hence a de facto (unscaffolded) “towards lane 2” assessment. This includes online quizzes, take-home assignments, and orals or presentations held using web-conferencing software.

Supporting students and staff

How do we make the transition to this new approach less painful?

The shift to the two-lane approach will require rethinking of both assessment and curriculum. It will take at least a couple of years for us to embed the approach in all programs, and longer still to perfect it. However, this approach is designed to be future-proof: it will not change as the technology improves, unlike approaches that suggest how to ‘AI-proof’ or ‘de-risk’ unsecured assessments. As the technology improves, lane 2 assessments will need to be updated and securing lane 1 assessments will become more difficult, but the roles of the two lanes will be maintained.

As noted in each section above, the two-lane approach needs to be thought of at the program level. If done well, it will reduce staff and student workload by reducing the volume of assessment and marking, with most assessments in lane 2: fewer exam papers and take-home assignments to prepare and mark, and more in-class assessment. The focus on lane 2 will also reduce the need for special consideration, simple extensions, and academic adjustments through the universal design of assessments and a reduced emphasis on take-home assignments and grading.

In 2024, many of the Strategic Education Grants were awarded to projects focussed on assessments aligned to the two-lane approach, including a substantial project on future models for creative writing assignments. The intention is to expand this scheme in 2025, with assessment redesign being the highest priority across the institution.

Through events such as the Sydney Teaching Symposium and the AI in Higher Education Symposium, and our national and international networks, we will continue to highlight exemplars from across the disciplines. Professional learning, workshops, resources, and consultations are available and will continue to develop through Educational Innovation and faculty teams.

How do we support and scaffold students' use of AI generally, and in lane 2 assessments specifically?

We have worked with student partners to develop the AI in Education resource (publicly available at https://bit.ly/students-ai) that provides advice, guidance, and practical examples of how students might use generative AI to improve learning, engage with assessments, and progress in their careers. Staff have also found this site informative.

The AI × Assessment menu is the preferred approach to scaffolding student use of AI in lane 2 assessments. The menu approach emphasises that there are many generative AI tools, and many ways to use these tools, that can support student learning in the process of completing assessments. The AI in Education resource linked above is being updated so that its advice aligns with the assessment menu.

How do we design lane 2 assessments that engage with AI when we are struggling to keep up to date with AI's capabilities?

We are working to curate examples of effective lane 2 assessments across the University to share more broadly to support colleagues in developing their own assessment designs. There is also a University-wide AI in Education community of practice that meets regularly to share updates and practices collegially. You can also stay up to date with key AI developments in our Teaching@Sydney coverage.

How do you assess a lane 2 assessment, especially if we know they are using AI?

A true lane 2 assessment will involve students using AI to support their learning and develop critical AI literacies. Fundamentally, a lane 2 assessment should aim to develop and assess disciplinary knowledge, skills, and dispositions, as any good assessment would. You may wish to focus lane 2 assessment rubrics on higher order skills such as application, analysis, evaluation, and creation. You may also wish to include how students are productively and responsibly engaging with generative AI in a disciplinary-relevant manner. We have rubric suggestions in appendices 2 and 3 of our advice guide (which is dated 2023 but remains current).

How do we address inequities in student abilities with AI?

State-of-the-art generative AI tools are not cheap, and most are only available on paid subscription plans that many students will not be able to afford. This access gap, coupled with the variable engagement with generative AI between independent and government high schools, means that students have varied access to and abilities with AI. The AI in Education site (publicly available at https://bit.ly/students-ai) aims to help address this divide, as does the institutional access to powerful generative AI tools such as Copilot for Web and Cogniti.
