The University’s ‘two-lane approach’ to assessment has been well received at the institution, nationally, and internationally. It is a simple model that recognises the need for assessment within universities to simultaneously assure the attainment of learning outcomes, and help students learn in a world where generative AI is quickly becoming ubiquitous. Here’s a quick summary:
| | Secure (Lane 1) | Open (Lane 2) |
| --- | --- | --- |
| Role of assessment | Assessment of learning | Assessment for and as learning |
| Level of operation | Mainly at program level | Mainly at unit level |
| Assessment security | Secured, in person | ‘Open’ / unsecured |
| Role of generative AI | May or may not be allowed by examiner | As relevant, use of AI scaffolded & supported |
| TEQSA alignment | Principle 2 – forming trustworthy judgements of student learning | Principle 1 – equip students to participate ethically and actively in a society pervaded with AI |
| Examples | In person interactive oral assessments; viva voces; contemporaneous in-class assessments and skill development; tests and exams | AI to provoke reflection, suggest structure, brainstorm ideas, summarise literature, make content, suggest counterarguments, improve clarity, provide formative feedback, etc. |
Over the last year, we’ve been speaking with educators here and across the educational community about assessment, and how ‘lane 1’ and ‘lane 2’ assessments might work. This living FAQ is a compilation of many of the insightful questions that people have asked. The additional questions from the all-staff webinar held on 27th November 2024 are marked with an asterisk (*).
Clarifications on lane 1 and lane 2
‘Full’ lane 2 assessments are those that embrace principle 1 of the TEQSA guidelines for assessment reform, namely assessments that thoughtfully incorporate AI to prepare students for a world where this technology is ubiquitous. Lane 2 is where learning will be developed in ways authentic to the discipline and, where relevant, contemporary workplaces. Just as we conduct our teaching, research and other work with access to the tools to assist us, lane 2 assessments develop and drive learning that is authentic. Lane 2 assessments emphasise that the majority of assessment should be for and as learning, undertaken by students whenever and wherever they are studying.
Acknowledging that an assessment is ‘open’ or ‘not secured’ (i.e. not lane 1) is not fully merging into lane 2, although it is a step in the right direction. In these situations, we may want to call this ‘towards lane 2’.
Assessment of learning (lane 1) is necessary, but is often undertaken in less authentic contexts such as exam halls or in front of a panel of teachers. The power of generative AI tools makes securing assessments even harder, and more intrusive and artificial.
Lane 2 is primarily about learning the ‘stuff, skills, and soul’ of the unit, program, and discipline. As the TEQSA guidelines point out though, in a world where AI is ubiquitous this necessarily involves the use of AI. However, learning about and how to use AI is not the primary goal of lane 2 assessments.
Fundamentally, lane 2 is assessment for and as learning. Our role as educators has always been to steer and support students in their approaches, choice of tools, and the processes they use to understand material, ideas, and theories, develop skills, and create knowledge. Earlier technological advances such as the written word, printing press, calculator, and internet have changed the way students learn and educators teach. The focus on generative AI for lane 2 assessment reflects the technology’s newness and power rather than its purpose. Learning how to use AI productively and responsibly should be part of your course and assessment design simply because these tools now form part of the ways in which each discipline operates. Where the tools are not relevant or prevent learning, our role as educators is to help students recognise that avoiding them is the better choice.
The emergence of generative AI changes the ways in which students learn disciplinary knowledge and skills, just as earlier technological advances have done. We will continue to set tasks to help students learn – from fundamental facts, skills and concepts to critical thinking processes and knowledge creation. Just as before, we need to securely evaluate that students have learnt how to do this at key points in their program, via lane 1 assessments.
As covered in the previous answer, generative AI has already changed, and will continue to change, the ways in which we work and the nature of disciplinary skills and knowledge. As with any new technology, there is a period of adjustment, and educators have a role in informing and helping students use it. However, as the tools become more embedded in our everyday tools and the ways in which we research and create knowledge, the need to narrowly help students use AI will be replaced by helping students select and use the appropriate tools for our discipline. Evaluative judgement and an understanding of how our discipline generates, tests, and creates new knowledge, with all of the tools at our disposal, remain the most important attributes for our graduates.
There are different purposes of assessment. Assessment for learning is where students learn through the process of completing assessments; the product or demonstration of knowledge or skill is not as important here. Assessment of learning is where we assure that students have attained the specified learning outcomes; demonstration of knowledge or skill is essential here. The former aligns with Open (Lane 2) assessments, while the latter aligns with Secure (Lane 1) assessments.
Not quite. Lane 1 is where students’ attainment of learning outcomes is checked in a secure environment. This may or may not involve the use of AI. In some situations, it will be perfectly legitimate and realistic that program learning outcomes need to include the productive and responsible use of AI. An example we use often is architecture: image-based generative AI tools are already used in the industry to enable more productive ideation, so it follows that architecture programs should help students engage with these tools, and therefore learning outcomes would be aligned to this. A lane 1 assessment (for example, an interactive oral assessment) where students are in an authentic 1:1 situation (perhaps a mock client-architect interaction) could be perfectly valid. Many lane 1 assessments, though, are likely to prohibit the use of AI in secured environments.
It’s both. Open (Lane 2) assessments serve multiple purposes. Primarily their role is to help students develop disciplinary knowledge, skills, and dispositions. In an environment where AI is ubiquitous, AI has a key role to play in helping with this. Our role is to help students learn how they can productively and responsibly engage with AI – part of this will be through Open (Lane 2) assessments.
Assessment and curriculum design
To avoid confusion, the term “programmatic assessment” is deliberately not used in the assessment principles in the Coursework Policy or Assessment Procedures, as it is associated with existing and extensive usage, in medical education in particular, and with a detailed area of scholarship. For our programs which are competency based, such as dentistry, medicine and veterinary science, a fully programmatic approach may be warranted. In most of our degrees, though, it is not. Program-level assessment and programmatic assessment should be viewed as mutually inclusive and beneficial. As detailed by Baartman et al. (2022), assessment design at the program level requires educators to make “diverse design choices … fitting their own context”.
The assessment principles in our Coursework Policy follow the TEQSA ‘Assessment reform for the age of artificial intelligence’ guidance and the Group of Eight principles on the use of generative artificial intelligence, instead referring to program-level assessment. These in turn follow from the Higher Education Standards Framework and from accrediting bodies that require demonstration of learning across a course.
Assessment principle 5 in the Coursework Policy states that “Assessment practices must be integrated into program design”. This principle requires that:
- (a) assessment and feedback are integrated to support learning across units of study, courses, course components and year groups;
- (b) assessment and feedback are designed to support each student’s development of knowledge, skills and qualities from enrolment to graduation; and
- (c) students’ learning attainment can be validated across their program at relevant progression points and before graduation.
Program-level assessment design thus requires alignment of assessment across units (a), deliberate consideration of each student’s development (b), and validation at relevant points in the course as well as before graduation (c).
Because of the effect of generative AI on many existing assessment types, Principle 6 also requires that this is done in a “trustworthy way” in which “supervised assessments are designed to assure learning in a program”.
“Program” here includes both courses and their components, such as majors and specialisations. The requirements of Principle 5, though, mean that validation at regular points must occur. In degrees like the BSc, BLAS, BA and BCom, this could mean once a student has completed the degree core, the first year, or the second year. The points at which it is most useful to validate learning, through lane 1 assessment, are best decided by the course and component coordinators.
It is recommended that secured assessments be distributed throughout a student’s journey, not just at the end before graduation. If the structure of the course or the component is suitable, this could integrate knowledge across the program, as required by many program level learning outcomes. Equally it could do so by assessing the program level outcomes applied in different contexts in more than one unit.
However, as noted above, assessment principle 5 in the Coursework Policy requires consideration of each student’s development and validation at relevant progression points. One model could be a lane 1 hurdle task at the end of each year, progressing from an oral discussion to a final exam with proper support for students in terms of practice, access to multiple attempts, adjustments etc.
Over time, the assessment principles will drive the integration and alignment of unit outcomes and assessment more deliberately. This will ensure that students build contemporary capabilities and disciplinary knowledge in structured ways, and that assurance/assessment of learning does not displace assessment for learning.
Program-level design can ensure our courses have the required integrity and offer reduced work and stress for individual coordinators and students. A number of faculties are already looking at their learning outcomes at the appropriate level that makes sense for the discipline – for postgraduate degrees this will commonly be course level and for undergraduate degrees it will commonly be the appropriate component such as major. These disciplinary level conversations will highlight the key moments at each stage of a degree to plan and place Secure (Lane 1) assessments.
Aligning to the two-lane approach is an ideal time to ensure constructive alignment to program-level outcomes, including reducing assessment, planning the places where Secure (Lane 1) assessment will be most effective, and considering the impact of generative AI on our disciplines and the ways in which knowledge is learnt and constructed. Note too that this alignment may also require curriculum changes.
Although we are aiming to align to the new assessment categories and types from Semester 2 2025, it is expected that more profound changes to the assessments themselves and the curriculum will occur over a longer timeframe of 1-2 years.
The article “Program level assessment design and the two-lane approach” provides literature, additional rationale and examples for how this could be achieved within our liberal studies degrees. Many universities are grappling with the same issues, which may require curriculum changes, reduction in flexibility where desirable, and thinking beyond units of study. It should be noted that the extent of modularisation and flexibility we presently have is relatively new, and many of us experienced program-level assessments ourselves.
Masters level courses commonly do not have embedded components such as majors, nor the flexibility of our liberal studies undergraduate degrees. With somewhat simpler curricula, program-level design will be different and perhaps easier to implement.
No. Even in the short time since the release of ChatGPT (originally running on GPT-3.5, which has now been superseded by a number of newer AI models), it is already not possible to restrict AI use in assessments which are not secured in face-to-face environments. It is also not possible to reliably or equitably detect that it has been used, whether because a student has access to the latest technology, knows how to change the raw output, or can afford to pay someone to do this for them. Any unenforceable restriction damages assessment validity, so a scale or traffic-light approach of telling students that they can only use AI for certain purposes, or use certain AI tools, is untenable. A clearer and more realistic approach is to consider the use of AI in lane 2 assessments as a menu, where students can pick anything and it is our role as educators to guide them towards the options that are more delectable (better for learning).
We do though need to assure our students meet the learning outcomes of their programs and we need to do that through appropriately secured lane 1 assessments.
This is not available in the options for Semester 1. There is no way to reliably restrict or select which, if any, generative AI tools students should use for a given task except in Secure (Lane 1) environments. Rather, it is part of our role as educators to curate, suggest and scaffold our resources and activities to help students make the most appropriate choices – just as we have always done in the past, for example, in helping students use the literature.
In the two-lane approach, there is no “amber” traffic light and AI use can only be restricted when we can do so securely and reliably. In Open (Lane 2) assessments, there is only a “green” light and the option of which tool to use for which activity becomes more like a menu with the educator helping to guide but not control the choice.
If using and selecting the right AI tool is part of the assessment and the program learning outcomes, then it would need to be part of a Secure (Lane 1) activity, where “restricted AI” is an option.
The drop-down item ‘unsecured assessment – no AI’ being available for Semester 1, 2025 is only a temporary measure, noting that it is actually not possible to enforce this.
As noted in the answer above, there is no “amber” traffic light in the two-lane approach: AI use can only be restricted where we can do so securely and reliably. It is definitely part of the educator’s role to recommend certain uses of AI in Open (Lane 2) assessments, but not something an educator can meaningfully control. If using and selecting the right AI tool is part of the assessment and the program learning outcomes, then it would need to be part of a Secure (Lane 1) activity, where “restricted AI” is an option.
Selecting which tool to use for a task, using it well to perform that task, and evaluating the final output requires skill, disciplinary knowledge, and evaluative judgement. These skills are core to program-level outcomes and must be developed in the units of study that form its curriculum. Educators have always given students a choice of resources and tools, and explained their strengths and weaknesses. In recent years, students may have become used to thinking of units as being separate entities with their role in developing program-level outcomes unclear. It is important that students realise through our conversations with them and through our program and assessment design that their skills and knowledge will ultimately be tested with controlled, or no, access to generative AI in lane 1 assessments.
Crudely – if students choose to bypass learning in their assessment for learning, they will fail the assessments of that learning.
This guidance should form part of the scaffolding that the educator provides to students, more like a menu with the educator helping to guide but not control the choice the students make. Just as we have always provided students with help in searching for literature, part of the learning activities and assessment description will no doubt include guidance on how to approach the assessment most effectively.
The answer to this question is in part pragmatic and in part about the purpose of assessment in education.
Generative AI tools are an integral part of our productivity and communication tools, and their use cannot be reliably detected. Generative AI is already being added to ‘wearables’ such as glasses, pins and hearing assistance aids. It is not only sci-fi writers who are beginning to think of a future where AI implants are available. Genuinely securing assessment is already expensive, intrusive, and inconvenient for the assessed and the assessor. Crudely – there is a high workload associated with lane 1 assessments, and this will only grow over time.
Assessment drives and controls behaviours that are critical to the process of learning. Each of our programs needs to include an appropriate mix of assessment for learning and assessment of learning, noting that the former includes using generative AI productively and responsibly in the context of the discipline. For programs thought of as a series of core, elective, and capstone units of study, this may mean that (i) some units have only lane 1 assessments, (ii) others have only lane 2 assessments, and (iii) some have a mixture.
As a University and sector, we need to discuss the details of how the two-lane approach will apply to each program. We may decide on policies and approaches that apply to all, or we may be required to do this by external accrediting agencies. Early discussions suggest that some disciplines are considering whether grading should only apply to lane 1 assessments, which may become ‘hurdle’ (must pass) tasks, whilst lane 2 assessments are pass/fail.
As we wrestle with and adapt to a new approach to assessment, one heuristic may be the amount of effort that students need to apply to particular assessment tasks. Presumably these tasks will be meaningful and lead to deeper learning, i.e. assessment for and as learning. It follows that you likely want to more heavily weight lane 2 assessments.
As discussed in the previous answer, within a program: (i) some units have only lane 1 assessments, (ii) others have only lane 2 assessments, and (iii) some have a mixture.
The Open Learning Environment, for example, contributes to broadening the knowledge and skills of students in our liberal studies degree but does not have specified learning outcomes. Conversations are underway about whether the 2 credit point units should be pass/fail with only lane 2 assessments.
In other components of degrees, such as minors and majors, program leaders are thinking about the most appropriate mix. It might be that, for example, the learning outcomes for a major should all be assessed through lane 1 assessments in a capstone, or through assessments that sit in addition to the contributing units of study.
In the context of the TEQSA guidelines for assessment reform, it’s important to consider that these “trustworthy judgements about student learning in a time of AI requires multiple, inclusive, and contextualised approaches to assessment” (emphasis added).
Lane 1 assessments assure the learning outcomes of the program (e.g. degree, major, specialisation or perhaps even level of study) rather than of the individual units, following the Higher Education Standards Framework (Threshold Standards) 2021 legislation which emphasises outcomes at a course level. These learning outcomes are available for most degree components and published in the Handbook. Many programs, particularly those with external accreditation, also map and align the outcomes of their units to these. The move to the two-lane approach to assessment may require some rethinking of how our curriculum fits together, rewriting of these program level outcomes, and even reconsideration of whether we only assess in individual units of study.
We know that marks drive student behaviour and promote effort. If we want students to engage effortfully in lane 2 assessments, we should recognise this by providing marks for these assessment activities. Since assessment can engage the process of learning, aligning activities to lane 2 assessment is crucial, even to the extent that the two may become effectively the same in our heads and in those of our students. It may be appropriate that the assessments, and even the units, become about satisfying requirements rather than earning marks (i.e. pass/fail).
Units with only lane 2 assessments contribute to the learning outcomes of the program. Lane 2 assessments for learning drive and control the processes by which students develop disciplinary and generic knowledge and skills, including but not limited to the ability to use generative AI productively and responsibly. Units with only lane 2 assessments may have specific roles in a program – e.g. developing particular skills or dispositions – which students also use later or elsewhere in units with lane 1 assessments. In a STEM program, for example, a series of units might develop experimental and analytical skills through lane 2 assessments which are then assured through a capstone research project unit with an oral exam and supervised skills assessments, rather like Honours projects and the PhD. In a humanities program, students might be exposed to different methods for critical analysis of texts in lane 2-only units, with a later capstone unit requiring a writing task assured through a series of supervised short tasks and an interactive oral assessment.
Large cohort units tend to occur in the initial years of our programs, when students need to acquire foundational knowledge and skills. The change to the two-lane approach to assessment is a good opportunity to think about how these units contribute to the outcomes of the programs in which they sit. For example, knowledge of the fundamental physical and biological sciences and of experimental approaches is important across many STEM disciplines, but these could be assessed without a requirement for grades.
At the moment, many of these units already have lane 1 assessments in the form of highly weighted exams. As the foundational knowledge and skills in these programs also include how to select and use appropriate learning tools and resources, these units will also need to include lane 2 assessments. Accordingly, it will be important for unit and program coordinators to consider reducing the weighting of lane 1 assessments in these units and, for example, using a year-level lane 1 assessment instead of many lane 1 assessments within each unit.
This is already possible under the existing assessment types and categories in the Assessment Procedures. In the new categories and types proposed for Semester 2, this would be achieved by having an Open (Lane 2) written assessment followed by a Secure (Lane 1) oral assessment. The latter could be selected to be a hurdle task.
Yes. As in the answer directly above, this would be achieved under the new categories and types proposed for Semester 2 by having an Open (Lane 2) written assessment followed by a Secure (Lane 1) oral assessment, which could be selected to be a hurdle task.
The proposed assessment categories and types for Semester 2 2025 include “Presentation” as an Open (Lane 2) assessment and “Q&A following presentation, submission or placement” as a Secure (Lane 1) assessment. In this model, students are able to use AI in the production of the slides and script and to assist with the presentation itself (or even to use an avatar to make a video presentation). The quality of the product they produce and the process they use form the Open (Lane 2) assessment. In class Q&A can be used to assure that they have understood the material presented.
This would be best made a single Secure (Lane 1) assessment or a pair of Open (Lane 2) and Secure (Lane 1) assessments. Given the different special consideration and academic adjustments that would be needed for the formative and summative parts, the latter would probably be more suitable for most cases. To reduce confusion, you may consider naming the two assessments with similar phrasing.
The emergence of powerful generative AI tools has already started to change the ways in which we work and in which disciplines create and extend knowledge. It makes sense that program level outcomes, which describe what our graduates know and can do, are likely to change. Law practices, media companies, banks, and other industries are already requiring graduates to be skilled in the effective use of generative AI tools. Our PhD students will similarly need to be equipped to perform research and create knowledge using current tools and to adapt quickly to new ones.
Alongside rethinking program level outcomes to reflect these changes in technology, the two-lane approach requires detailed reconsideration of how unit-level outcomes are written, aligned, and mapped to the higher level ones.
Additionally, the rapidly expanding abilities of AI are an important moment for higher education to reconsider what it is that we want students to learn. For a long time, higher education has focused on the acquisition and reproduction of content knowledge, only more recently reprioritising towards skills. It’s possible that a more drastic reconsideration of learning outcomes is in order, in a world with this capable co-intelligence.
Our graduate qualities already include “Information and digital literacy”, defined as “the ability to locate, interpret, evaluate, manage, adapt, integrate, create and convey information using appropriate resources, tools and strategies”. Current thinking is that this, together with “Critical thinking and problem solving” and an “integrated professional, ethical and personal identity”, sufficiently reflects our desire for our students to be effective and ethical leaders in the use of AI.
Most faculties will be looking to refresh and realign their assessment and course level outcomes (CLOs) over the next couple of years. As noted in the question, it is important that there are multiple points at which CLOs are assessed, and we may consider introducing “milestone” and “stage gate” points to provide feedback on progression.
Guidelines on the number of CLOs, related assessments, and how such progression points might work will be developed with program and course directors.
Sydney currently has relatively few pass/fail units, but there is some evidence from here and elsewhere that this grading scheme is beneficial, particularly where learning outcomes are predominantly focused on skills and competencies. Lane 2 assessments focus attention on assessment for learning, including acquiring skills and critical thinking processes, rather than grading the result of such tasks. It may make sense for such assessments to be pass/fail only with no other mark applied.
We need to discuss, as a University and sector, the details of how the two-lane approach will apply to each program. Early discussions suggest that some disciplines are considering whether grading should only apply to lane 1 assessments, which may become ‘hurdle’ (must pass) tasks, whilst lane 2 assessments are pass/fail.
Open (Lane 2) assessments focus attention on assessment for learning, including acquiring skills and critical thinking processes, rather than grading the outputs of such tasks. In some circumstances, it may make sense for such assessments to be pass/fail only with no other mark applied.
The proposed Secure (Lane 1) assessment category “In person – secured” contains a number of assessment types which could be run in tutorial or laboratory settings. The currently proposed assessment types in this category include “In person practical, performance or creative task”, “In person written or creative task” and “Q&A following presentation, submission or placement”. These types are intended to align with the kinds of activities that are most reliable and equitable for measuring and assuring learning.
Assessment validity
For Semester 2 2024, our Academic Integrity Policy has not been changed, so coordinators still need to state their permission for students to use generative AI. By January 2025, the University aims to shift its policy setting to assume the use of AI in ‘open’ or ‘unsecured’ assessment – meaning that coordinators will not be able to prohibit its use in such assessment.
As noted above under ‘Assessment and curriculum design‘, it is already not possible to restrict or detect AI use in assessments which are not secured by face-to-face supervision, and it is not possible to reliably or equitably detect that it has been used. Any unenforceable restriction such as stating that AI is not allowed damages assessment validity and is untenable.
With AI tools becoming part of everyday productivity tools we all use, including the availability of Microsoft’s Copilot for Web as part of the software package that all enrolled students receive for free, it is important for coordinators to discuss their expectations with students.
As noted above under ‘Assessment and curriculum design‘, lane 2 assessments can powerfully engage students in the process of developing disciplinary and generic knowledge and skills, including but not limited to the ability to select and use generative AI tools productively and responsibly. Whilst applying marks to such assessments is motivating for students, the focus of marking should be overwhelmingly on providing students with feedback on both the product of the task and on the process which they used.
As noted above, it may make sense for such assessments to be pass/fail only with no other mark applied.
If a student uses generative AI to complete an Open (Lane 2) assessment, they may do so brilliantly, well or poorly – the “product” produced, such as the written report, video or presentation, should be marked accordingly. However, the process they use to produce it is likely to be more important, since it is this that students will need feedback on to ensure they can improve, subsequently pass Secure (Lane 1) assessments, and perform tasks in their careers.
Rubrics which combine assessment of both the product and the process will need to be developed which reflect this balance and provide students with feedback.
We would advise against removing weighting from Open (Lane 2) assessments, as marks motivate student behaviour.
Open (Lane 2) assessments can powerfully engage students in the process of developing disciplinary and generic knowledge and skills. Applying marks to such assessments is motivating for students, and the focus of marking should be overwhelmingly on providing students with feedback on the process which they used. This might include how well a student has used generative AI tools and the critical thinking that has been applied.
In certain circumstances, it may make sense for such assessments to be pass/fail only. However, we recommend that Open (Lane 2) assessments are still worth marks.
Plugins exist for all of the common browsers which give students undetectable tools that leverage generative AI to answer multiple choice and short answer questions in Canvas and other assessment systems. Testing internationally suggests that these generative AI tools are capable of at least passing such assessments, often obtaining high marks and outscoring humans even on sophisticated tests.
Any assessment which is not supervised in a secure face-to-face environment is ‘open’ or ‘unsecured’ and hence a de facto (unscaffolded) “towards lane 2” assessment. This includes online quizzes, take-home assignments, and orals or presentations held using web-conferencing software.
Because collusion or plagiarism are not graduate qualities we wish to develop. The productive and responsible use of generative AI will be a key set of knowledge, skills, and dispositions that our graduates will need, much like our graduates need the knowledge, skills, and dispositions to use the internet, to work with others, to influence, and to lead.
It is untenable for us to attempt to restrict how students will use generative AI unless they are supervised in a Secure (Lane 1) assessment context. Students will be able to complete assessments using AI without our knowledge and those with access to more sophisticated tools or who have the skills will be able to do so without us being able to detect it. If we say to students that only certain uses of AI are permitted, but we have no way of detecting or knowing this, the validity of that assessment (i.e. what it is measuring from student to student) is questionable.
It is important that we teach students how to use AI tools effectively and ethically – including how to prompt, how to improve the output and how to apply critical thinking to assess the quality of the work produced. Therefore, the language we use in Open (Lane 2) assessments needs to shift from what uses of AI are ‘allowed’ or ‘permitted’, to what uses of AI are most ‘helpful’.
No. The shift to the two-lane approach requires a cultural shift for both staff and students, and we should work together to ensure that students receive the required communications and feedback, particularly if we move towards program-level approaches to Secure (Lane 1) assessment, including appropriately placed hurdle tasks.
For students who are recent school leavers, the assessment regime is not so different from that in high schools. For the HSC, for example, the mark in each subject is a combination of school-based assessment marks (many of which, arguably, are now not secure) with a summative Secure (Lane 1) type assessment. Students are aware that their mark in the latter requires them to work hard during the year at the former to gain the necessary disciplinary knowledge, skills, and dispositions.
Students should not pass our degrees if they have not met the course learning outcomes. However, in the two-lane approach, a take home assignment would be an Open (Lane 2) assessment in which the use of generative AI is scaffolded and not artificially banned. Our ability to detect the use of AI is just as unreliable as that of detection software and evidence suggests that it is only poor use that is reliably detected.
If it is important that the student complete written work without access to generative AI tools, then the task must be secured, or become part of a scaffolded sequence of tasks which includes a Secure (Lane 1) assessment to secure that learning outcome.
Academic integrity, contract cheating, and AI detection
Because using contract cheating companies is not a graduate quality we wish to imbue, whereas the productive and responsible use of AI is. Open (Lane 2) assessments are for learning – including helping students master using contemporary technologies including generative AI in preparation for their careers. The students remain responsible for the quality of submitted work and for ensuring that they learn through the tasks.
This is very different to hiring a contract cheating company to complete an assessment for them.
See the answer directly above for why contract cheating is very different to the scaffolded use of generative AI in Open (Lane 2) assessments.
We remain committed to protecting our assessments from contract cheating / ghost writers. Use of such approaches by students will remain a serious integrity breach. The two-lane approach also ensures that the use of such approaches will not lead to students gaining degrees because they will not pass the Secure (Lane 1) assessments in their programs.
See the answer above for why contract cheating is very different to the scaffolded use of generative AI in Open (Lane 2) assessments.
With generative AI tools such as Microsoft Copilot available to students, it is hoped that the business model for contract cheating companies is undermined, noting that many of these companies now use generative AI with poor-quality prompting.
Research has consistently shown (e.g. Scarfe et al. (2024) and Fleckenstein et al. (2024)) that humans cannot accurately detect good AI use. There are also adversarial techniques that can be used to subvert AI detection algorithms. In this environment, Secure (Lane 1) assessments are needed to assure learning, along with Open (Lane 2) assessments to help students use AI productively and responsibly.
AI detection software is used by the University as part of investigations of academic integrity breaches. However, such software is unreliable on its own, and there are tools as well as prompting approaches on social media to avoid detection. Our stance is that the two-lane approach is the only reliable, tenable, and equitable way to protect the integrity of our degrees and to teach our students to use AI effectively and ethically.
Similarity detection software will continue to be a powerful tool for detecting plagiarism. As noted in the two answers directly above, detecting the use of generative AI is much harder and less reliable: the text produced in response to an assessment prompt is effectively different each time it is run and, with the appropriate paid-for tool or skill, a user can produce text which the detection software may not identify as machine-written.
The philosophy behind the two-lane approach is that acknowledging and appropriately using generative AI in Open (Lane 2) assessments, or using it where it is allowed in Secure (Lane 1) assessments, is not cheating.
The detection software itself uses AI-based approaches. It is continually being developed but is always going to be behind the development of the generative AI models: as each new model is produced the detection software developers must catch up. It is effectively an “arms race” in which those who can afford the latest models will remain undetected.
We believe that the two-lane approach provides integrity for our degrees through the Secure (Lane 1) assessments and is the only defensible and future-proofed approach. Relying on detection software will never be reliable and is likely only to “catch” those who use the tools poorly or who cannot afford the latest AI models.
Secure (Lane 1) assessments
The two-lane approach highlights the key role of Secure (Lane 1) assessments in assuring learning at the program-level. This should lead to the planning and placing of written and face-to-face oral exams at appropriate places in courses, as well as greater weighting to other in-person assessments such as in laboratories and on placements.
The proposed Secure (Lane 1) assessment categories and types reflect current thinking on the assessments that will be useful for assuring learning at the program-level. Many of these reflect traditional secure assessment models like exams and orals, as well as greater use of in person practical, performance and creative tasks and of placements and workplace settings.
In 2024, around 80% of our units (corresponding to just under 60% of unit enrolments) did not have exams. Adding such Secure (Lane 1) assessments to more units would require considerable time in the academic calendar, resources and additional workload for academics.
As noted in other responses, it is important to consider the planning and placing of such assessments within the program and course.
The two-lane approach highlights the key role of Secure (Lane 1) assessments in assuring learning at the program-level.
Assessments that work beyond individual units, with program-level design, can ensure our courses have the required integrity without increasing the overall number of exams. In 2024, around 80% of our units (corresponding to just under 60% of unit enrolments) did not have exams. Adding such Secure (Lane 1) assessments to more units would require considerable resources and additional workload for academics.
The two-lane approach highlights the key role of Secure (Lane 1) assessments in assuring learning at the program-level. As noted in the question, securing assessments will become harder and harder as technologies such as wearables become both more powerful and smaller. This is one reason why it may become more desirable to consider Secure (Lane 1) assessments beyond individual units of study. The Exams Office currently trains invigilators to spot the use of such devices and we would expect this to become more important and, yes, potentially more intrusive.
There are many issues with closed book exams including inauthenticity, inequity, and an inability to measure key graduate qualities. If we do nothing, we risk becoming irrelevant because not only will we not support and scaffold students’ productive and responsible use of AI, but we will also not be able to assure learning using existing assessments because students are likely to be using AI anyway.
The Exams Office supports exams run in the formal exam period and intensive sessions. In 2024, around 80% of our units (corresponding to just under 60% of unit enrolments) did not have exams. Adding such Secure (Lane 1) assessments to more units would require considerable resources and additional workload for academics. Assessments that work beyond individual units, with program-level design, can ensure our courses have the required integrity and offer reduced work and stress for individual coordinators and students.
We will look at the feasibility of providing support and training for those supervising other Secure (Lane 1) assessments.
Interactive oral assessments are authentic, scenario-based conversations where students can extend and synthesise their knowledge to demonstrate and apply concepts. They are a good example of Secure (Lane 1) assessments which are not formal examinations. More details of some practical ways of getting started with these, including dealing with scale and equity, are available on Teaching@Sydney.
Although using these assessments at scale requires significant planning, there are distinct advantages in terms of reduced marking load and reduced workload for replacement assessments. For example, educators who have implemented interactive oral assessments consistently note that a 15 minute conversation allows them to explore student understanding in a much deeper way than reading a written report, and providing feedback is also much faster.
Interactive oral assessments were also developed in 2024 through Strategic Education Grants and more resources and examples will be published soon. Educational Innovation is planning to start a community of practice at Sydney to support educators who wish to trial interactive oral assessments in 2025 and beyond.
Formal oral exams, as with other Secure (Lane 1) assessments, should be planned and placed to assure learning at the program level. This might not, for example, include large, first year units of study.
In many cases, it will be natural for Secure (Lane 1) assessments to become hurdle tasks – such as when they assure learning at the program-level. Such hurdle tasks may become part of “milestone” and “stage gate” points in courses and so it is important that a program level approach is considered.
Please contact your Associate Dean (Education) for local guidelines on assessment changes. Hurdle tasks are available in our Coursework Policy and Assessment Procedures and are likely to form a valuable tool when used as part of a program-level approach to Secure (Lane 1) assessment.
Hurdle tasks, in the form of an examination with a “specified minimum pass threshold” are also recommended as an integrity measure in the Academic Integrity Procedures.
Hurdle tasks are already commonly used for many placements and a lot of skills-based assessments.
Using generative AI for marking and feedback
Guidelines on the use of generative AI for marking and feedback are available, noting that our position is that the final mark and feedback are the responsibility of the marker. Nonetheless, educators at Sydney are already using generative AI Cogniti agents to help enhance actionable feedback for both students pre-submission and for markers.
In our response to TEQSA’s request for information on our approach to generative AI, we listed transparency as fundamental to ensuring trust in the University community of staff and students. The use of generative AI by educators should be acknowledged just as we require students to do. This is also in line with the Federal Government’s Artificial Intelligence Ethics Principles.
Generative AI tools can reduce workload in many areas of teaching, such as drafting rubrics and multiple choice questions with feedback, producing teaching resources and suggesting analogies. Cogniti offers the possibility for educators to build ‘doubles’ of themselves by making and controlling their own AI agents.
Guidelines on the use of generative AI for marking and feedback are available, noting that our position is that the final mark and feedback are the responsibility of the marker.
The skills and academic knowledge needed to mark the accuracy of submitted work are not changed by the use of generative AI tools in Open (Lane 2) assessments. Students are responsible for the accuracy of the work submitted and should be marked accordingly.
Guidelines on the use of generative AI for marking and feedback are available, noting that our position is that the final mark and feedback are the responsibility of the human marker. Nonetheless, educators at Sydney are already using generative AI Cogniti agents to help improve the quality of actionable feedback for both students pre-submission and for markers.
The introduction of scaffolded, Open (Lane 2) assessments in which students acknowledge their use of generative AI is expected to greatly reduce the number of integrity cases.
The training referred to above is available to all academics, continuing and sessional.
Large language models (LLMs) attempt to mimic human output when prompted to do so. As they are not databases of knowledge, a response generated by AI may contain false or misleading information presented confidently. This is known as “hallucination”.
As noted in the previous answer, the skills and academic knowledge needed to mark the accuracy of submitted work are not changed by the use of generative AI tools in Open (Lane 2) assessments.
Policy changes for Semester 1 and planned changes for Semester 2
For Semester 1, the only change is that the default academic integrity setting* will move from generative AI being prohibited except when expressly allowed by a coordinator to being allowed unless expressly prohibited by the coordinator.
* For examinations and in-semester tests, the default will continue to be that AI is prohibited unless expressly allowed.
If a coordinator makes no changes to the assessment settings in Sydney Curriculum, then an “AI allowed” icon will appear in the assessment table in the unit outline for all assessments except examinations and in-semester tests. A coordinator can change this using the new drop-down menu in Sydney Curriculum.
For the larger changes ahead of Semester 2, support will be available for both the system changes and for the assessment changes – see the previous question.
For Semester 2 2024, our Academic Integrity Policy has not been changed, so coordinators still need to state their permission for students to use generative AI. By January 2025, the University aims to shift its policy setting to assume the use of AI in ‘open’ or ‘unsecured’ assessment – meaning that coordinators will not be able to prohibit its use in such assessment.
As noted above under ‘Assessment and curriculum design‘, it is already not possible to restrict or detect AI use in assessments which are not secured by face-to-face supervision, and it is not possible to reliably or equitably detect that it has been used. Any unenforceable restriction such as stating that AI is not allowed damages assessment validity and is untenable.
With AI tools becoming part of everyday productivity tools we all use, including the availability of Microsoft’s Copilot for Web as part of the software package that all enrolled students receive for free, it is important for coordinators to discuss their expectations with students.
No changes are needed for the ‘Academic Integrity’ and ‘Compliance’ statements for Canvas assignments for Semester 1 2025. If changes are made for Semester 2 2025, these will be provided on the Artificial Intelligence and Assessment page on the intranet.
In principle, all of the proposed new assessment types for Semester 2 will be available for both individual and group work.
A quick guide for staff about choosing the correct assessment category and type is available. This guide and the helptext in Sydney Curriculum have been updated to align with the new integrity settings. They will be further updated ahead of Semester 2 with the new assessment labels.
Remote assessments such as proctored written exams and online oral exams will become increasingly difficult to secure as generative AI software and associated wearables become more commonplace. Currently, we are proposing that such assessment is accompanied by a risk assessment and will require appropriate approval.
For online courses, program level assessment design is therefore paramount to maximise the assurance of learning in secure settings. For example, online postgraduate courses should make use of opportunities for in-person experiential activities including on placements and in intensive sessions, where secure assessments may also be placed. The use of capstone units should also be considered with in-person assessments using exam centres, agreements with other providers etc.
See the answers to the previous questions.
In-person assessments using exam centres, placements and agreements with other providers will no doubt form part of this, as we used to do pre-2019.
Although personalised academic plans will always be available to students, there is an opportunity to universally design Open (Lane 2) assessments so that adjustments are not needed for most students. For example, online quizzes become opportunities for students to practice or apply knowledge. The need to set short release times for such quizzes is removed and repeat attempts can be made available.
The two-lane approach highlights the key role of Secure (Lane 1) assessments in assuring learning at the program-level. As these assessments will be fewer in number, but may be hurdle tasks, adjustments would need to be personalised for students.
The special consideration matrix in the Assessment Procedures will need to be redeveloped for both lanes. Consultation and the required policy changes are due to occur in early 2025 ahead of operationalising for Semester 2 2025. As noted in (53), this may require some re-thinking of the most effective way of supporting students.
The change in the role and likely weighting of submitted assignments is such that a conversation about the function of simple extension is planned. Similarly, the importance of scaffolding the production of extended writing over sequenced tasks – e.g. developing a research question, performing a literature search, producing a draft and final artefact – mean that the role of anonymous marking may need to be discussed for some written assessments.
The role of special consideration is to promote equity and inclusion – ensuring that all students have an equitable chance to complete assessment irrespective of their circumstances. This is built into the Assessment Principles in the Coursework Policy and Assessment Procedures. Simple extensions are part of this. As the assessment types change, we will need to examine how best to support students who are ill or suffer other misfortune.
It may be noted that program-level design can also help reduce special consideration load through planning of deadlines and tasks across units.
Secure (Lane 1) assessment should be planned and placed for assuring learning at the program-level. Special consideration and academic adjustments will remain important for these assessments. However, the Secure (Lane 1) assessments will need to be supervised in person.
The early feedback task is a great example of an assessment for learning where the main goal is to provide educators and students with feedback on their understanding at a key milestone of a unit. The early feedback task is ideally suited to an Open (Lane 2) assessment such as an online multiple choice quiz or another assessment from the “Practice or application – open” category proposed for Semester 2.
Supporting students and staff
The shift to the two-lane approach will require rethinking of both assessment and curriculum. It will take at least a couple of years for us to embed the approach in all programs, and longer still to perfect it. However, the approach is designed to be future-proofed: it will not change as the technology improves, unlike approaches that suggest how to ‘AI-proof’ or ‘de-risk’ ‘open’ or ‘unsecured’ assessments. As the technology improves, lane 2 assessments will need to be updated and securing lane 1 assessments will become more difficult, but the roles of the two lanes will be maintained.
As noted in each section above, the two-lane approach needs to be thought of at the program level. If done well, it will reduce staff and student workload through a reduction in the volume of assessment and marking, with most assessments in lane 2: fewer exam papers and take-home assignments to prepare and mark, and more in-class assessment. The focus on lane 2 will also change the need for special consideration, simple extensions, and academic adjustments through the universal design of assessments and a reduced emphasis on take-home assignments and grading.
In 2024, many of the Strategic Education Grants were awarded to projects focussed on assessments aligned to the two-lane approach, including a substantial project on future models for creative writing assignments. The intention is to expand this scheme for 2025, with assessment redesign being the highest priority across the institution.
Through events such as the Sydney Teaching Symposium and the AI in Higher Education Symposium, and our national and international networks, we will continue to highlight exemplars from across the disciplines. Professional learning, workshops, resources, and consultations are available and will continue to develop through Educational Innovation and faculty teams.
We have worked with student partners to develop the AI in Education resource (publicly available at https://bit.ly/students-ai) that provides advice, guidance, and practical examples of how students might use generative AI to improve learning, engage with assessments, and progress in their careers. Staff have also found this site informative.
The AI × Assessment menu is the preferred approach to scaffolding student use of AI in lane 2 assessments. The menu approach emphasises that there are many generative AI tools, and many ways to use these tools, that can support student learning in the process of completing assessments. The AI in Education resource linked above is being updated so that its advice aligns with the assessment menu.
This process will take many years for the University to get right. For Semester 1, 2025, you do not need to have fully adopted program-level design. However, you should start conversations with your program (course, major) teams to develop program-level learning outcomes that are meaningful, contemporary, and authentic. Once these learning outcomes are determined, Secure (Lane 1) assessments should be designed at the program level. Some may already exist within units, reducing the amount of redesign work needed.
The plan to align our assessment with the age of generative AI is designed to engage, involve, and support coordinators. As detailed in the webinar, coordinators who leave the default settings unchanged will be choosing to allow generative AI to be used in each assessment, except for exams and in-semester tests. This step is designed to engage all educators in conversations about the nature and purpose of the assessments in their units. Ahead of Semester 2 2025, the new assessment categories and types will require a decision about each assessment. Support will be provided through 1:1 conversations as well as University and discipline-based workshops. As a community of educators, we each have an important role to play in helping colleagues understand the challenges and opportunities ahead.
We are curating examples of effective lane 2 assessments from across the University to share broadly and support colleagues in developing their own assessment designs. There is also a University-wide AI in Education community of practice that meets regularly to share updates and practices collegially. You can also stay up to date with key AI developments through our Teaching@Sydney coverage.
A true lane 2 assessment will involve students using AI to support their learning and develop critical AI literacies. Fundamentally, a lane 2 assessment should aim to develop and assess disciplinary knowledge, skills, and dispositions, as any good assessment would. You may wish to focus lane 2 assessment rubrics on higher-order skills such as application, analysis, evaluation, and creation. You may also wish to assess how productively and responsibly students engage with generative AI in a disciplinarily relevant manner. We have rubric suggestions in appendices 2 and 3 of our advice guide (dated 2023, but still current).
State-of-the-art generative AI tools are not cheap, and most are only available on paid subscription plans that many students will not be able to afford. This access gap, coupled with the variable engagement with generative AI between independent and government high schools, means that students have varied access to and abilities with AI. The AI in Education site (publicly available at https://bit.ly/students-ai) aims to help address this divide, as does the institutional access to powerful generative AI tools such as Copilot for Web and Cogniti.
For educators, Educational Innovation runs a number of workshops on the essentials of generative AI, including prompting, and on using Cogniti, a tool that lets teachers build their own AI agents. Many of our Modular Professional Learning Framework (MPLF) modules now include uses of generative AI in teaching, learning, and assessment. In 2025, we will offer additional workshops and consultations on AI and assessment.
For students, the Library and Learning Hub offer workshops for coursework and research students. The ‘AI in Education’ site, co-designed and built with students, provides ways for students to use generative artificial intelligence productively and responsibly as part of their learning journey.
The Academic Honesty Education Module (AHEM), which all new students complete, is currently being updated to reflect the new integrity settings on generative AI.
A mandatory module like AHEM is very useful for ensuring students understand our policies and their responsibilities. However, students develop the skills to use AI in their learning more effectively through activities in units of study. We are currently developing a short tutorial activity to embed in the transition units that sit in the first semester of every undergraduate degree. This activity can be contextualised for the particular disciplinary needs of each degree.
See previous answers for an overview of training and support.
Generative AI tools can reduce workload in many areas of teaching, such as producing rubrics, writing multiple choice questions with feedback, producing teaching resources, and suggesting analogies. Cogniti lets educators build ‘doubles’ of themselves by making and controlling their own AI agents.
Assessments that work beyond individual units, through program-level design, can ensure our courses have the required integrity while also reducing work and stress for individual coordinators and students.
Sydney Curriculum (Akari) has now been updated with the new integrity settings for Semester 1 2025. Unit outlines completed before this update will be modified on behalf of the coordinator.
Consultations with academic developers and educational designers are planned. The assessment categories and types for all units will need to be updated, and there will be an opportunity for assessment revision and reform at unit and program level for Semester 2 2025 and for 2026. Some of the assessment changes will require approval through the normal faculty-based mechanisms.
All staff and students have access to Microsoft Copilot for the Web. There are no present plans to give access to the full version of Microsoft Copilot Pro.
A full list of the currently endorsed and supported generative AI tools is available on the intranet.
We have a growing collection of assessment exemplars on Teaching@Sydney and are also active in national and international networks that are developing similar resources. We will continue to grow this collection and welcome contributions from across the University.
Support will be available through workshops, consultations, and 1:1 meetings with coordinators to map their existing assessments to the new categories and types ahead of Semester 2. In addition, help will be available to make the required changes in Sydney Curriculum.
As noted above, we will continue to share exemplars from across the University and from elsewhere as educators develop assessments aligned with the Secure (Lane 1) and Open (Lane 2) categories including those developed through the 2024 Strategic Education Grants.
TEQSA published a toolkit, “Gen AI strategies for Australian higher education: Emerging practice”, on 28th November 2024. It is based on information provided to TEQSA by higher education providers earlier in the year, including the University’s submission. The University’s two-lane approach is featured as a key example of practice in the toolkit.
Ethics of education and AI
In many assessments, a student’s choice of which technology to use in an Open (Lane 2) assessment might well include “none”. As generative AI technologies become part of everyday productivity tools, such a choice will be harder to make and probably less relevant.
In some disciplines such as architecture, the use of contemporary technologies is part of the course learning outcomes and it may not be possible for a student to meet these without using generative AI.
It is likely that in 3-5 years’ time, generative AI will feel much less ‘strange’ than it does now. For now, we should design assessments so that conscientious objectors are able to achieve the learning outcomes equitably.
When using generative AI tools, it is important to recognise the potential privacy and IP concerns. The Intranet has a list of approved and recommended AI tools, as well as guardrails to help you determine how to use AI tools safely. University-approved tools do not hand over IP for the training of generative AI models.
The Federal Government’s Artificial Intelligence Ethics Principles include a commitment that “AI systems should benefit individuals, society and the environment”. Although AI presently requires large amounts of energy and water, it also offers the possibility of increased efficiencies. It is important that our students are AI literate and have expertise in its effective and ethical use, including an understanding of its environmental impact. As educators, part of our role is to discuss the negative effects this technology could have, so that our students, when they hold leadership positions in companies and governments, make informed decisions.
Part of our role as educators has always been to provide students with the mindset and tools to consider the reliability and plausibility of material they are presented with. This has never been more important than now.
More generally, AI hallucinations are not something the University can fix; the major AI labs are working on reducing them.
The global conversation around generative AI has shifted over the last six months. Initially, there was widespread alarm about academic integrity and assessment security. More recently, the discourse has moved towards a recognition that higher education needs to re-evaluate its value and purpose. This is a deeply philosophical and practical conversation that needs to happen across all areas of higher education.