Where are we with generative AI as semester 1 starts?

Generated with DALL-E 3 in ChatGPT Plus.

With students streaming back to campus as the Australian university year starts, educators are faced with an incoming cohort with widely variable experience with generative AI, and the ever-present challenge around academic integrity. What has happened in the AI space over the last few months that we need to know about going into 2024?

This is a slight expansion on an article that first appeared on LinkedIn.

It’s about students’ futures

More reports are emerging about AI’s impact on the jobs and workers of today and tomorrow. For students, this will be their primary concern – not so much whether we allow or disallow AI in our classrooms and assignments. For students, this impact on their future working lives will also be a significant motivator to engage with generative AI now while they are studying with us.

A recent report from McKinsey highlights a “new landscape of human work” and suggests that 9% of Australia’s workforce may “need to transition out of their current roles into new occupations by 2030”. Although the headline might seem like far-off hyperbole, AI is something we need to integrate into our university classrooms right now, with a focus on ‘human work’. A recent IMF report agrees, suggesting that up to 40% of the global workforce (and an even higher proportion in more advanced economies) is exposed to AI.

Although 2023 was the year that society was taken by surprise by the mainstreaming of generative AI use by individuals, 2024 will likely see more uptake from organisations and sectors. Just a few examples from December 2023:

  • the UK Judicial Office issued official guidance to allow the use of AI in summarisation and administration
  • the ABC highlighted individuals such as writers and contractors who believe AI has replaced them
  • media outlets have suggested that AI assistance means reporters can reinvest time to perform the more human aspects of journalism
  • the Wall Street Journal wrote about how architecture, medicine, tourism, engineering, and more are adopting generative AI
  • Microsoft’s Future of Work report summarises many impacts of large language models on productivity, innovation, collaboration, and more
  • (and one from February 2024) the Burning Glass Institute report on impacts across an even wider range of professions, including legal, audit, finance, accounting, analysts, software, marketers, designers, HR, and more.

But not all students have the same opportunity

The most capable AI models (available through tools such as ChatGPT Plus, Claude, Gemini Advanced) usually cost around USD 20 per month to access. In Australian universities, incoming first-year students from independent schools may have a year’s head-start on their public school counterparts, because public schools only started allowing generative AI towards the end of 2023.

The digital divide with generative AI is therefore about both access and skill. Institutions have an imperative to provide their staff and students with access to capable AI models – by this, I mean models like GPT-4 and not the free version of ChatGPT. The capability of GPT-4 for work in higher education far surpasses that of GPT-3.5.

A related equity issue is the old chestnut of AI detection. Research continues to emerge stating that, for example, “detection tools are neither accurate nor reliable”. In my mind, detection is actually an equity issue. AI detection algorithms tend to be trained on AI-written text, AI models advance quickly, and the more advanced models usually cost money to access – so it is typically students who lack access to more powerful AI models, or who don’t know how to use them (or both), who are likely to be caught out. It is much more equitable to provide all students with equal access to capable models, helping them to use AI productively and responsibly, even as part of assessments.

AI may be a bit more capable than you think

Those who have only used the free version of ChatGPT may understandably consider the outputs to be rather mediocre. If we’ve only tried our assignment prompts on free ChatGPT, we might even consider ourselves to have appropriately ‘AI-proofed’ our assessments. There are a few underlying considerations here:

  • ChatGPT is not the only tool available, by far. There are generative AI-enabled tools that use real literature, interpret and produce images, analyse data, and more.
  • GPT-3.5 (which powers the free version of ChatGPT) is much less capable than GPT-4.

Fundamentally, there is no way to ‘AI-proof’ a take-home assignment.

Sorry.

This is especially the case as generative AI continues to advance. You may like to skim this short video for a quick demonstration of the capabilities of some AI tools in the context of higher education. I made this video for the TEQSA conference in November 2023, and AI tools and capabilities have advanced even more since then.

You may also find (scarily?) informative the range of AI-powered tools that are being promoted by social media influencers, under such auspicious titles as “how to get As without listening to lecture”, “[AI tool] will provide real resources for you to cite”, and “download [AI tool] for your finals”.

Editor’s note: Just a day after this article was published, OpenAI announced Sora, an AI model capable of making photorealistic videos up to 1 minute long – you need to see the videos for yourself. Google has also announced the next version of Gemini, its most advanced AI, which apparently can take in almost 700,000 words at once as well as handle multimodal inputs and outputs.

Our assessments need to be redesigned – but this will take a while

So what do we do about assessments, if AI-proofing is not a viable option? We may not need to AI-proof assignments. Bear with me on this one.

We will need to help prepare our students for the future of work and life where generative AI is likely to be ubiquitous. We also need to graduate students who have achieved learning outcomes.

We can have assessments that do both – but not necessarily in the same assessment, and all the time. This is the foundation of our ‘two lane’ approach at the University of Sydney, which has resonated with educators and leaders around the world.

How do we assure that students know what they are meant to know? ‘Lane 1’ assessments are trustworthy, ‘secure’ assessments where verification or authentication occurs. These would typically be live and supervised. This verification would occur at the program level, and not necessarily within each unit or subject (that would be very expensive!). Lane 1 assessments may involve approaches like interactive oral assessments, or even the occasional sit-down test/exam if absolutely necessary. Lane 1 assessments may even involve the (supervised) use of AI – for example an architecture skills test that requires the use of MidJourney.

(There are excellent reasons why tests and exams are not a preferred approach to assessment. However, as we are in a transition period, the relative (but not absolute!) assurance provided by these assessments helps us stomach the changes that are needed. We just need to ensure that we don’t overuse them, and work towards reducing them.)

There are very few examples of ‘lane 2’ assessments currently. These assessments are where AI use is encouraged, scaffolded, and perhaps even evaluated. It’s becoming clearer that:

  • we can’t AI-proof take-home assignments,
  • students will need to learn how to use AI, and
  • students are going to be using AI anyway (and a significant proportion don’t quite understand its benefits and limitations).

Lane 2 assessments embrace these realities. We have some preliminary ideas about lane 2 assessments and are always on the lookout for examples. These assessments are not AI-proofed at all, and in fact shouldn’t be.

Going back to the point that this is part of our students’ future, a quote from a recent ABC article summarises this well: “Potentially AI will take your job but at the end of the day it means we can shift into higher value work.” (emphasis added).

The question therefore becomes how students use AI in these assessments, rather than whether.

But our workload!

110% agree.

Changing our assessments is not going to be a one-semester job, or even a one-year job. But we need to do it – there are still many assessments that sit between lane 1 and 2 (always a dangerous place to drive): not secured, but also disallowing AI. We do ourselves and our students a disservice with these assessments – we don’t (and can’t really) know if students are learning, and students with access and skill around generative AI have significant advantages over others.

So what are we to do? Part of the answer is probably ‘AI’ – as in, find ways that generative AI might help you save time that you can then reinvest elsewhere (and not necessarily back into more work). Another part of the answer might be re-examining what we teach and assess – do my students really need a 2% quiz each week? Do I really need to put that extra slide in about plasmodesmata?

What should learning in higher education look like?

When ChatGPT first arrived on the scene, I considered a few skills and characteristics that were ‘uniquely human’ and which should be the focus of our education in a generative AI world: critical thinking, creativity, culture. Whilst these are still crucially important to education, newer generations of generative AI have shown that they can replicate these and, in some cases like creativity, outperform humans.

More recently, I have started considering other skills and characteristics that embody human education:

  • Curiosity – innate to humans, but can be suppressed by an overly regimented education
  • Connection – core to our needs as humans
  • Community – an expression of connection and essential to social learning
  • Care – between teachers and students, teachers and teachers, and students and students
  • Compassion – an expression of care and a positive learning community
  • Conflict – a necessary part of the human condition and of learning
  • Choice – a fundamental human right

Now, none of these are necessarily going to make their way into program learning outcomes anytime soon, but they are important elements to hold on to for a sector that will be heavily impacted by AI.

As you reflect on your own learning journey, you will likely discover that your main learning is ‘learning how to learn’ and ‘learning how to think’ in a particular way. The presence of generative AI in education does not change this. It may slightly reroute how we get there. It may even help even more students from diverse backgrounds get there, if we make access and familiarity a sectoral priority. It will hopefully allow us to accentuate the characteristics that embody human education – many of which have arguably been eroded in the pursuit of scale, speed, and spending.

It is possible that ‘artificial general intelligence’ (AGI) is around the corner. Broadly, AGI is artificial intelligence which can do everything humans can do, and possibly better. The education sector needs to respond by responsibly and productively engaging with generative AI now to prepare teachers and students for an unfamiliar new world that is already starting to appear at speed.

How do I get started with generative AI?

So, where to from here?

If you’re relatively new to generative AI, now is a great time to start dipping your toes in. It’s definitely not too late. Here are a few pointers:

  • At the University of Sydney, staff have a few options to access GPT-4, the most capable AI model currently available.
    • You have access to https://copilot.microsoft.com/ (this used to be called Bing Chat) through your institution login. This is a free, secure way to experience GPT-4, sort-of. (Personally I find undiluted GPT-4 straight from ChatGPT Plus to be much more capable than Copilot, but it’s a start). The benefit of Copilot is that it’s connected to the internet and so can also search for live information.
    • Staff have access to Cogniti through institutional sign-in and can use the ‘vanilla GPT-4 agent’ within Cogniti for a ‘raw’ text-based GPT-4 experience.
    • Staff can also reach out to Educational Innovation directly for access to ‘raw’ text-based GPT-4 experience sponsored by Educational Innovation.
  • Find out what other educators are doing and how they are using AI. We recently ran a symposium where 27 speakers from Australia, New Zealand, and the US candidly shared their experiences using AI meaningfully in their classrooms and assessments. The recordings are available and, judging by the over 1,900 people who registered, this kind of sharing is what educators are hungry for right now. Many have commented to us afterwards that they were sceptical or fearful of AI, but seeing how others used it helped them to start exploring. Also, talk with your colleagues and students – we’re all exploring this together.
  • Start using generative AI for day-to-day things. Perhaps you need a quick Excel formula – explain this to AI and try what it says. Perhaps you need a quick review of an assessment description to check for clarity – ask AI and see what it suggests. Perhaps you need some ideas for an evaluation survey – tell AI about what you’re trying to do and check out its suggestions. Perhaps you need the main points from a long article – try this out on AI. The interesting thing about these modern generative AIs is that working with them can be as simple as explaining, in plain language, its role, its task, and any requirements. Much like how you learned to use the internet (by trial and error, mostly), learning how to leverage AI as a tool is a process of try and see.
  • Explore the ethics around generative AI. Leon Furze’s series on AI ethics is one of my go-to resources. For example, some calculations suggest that generating 1000 images using AI is the equivalent to driving over 6 km. The New York Times – OpenAI lawsuit is also live, and has interesting implications for copyright and ownership.
  • Then, explore some more possibilities for you and your students. The Teaching@Sydney AI in Education page is a good place to start. Also, we’ve curated a constantly-growing list of interesting resources and articles from around the internet.
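The ‘role, task, requirements’ pattern mentioned above can be sketched as a small helper that assembles a plain-language prompt. This is an illustrative assumption only – the function name and example wording are hypothetical, and the same text could simply be typed directly into any chat interface:

```python
def build_prompt(role, task, requirements=()):
    """Compose a plain-language prompt from a role, a task, and optional requirements."""
    lines = [f"You are {role}.", f"Your task: {task}."]
    if requirements:
        lines.append("Requirements:")
        lines.extend(f"- {req}" for req in requirements)
    return "\n".join(lines)


# Hypothetical example: asking for a clarity review of an assessment description
prompt = build_prompt(
    "an experienced university educator",
    "review the following assessment description for clarity",
    ["keep feedback concise", "suggest concrete rewordings"],
)
print(prompt)
```

The point is less the code than the structure: stating who the AI should be, what it should do, and any constraints tends to produce more useful responses than a bare question.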
Written by Danny Liu