The discourse around generative AI in higher education has been dominated by academics, researchers, and thought leaders – but the student voice has been largely missing. In early March, we held two forums where students from across the University shared their insights about what ChatGPT means for their study and their future. With over 450 live participants, we weren’t able to get through all the pressing questions from the audience, so we asked some of the panellists to follow up with their thoughts.
In the second part of this series, read what students had to say about your top questions on the ethics and integrity of generative AI, and what it means for the value of university. Read all their answers below, or jump to a particular question of interest.
- Are you confident that you understand the potential biases and limitations of tools like ChatGPT, in terms of the knowledge base it relies on to learn and provide its responses?
- What kind of guidance would you want from your lecturers or the university surrounding the ethical use of generative AI for assessments?
- If AI tools start to be paid-only, should the University make them available to all students?
- There is a fear that there might be an increase in academic integrity cases where students use artificial intelligence to write their assignments. What would you say to your teachers about this?
- Should we make a move back to having in-person exams, in order to counter the perceived threat of AI-generated assessment responses to academic integrity?
- If you found out that a group member used ChatGPT/DALL-E/other generative AI to produce their contribution to a group assessment, how would you handle it?
- If AI is used by students to locate information and produce responses to questions and problems, how are the bedrock skills of education, such as critical research skills and critical thinking, developed?
- How do you believe ChatGPT has impacted what you interpret as the value of your degree? What suggestions do you have for improving the ‘value’ of your education in light of ChatGPT and other generative AI?
Ethics of generative AI
Are you confident that you understand the potential biases and limitations of tools like ChatGPT, in terms of the knowledge base it relies on to learn and provide its responses?
Students were quick to acknowledge that biases exist, but said that these were not always apparent, and that they (and their peers) would rely on trusted sources such as their teachers to better understand them. “I feel that the biases are dangerous because not many people understand where they come from and students may be oblivious to the biases if they are looking to use the AI tools as a desperate measure for a quick answer,” said Mary, a first-year Business student. Isabella, a fifth-year Law student, went further, saying that these issues were not front of mind for students even though they should be. “ChatGPT itself provides very little information about this on their platform, and the discourse I have heard around it from my peers has not been about the risks, biases or limitations,” she said. “I strongly believe that our teachers should be educating us on this and help us understand this technology we are using.”
What kind of guidance would you want from your lecturers or the university surrounding the ethical use of generative AI for assessments?
Students noted the need for clarity and examples. Marc, a first-year Science student, said this involved “clear expectations as far as exactly what situations are appropriate to use ChatGPT in, and advice for how to use it best specific to that subject/field.” Isabella expanded on this with a practical example, saying, “a demonstration would be helpful, of what is allowed and what is not allowed. Also, a provision of examples would be helpful to make clear the uses students can apply when completing their assessments e.g. ‘Using ChatGPT to search a summary of John Austin’s theory of law to help you understand a quick overview is allowed, but copying and pasting that summary into your paper is not, as this is not a peer reviewed source. Be aware that any answer generated cannot be definitely proven to be correct as it is compiled from a number of sources, some of which may be invalid.’”
Students were also keen on teachers and the University providing explanations on the issues around AI to build their critical engagement with it. Isabella felt that “an explanation and breakdown of the potential biases, limitations and risks of use in the academic context” would be useful, and Angad, a second-year Engineering student, suggested teachers could “explain the drawbacks of the AI, and actively integrate it into tasks”. He added that this could involve showing students the responses that AI provides, and emphasising the need to still develop disciplinary expertise to evaluate the quality of the AI responses.
If AI tools start to be paid-only, should the University make them available to all students?
Students were almost unanimous in agreeing with this, with Jack, a second-year Engineering student, saying that “otherwise there is a definite inequity among students”. Marc noted that the pervasiveness of these tools created an urgency to ensure equitable access, saying “as this is a tool that a bulk of students would inevitably use in their education, it would be ethical to make this available for all students”. Bronte, a second-year postgraduate student from Architecture, Design, and Planning, agreed, pointing also to the future: “as a tool which is more than likely going to be advancing in the future, it would be imperative to make AI available to ensure an equal standing ground for all students”. Connie, a third-year undergraduate student also from Architecture, Design, and Planning, emphasised the equity element, saying, “all students should have the same resources as one another; being of a lower income should not be a reason why other students can do their assessments more efficiently”. Finally, Marisse, a fourth-year student from Medicine and Health, brought these together, saying “it is the future trend of how people work and it is definitely a very useful tool.” She went on to say:
allowing all students to access it during their university studies is important for developing equity and preparing students for the future and building ethics around using AI tools
Mary was more cautious, saying, “I don’t believe that the university should provide all students with access to paid-only AI, because I think it encourages and promotes heavy use of AI instead of occasional use”. She encouraged a sector-wide approach to making AI tools easily available, saying, “if [the University] were to provide access to AI, a consensus with other universities across Australia would be more equitable”.
Academic integrity and assessment security
There is a fear that there might be an increase in academic integrity cases where students use artificial intelligence to write their assignments. What would you say to your teachers about this?
“We know this is a difficult time for teachers,” Isabella acknowledged. “The traditional ways of learning… are changing. But this technology is now our present and the future, so we need our teachers to prepare us for it. A lot of us still do not fully understand this technology either and we need to be taught how to use it productively for ourselves, for tertiary education, and in greater society through our careers.”
Students thought that the best way to protect against academic integrity issues was actually to engage with AI tools and help students learn how to use them productively and responsibly. “The best course of action is for teachers and the university to address the presence of ChatGPT and how to appropriately use it,” said Girisha, a second-year Engineering student. “The university should also highlight that AI tools such as ChatGPT do not always give you the correct answers or the answers could be biased,” added Marisse. “Therefore, students should only be using AI as a helping tool and critically think if the information given is correct, instead of submitting the work from it without thinking.”
In addition to this engagement with AI tools, students saw academic integrity education as important. “The university has been doing a great job in educating [students about] the consequences of plagiarism, such as academic penalty, which has a positive effect in preventing plagiarism. I think the university should continue emphasising the consequences of academic integrity [breaches],” said Marisse. Students were also realistic about the impact of AI on integrity – Bronte pointed out that there would be telltale signs of content written by AI without human expertise: “Using AI still requires a base knowledge of content and critical thinking skills, students who don’t have this will be very easily caught out”. Girisha was similarly realistic, saying, “it is important to note that already more viable forms of cheating have existed before ChatGPT such as contract cheating due to the human side that generative AI doesn’t have”.
Isabella had some wise words to share:
Please do not assume the worst of us. Rather, teach us how to use this technology in the right way and learn alongside it
Should we make a move back to having in-person exams, in order to counter the perceived threat of AI-generated assessment responses to academic integrity?
Students felt that there were no real differences between in-person and online exams in terms of integrity benefits, although some recognised that adapting assessments takes time. “We understand that there may be a grace period in which the university needs to adjust to [AI] technology and until they know how to control it, they may wish to fall back on the tried and true method of in-person exams,” said Isabella. However, they had consistent messages around seeing AI as an opportunity to catalyse change in assessment practices. “It is important to embrace changes that come with time such as moving to a more digital age rather than reverting back to the more traditional methods,” said Girisha. Connie agreed, saying that examinations may just be “yet again another way of getting students to memorise answers rather than think critically,” emphasising the need for assessment redesign.
Isabella again had wise words for the University community on this. “The university should not let their fear of students using generative AI in assessments keep assessment processes rooted in the past,” she said. “Tertiary institutions should be learning how to adapt to these dynamic changes rather than retreating to traditional methods.”
If you found out that a group member used ChatGPT/DALL-E/other generative AI to produce their contribution to a group assessment, how would you handle it?
Students’ responses to this question presented an instructive model for how staff might engage with suspected AI use in assessments. “I would first ask them how they used generative AI to produce their contribution,” Isabella said. “Was the entire answer produced by the technology? If so, I would tell them that this will not align with university policy and will likely breach academic integrity. I would then ask them to re-write their contribution so as not to put our group assessment and all members in jeopardy. However, if they have only used the generative AI as an initial basis to their contribution to gain background information and for other minor purposes, meaning their contribution was still primarily their own work, I would make sure that they have [acknowledged] their use of the generative AI and ask them to cross check their work, ensure it has relevant peer reviewed citations that make it valid and adheres to all the assessment criteria, sufficiently answering the assessment question.”
Other students had a similarly educative and supportive approach. Angad suggested he would “ask them to cross check the assignment and perform quality controls. A way of doing that would be looking at the marking criteria in detail and seeing if every point is addressed and a good mark is possible. I would also ask the rest of the group to read, without telling them it’s been written by AI, and ask them to give a peer review.” Girisha had similar thoughts: “I would check in with the group member and ask specifically how they used generative AI for their contribution i.e. did they use it for inspiration/research or copy the entire answer from AI. If it was the latter, I would ask them to make sure they have referenced the use of AI and try their best to modify the answer and check with the other group members the best course of action.”
Some students suggested that group members should be expected to self-regulate, perhaps driven by group expectations. “I would probably hope that group members would disclose whether they have used generative AI as a ‘significant’ backbone for their work,” said Mary. “I wouldn’t be upset unless it was clear that the majority of their writing was sourced from it.” Victorian, a third-year Business student, added that she “would bring up the fact that it is noticeably done by AI and if I notice it, the marker would definitely notice and they should work on it more. However, by then I would already be annoyed they made it my job to quality control their part of the assessment”.
Value of university and qualifications
If AI is used by students to locate information and produce responses to questions and problems, how are the bedrock skills of education, such as critical research skills and critical thinking, developed?
Students were united in saying that AI is likely going to reduce some of the ‘tedium’ of research and inquiry, whilst elevating the need for human perspectives and insight. Angad likened ChatGPT to the availability of other research tools. “Google Scholar changed the way research occurs, allowing us to research from home… and I think ChatGPT is another step forward in the same direction,” he said. “If anything, it makes the tedious parts of research more efficient.” Victorian had similar thoughts, and suggested that the discourse on AI’s impact on the development of critical thinking “emphasises the point that [AI] only removes the tedious parts of assignments leaving more time for critical thinking if anything”.
Some students highlighted the perspective that AI actually heightens the need to develop these critical skills. “Critical research skills including reading a variety of sources such as research studies, opinion articles, and journal articles are extremely important,” Mary said. “Having an AI tool that provides one [with a] simplified response limits the amount of perspectives and amount of information for us to process and assess.” Mary saw this as a negative, suggesting that AI’s biases might limit diverse insight. Connie suggested that by looking into ChatGPT’s responses, students might actually develop stronger critical skills by needing to evaluate its accuracy.
Isabella emphasised that “AI only puts greater emphasis on our soft skills to critically research and think. It highlights the irreplaceability of a human touch, and our ability to add personal reflection and insight into solving an issue and making it relatable to wider society. This is an opportunity for teachers to focus on how to further develop students’ critical thinking skills, as much of the research and busywork can be done for us, and will be done for us in our future careers.” She then added a challenge for us in higher education around the development of critical research and thinking skills: “the tertiary education sector must shift their approach and adapt accordingly – focusing on developing [these] skills in students that have now become ever more valuable to employers”.
How do you believe ChatGPT has impacted what you interpret as the value of your degree? What suggestions do you have for improving the ‘value’ of your education in light of ChatGPT and other generative AI?
“I think the uncertainty of it is what is scary,” Victorian began. “But as it stands now, I think employers would see just how difficult it is to replace humanity with AI.” Other students agreed with this, emphasising that university was about developing the deeper thinking skills that only humans can achieve. “I think ChatGPT will have a minimal impact on the value of my Architectural Science degree as it requires specific skills which seem, at least at this time, to be best answered by human problem-solving and thinking,” said Bronte. Mary, from Business, had similar thoughts: “I believe that such critical thinking and research skills are more attributable to humans than AI. While AI may improve in the future, I believe that humans are [still at] the forefront of business endeavours and procedures.”
In terms of practical advice for teachers, Bronte suggested that “by ensuring assignments are specific and focus on developing critical skills which can’t be easily replicated by ChatGPT, the value of education will remain as high as it currently is.” Connie felt similarly, saying AI could be used as a stepping stone to the development of critical, human skills. “[By] using ChatGPT as a draft, having a shift towards [the] importance of referencing and sources, the value of my degree comes from understanding different sources enough to be able to formulate my own beliefs and choices,” she said, echoing the idea raised by others that AI could present an efficiency gain that affords students more time to develop higher order skills.
We’ll leave the last word to Isabella, who encourages the University to prepare students for the future. “I believe that ChatGPT has increased the value of my degree,” she said. “I am proud to be studying at a point in time where ChatGPT has emerged as I know it will form a significant part of my future career and therefore it is very useful that I am learning how to use it productively now in preparation for such. I know my future employers will expect me to know how to use it and so I see that it could be advantageous in terms [of] employability if the university develops a comprehensive way to educate students on how to use these technologies productively. It will also significantly benefit the university in its image as a dynamic, tech-forward and modern university who goes the extra mile to prepare their students for the workforce by prioritising preparing their students through use of these technologies.”
Tell me more!
- Check out the complete coverage of AI and education and how it impacts the University of Sydney, including practical tips for teachers on what to do in class and for assessments
- Read part 1 of this series where students answer your questions about the impact of AI on assessments and their future