On 24 May 2023, the HELTASA Digital Teaching and Learning Team hosted their second Digital Dialogue, entitled “How are institutions responding to AI?”. The webinar recording and some slides from the session are available. This was a follow-up to our earlier Digital Dialogue, “How do we navigate the AI hype together?”, held on 15 March 2023.
What is university for and what is the impact of AI tools?
The first invited guest speaker, Prof Mags Blackie from Rhodes University, contextualised the discussion in relation to the purpose of higher education. The value of a university degree can be measured in various ways (see Cole, 2018): increasing lifetime earning potential, developing subject matter expertise, forming networks with peers, and demonstrating “stick-to-itiveness” in surviving the process. A university degree can also have the capacity to transcend background. Higher education can, however, also be seen as a “degree factory”, where the point is simply obtaining a piece of paper from an accredited institution. This notion has been under threat for a while from “degree mills” and even forged certificates. A more recent focus has been graduate attributes. Is higher education about these attributes, or about skills acquisition? And how do these skills and attributes develop? Prof Blackie sees universities as creating environments where students have access to a coherent body of knowledge and become inducted into the ways in which knowledge in that discipline is produced. Higher education helps give students access to a new way of viewing the world, leading to a new way of being in the world and the potential to change the world. This means it is much more than a degree certificate.
Generative AI means that the things we have taken as proxies for demonstrating a change in ways of viewing and being, namely the products we ask students to submit, need to be reconsidered. For example, can a student produce an essay on a certain topic? Because AI tools can easily generate an essay, those proxies can become pointless. We then need to think about how we can know whether there has been a shift in a student’s way of viewing and being. The question to address is: how do we assess that shift in students?
Current responses to generative AI tools in participants’ organisations
Participants were invited to respond to a poll about their institutions’ responses to AI tools. The results indicated a wide variety of responses across institutions. The most common response (60% of respondents) was that there is no official university position or formalised guidelines available yet. This was followed by faculty-specific approaches being in development (38%) and warning students of sanctions for using AI tools when not permitted (33%). Other responses indicated that there is no single institutional approach and educators can choose their own (25%), that students are allowed to use AI tools if they disclose their use (21%), and that institutions are consulting students around AI tools and assessments (21%). A later poll indicated that in policy and formal institutional communications, a discourse of academic integrity (73%) is more prominent than discourses of cheating or plagiarism (27%).
Responses at Aga Khan University (Kenya)
Our next speaker, Dr Kendi Muchungi from Aga Khan University, shared the impacts of AI tools at her institution. She highlighted that institutions need to adapt to changes in their contexts. AI tools can be used for content summarisation (leading to efficiencies), critique of writing (as a learning opportunity), improving communication, and providing research assistance. The institution is looking at how tools like ChatGPT and Bard can be incorporated into learning, teaching and assessment processes. A key principle is helping students understand why we assess: as a way to facilitate learning rather than a focus on a mark. The focus needs to shift towards more authentic assessment.
Responses at Loughborough University (United Kingdom)
Our final speaker, Prof Sandie Dann from Loughborough University, shared how her institution is approaching academic integrity and AI tools in partnership with students. A key principle is that students need to understand what is expected of them, how academic (mis)conduct is determined and what the consequences are. Their experience is that the vast majority of academic misconduct cases involve misguided students, perhaps because they are desperate, feeling pressured or unclear about what is acceptable. Campaigns have been shared with students in various formats and channels (such as social media and the student union), with students involved in designing those campaigns to get the language right, so that students and staff understand each other. For generative AI tools, policies need to make clear to students what they can and cannot do; their current focus is on revising policies in light of recent developments. AI is just another tool, and we need to think about how we use tools for assessments. We also need to advise students on which tools to use and how to use them, and make it clear what is expected of students in an assessment.