Assessment in an Age of AI: Some Guiding Principles
I have recently had a period of research leave allowing time and space to delve into (see what I did there!) all things AI and assessment, and AI and higher education. Back in April 2024, I wrote a piece for Learning Matters on what might be lost with the advances of generative AI. Since then, my views have somewhat shifted: I am now more pragmatic in my approach, but I maintain a critical standpoint. As I say to my students, always challenge the status quo! Notwithstanding my shift along the generative AI enthusiasm scale, I have not changed my view on what should be guiding discussions in higher education. For me, the key focus should be the impact of these tools on cognitive ability and functioning. As part of wider work I am involved in for my department (Sussex Law School), I have recently put together a paper entitled ‘Assessment in an age of AI: A Proposed Roadmap for Sussex Law School’.
In this blog post, I share some of the key points from that paper.
1. Pedagogical decisions need to be made
One thing is for sure: we can no longer bury our heads in the sand. Longer-term pedagogical decisions need to be made about assessment modes and practice across schools and departments. In making those decisions we should, alongside managing marking load and other pressures on resources in the current higher education climate, consider:
1. What is most valuable for our students in terms of the process of learning and the impact on cognitive ability
2. Assessment validity and security
3. Critical AI literacy skills
Allowing our students to use generative AI tools without detection or permission is, in the long run, a disservice to them and raises significant questions about validity and academic integrity. We need to accept that we are in a world where we cannot control generative AI use outside of supervised settings. It is, of course, also completely futile to ban generative AI use. Generative AI is here to stay and, in a law context, it is worth noting that according to a LexisNexis survey from September 2024, ‘4 out of 5 lawyers across the United Kingdom and Ireland currently use AI or have adoption plans in place’.
Moving forward, as educators, it is for us to decide whether we assess the performance of the whole ‘student plus generative AI’ system or the student’s unaided performance once access to generative AI is withdrawn. Those decisions should be guided by evidence-based pedagogy, not industry speak. The generative AI tech industry, notwithstanding the rhetoric, does not, and I repeat does not, care about our students’ learning or any impact on cognitive ability and functioning.
2. Developmental, Programme Level and Student Centred
Pedagogically and practically, I suggest we assess both the performance of the whole ‘student plus generative AI’ system and the student’s unaided performance, but in doing so, take an approach that is:
developmental
programme level; and
student centred
A. ‘Developmental’
In evaluating the developmental appropriateness of allowing generative AI tools in assessment, convenors should consider:
1. Is utilisation appropriate for the cohort and level?
2. Have students completed a (the law school’s?) critical AI literacy programme?
3. Have students already had an opportunity to master key human-specific legal skills? And are we satisfied, as a department, that they have shown competency in those skills?
4. Equity and accessibility: Can students access said generative AI tools equitably?
B. ‘Programme Level’
What do we mean by programme level assessment? Programme level assessment focuses on the degree/programme/course as a whole. It considers the programme or course level learning outcomes. Assessment security at programme level considers the need for assessment OF learning at key ‘touchpoints’ along the programme.
Programme Level Assessment v Programmatic Assessment
Programme level assessment is not the same as programmatic assessment. Programmatic assessment is ‘a systematic approach wherein the outcomes of a variety of purposefully selected assessment tasks are longitudinally collected, collated, and combined to obtain triangulated information about a learner's progress in developing key competency domains and capabilities’. One could apply programmatic assessment in a module via a portfolio approach.
C. ‘Student Centred’ – process over product
In broad terms then, the assessment approach will be process oriented rather than product oriented. It will involve some of our assessments being secure from generative AI use (no assessment can be fully secure) at key stages (touchpoints) in our programmes (LLB, MA, LLM), but with the majority of our assessments, where possible, returning to focus on what happens in the classroom with our students (process over product) as part of a portfolio/in-seminar assessment approach. Such an approach will also include marks for attendance and engagement as part of that ‘portfolio’. Where a portfolio-based approach is not achievable at scale (core modules/large optional modules), we adopt lower-weighted in-seminar assessments (focusing on process) and attendance and engagement marks alongside a secure in-person assessment.
In the ideal then, and where scale allows, modules will include:
1. Marks for attendance and engagement as guided by Universal Design for Learning (UDL) principles [I am mindful of the literature critiquing attendance and engagement marks for EDI reasons. We need to be alert to this and creative and adaptive in how students can gain attendance and engagement marks.]
2. In-seminar tasks/assessments [some including use of gen AI where appropriate]
3. Meaningful formative assessment activities [some including use of gen AI where appropriate]
4. Assessment to be completed in term time.
In broad terms, then, the approach I propose adopts a combination of the two: assessing the performance of the whole ‘student plus generative AI’ system and assessing the student’s unaided performance.
To finish, some headline messages:
1. Assessment modes should be an appropriate mechanism for making an inference about what our students have learnt as per our learning outcomes. Otherwise, what’s the point in assessing at all? If we can no longer make a meaningful inference about learning in a module given the advances in generative AI, we either change the mode of assessment or we change the learning outcomes to embrace generative AI use.
2. Where a case is made by a module convenor to use/allow generative AI in an assessment, consideration must be given to whether that use is appropriate for the level, whether it fits within a wider developmental and programme-level assessment approach, and whether the convenor has accounted for access and equity in accessing tools. (Free-version tools, as we know, are less capable than paid-for subscription versions.)
3. Law schools should most definitely be engaging with generative AI tools in teaching. Engaging with these tools can provide important learning moments for students and a space for critical discussion about their use: to facilitate learning, testing, contemporary application, consideration of ethical implications, impact on learning and so on. Engagement also feeds into commitments to students being work-ready. However, we (schools) do need to be clear about the position on generative AI use for learning and generative AI use for assessment.
If you are interested in reading the paper in full, drop me an email.
References
Cecilia Ka Yuk Chan and Tom Colloton, Generative AI in Higher Education: The ChatGPT Effect (Routledge 2024).
L Furze et al, ‘The AI Assessment Scale (AIAS) in Action: A Pilot Implementation of GenAI-Supported Assessment’ (2024) 40(4) Australasian Journal of Educational Technology 38-55.
James M. Lang, ‘The Case for Slow Walking Our Use of Generative AI’ (The Chronicle of Higher Education, 29 February 2024).
Mairéad Pratschke, Generative AI and Education: Digital Pedagogies, Teaching Innovation and Learning Design (Springer 2024).
TEQSA, Gen AI Strategies for Australian Higher Education: Emerging Practice (November 2024). Available at: https://www.teqsa.gov.au/guides-resources/resources/corporate-publications/gen-ai-strategies-australian-higher-education-emerging-practice