Assessment
5 Key Pedagogical Principles to Guide Incorporation of Generative AI into Assessment
The decision to incorporate generative AI into higher education assessment is complex and requires careful consideration. Here are five guiding principles to help educators navigate this decision:
1. Prioritise Learning Goals and Authentic Assessment
Before considering any technology, clearly define the learning objectives and intended outcomes of the assessment. Consider:
- What specific knowledge, skills, and competencies are being assessed?
- Will using generative AI enhance or hinder the measurement of these learning goals?
- Can the same goals be achieved without incorporating generative AI, in the current context of learning and future job preparation?
2. Ensure Fairness, Equity, and Accessibility
Generative AI should not create or exacerbate existing inequalities. It is important to provide clear guidelines and resources that help students understand why the tools are or are not used, and how to use them. Consider:
- Bias: AI models are trained on vast datasets, which can contain biases that lead to unfair advantages or disadvantages for certain student groups.
- Access: Not all students have equal access to the technology and resources needed to engage with AI-powered assessments.
- Transparency: Students should understand how AI is being used in their assessments and have the opportunity to provide feedback.
3. Focus on Developing Higher-Order Skills
Generative AI can be an excellent assistant for automating tasks that rely on lower-order cognitive skills such as information recall and basic problem solving. Rather than using it to replace these skills, consider leveraging it to:
- Redirect effort and time towards developing higher-order thinking
- Develop AI literacy
4. Maintain Academic Integrity and the Value of Human Input (of both assessors and students)
While Generative AI can assist with assessment, it should not replace the crucial role of the human assessor. Consider the role of the assessor in:
- Designing meaningful assessments: While Generative AI can support the design of an assessment, it is the human assessor’s role and responsibility to define its purpose, format, and criteria.
- Providing personalised feedback: While Generative AI can offer feedback on specific aspects, the assessor should provide holistic feedback that considers individual student needs and learning journeys.
- Upholding academic integrity (through a non-punitive stance):
  - Set and communicate clear rules and expectations as to why, what, when, and how Generative AI can and cannot be used by students for an assessment.
  - Clearly articulate the consequences of flouting these rules and expectations.
  - Implement measures to deter students from flouting the rules and expectations.
  - For students who may have flouted the rules and expectations, engage them in conversation with the aim of determining intentionality and educating them on the responsible and ethical use of Generative AI in learning.
5. Embrace Experimentation and Continuous Evaluation
The field of AI is constantly evolving, so approach AI integration with a spirit of:
- Piloting and experimentation: Start with small-scale implementations, gather data on effectiveness, and solicit feedback from both students and lecturers.
- Adaptability and iteration: Be prepared to adjust your approach based on the results and evolving capabilities of AI tools.
- Open dialogue and collaboration: Engage in discussions with colleagues, researchers, and students to share best practices, address concerns, and stay informed about the latest developments in AI and assessment.
These five key pedagogical principles are distilled from publicly available guidelines issued by universities and university groups, including the Russell Group, Monash University, and Cornell University. The principles serve to guide educators as they harness the potential of generative AI to enhance assessment practices while preserving academic integrity, promoting equity, and fostering meaningful and relevant learning experiences for all students.