Keynotes & Invited Talks

Keynotes

Abstract

The global economy is moving into an era of Intelligence Augmentation (IA). Science fiction often portrays “intelligence” as involving complementary roles of reckoning and judgment. For example, in the Star Trek series, the judgment and decision making of Captain Picard are enhanced by the reckoning skills (calculations, analysis of multidimensional information, predictions) of the android Data, a machine without human capacities such as emotions. The human and machine work synergistically, achieving more together than either could alone. In the next few years, many occupations will shift to require working with a generative AI-based agent that has skills and knowledge complementary to the human worker. This talk will discuss what types of learning are most valuable for students and workers to prepare for these IA interactions in work and life, as well as how AI may aid in upskilling, reskilling, unlearning, and transfer.

Learn more about the speaker, Prof Chris DEDE

Abstract

Artificial Intelligence (AI) significantly impacts European labor, with estimates suggesting that 32% of existing jobs will undergo radical changes, and half of all jobs could be automated. The transition necessitates new skill sets and preparation to ensure effective collaboration between humans and machines, reducing potential social inequalities. Future research should prioritize enabling individuals to harness AI for community benefit. Educational initiatives like the EU’s Digital Education Action Plan emphasize early development of digital competencies, including AI knowledge. The OECD Learning Framework 2030 and UNESCO’s initiatives support integrating AI skills broadly across disciplines, promoting lifelong learning. The European Commission’s 2021 AI coordination plan builds on these principles by urging member states to expand AI education across various non-technical fields. Despite these strategies, a significant portion of Europeans lack sufficient digital skills, indicating a need for greater investment in AI education and alignment with modern pedagogical approaches. The INFINITE project further aims to equip Higher Education faculty with the skills to ethically use AI, enhancing professional and educational practices.

Learn more about the speaker, Prof Eleni MANGINA

Abstract

The use of AI in education (AIED) has a history spanning several decades, marked by periods of varying interest. In recent years, there has been renewed interest in AIED with the increasing popularity of generative AI. Most people would be familiar with applications such as intelligent tutoring systems and adaptive learning systems. With the introduction of generative AI to the mass market, large language models have also been used to support students in self-directed learning, either by querying or probing deeper into a domain of knowledge. In many applications, the AI system plays the role of a tutor, providing instructions, assessing students, and sometimes even advising students on potential learning paths. Most of these are applications through which students learn from an AI system, aiming to acquire some knowledge. In this talk, Dr Tan will introduce other aspects of AIED, including learning with, learning about, and learning beyond AI, which apply to both students and teachers. Such pedagogical framing will help expand our perspectives on AIED so that collectively, we can contribute to building a holistic ecosystem of approaches to AIED.

Learn more about the speaker, Assoc Prof TAN Seng Chee

Invited Talks

Abstract
The remarkable strides in artificial intelligence (AI), exemplified by ChatGPT, have impacted our ways of doing many things. Given its central role in accrediting knowledge and skills, assessment positions itself at the forefront of these impacts. Applying cutting-edge large language models (LLMs) and generative AI to assessment holds great promise in boosting efficiency, mitigating bias, and facilitating customized evaluations. Conversely, these innovations raise significant concerns regarding validity, reliability, transparency, fairness, equity, and test security, necessitating careful thinking when applying them in assessments. In this talk, I will discuss the impacts and implications of LLMs and generative AI on critical dimensions of assessment with example use cases and highlight the challenges that call for a community effort to address.

Learn more about the speaker, Dr Jiangang HAO

Abstract

AI for education is often oversold. Let's look at the real challenges of turning teaching wisdom into computer code and the potential of AI in helping students learn. I'll talk about what works, what doesn't, and where we should focus our collaborative efforts to make AI a truly helpful tool for teachers, students and researchers.

Learn more about the speaker, Asst Prof Tanmay SINHA

Abstract
Generative AI (GenAI) is a relatively new phenomenon that has already had an enormous impact. Many tasks, from searching for information to computer coding, have fundamentally changed. For instance, creating computer code is a task that a chatbot like ChatGPT can perform for many programming problems. This might shift the focus of programming away from the code itself toward the more conceptual aspects of the task. In this talk we will explore the consequences for science education, in particular for scientific modeling.

In science education teaching about and with models has a central place, as science is essentially a modeling endeavor (Louca & Zacharia, 2012). Especially creating and exploring computational models with the help of computer simulations is an often-used educational approach (Bravo et al., 2009; van Joolingen et al., 2005). Students create and explore models by entering equations (Teodoro, 2004) or graph-based representations (Löhner et al., 2003).

GenAI is changing all this. Models can be generated and simulated with the help of AI, moving the focus from constructing models to specifying them in terms that the AI can “understand”. Phrasing proper questions for the AI requires understanding of the modeling process at a higher level of abstraction.

In this presentation I will discuss some examples of dialogues held with ChatGPT on various physics models. Using these dialogues, we can discuss the changing view of what, and how, students need to learn about scientific models in light of the developments in AI.

Additional reading material:

  • Bravo, C., van Joolingen, W. R., & de Jong, T. (2009). Using Co-Lab to Build System Dynamics Models: Students’ Actions and On-Line Tutorial Advice. Computers & Education, 53(2), 243–251.
  • Löhner, S., van Joolingen, W. R., & Savelsbergh, E. R. (2003). The effect of external representation on constructing computer models of complex phenomena. Instructional Science, 31(6). https://doi.org/10.1023/A:1025746813683
  • Louca, L. T., & Zacharia, Z. C. (2012). Modeling-based Learning in Science Education: Cognitive, Metacognitive, Social, Material and Epistemological Contributions. Educational Review, 64(4), 471–492. https://doi.org/10.1080/00131911.2011.628748
  • Teodoro, V. D. (2004). Playing Newtonian games with Modellus. Physics Education, 39(5), 421–421.
  • van Joolingen, W., de Jong, T., Lazonder, A. W., Savelsbergh, E. R., & Manlove, S. (2005). Co-Lab: Research and development of an online learning environment for collaborative scientific discovery learning. Computers in Human Behavior, 21(4), 671–688.

Learn more about the speaker, Prof Wouter van JOOLINGEN

Abstract

The integration of Generative AI, particularly through tools like ChatGPT, into educational settings has been heralded for its potential to revolutionize personalized learning (PL). This presentation delves into the promises and realities of employing chatbot technologies for PL. Through a design study focused on graduate student interactions with a tutor bot during discussions on educational reforms, we critically assess the current capabilities and limitations of ChatGPT in facilitating personalized learning. Our investigation highlights the power and limitations of these tools for supporting educational objectives while identifying gaps and opportunities for future enhancements.

Returning to the fundamentals, it is clear that creating effective systems to support learners is essential, though far from straightforward. By embracing a learning sciences perspective, we can gain a clearer understanding of the systems necessary for successful learning. This insight can help us harness generative AI as tools and components of solutions that are more powerful, adaptable, and scalable.

By providing a nuanced understanding of the relationship between Generative AI and personalized learning, this talk aims to enrich the ongoing discourse on harnessing technology to improve educational outcomes.

Learn more about the speaker, Prof LOOI Chee Kit

Abstract

During science experiments, teachers are limited in their ability to gather meaningful information about student activities. For example, teachers’ cognitive limit prevents them from managing numerous inputs from multiple students (Sherin and Star, 2011), and teachers’ student interaction limit prevents them from being aware of the intricacies of each student’s learning trajectory (Clark et al., 2012). To cope with these limitations, teachers tend to place an undue focus on the procedural steps taken by each student during science experiments (Wang et al., 2010). However, as underscored by Tang et al. (2010), such pedagogical behaviors can distract teachers from a more critical evaluation of students’ scientific thinking. Therefore, knowing students’ actions during science experiments represents a vital piece of information that can help nudge teachers towards the proper conduct of scientific inquiry. With this in mind, I propose the use of computer vision to extract student activity information for science teachers, so as to expand their ability to gather meaningful student information during science experiments. By working with science educators within Singapore’s education system, I examine how the envisioned computer vision system might function in a real-world setting. In this talk, I present qualitative findings on the design considerations for a computer vision system that provides instructional support in science experiments and share an action recognition system that has been constructed to fulfil this purpose. Overall, this work seeks to establish a preliminary understanding of how computer vision could be used as a tool to augment teacher noticing in science experiments.

Learn more about the speaker, Asst Prof Edwin CHNG

Abstract

Since the release of accessible vision language models (VLMs) such as GPT-4V and Gemini Pro in 2023, scholars have envisaged utilizing these artificial intelligence (AI) models to widely support instructors and learners. In particular, their capability to simultaneously process visual and textual data and generate information in response is considered one of the most important features of these user-friendly VLMs. This capability is significant because human cognition benefits from multimodality, which has called for teaching, learning, and evaluation to be conducted in more diverse, sophisticated, and constructive ways. However, these multimodal educational practices are yet to be realized in everyday classrooms, while the integration of AI promises to facilitate this transformation.

In this talk, we will review the hypothesized parallelism between humans and VLMs as multimodal learners and its implications for the potential role of AI models in future education. Additionally, we will discuss the limitations, challenges, and possible remedies to effectively integrate these models into educational settings.

Learn more about the speaker, Asst Prof Gyeonggeon LEE