Other AI Research

2020

Us and AI: Public Opinion on Artificial Intelligence in a Post-Trust Era
Concluded

Principal investigator: Asst Prof Saifuddin Ahmed
Funded by: Ministry of Education (Tier 1)
Grant amount: $84,498.50

While geopolitical discussions around Artificial Intelligence (AI) and its use in different sectors are proliferating, we have limited academic knowledge of what the public thinks about AI and how those opinions are formed. We propose a research project aimed at understanding public opinion of AI technologies in Singapore and the effects of transparency in algorithmic recommendations on user opinion and behaviour.
2020

Understanding and reducing prejudice through perspective taking in Virtual Reality
Concluded

Principal investigator: Asst Prof Saifuddin Ahmed
Funded by: Ministry of Education (Tier 1)
Grant amount: $87,931.02

Race categorization happens almost immediately upon meeting, as a person's race is visually distinctive (Cosmides et al., 2023), and prejudice can form almost as quickly. To maintain social cohesion, it is important to understand how prejudice forms and operates in societies and, subsequently, how it can be reduced. Research on prejudice and prejudice reduction still lacks a concrete understanding of the relationships between the various components of prejudice (Fiske, 2000; Paluck, 2009) and of how different forms of prejudice reduction strategies work. This project aims to theoretically and empirically investigate prejudice components and effective prejudice reduction mechanisms, targeting both psychological and behavioural prejudice reduction strategies and utilizing a digital technology (Virtual Reality).

2022

Fostering Empathy for Older Adults Among Young Singaporeans Using Virtual Reality: Employing Embodied Perspective Taking via Construal Level Theory
Concluded

Principal investigator: Asst Prof Li Junting Benjamin
Collaborator: Assoc Prof Jung Younbo
Funded by: Social Science Research Council
Grant amount: $877,552.00

Fostering more positive attitudes towards seniors among young people is essential to promoting a more inclusive society. VR may be a plausible way to reduce perceived outgroup threat and foster positive attitudes towards seniors among the young. In this project, I propose and test a theoretical model of fostering more positive outgroup attitudes based on construal level theory and augmented by social identity perspectives.


2022

Leveraging Social Media Influencers and Artificial Intelligence For Psychosocial Wellbeing Interventions Among the Elderly
Ongoing

Principal investigator: Assoc Prof Chen Lou
Funded by: Ministry of Education (Tier 1)
Grant amount: $100,000.00

Social isolation and loneliness among older adults during the COVID-19 pandemic have become both a domestic and global concern (Wu, 2020). To address these issues and improve the elderly's psychosocial wellbeing, Wu (2020) discusses several approaches, including 1) promoting social connections via public health messaging, 2) mobilizing family members, community-based networks and/or resources, 3) advancing new technology-based interventions to increase social connections, and 4) engaging existing health care systems to mitigate the issues. In view of the safety measures and restrictions on offline social networking and interactions, approaches 1, 2 and 4 may face varying levels of difficulty in actual implementation. Therefore, in the current project, we focus on developing interventions based on innovative technology (i.e., social networking sites) to improve the social connections of the elderly. In particular, we leverage social media-based AI influencers to 1) promote psychosocial wellbeing among the elderly, 2) transmit public health messaging to the elderly, and 3) connect the elderly with peers and social networks for social support.

2022

Preparing for the Era of AI-powered Synthetic Advertising: Theoretical Development, Practical and Policy Recommendations
Ongoing

Principal investigator: Assoc Prof Chen Lou
Co-PI: Prof Edson C. Tandoc Jr.
Funded by: AI Singapore
Grant amount: $299,904.80

Recent technological innovations in artificial intelligence (AI) and machine learning are transforming the landscape of advertising, enabling highly personalized ads, the use of chatbots in e-commerce and brand communication, campaigns using deepfakes, and synthetic ads created by generative adversarial networks (GANs). This ultimately leads to the proliferation of synthetic advertising, a type of manipulated advertising in which ads are “generated or edited through the artificial and automatic production and modification of data” (Campbell et al., 2021, p. 1). Alarmingly, synthetic advertising with great manipulation sophistication (e.g., personalized ads, hyper-realistic videos) has been argued to increase consumers' perceived realness and creativity of the ads, which in turn lead to more positive attitudes and greater purchase intentions. Consumers also often find it hard to detect the falsities involved in synthetic advertising, which hinders them from making informed decisions. Extant research on this advertising phenomenon and the related policy recommendations are lagging. This project aims to fill these gaps.

2022

Automating Truth & Accountability? Public Attitudes toward AI and Human-Machine Communication in Singapore
Ongoing

Principal investigator: Prof Edson C. Tandoc Jr.
Funded by: AI Singapore
Grant amount: $299,975.00

Singapore is leading the Asian region in researching and adopting artificial intelligence (AI), maximising it across industries, from healthcare to manufacturing, from retail to public security. And yet, recent surveys show that many Singaporeans remain hesitant about AI, expressing concerns about ethics and privacy. This hesitancy is expected to be more pronounced when AI is used in functions traditionally associated with humans, such as communication, which is focused on the creation of meaning and, ideally, of truth. Media industries are increasingly using AI, giving rise to human-machine communication, such as in news writing and fact-checking, as well as in chatbots that mimic spontaneous conversation. AI, however, is not infallible. In October 2021, the Ministry of Health suspended its chatbot Ask Jamie after screenshots showing it responding to a COVID-19-related question with safe-sex advice went viral. To achieve AI preparedness and resilience in Singapore, we must understand not only current public attitudes toward AI, but also public responses when AI fails. When AI messes up, whom would the public blame? Focusing on the use of AI in fact-checking, which may enhance the perceived objectivity and credibility of fact-checks, especially in a region where fact-checkers have faced accusations of bias, this project seeks to investigate public acceptance of AI use in fact-checking through a multi-theoretical and multi-method programmatic approach.

2022

ATTAIN*SG: Achieving public TrusT in AI in autoNomous vehicles in SinGapore
Ongoing

Principal investigator: Prof Shirley S. Ho
Funded by: AI Singapore
Grant amount: $498,024.80

The proposed project is an interdisciplinary, multi-pronged, and multi-stakeholder investigation of factors motivating and hindering the achievement of public trust in artificial intelligence (AI) governance in autonomous vehicles (AVs) in Singapore. Despite ranking first in the most recent global Autonomous Vehicles Readiness Index (2020), Singapore has yet to fully benefit from nationwide use of AI in AVs due to safety and privacy concerns as well as low support among its citizens. Building upon a seed study on AI in AVs among vulnerable groups, we now seek to investigate public trust in AI governance in AVs in view of a larger-scale integration of AVs on Singapore roads. This includes an analysis of the policies in place, how they can be improved, how they are communicated to the public, and the role of policy, governance, communication, and other factors in achieving public trust. Research on trust in AI governance is not entirely new, but what sets this investigation apart is the capacity of AI in AVs to affect other individuals (i.e., non-AV users) who share the same road. This makes it a significant public issue and a unique case for trust in AI governance research, especially since accountability in road accidents is ambiguous and since the lives of both those who use AVs and those who do not may be perceived to be at stake. This investigation will include both public and expert opinion, consulting regional experts in Japan and Malaysia in addition to Singapore, to cover experiences from different stages of AV development in the region.


2023

Seeing is no Longer Believing: Examining Deepfake Identification, Impact and Instruction
Ongoing

Principal investigator: Prof Goh Hoe Lian, Dion
Co-PIs: Assoc Prof Lee Chei Sian, Prof Theng Yin Leng
Funded by: Ministry of Education (Tier 2)
Grant amount: $644,761.00

Deepfakes are artificially created media posing as actual video recordings. While they can be used positively, the majority are ill-intentioned, created for the purposes of pornography and misinformation. Consequently, deepfake detection is an emerging research area in which machine learning techniques are primarily investigated. While useful, a critical gap is the human perspective on identification, because algorithms do not yet perform at a level where human judgement is unneeded. Further, deepfakes are relatively new to many people, who may then fall prey to such misinformation. The proposed project is underpinned by three I's: identification, impact and instruction. Its objectives are to: (1) establish baseline deepfake identification strategies undertaken by people; (2) ascertain factors that impact the effectiveness of these baseline strategies; (3) examine the impact of deepfakes on people; and (4) create an instructional program to teach people about deepfakes.


2023

Disabled Workers’ Perceptions of Emerging Technology (AI) at Work
Ongoing

Principal investigator: Dr Victor Zhuang
Funded by: SG Enable Ltd
Grant amount: $5,000.00

This project seeks to understand disabled Singaporean workers' experiences with emerging technologies in their workplaces. It aims to interview 15-20 disabled workers, with a specific but not exclusive focus on the role of artificial intelligence, including key advances such as ChatGPT and DALL-E, among others. The following questions are explored: 1) What is the impact of emerging technologies on disabled employees' experiences at work? 2) Are there differences in how they experience emerging technology? 3) What are their sentiments towards emerging technologies: do they embrace them, or do they feel that these technologies are a threat to their livelihoods? 4) How do disabled workers respond or adapt to emerging technologies? 5) If they are negatively affected, how do they cope, and how effective are their coping strategies?


2024

Avoiding Blind Trust: Evaluating Credibility of Generative Artificial Intelligence Content in Higher Education Information Searching
Ongoing

Principal investigator: Assoc Prof Lee Chei Sian
Co-PI: Prof Goh Hoe Lian, Dion
Funded by: AI Singapore
Grant amount: $249,987.79

Making use of online search systems (e.g., search engines) to foster learning is an emerging research trend known as searching as learning (SAL) (e.g., Yigit-Sert et al., 2021). Briefly, SAL involves the active search and retrieval of information to support academic pursuits. SAL contributes to a rich and dynamic learning experience for learners in higher education because it encourages independent and self-directed learning and promotes active management of, and participation in, the learning process (Rieh et al., 2016). The emergence of Generative Artificial Intelligence (GAI) has opened new possibilities for learners, providing them with unparalleled access to an abundance of information that can be harnessed to fulfil learning objectives, making SAL more significant and pertinent than ever before.

Since the content generated by GAI may not be accurate or reliable, fact-checking to ensure the accuracy of the information becomes an indispensable part of the learning process (Hartman et al., 2022). Prior SAL research has also underscored the importance of assessing information credibility (Rieh et al., 2016) to ensure positive learning outcomes. While individuals may vary in their fact-checking practices during online information searching, learners in higher education often do not critically evaluate the credibility of the information or the accuracy of online sources for various reasons, including information overload, lack of knowledge and insufficient time (Hargittai et al., 2010; Hartmann et al., 2022). GAI introduces further challenges for fact-checking during SAL because there is a lack of identifiable source attribution when content is created by an AI model, making it more difficult for learners, especially students, to cultivate fact-checking practices. Interventions will thus be needed: blindly trusting the information produced by GAI without fact-checking can lead to negative learning consequences such as the acceptance of inaccurate information, and consequently to the erosion of trust in the online information ecosystem.

2024

Imaging Skin Tones and Cultural Preferences in Southeast Asia
Ongoing

Principal investigator: Prof Jack Qiu
Co-PI: Prof May Oo Lwin
Funded by: Tecno Mobile Limited
Grant amount: $283,400

In a rapidly digitizing world characterized by technological advancements, it is of paramount significance to understand how cultural contexts and situated tastes shape the classification of imaging skin tones and influence aesthetic choices in not only urban but also rural settings. This research examines AI camera optimization mechanisms involving smartphone users of different skin tones. It explores the implications of AI-driven imaging technology for cultural practices, aesthetics, and beauty standards, providing valuable insights for academics, technology developers, and marketers in the Philippines and Indonesia as well as in Southeast Asian and Global South contexts more broadly.

2024

Empowering Senior Self-Health Management Through Smart IoT
Upcoming

Principal investigator: Assoc Prof Kang Hyunjin
Co-PIs: Prof Theng Yin Leng, Prof Lee Kwan Min, Prof Goh Hoe Lian, Dion
Funded by: Social Science Research Council
Grant amount: $865,806.50

Singapore is facing a significant challenge: an ageing society. By 2030, it is expected that almost one in four Singaporeans will be over 65 years old. In response to this concern, the Singapore government emphasises the importance of preventive measures that assist seniors in maintaining an active and healthy lifestyle. This approach aims to enable seniors to lead meaningful and dignified lives throughout their entire lifespan, ultimately alleviating the burden on the healthcare system in the long run. The Smart Internet of Things (IoT), a network of smart devices connected through the Internet, such as smart wearables and smart scales, has significant potential to enable cost-efficient, self-directed health management that can be seamlessly integrated into the daily routines of seniors. However, to fully leverage the potential of smart IoT for seniors' health management, human-IoT communication should extend beyond mere activity monitoring and alarm functions. Drawing on the motivational technology model, the proposed project aims to design human-IoT communication that effectively motivates seniors to actively engage in self-directed preventive health management, leading to improvements in their mental and physical well-being. Consequently, our project will primarily focus on relatively healthier seniors (aged 60 and above) who do not have serious health conditions requiring intensive medical treatment.