Published on 10 Feb 2025

Transitioning from AI Policy to Practice

On 6 January 2025, we hosted an engaging discussion on AI policy and the future of work. Moderated by Dr Manish Sinha from NTU Singapore’s NTUitive, the session featured Professor Ramayya Krishnan, Dean of the Heinz College of Information Systems and Public Policy and William W. and Ruth F. Cooper Professor of Management Science and Information Systems at Carnegie Mellon University (CMU), who shared his perspectives on emerging trends, regulatory frameworks, and AI’s evolving role in business, government, and society.

Key Takeaways for Policymakers

  1. Policymakers should adopt flexible governance frameworks that support AI innovation while ensuring safeguards against risks such as misinformation and bias.
  2. Preparing the workforce for AI-driven changes requires continuous reskilling efforts and real-time labour market monitoring to help workers transition into new roles.
  3. Strengthening AI transparency and accountability through independent audits, evaluation frameworks, and clear disclosure standards will build trust in AI systems.

Finding the Balance Between Innovation and Regulation

The rapid advancement of AI has led to diverse regulatory approaches across different regions. The United States has largely taken a market-driven stance, prioritising technological progress with minimal regulatory interference. In contrast, the European Union places greater emphasis on human rights and ethical safeguards, seeking to regulate AI to prevent bias, misinformation, and misuse. Meanwhile, China adopts a state-led approach, using AI governance as a means of economic and social control. Each of these models comes with its own trade-offs, but as Professor Krishnan noted, the central challenge remains: how can policymakers foster AI-driven innovation while ensuring that it is deployed responsibly?

A major concern is the potential misuse of deepfake technology, which has already been exploited in fraud, misinformation campaigns, and intellectual property violations. Professor Krishnan highlighted a case in Hong Kong, where scammers used AI-generated deepfake videos to impersonate senior executives in a multinational company, ultimately deceiving a bank employee into transferring $25 million. “We are entering an era where AI-generated content is nearly indistinguishable from reality,” he cautioned. “This raises urgent questions about detection, governance, and accountability.”

To address such risks, governments worldwide are moving towards stronger AI regulations. The EU AI Act and US executive orders are steps towards ensuring that AI systems operate with clear ethical and security guidelines. However, the pace of AI innovation often outstrips the ability of regulatory frameworks to adapt, making it difficult to create policies that are both effective and flexible.

Singapore, he noted, is uniquely positioned to play a leadership role in AI governance within Southeast Asia. With its reputation for strong regulatory frameworks and technological expertise, Singapore has the potential to establish AI governance standards that balance innovation with public trust. “By taking a proactive approach, Singapore can serve as a model for responsible AI governance,” Professor Krishnan said.

AI and Job Transformation: What Lies Ahead?

Fears that AI will eliminate jobs entirely have been a recurring theme in discussions about automation. However, as Professor Krishnan pointed out, AI is more likely to reshape job roles than replace them outright. He drew a comparison with the introduction of automated teller machines (ATMs) in the banking industry. “There was a time when people feared bank tellers would become obsolete,” he said. “Instead, their roles evolved, shifting towards customer engagement and advisory services.”

The extent to which AI affects different jobs depends on three key factors:

  1. Task Complexity – Highly structured and repetitive tasks are more susceptible to automation. AI is already being used in industries such as logistics and finance, where it automates routine processes like data entry, fraud detection, and inventory management.
  2. Task Frequency – Jobs that involve repetitive decision-making or standardised procedures are more likely to be automated for efficiency. Customer service roles, for instance, have seen a rise in AI-powered chatbots that handle routine inquiries, allowing human agents to focus on complex cases.
  3. Social and Ethical Considerations – In areas such as healthcare, AI is more likely to augment professionals rather than replace them. For example, while AI can assist in medical diagnosis by analysing vast datasets, human doctors remain essential for making final decisions and ensuring patient care is conducted ethically and empathetically.

Professor Krishnan stressed the importance of real-time labour market monitoring, noting that platforms like LinkedIn and online job portals can provide faster insights into workforce trends than traditional economic indicators. “Rather than reacting to job displacement after it has occurred, we need a system that tracks labour market changes dynamically,” he said.

To ensure that workers are prepared for AI-driven transitions, he advocated for continuous reskilling efforts, workforce planning, and public-private collaborations to develop targeted training programmes. “The responsibility of preparing for AI’s impact on jobs does not rest solely on governments,” he noted. “Businesses, educational institutions, and individuals must all play a role in adapting to the changes ahead.”

Ensuring Transparency and Accountability in AI

As AI is increasingly deployed in high-impact areas such as finance, healthcare, and recruitment, ensuring transparency and accountability is becoming more critical. AI-driven systems now influence decisions on credit approvals, hiring processes, and medical diagnoses, making it essential to have mechanisms that evaluate and regulate their performance.

Professor Krishnan drew a parallel with financial auditing, suggesting that AI systems should undergo independent evaluations to ensure they function fairly and reliably. “Trust in AI is built through rigorous oversight,” he said. “We need clear frameworks for assessing how AI models are trained, what data they use, and how they make decisions.”

He outlined three key measures that can help improve AI accountability:

  • Model Cards & Data Transparency – Standardised reports that detail how AI models are trained, what data sources they use, and any potential biases that may exist.
  • Red Teaming – Borrowed from cybersecurity, this practice involves stress-testing AI systems before deployment to uncover vulnerabilities related to bias, misinformation, and security threats.
  • Continuous Monitoring – Unlike traditional software, AI models evolve over time, requiring ongoing evaluation to ensure their accuracy and reliability.
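To make the first of these measures concrete, a model card can be thought of as a small structured disclosure record that travels with an AI system. The sketch below is purely illustrative: the class name, fields, and example values are hypothetical and do not follow any specific published schema; they simply show the kind of information such a card might surface.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A minimal, illustrative 'model card' disclosure record.

    Field names are assumptions for this sketch, not a standard schema.
    """
    model_name: str
    intended_use: str
    training_data_sources: list[str]
    known_limitations: list[str] = field(default_factory=list)

    def summary(self) -> str:
        """Render the card as a short human-readable disclosure."""
        lines = [
            f"Model: {self.model_name}",
            f"Intended use: {self.intended_use}",
            "Training data: " + "; ".join(self.training_data_sources),
        ]
        if self.known_limitations:
            lines.append("Known limitations: " + "; ".join(self.known_limitations))
        return "\n".join(lines)

# Hypothetical example: a credit-decision model disclosed to a regulator.
card = ModelCard(
    model_name="loan-approval-v2",
    intended_use="Assist human underwriters; not for fully automated decisions",
    training_data_sources=["internal loan records 2015-2023"],
    known_limitations=["under-represents first-time borrowers"],
)
print(card.summary())
```

Even a record this simple makes the later measures easier to apply: red teams know what the system claims to do, and continuous monitoring has a stated baseline of intended use and known limitations to check against.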

One of the biggest challenges, he noted, is that most organisations procure AI systems rather than building them in-house. This makes it difficult to ensure transparency across different AI solutions. To address this, he proposed the introduction of AI “nutrition labels”, which would provide businesses and regulators with clear disclosures on how AI models were developed and any potential limitations. “Just as food labels help consumers make informed choices, AI transparency tools can help organisations make responsible decisions,” he explained.

Looking Ahead: Data-Driven Policy and Workforce Readiness

Looking to the future, Professor Krishnan underscored the need for data-driven policymaking, incorporating real-time labour insights, interdisciplinary research, and adaptive regulations to manage AI’s societal impact. “AI is evolving faster than any previous technological shift,” he said. “We need governance structures that are flexible enough to keep up.”

Singapore, he observed, has a strategic advantage in AI governance, given its strong institutional frameworks, investments in digital transformation, and focus on innovation. By leveraging these strengths, Singapore can establish itself as a regional leader in AI policy and best practices.

The discussion concluded with reflections on AI’s growing integration into daily life. From personalised content recommendations to AI-powered business tools, AI is already influencing how people interact with technology. “The challenge is not whether AI will become part of our world; it already has,” Professor Krishnan noted. “The real question is how we ensure that it enhances human capabilities rather than replacing them.”

As AI continues to shape industries, governments, businesses, and educators must work together to create an ecosystem where AI is used ethically, responsibly, and for the benefit of society. “The right balance between innovation and governance,” he concluded, “will determine AI’s role in shaping the future.”