Triple AI: The way forward to leading organisations
Professor David De Cremer from NUS Business School discussed Triple AI, the notion that artificial intelligence (AI) works with humans to create augmented approaches for value creation
In the second edition of our NBS Knowledge Lab Interdisciplinary Distinguished Speaker Series, Professor David De Cremer from NUS Business School discussed Triple AI, the notion that artificial intelligence (AI) works with humans to create augmented approaches for value creation. Specifically, Professor De Cremer shared his ideas on the type of leadership required to integrate AI into daily operations in a sustainable manner. This webinar was moderated by Professor Xin Chang, Simba, Associate Dean (Research) of NBS and Associate Professor Krishna Savani from NBS’s Division of Leadership, Management and Organisation.
Humans and AI go hand in hand
Focusing on the use of AI in organisational settings, Professor De Cremer emphasised that business leaders should, before bringing AI into an organisation, (1) know the limitations of AI in an organisational setting, (2) recognise that conducting business with AI does not mean having an analytics culture, and (3) understand that AI implementation does not give immediate returns. Importantly, Professor De Cremer stressed that humans have a huge part to play in AI — without people, AI will not work.
AI adoption and management
Most organisations today use a cost-benefit analysis narrative to talk about AI adoption. Such a narrative assumes that AI and humans have the same abilities, thus leading many to fear being replaced by AI. Furthermore, instances of AI directing human behaviour are very much present, from Netflix’s algorithms to Amazon’s assembly lines to surveillance technology used on employees working from home during the COVID-19 pandemic.
However, AI only seems intelligent because it imitates human behaviour, drawing on observational data about us and learning from it. In this sense, routine and repetitive tasks in closed systems are easy for algorithms to imitate and excel in. But this characteristic becomes a problem when it comes to leadership, as many of us believe that leaders should bring something different and authentic to the table. Professor De Cremer asserted that one can either manage or lead when running an organisation. AI is perfect for closed management systems, such as administrative work involving data management. Yet, Management by Algorithm (MBA) is not so straightforward. Citing Amazon as an example, Professor De Cremer pointed out how employees supervised by algorithms experienced burnout and stress, and could find no human to talk to or negotiate the situation with.
Importance of soft skills
Surveys have shown that this is not the kind of work culture that people are looking for. Even as AI is implemented, business leaders need to consider how humans feel and find ways to motivate them in their work. Soft skills are more important now than ever, as AI does not understand context, participate in interactions, or possess a moral compass. While AI excels at work in closed systems, humans dominate open systems. Leadership (unlike management) is about dealing with and reacting to change and volatility. While digital upskilling is important, Professor De Cremer argued that we need to become better at empathy, emotional intelligence, critical thinking, creativity, and ethical judgement.
Therefore, the business model of the future is not a zero-sum game but one of collaboration. Professor De Cremer, together with chess grandmaster Garry Kasparov, put forward the idea of Triple AI, where artificial intelligence and authentic (human) intelligence combine to create augmented intelligence. Cutting costs with AI still requires investment in other areas to keep the business sustainable. AI is not just about extracting information from data but also about getting quality data. To do so, we need purpose-driven leaders who can connect with others, and who are trustworthy, humble, and diversity-minded. AI is neither good nor bad; that is determined by those who use it and what they use it for. For organisations to clarify what value their businesses can create and how they want to achieve it, there first needs to be more human upskilling for leaders.
Human biases in AI
During the Q&A session, webinar participants asked how human biases can be prevented from seeping into AI. Professor De Cremer reiterated that AI does not have intentions, as it does not understand what human experience means. There is, however, a sense that the more we use AI to run organisations, the less responsible leaders will feel for them. Inequality, for instance, has increased due to technology, and this can be attributed to business leaders shying away from responsibility and to the lack of data democracy in the world. Professor De Cremer emphasised that a collaborative model is required — AI needs humans to tell it how to react. In fact, humans function as good advisors to AI, as we are generally better at evaluating others than ourselves. Consequently, business schools not only have to teach soft skills but also train leaders in dealing with moral dilemmas and taking responsibility.
Having said that, Professor De Cremer clarified that managers do still need to be technologically savvy in order to understand how AI functions on a basic level. Leaders cannot have data define their business purpose but should define the purpose themselves so as to ask the right questions about the data they possess. Leaders should ensure that data scientists are not working in silos but in collaboration with all parts of the business. While leaders do not need to be technology experts, they need to be experts in asking the right questions and in knowing how to talk to data scientists about them.
Significance of AI during pandemics
On a more topical note, participants were interested to know whether AI would become more or less important should social distancing become a permanent fixture in work and education. On the education front, Professor De Cremer felt that AI can liberate educators from a lot of administrative work, allowing them to focus on more important teaching points. However, there are also threats — in Hong Kong, for example, AI is used to monitor students’ emotions during online classes. Ethical issues arise when developers go as far as to claim that personalities can be determined from these facial expressions. There is also the issue of technological addiction — do we want a society that completely replaces social input with technology? In terms of work, Professor De Cremer suggested that the pandemic has created a lot of opportunities for the next 4-5 years, as people find that technology allows them to do their jobs anywhere. However, it will not be long before big technology firms join forces, bringing with them another round of changes to the market.
AI with emotional intelligence
Finally, with regard to the possibility of AI developing soft skills, Professor De Cremer drew on the example of AI recognising emotions, pointing out that AI will get better at this as it receives more data. In marketing and service delivery, AI will in future be able to communicate with customers through emotional expression and tone. While customers may respond to these forms of communication, AI ultimately does not have common sense or consciousness. It may be able to manipulate our surface-level emotions, but it does not understand what those emotions mean to us. Once again, organisations and their leaders are faced with a moral decision — the technology may make you money, but the experience you are offering is not authentic and may cost you the loyalty of some customers too.