Content-filtering AI systems—limitations, challenges and regulatory approaches

Artificial intelligence (AI) is reshaping the online landscape, offering platforms unparalleled capabilities to curate and moderate the vast oceans of content we navigate daily. From social media to search engines, online service providers increasingly rely on content-filtering AI systems to sift through and regulate digital content. However, concerns are growing that these systems may censor legitimate content and suppress essential voices, impeding the free flow of information.

Hence, Professors Althaf Marsoof, Andrés Luco, Harry Tan, and Shafiq Joty from Nanyang Technological University collaborated to shed light on the challenges of AI-driven content filtering and its impact on human rights, particularly the right to free speech. In their paper titled ‘Content-filtering AI systems—limitations, challenges, and regulatory approaches’, they propose a regulatory framework that promotes the responsible, ethical, and effective use of AI in content moderation.

“With recent incidents of legitimate content being flagged and removed in the online space, our research calls for the need to regulate the design and use of AI in content filtering. As jurisdictions increasingly encourage or mandate the use of automated systems for content moderation, it is also crucial to address the limitations and challenges, such as dataset biases and lack of transparency”, remarked Professor Marsoof on the motivations behind the research.

Double-Edged Sword of AI Content Filtering

Powered by deep learning, AI content filters can recognise features in complex data inputs such as human speech, images, and text, making them more effective than traditional content filtering. That said, AI content filtering has several drawbacks, primarily tied to the quality of the training datasets.

The paper highlights that the accuracy and reliability of content moderation can be severely compromised when an AI system is not trained on diverse, well-annotated data. Furthermore, human biases in selecting and coding the training dataset can lead to erroneous content-removal decisions. For instance, individuals differ in what they classify as hate speech or fake news; hence, the decisions made by AI systems to remove objectionable content may be inaccurate.
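
To make this concrete, here is a minimal, hypothetical sketch (not the authors' system) of how a learned text filter inherits its annotators' judgements: a toy classifier is trained on a handful of human-labelled posts, and its "remove"/"keep" decisions on new content simply reflect whatever patterns those labels encode.

```python
# Illustrative sketch only: a toy text filter, not the system studied in the paper.
# It shows how removal decisions are driven entirely by the labels that human
# annotators assign, so any bias in the annotations carries through to predictions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated examples: annotators decide what counts as "remove".
posts = [
    "They are all criminals and deserve to be attacked",   # labelled hate speech
    "I strongly disagree with this new policy",            # labelled legitimate
    "That group should be banned from the country",        # labelled hate speech
    "The election results surprised many analysts",        # labelled legitimate
]
labels = ["remove", "keep", "remove", "keep"]

# A simple TF-IDF + logistic regression pipeline stands in for a deep model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# New content is judged only against patterns in the training data; sparse or
# biased annotations make borderline decisions like this one unreliable.
print(model.predict(["This policy treats that group unfairly"]))
```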

Moreover, the dynamic nature of online content means that real-world data often differs from the scenarios an AI system was trained on, making it difficult for the system to assess and moderate content accurately. The context surrounding a piece of content is also crucial to deciding whether it should be removed or remain online. This gap underscores the need for transparency, accountability and explainability in AI operations to prevent unjustified censorship and ensure that users' right to freedom of speech is not inadvertently violated.

“Regardless of the context, pornographic material depicting minors is deemed unlawful and can be picked up easily by AI systems. In contrast, AI systems may lack the accuracy to determine whether speech is offensive and should be removed, as they cannot fully appreciate contextual elements, recognise human emotions or discern the speaker’s race based on online statements”, said Professor Marsoof.

Ethical Principles and Regulatory Frameworks

To protect human rights and well-being, the paper further highlights the need to incorporate ethical principles into the regulation of content-filtering AI systems. These principles include transparency, explainability, fairness and human-centricity. The researchers also suggest establishing robust legal frameworks and certification standards for content-filtering AI systems. Notably, they propose that content filtering must be subject to human review. They envision four distinct scenarios based on the risk the content poses to society and the accuracy with which AI systems can flag it as harmful.

The first scenario involves high-risk and high-accuracy cases, in which the content poses a high risk to society but its unlawfulness can be identified easily, e.g., child pornography. For such content, the researchers recommend using content-filtering AI systems to flag and remove it, with post-removal human review if content authors disagree with the removal.

In contrast, low-risk and low-accuracy situations arise when objectionable content does not pose a severe threat to society. However, determining the legality or otherwise of such content depends on many highly contextual and complex factors, rendering automated removals less accurate. Using content-filtering AI systems to detect and remove objectionable content is less than straightforward in such situations. For example, in copyright enforcement, determining infringement can be difficult when content is similar (but not identical) to a copyright-protected work, as much would depend on whether the similarity is substantial and the extent to which the fair use defence applies. Similar complexities arise in the enforcement of trademarks when content incorporates material identical or similar to a registered trademark, often requiring an assessment of consumer confusion, which is difficult even by human standards.

Thus, for low-risk and low-accuracy situations, service providers should not be obligated to act solely based on AI detection; instead, the responsibility should fall on the right-holders/affected parties to notify service providers of any infringement of their rights and request that the objectionable content be taken down. However, if providers employ AI for content filtering and removal, legal requirements should mandate a human review after removal.

High-risk and low-accuracy scenarios include content that promotes terrorism, hate and violence, as well as fake news or misinformation likely to threaten the public interest. Here, mandatory human review is required, as AI systems lack the contextual and nuanced understanding of language needed to determine the lawfulness of such content. For low-risk and high-accuracy situations, the researchers recommend that human review of automated content removals occur only when such takedowns are challenged. This covers, for example, clear-cut cases of intellectual property infringement, such as content that amounts to a complete reproduction of a copyright-protected work or content that proposes the sale of a counterfeit product.
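
The following short Python sketch summarises the four scenarios described above; the category names and policy wording are editorial paraphrases of the framework, not an implementation proposed by the researchers.

```python
# Editorial paraphrase of the four-scenario framework; not the authors' code.

def review_policy(risk: str, accuracy: str) -> str:
    """Map a (content risk, AI accuracy) pair to the suggested human-review approach."""
    policy = {
        # Clearly unlawful, dangerous content (e.g. child sexual abuse material):
        ("high", "high"): "Automated flagging and removal; human review if the author appeals.",
        # Terrorism, hate speech, misinformation, where context decides lawfulness:
        ("high", "low"): "Mandatory human review; AI detection alone is not enough.",
        # Clear-cut IP infringement, e.g. full copies or counterfeit listings:
        ("low", "high"): "Automated removal; human review only when a takedown is challenged.",
        # Contextual IP disputes (substantial similarity, fair use, consumer confusion):
        ("low", "low"): "Right-holders notify the provider; any AI-driven removal is reviewed by a human afterwards.",
    }
    return policy[(risk, accuracy)]

# Example: content that completely reproduces a copyright-protected work.
print(review_policy("low", "high"))
```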

“Even if technology achieves perfect accuracy in the near future, I believe that human review is still necessary, but the nature and frequency of review may change. In cases where an error is made, or someone disagrees with the outcome, we should require humans to investigate the issue and determine whether the AI system was accurate in the first place”, added Professor Marsoof, regarding human involvement in AI systems.

Online Safety for Internet Users in Singapore 

With most young children using smartphones daily, online service providers and policymakers must create a safe online space. A survey found that two-thirds of youth had experienced online harm, causing distress and anxiety (Kurohi, 2022). To this end, Singapore's Online Safety (Miscellaneous Amendments) Act came into force in early 2023 to enhance online safety. The Act enables the media authority to issue directions restricting access to harmful online content, such as content inciting terrorism or depicting child sexual exploitation (MCI, 2023). As online content can spread like wildfire, service providers need to act quickly when users report harmful content and implement content-filtering systems to protect internet users and the public.

Balancing Innovation and Responsibility 

The collaborative research provides a critical roadmap for navigating the intricacies of AI-driven content filtering. By advocating for ethical principles and regulatory frameworks, the researchers highlight the potential for AI to serve the public good while cautioning against its misuse. Thus, we must continue to seek a balance between innovation and responsibility, ensuring that AI enhances, rather than inhibits, our collective human experience. 

Note: This research paper was published in Information & Communications Technology Law (Taylor & Francis) in May 2022.

References 

Kurohi, R. (2022). Budget debate: Singapore to introduce laws to tackle online harm, ensure child safety. The Straits Times. https://www.straitstimes.com/singapore/politics/budget-debate-singapore-to-introduce-new-laws-to-tackle-online-harm-ensure-child-safety

Ministry of Communications and Information. (2023). Online Safety (Miscellaneous Amendments) Act takes effect. https://www.mci.gov.sg/media-centre/press-releases/online-safety-act-takes-effect-on-1-february-2023/

Althaf Marsoof is an Associate Professor at the Division of Business Law of Nanyang Business School, Nanyang Technological University, Singapore. He is currently the Deputy Head of the Business Law Division and a Fellow of NTU’s Renaissance Engineering Programme. Althaf specialises in intellectual property law (with a focus on trademarks, copyright and third-world perspectives) and technology law.

Harry Tan is an Associate Professor at the Division of Business Law of Nanyang Business School, Nanyang Technological University, Singapore. He teaches graduate and undergraduate courses in business law and technology-related laws at Nanyang Business School. Harry conducts technology policy research, related advisory services, and training on current new media compliance matters, such as securing online intellectual property and the new Personal Data Protection Act.

This research paper is a joint work with Profs Andrés Luco and Shafiq Joty (Nanyang Technological University).

This article is based on the following research paper:
Content-filtering AI systems—limitations, challenges and regulatory approaches