Artificial Intelligence: Impact on Public Safety and Security

An integrated and nuanced approach, deploying both humans and machines, will be needed to counter the emerging technology’s potential for malicious abuse.


Date Posted: 8 Aug 2019

Issue 21, 30 Jul 2019

In the American television drama Person of Interest, “The Machine” is an advanced computer system that functions as a vigilante crime-fighting tool: it employs pattern recognition to stop crimes before they materialise. This premise—crime prevention with the help of Artificial Intelligence (AI)—is no longer so far-fetched.

An increasingly sophisticated technology, AI could support preventive policing to bring about a safer community. But are there any downsides we need to be aware of? What are AI’s possibilities as well as potential risks in the context of public safety and security, and what can we do to mitigate potential downsides?

AI Enhances Operational Effectiveness

As a set of technologies that simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning,1 AI can enhance operational effectiveness through automation and augmentation. When combined, they complement human expertise, producing faster and better results. While AI can spot patterns that may escape the naked eye, humans can contextualise data insights and decision-making with intuition and experience.

The automation of data-heavy processing tasks, from visual inspections of public spaces to interpreting security video footage, can help to overcome resource constraints. This frees up scarce human capacity for higher-value work and more complex problem-solving, boosting workplace productivity and engagement.

Machine self-learning capabilities have predictive and prescriptive uses. AI creates new sense-making possibilities by quickly generating insights through deeper analysis of data.

While AI augments capability, it cannot entirely replace humans.

Automating the Home Team’s Operational Capabilities

In Singapore, AI has already found its way into a variety of Home Team2 border security and homeland security applications. AI-driven perception, processing, and analysis are essential for collecting, sorting, and interpreting data to better inform human decision-making. A leading AI technology now being deployed is machine-learning computer vision technology. AI-backed biometric systems have also become more powerful than ever in spotting patterns in human physiology.

AI—at the intersection of machine learning and robotics—has also given rise to autonomous systems that can tackle more challenging tasks in a wider range of environments. While sensors can provide data inputs to systems, the AI element helps to filter and make sense of data, and can recommend particular actions. Unmanned Aerial Vehicles (UAVs) are robotic autonomous systems that give our officers a bird’s-eye view of a situation, so they can make better ground decisions. In the future, the UAVs could incorporate AI in the following forms:

a. “Computer Vision & Learning”—the ability to analyse visual input;
b. “Machine Perception”—the ability to process input from a variety of sensors; and
c. “Motion Planning”—the ability to break down a desired path into smaller, more manageable segments.
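
The “Motion Planning” idea above can be sketched in a few lines. This is a minimal illustration only, assuming the simplest case of a straight-line path subdivided into evenly spaced waypoints; the function name `segment_path` and the step-size parameter are ours, not part of any Home Team system.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def segment_path(start: Point, end: Point, max_step: float) -> List[Point]:
    """Split a straight-line path into waypoints no more than max_step apart."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    dist = math.hypot(dx, dy)
    # Number of equal segments needed so each hop stays within max_step.
    n = max(1, math.ceil(dist / max_step))
    return [(start[0] + dx * i / n, start[1] + dy * i / n) for i in range(n + 1)]
```

A real planner would also weave in obstacle avoidance and sensor feedback between waypoints; the segmentation step simply turns one long goal into many small, checkable moves.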

The Singapore Civil Defence Force (SCDF) has deployed UAVs in monitoring activities outdoors and in public spaces, such as fire tracking, surveillance, and Search and Rescue missions. The integration of these systems complements current operations and aims to improve operational effectiveness. An example is SCDF’s use of a Red Rhino Robot (or 3R) for autonomous fire detection, with an auto heat-seeking mechanism to help find heat sources. This robot can potentially reduce a traditional four-man crew to a team of three, and penetrate far deeper into the seat of fire without risking a human firefighter.

Augmenting the Home Team’s Operational Capabilities

UAVs also augment police neighbourhood patrols. They can transmit a live aerial video feed to a Police Operations Command Centre (POCC), helping the POCC dispatch officers to an incident scene. Advanced sensors, intelligent autonomous navigation and mapping algorithms may be progressively added to these UAVs to improve obstacle detection and avoidance.

The Home Team is well aware that AI is not a silver bullet that will solve all problems: different operations call for different degrees of technological intervention. While AI augments capability, it cannot entirely replace humans. The use of UAVs, for example, enhances the present force’s capabilities and effectiveness with the same manpower resources. But our frontline officers remain relevant to the communities they serve. Officers bring a human touch, and an assuring sense of safety and security, to the community. These human touchpoints that communities value cannot easily be replaced by AI.

AI Integration in Singapore's Border Security Operations

Iris scans were introduced on a trial basis at the Woodlands Checkpoint in July 2018, enhancing the existing network of cameras with the facial recognition capabilities of the Automated Biometric and Behavioural Screening Suite.


Case Study: Emergency Management

The Ministry of Home Affairs uses UAVs (also known as drones) to conduct aerial surveillance for forested operations, fire management and crowd monitoring for mass public events such as the New Year’s Eve Countdown.


Potential for Exploitation

Any emerging technology is a double-edged sword, with potential for abuse by malicious actors. Automation and augmentation through AI have contributed to such widely reported abuses as cybersecurity breaches and fake news distribution. Understanding how malicious agents can manipulate AI technologies to their advantage is crucial in mitigating potential threats.

The “Thinking” Malware

At Black Hat USA 2017, the world’s leading information security conference, 62% of attendees said they believed artificial intelligence would be used for cyberattacks in the near future.3 In fact, this has already happened. IBM security researchers have uncovered a new breed of AI-powered cyberattacks that can automatically target vulnerabilities with greater speed and accuracy.4 DeepLocker, a recent product of IBM Research, demonstrates how AI-powered malware can be highly successful at evading traditional detection.5 Automated to attack with peak effectiveness, and equipped with self-learning capabilities, such malware makes each attempt more effective than the last.

The first observed example of an AI-backed malware hack was executed in 2017, on an India-based company.6 Embedded algorithms allowed the software first to observe and learn the typical user’s network behaviour, and then to mimic their digital footprint to evade detection long enough to complete the hack. Data breaches may now go undetected for longer as AI-powered attacks adopt this detection-evading mechanism.
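
The detection mechanism such malware is built to evade can be illustrated with a toy behavioural baseline: learn what “normal” looks like for a user, then flag large deviations. This is a sketch of the general idea only, not any real intrusion-detection product; the function names and the three-standard-deviation threshold are our assumptions.

```python
import statistics

def build_baseline(samples):
    """Learn a user's typical activity level, e.g. megabytes transferred per hour."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from normal."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev
```

By first learning the same baseline and staying inside it, mimicry-based malware keeps `is_anomalous` returning false, which is precisely why the India attack went unnoticed for so long.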

AI’s Role in Fake News

The ease of access to emerging technologies means AI is as readily available for use by malicious actors as by proper authorities. Deliberate online falsehoods, false stories often embedded with social, economic and political biases and spread with the malicious intent of misleading audiences for gain, are becoming increasingly common. The generation of these increasingly realistic falsehoods suggests how AI could be manipulated to fool more people, more effectively and more quickly.

Neural networks underpinning AI technologies have augmented multimedia editing. Almost perfect image and video manipulations are now achievable, creating photo-realistic images and mimicking voices seamlessly. These are known as “Deep Fakes”.7 Discerning between what is real and fake online is no longer straightforward. A viral video of Barack Obama, where the former US President is seen and heard using expletives, was made using Adobe’s After Effects software and the AI face-swapping tool FakeApp. The fake footage was swiftly disseminated across many virtual platforms, garnering over 3.7 million views within a week.8 This shows just how attention-grabbing and persuasive fakes can be.

At present, even an AI of tremendous power will not be able to determine outcomes in a complex social system, the outcomes are too complex—even without allowing for free will by sentient agents...Strategy that involves humans, no matter that they are assisted by modular AI and fight using legions of autonomous robots, will retain its inevitable human flavor.9

—Kareem Ayoub and Kenneth Payne

Strengthening Our Resilience for The Future: AI and Beyond


For all the inherent risks AI presents in self-mutating malware, the answer might ironically lie in harnessing the power of AI itself to strengthen existing cybersecurity setups. SparkCognition, a US-based company, developed an entirely AI-based solution called Deep Armor in 2017.10 It is the first cognitive antivirus software that leverages AI to identify mutating viruses and detect novel malware approaches, including advanced masking techniques, strengthening defences against more sophisticated cyberattacks. AI can therefore be tapped to upgrade cybersecurity capabilities not only in detection and response, but also in preventive defence.

In parallel, a deliberate talent strategy will be important to recruit and deploy those with the expertise to work with AI to boost cybersecurity. For example, Thailand’s government agencies have begun deploying sensors running AI algorithms, incorporating predictive analytics into cyber network monitoring systems.11 At the same time, a new digital forensics team is being developed specifically to investigate digital evidence from cyberattacks.12 These projects accompany plans to raise existing employees’ digital literacy while recruiting experts from overseas. Such a move aims to combine the algorithmic decision-making of AI-enabled prevention and protection systems with flexible human interaction and supervision.

Dealing with Fake News

Research is already being carried out on how to deploy AI in detecting falsehoods. A machine can be trained to analyse text and determine how likely it is that a particular message is a real communication from an actual person, or a mass-distributed solicitation.13 Building on text analysis similar to spam-fighting, AI systems are also trained to evaluate how well a post’s text, or a headline, matches the actual content of the article being shared online. Another method could examine similar articles to see whether other news media report differing facts. Similar systems can identify specific accounts and source websites that spread fake news.
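
The headline-versus-content check described above can be sketched with simple vocabulary overlap. Real systems use far richer language models; this toy version, with a function name of our choosing, only shows the shape of the idea: a headline whose words barely appear in the article body earns a low consistency score.

```python
import re

def _words(text):
    """Lowercase word set for a rough vocabulary comparison."""
    return set(re.findall(r"[a-z']+", text.lower()))

def headline_consistency(headline, body):
    """Jaccard overlap between headline and body vocabulary, from 0.0 to 1.0.
    A very low score can flag clickbait-style mismatches for human review."""
    h, b = _words(headline), _words(body)
    if not h or not b:
        return 0.0
    return len(h & b) / len(h | b)
```

In practice such a score would be one feature among many (source reputation, account behaviour, cross-outlet fact comparison), feeding a classifier rather than deciding on its own.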

However, mitigation measures must go beyond technology: the response needs to be all-rounded, involving citizens and public-private collaborations. To inoculate the community against falsehoods, Singapore government agencies such as MCI14 and IMDA have begun efforts to promote better media literacy15 through educational forums, training users to critically evaluate and independently report suspicious information.16

A Broader Perspective

From the security perspective, a multi-agency effort is needed to establish a framework so that agencies understand the appropriate responses to different risks. Relevant agencies are also working together to anticipate and identify emerging security risks linked to such technology adoption, and to build up capabilities to address these risks.

As we gain a better understanding of AI, we will be better at mitigating its dangers. Exciting times are ahead—we have entered a brave new world.


Rahul Daswani led the Futures team at the Ministry of Home Affairs. Previously a Senior Strategist at the Centre for Strategic Futures, he has also served at SkillsFuture Singapore.

He is now Assistant Director, Open Government Products at GovTech.

Jevon Tan is part of a team from the Defence Science & Technology Agency (DSTA) embedded in National Security Coordination Secretariat (NSCS), where he identifies risks and threats relating to emerging technologies. Prior to joining NSCS, he was involved in telecommunications acquisition projects and master-planning in DSTA.


  1. Personal Data Protection Commission, Infocomm Media Development Authority, Singapore, “A Proposed Model Artificial Intelligence Governance Framework” (Working Draft, November 28, 2018, revision).
  2. The Home Team consists of the Ministry of Home Affairs Headquarters, Singapore Police Force, Immigration and Checkpoints Authority, Home Team Academy, Internal Security Department, Singapore Civil Defence Force, Singapore Prison Service, Central Narcotics Bureau, Casino Regulatory Authority and the Singapore Corporation of Rehabilitative Enterprises.
  3. The Cylance Team, “Black Hat Attendees See AI as Double-Edged Sword”, August 1, 2017, accessed January 2, 2019.
  4. Dan Patterson, “How Weaponized AI Creates a New Breed of Cyber-Attacks”, TechRepublic, August 16, 2018, accessed January 2, 2019.
  5. Marc Ph. Stoecklin, “DeepLocker: How AI Can Power A Stealthy New Breed of Malware”, Security Intelligence, August 8, 2018, accessed January 2, 2019.
  6. Infosec Institute, “How Criminals Can Exploit AI”, May 1, 2018, accessed December 26, 2018.
  7. Oscar Schwartz, “You Thought Fake News Was Bad? Deep Fakes Are Where Truth Goes to Die”, The Guardian, November 12, 2018, accessed December 26, 2018.
  8. James Vincent, “Watch Jordan Peele Use AI To Make Barack Obama Deliver a PSA about Fake News”, The Verge, April 17, 2018, accessed December 26, 2018.
  9. Kareem Ayoub and Kenneth Payne, “Strategy in the Age of Artificial Intelligence”, Journal of Strategic Studies 39, no. 5 (November 2015): 816.
  10. SparkCognition, “Deep Armor: Endpoint Protection, Built from AI”, accessed December 26, 2018.
  11. Michell Christopher, “Artificial Intelligence in Thailand: How It Started and Where It’s Headed”, OpenGov Asia, July 12, 2018, accessed December 26, 2018.
  12. Nurfilzah Rohaidi, “How Thailand Is Using AI for Cybersecurity”, GovInsider, November 27, 2018, accessed December 26, 2018.
  13. Kai Shu, Amy Silva, Suhang Wang, Jiliang Tang and Huan Liu, “Fake News Detection on Social Media: A Data Mining Perspective”, SIGKDD Explorations (Association for Computing Machinery) 19, no. 1 (June 2017): 22–36.
  14. Remarks by Mr S Iswaran, Minister for Communications and Information, at the Media Literacy Council’s Launch of the Fake News Campaign, November 2, 2018, accessed December 26, 2018.
  15. Lianne Chia, “National Framework to Build Information and Media Literacy to be Launched in 2019: S Iswaran”, CNA, November 2, 2018, accessed December 26, 2018.
  16. Infocomm Media Development Authority (IMDA), “New Council to Oversee Cyber Wellness, Media Literacy Initiatives”, November 3, 2017, accessed December 26, 2018.
