Reports | March 21, 2019

Artificial Intelligence and Robotics for Law Enforcement

Report published by the United Nations Interregional Crime and Justice Research Institute (UNICRI), through its Centre for Artificial Intelligence (AI) and Robotics, and the Innovation Centre of the International Criminal Police Organization (INTERPOL). 44 pages.

Executive Summary:

The first Global Meeting on the Opportunities and Risks of Artificial Intelligence and Robotics for Law Enforcement was organized by INTERPOL’s Innovation Centre and the United Nations Interregional Crime and Justice Research Institute (UNICRI), through its Centre for Artificial Intelligence and Robotics, and took place in Singapore on 11 and 12 July 2018. Participants of this first meeting were actively involved in presentations offering insights and foresight on AI and robotics, three open discussion (break-out) sessions, and six live demonstrations of the latest innovations and new technologies in the application of AI and robotics.

The key findings of this meeting were:

  1. AI and robotics are new concepts for law enforcement and there are expertise gaps which should be filled to avoid law enforcement falling behind.
  2. Many countries are exploring the application of AI and robotics in the context of law enforcement. Some countries have explored further than others and a variety of AI techniques are materializing according to different national law enforcement priorities. There is, however, a need for greater international coordination on this issue.
  3. In general, the law enforcement community is modest about its national capacities but is eager to develop its experience and capabilities.
  4. Some interesting examples of AI and robotic use cases for law enforcement include:
    • Tools that autonomously research, analyze and respond to requests for international mutual legal assistance
    • Advanced virtual autopsy tools to help determine the cause of death
    • Autonomous robotic patrol systems
    • Forecasting where and what type of crimes are likely to occur (predictive policing and crime hotspot analytics) in order to optimize law enforcement resources
    • Computer vision software to identify stolen cars
    • Tools that identify vulnerable and exploited children
    • Behaviour detection tools to identify shoplifters
    • Fully autonomous tools to identify and fine online scammers
    • Crypto-based packet tracing tools enabling law enforcement to tackle security without invading privacy
  5. Several use cases in law enforcement are already in different stages of development. Some are still in a concept stage, while others are in prototyping, evaluation, or already approved for use.
  6. AI and robotics will significantly enhance law enforcement’s surveillance capabilities and, as this occurs, it will be necessary to address privacy concerns associated with these technologies, including issues such as when and where it is permissible to use sensors.
  7. In general, discussions on the ethical use of AI and robotics need to take place, in particular as law enforcement increasingly touches upon the lives of citizens. Law enforcement should take steps to ensure fairness, accountability, transparency and that the use of AI and robotics is effectively communicated to communities.
  8. It is also important to advance understanding of and prepare for the risk of malicious use of AI by criminal and terrorist groups, including new digital, physical, and political attacks. Possible malicious uses include AI-powered cyber-attacks, proliferation of fake news, as well as face-swapping and spoofing tools that manipulate video and endanger trust in political figures or call into question the validity of evidence presented in court.
  9. The social impact of using AI and robotics in law enforcement is also high, and it is advisable to better understand what it will mean for law enforcement’s perception in the communities in which it operates.
  10. Law enforcement needs to continuously monitor the new technology landscape to ensure preparedness. The INTERPOL Police Technology and Innovation Radar, a worldwide overview in which new and emerging technologies and their use in police practice are collected, can support this endeavour.
  11. Law enforcement agencies should dedicate time to identify, structure, categorize and share their needs in terms of AI and robotics, so as to facilitate the development of projects.
  12. The future of AI and robotics is challenging. The industry is growing exponentially and innovations such as quantum computing are likely to further revolutionize the field. As law enforcement is, at its core, an activity based on gathering and acting upon information, AI is well suited to enhancing its capabilities.
  13. The discussion paved the way for recommendations in five areas, as well as four concrete suggestions for INTERPOL Member Countries’ Chiefs of Police concerning AI and robotics in the current and future policing landscape.
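Among the use cases listed in finding 4, crime hotspot analytics is perhaps the most tractable to illustrate. The report does not specify any particular method; the following is a hypothetical minimal sketch of the underlying idea, binning incident coordinates into a grid and flagging cells whose counts exceed a threshold (real deployments use far more sophisticated spatio-temporal models).

```python
from collections import Counter

def hotspot_cells(incidents, cell_size=1.0, threshold=3):
    """Bin (x, y) incident coordinates into a square grid and return
    the cells whose incident count meets the threshold."""
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in incidents
    )
    return {cell: n for cell, n in counts.items() if n >= threshold}

# Toy data: a cluster of incidents near the origin plus scattered ones.
incidents = [(0.2, 0.3), (0.5, 0.7), (0.8, 0.1), (0.4, 0.9),
             (5.1, 5.2), (9.0, 2.0)]
print(hotspot_cells(incidents))  # {(0, 0): 4}
```

The output identifies the unit grid cell at the origin as the only hotspot, which is the kind of signal an agency might use to direct patrol resources.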

Excerpt from Section 2.3: Ethics, Ethics, Ethics!

Although there is a broad spectrum of potential law enforcement use cases, a common transversal theme associated with many of these use cases is enhanced surveillance capabilities. Of course, with any type of surveillance, the potential impact on the fundamental human right to privacy as recognized by the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR), as well as the numerous other international and regional legal instruments, is an essential consideration. Indeed, as the use of AI and robotics by law enforcement becomes more pervasive throughout society, touching ever more upon the lives of citizens, it becomes increasingly important for law enforcement to ensure that the use of these technologies is ethical.

However, what is ‘ethical’ is a complex issue and largely depends on the notion of ‘right and wrong’, which may differ according to philosophical subscriptions or contextual variations. What is considered ethical under one set of circumstances may not be considered ethical under others. Therefore, the task of ensuring ethical use of AI and robotics in law enforcement, and other domains for that matter, is not a straightforward one.

Part of the challenge with deciphering the ethical use of AI and robotics is that law enforcement and civil society come at this from different perspectives. The primary role of law enforcement is, in essence, to protect the community and its citizens from harm and, in doing so, it must find a balance between security and privacy.

Law enforcement is, at the same time, not detached from either the community or its citizens, meaning that, should it overstep its boundaries through alleged unethical behaviour or action, it exposes itself to being held accountable by the citizens it serves. Accordingly, law enforcement must carefully consider the use of AI and robotics, in particular with respect to the placement of sensors and the usage of data collected.

To respect citizens’ fundamental rights and avoid potential liability, the use of AI and robotics in law enforcement should be characterised by the following features:

  • Fairness: it should not breach rights, such as the right to due process, presumption of innocence, the freedom of expression, and freedom from discrimination.
  • Accountability: a culture of accountability must be established at an institutional and organizational level.
  • Transparency: the path taken by the system to arrive at a certain conclusion or decision must not be a ‘black box’.
  • Explainability: the decisions and actions of a system must be comprehensible to human users.

To minimise the risk that the use of these systems by law enforcement may result in a violation of citizens’ fundamental rights, a number of entities have stepped in to try to ameliorate the ambiguity of legal liability surrounding the ethical use of AI and robotics in general, and to better manage political optics, by advocating for ‘ethics by design’ in AI and robotic systems. Notably, this includes initiatives taken by the Institute of Electrical and Electronics Engineers (IEEE) to issue a global treatise regarding the Ethics of Autonomous and Intelligent Systems (Ethically Aligned Design), to align technologies with moral values and ethical principles. The European Parliament has also proposed an advisory code of conduct for robotics engineers to guide the ethical design, production and use of robots, as well as a legal framework granting legal status to robots (“electronic personhood”) to ensure certain rights and responsibilities.

As these conversations on the ethical use of AI and robotics are taking place, it is important to step back and consider what it means to be human and which aspects of society should be maintained in a world with an increasing presence of AI and robotics.

Questions arise, such as whether society is ready for the use of facial recognition by law enforcement, whether the establishment of an extensive network of surveillance devices and sensors should become the norm, and to what degree society is willing to permit an increased law enforcement presence in its private life, even if it is in the interests of public safety and security.

Law enforcement is an important test case with respect to privacy and ethics in the use of AI and robotics, predominantly because privacy is generally more likely to be trumped by security in the law enforcement community. If law enforcement can take the lead, set norms and establish councils or bodies for the ethical use of AI and robotics, other communities may follow. Law enforcement also has the unique advantage of discussing these issues before the use of AI and robotics becomes a common feature of its work. If this opportunity is ignored and AI and robotics are used in law enforcement without fairness, accountability, transparency and explainability, then the law enforcement community risks losing the confidence of the communities and citizens that it is mandated to protect. [ . . . ]

Additional Information: