AI Now Institute

An interdisciplinary research institute dedicated to understanding the social implications of artificial intelligence. The AI Now Institute acts as a hub for the emerging field focused on these issues. Founded in 2017 by Kate Crawford and Meredith Whittaker, AI Now is housed at New York University, where it fosters vibrant intellectual engagement and collaboration across the University and beyond. Its research focuses on four core domains:

  1. Rights & Liberties
    As artificial intelligence and related technologies are used to make determinations and predictions in high-stakes domains such as criminal justice, law enforcement, housing, hiring, and education, they have the potential to impact basic rights and liberties in profound ways. AI Now is partnering with the ACLU and other stakeholders to better understand and address these impacts.
  2. Labor & Automation
    Automation and early-stage artificial intelligence systems are already changing the nature of employment and working conditions in multiple sectors. AI Now works with social scientists, economists, labor organizers, and others to better understand AI’s implications for labor and work – examining who benefits and who bears the cost of these rapid changes.
  3. Bias & Inclusion
    Data reflects the social, historical, and political conditions in which it was created. Artificial intelligence systems ‘learn’ based on the data they are given. This, along with many other factors, can lead to biased, inaccurate, and unfair outcomes. AI Now researches issues of fairness, looking at how bias is defined and by whom, and the different impacts of AI and related technologies on diverse populations.
  4. Safety & Critical Infrastructure
    As artificial intelligence systems are introduced into our core infrastructures, from hospitals to the power grid, the risks posed by errors and blind spots increase. AI Now studies the ways in which AI and related technologies are being applied within these domains and aims to understand possibilities for safe and responsible AI integration.

Why We’re Here
Artificial intelligence systems are being applied to many arenas of human life – across major sectors such as education, health care, criminal justice, housing, and employment – influencing significant decisions that impact individuals, populations, and national agendas. But the vast majority of AI systems and related technologies are being put in place with minimal oversight, few accountability mechanisms, and little research into their broader implications. Currently there are no agreed-upon methods to measure and assess the social implications of AI, even as these systems are being rapidly integrated into core social institutions. To ensure that AI systems are sensitive and responsive to the complex social domains in which they are applied, we will need to develop new ways to measure, audit, analyze, and improve them.

Visit the website at