News  |  September 19, 2018

Can AI Help Us Create More Inclusive Education and Society?

Blog article by Kaśka Porayska-Pomsta.
Published on the Alelo website.


Artificial Intelligence (AI) gets quite a lot of bad press. It is understood by few and feared by many. It is overhyped by the media, sci-fi movies, and eye-watering financial investments by the tech giants and, increasingly, governments. As such, AI does not yet sit comfortably with many who care about human development and wellbeing, or social justice and prosperity. This feeling of discomfort is fueled by growing reports that, so far, AI-enhanced human decision-making can amplify social inequalities and injustice rather than address them. This is because AI models are based on data which harbour our (human) biases and prejudices (e.g. racial discrimination in policing). The bad press is not helped by the growing perception that AI is something of a mysterious area of knowledge and skill – a tool mastered by a few initiated technological ‘high priests’ (mainly white male engineers) to spy on us, to take our jobs from us, and to control us.

In this context, it is not surprising that many non-specialists in AI, whose practices (and jobs) may be affected by its use, have at best a lukewarm, skeptical attitude towards AI’s potential to provide a mechanism for positive social change, and at worst vehemently oppose it. In short, for many people, the jury is still out on the extent and exact nature of AI’s ability to work for the social benefit of all.

Given the emergent evidence of AI’s present potential to reinforce rather than narrow the existing disparities in our societies, does it make any sense to even consider AI as a potential tool for enhancing social benefit?

For many of us who research at the intersection of AI and the Social Sciences (UCL KL), the answer is a resounding ‘yes’. The sheer revelation that comes with AI having already exposed the shockingly biased basis of our own decisions in many high-stakes contexts, such as law enforcement, is a prime example of how powerful a mirror onto ourselves this technology can be. As a consequence of this exposure, the AI-and-ethics movement is already so strong around the world that governments, as well as the technological giants, are unable to ignore it. Some AI-for-social-good activists raise pertinent questions about what steps we can take as a people towards creating a fairer and more inclusive world for ourselves, and about what exactly we can and should demand from AI and its creators. Diversity in the data used as the basis for AI models; diversification of the AI engineering workforce (e.g. to infuse AI solutions with the perspectives of women, ethnic minorities, and so-called neuro-atypical groups); and active participation of the wider public, social scientists, and public practitioners, e.g. teachers, in the design, implementation and interrogation of AI technologies have so far been identified as key to addressing the present concerns. [ . . . ]