Responsible Artificial Intelligence Institute (formerly AI Global)

Organization focused on building tangible governance tools for trustworthy, safe, and fair Artificial Intelligence (AI). Through a first-of-its-kind certification system for qualifying AI systems, we support practitioners as they navigate the complex landscape of building responsible AI. Feedback generated through these tools will in turn inform AI policymakers, enabling technologies that improve the social and economic well-being of society. RAI brings extensive experience in responsible AI policy and is uniquely positioned to partner with organizations across the public and private sectors to guide and inform responsible AI governance around the world.

The RAI Certification is a symbol of trust that an AI system has been designed, built, and deployed in line with the five OECD Principles on Artificial Intelligence, which promote AI that is innovative and trustworthy and that respects human rights and societal values. We use our five categories of responsible AI (explainability, fairness, accountability, robustness, and data quality) as parameters for the different credit elements within the RAI Certification rating system.
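As an illustration only, the sketch below shows one way the five categories could serve as parameters for credit elements in a rating system. The element names, point values, and aggregation logic are assumptions made for this example; they are not drawn from the actual RAI Certification methodology.

    # Hypothetical sketch: illustrates five responsible AI categories used as
    # parameters for credit elements. All names and point values are invented
    # for illustration and do not reflect the RAI Certification scheme.
    from dataclasses import dataclass

    CATEGORIES = ["explainability", "fairness", "accountability", "robustness", "data quality"]

    @dataclass
    class CreditElement:
        name: str
        category: str          # one of CATEGORIES
        points_available: int  # illustrative weighting, not from the source
        points_earned: int = 0

    def category_scores(elements: list[CreditElement]) -> dict[str, float]:
        """Aggregate earned vs. available points per responsible AI category."""
        scores: dict[str, float] = {}
        for cat in CATEGORIES:
            earned = sum(e.points_earned for e in elements if e.category == cat)
            available = sum(e.points_available for e in elements if e.category == cat)
            scores[cat] = earned / available if available else 0.0
        return scores

    # Example usage with made-up credit elements
    elements = [
        CreditElement("Model documentation", "explainability", 10, 8),
        CreditElement("Bias testing", "fairness", 10, 7),
        CreditElement("Audit trail", "accountability", 5, 5),
    ]
    print(category_scores(elements))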

The Responsible AI Fellowship Program supports multidisciplinary teams of students and professionals, selected from various universities and businesses, to work on real-world responsible AI challenges and opportunities. Projects are sourced from RAI Members and external clients, who play an important role in structuring project deliverables. Fellows work with domain experts in AI, data science, human-centric design, law, and regulatory policy, and receive training in research skills and in data collection, analysis, and presentation to deliver a work product that meets client needs.