Our international workshop series «Towards an Inclusive Future in AI» generated more than 40 ideas highlighting a diverse set of understandings of «inclusion» as a principle to guide Artificial Intelligence (AI). Here are the preliminary findings.
In response to the accelerating potential of artificial intelligence to transform our lives, various governments, multilateral bodies, and other organizations have produced high-level strategies, principles, and guidelines on AI in recent years. A common conclusion is that facing the changes of the next century requires following the principle of «inclusion». Yet this principle is most often proposed as an abstraction, without clear commitments, or even definitions, of where, how, and whom to include.
To respond to this gap, the swissnex Network, foraus, and AI Commons launched the global campaign «Towards an Inclusive Future in AI», using foraus’ new Policy Kitchen methodology. This collaborative experiment resulted in 11 workshops in eight countries, involving 10 partner organizations and about 120 participants from a wide range of perspectives, who generated 43 ideas for an inclusive future in AI. The preliminary output was presented at the AI for Good Global Summit 2019, after which an advisory board* reviewed and commented on a selection of 10 promising ideas.
A question of definition
One of our main insights is best summarized by the idea that «to make AI inclusive, we should define ‘inclusive’». There appears to be considerable semantic ambiguity around «inclusion», suggesting the need for a clear, shared definition. The principle is understood to apply to very different domains, ranging from the development of unbiased algorithms and equal access to the technology, to the inclusive governance of AI and «shared prosperity» more broadly.
A definition may also need to address the question of whom to include. Some ideas pointed to vulnerable populations in general. Others were more specific, categorizing people either by demographic characteristics, such as women, LGBTQ people, youth, and the elderly; by economic status, such as particular professions, the displaced workforce, students, or consumers; or by geopolitical context, such as populations living in the Global South. Finally, the environment was also mentioned as a critical «stakeholder».
Making AI more inclusive
Participants proposed various initiatives towards a more inclusive future in AI, which can be broadly grouped into three clusters: inclusive governance; data, models and standards; and education.
Inclusive governance
«Stakeholder involvement in AI governance is not only the ethical thing to do, but is also a practical requirement for ensuring AI governance is appropriately adaptive and thorough», commented advisor Jessica Cussins Newman from UC Berkeley, the Future of Life Institute and The Future Society. That position was echoed by advisor Brandie Nonnecke from the CITRIS Policy Lab at UC Berkeley: «Supporting equitable and effective multi-stakeholder engagement in the governance of AI systems is critical. It happens all too often that we see the same powerful stakeholders taking a seat at the table.»
To address the issue of inclusive governance, participants suggested targeted initiatives such as an AI for Good Global Investment Fund, as well as more general approaches, such as crowdsourcing of AI policies, or creating a global standard for inclusive processes. The idea of a DAO AI Fellowship Program creating interfaces between grassroots and policy groups was lauded by Jessica Cussins Newman: «Too often it is assumed that knowledge and expertise only flow in one direction. In contrast to this flawed approach, this proposal is based on the premise that ensuring AI technologies are broadly beneficial requires sharing between different kinds of expertise.»
Data, models and standards
Decisions taken or supported by machines risk being systematically biased for or against certain populations. Hence, the underlying training data should be as free of bias as possible. Inclusion may be further strengthened by providing broader access to training data, for instance by creating a global repository that consolidates public and private data, or by developing open standards. According to advisor Amir Banifatemi from XPRIZE and AI Commons, «the ability to offer access to diversity-based training sets is really important for both the machine learning community as well as those working on standards and rules of ethics.»
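As a minimal illustration of one narrow aspect of this idea, the sketch below audits a training set for subgroup representation before it is used or shared. The function, attribute name, and records are hypothetical assumptions for illustration only; a real bias audit would go well beyond simple counts.

```python
# A minimal sketch: checking how well subgroups are represented in a training set.
# The attribute "region" and the records below are hypothetical examples.
from collections import Counter

def representation_report(records, attribute):
    """Return the share of each subgroup for a given demographic attribute."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical training records, for illustration only.
training_data = [
    {"region": "Global North", "label": 1},
    {"region": "Global North", "label": 0},
    {"region": "Global North", "label": 1},
    {"region": "Global South", "label": 0},
]

for group, share in representation_report(training_data, "region").items():
    print(f"{group}: {share:.0%}")  # e.g. Global North: 75%, Global South: 25%
```

A skewed report like this one would flag the dataset for rebalancing or further collection before models are trained on it.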
Education
Many participants voiced the need for education as a precondition for an inclusive trajectory of AI. The proposals ranged from user education through the labelling of AI products, lifelong learning initiatives, and local education hubs in the form of AI academies or small collaborative networks, to an education-focused financing mechanism tapping into public and private funds. «Providing learning opportunities that are tailored toward young people so that they can learn about AI will be critical, especially in that we should consider youths facing increasing difficulties in finding long-term employment in many countries», commented Jonathan Andrew from the Geneva Academy, a member of the Advisory Board.
Overall, our initiative shows that for AI principles and guidelines to have any actual significance, terms like «inclusion» may need to be accompanied by clearer definitions and operational recommendations. Inclusive global conversations may support this missing step.
Stay tuned for a more in-depth publication this fall, and get inspired by the full list of ideas.
*Jonathan Andrew (Geneva Academy), Amir Banifatemi (XPRIZE, AI Commons), Jan Gerlach (Wikimedia), Jessica Cussins Newman (UC Berkeley, Future of Life Institute, The Future Society), Brandie Nonnecke (CITRIS Policy Lab UC Berkeley), Malavika Jayaram (Digital Asia Hub, Berkman Klein Center), and Livia Walpen (Swiss Federal Office of Communications)
The people in these photographs do not exist; they were generated by a Generative Adversarial Network (GAN), a machine learning technique for generating images, in this case human faces: https://thispersondoesnotexist.com/
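For the technically curious, the sketch below shows the adversarial setup that gives GANs their name: a generator learns to produce images while a discriminator learns to distinguish them from real ones. The network sizes, hyperparameters, and training step are arbitrary toy assumptions; production face generators such as the one behind thispersondoesnotexist.com are vastly larger models.

```python
# A toy sketch of GAN training, for illustration only. All sizes and
# hyperparameters below are arbitrary assumptions, not a real face generator.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # assumed noise and (flattened) image sizes

# Generator: maps random noise to a fake image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
# Discriminator: scores how "real" an image looks (raw logit).
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, latent_dim))

    # Discriminator update: label real images 1, generated images 0.
    d_loss = loss(D(real_images), torch.ones(batch, 1)) + \
             loss(D(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = loss(D(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Stand-in batch of "real" images scaled to [-1, 1], matching the Tanh output.
train_step(torch.rand(16, img_dim) * 2 - 1)
```

Repeated over many batches of real photographs, this two-player game pushes the generator toward images the discriminator can no longer tell apart from real ones.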