<aside>

WORKING DRAFT FOR PUBLIC FEEDBACK

For more context on this draft, please see here. Please submit feedback here.

</aside>

What This Resource is For

This resource is aimed at people working directly on the design and development of AI-driven technological solutions, especially within the commercial, corporate, or for-profit sector. It provides guidance on how to build collaborative working relationships with organizations representing, and individuals from, socially marginalized communities. “Socially marginalized” is meant to be defined within specific localized contexts, as different cultures and parts of the world may have different dimensions of difference shaping their social hierarchies and dynamics. So, while the Global Task Members responsible for the initial draft are predominantly based in the United States and United Kingdom, this resource draws both on their expertise working with communities outside of the “West” and on their own lived experiences. As such, it is meant to serve as a guide for working with stakeholders both in these countries and in very different cultural contexts, such as stakeholders based in Global Majority nations.

There are many different ways to solicit feedback from employees, clients, customers, users, or members of the general public.

Given the complexities of both developing AI systems and engaging stakeholders and the broader public, this resource cannot address every possible pain point and scenario. Instead, it aims to provide a broad-but-specific framework that helps practitioners better understand the unique circumstances under which they are trying to integrate greater stakeholder input, and to navigate the common challenges that arise, so that the stakeholder engagement process they design and implement remains authentic and as equity-oriented as possible. There are substantial limits to how much benefit can be generated and how much harm can be mitigated, but the framework should support individuals’ efforts to bring their work into greater alignment with the needs of the communities they wish to serve, while reducing the likelihood of the harms and risks those communities may face.

Finally, it cannot be emphasized enough that **stakeholder engagement is not a fix for broader social and economic issues related to AI and other digital technology**, especially their impact on socially marginalized communities. Working with communities can help developers better understand harms, risks, and failures (as well as possible opportunities and positive impacts), but it cannot eliminate or lessen harms that are sometimes inherent to the technology or to the problems the technology is trying to address. For example, there is growing evidence that operating large-scale AI systems comes at a major environmental cost (e.g., the water needed to cool computing systems); stakeholder engagement activities cannot undo that environmental damage, but they may make it possible to better understand the expansiveness of the environmental impact and to jointly make difficult decisions about how to oversee and govern the use of AI systems to mitigate it.

What We Mean By “Stakeholder Engagement”

The Current Situation

Pace of Development & Deployment

Investment in AI development is growing rapidly, driven by competition between companies and countries. Technology companies seek to build more advanced systems that reach global audiences, while countries are engaged in a geopolitical race towards AI hegemony, advancing domestic AI innovation through large investments and differing AI governance policies. This has led to the hasty deployment of a broad range of AI products, increasing the risk that systems are released without sufficient safety mechanisms in place before, during, and after release. Stakeholder engagement is a key part of ensuring these systems are relevant and, at the very least, avoid harm. However, the pace of development risks leaving meaningful stakeholder engagement out of the process.