<aside>

WORKING DRAFT FOR PUBLIC FEEDBACK

For more context on this draft, please see here. Please submit feedback here.

</aside>


Overview

This section outlines best practices that apply across all use cases and should be prioritized when designing stakeholder engagement strategies. For each “best practice,” there is a countervailing “guardrail” that describes inverse practices likely to cause harm to Participants and undermine the values of ethical stakeholder engagement. When considered against the “Analytic Framework for Understanding AI Stakeholder Engagement Strategies,” these practices should help you distinguish between stakeholder engagement approaches that bring tech companies into closer alignment with marginalized community needs and those that move companies further from those needs, potentially causing additional harm.

Best Practices (and Guardrails)

Specify stakeholder groups

Design from the margins

Prepare in advance

Emphasize community voices

Align with the appropriate stakeholder expertise and knowledge

Clear and consistent communication about scope and process

Adaptability and willingness to iterate based on input

Strong documentation

Overview of Use Cases

The following sections provide examples, generalized from real situations reported publicly or known to Task Force members, of stakeholder engagement strategies and practices that are aligned with the stakeholder community’s mutual interest and benefit, as well as strategies and practices that are misaligned, including situations where the intention was to work collaboratively with stakeholders. The examples are organized by common situations in which commercial AI development teams might seek stakeholder input:

Common Applications of Stakeholder Engagement during the AI Development Lifecycle