<aside>

WORKING DRAFT FOR PUBLIC FEEDBACK For more context on this draft, please see here. Please submit feedback here.

</aside>


Previous: Use Case: Expert Feedback and Guidance

Table of Contents

Next: Use Case: User Experience and Beta Testing


Overview

Individuals unfamiliar with the specific project - including people from other teams within the same organization, academics or other subject matter experts, users, advocates, and other general members of the public - may be invited to adversarially engage with a prototype or product/feature in development in order to identify likely risks and potential failures. For example, “red-teaming” exercises (a term borrowed from the military and cybersecurity fields) are opportunities to attack the system to identify ways in which it can be abused, misused, or fail to work as expected for different populations. There are other forms of crowdsourcing, such as “bug bounties” where users are awarded for identifying and reporting vulnerabilities with systems.

Example

A team based in a sub-Saharan country has created an AI chatbot to support survivors of domestic violence by connecting survivors with local resources and providing a place to document their experiences. Before launching, the developers want to identify any harms that might arise for users (e.g., survivors of domestic violence) by using this chatbot on their phones.

Practices

Use Case Practices


Previous: Use Case: Expert Feedback and Guidance

Table of Contents

Next: Use Case: User Experience and Beta Testing


© 2024 Partnership on AI | All Rights Reserved