<aside>

WORKING DRAFT FOR PUBLIC FEEDBACK For more context on this draft, please see here. Please submit feedback here.

</aside>


Previous: Automated Synthesis

Table of Contents

Next: Purpose of the Guidelines


Overview

While not always explicitly AI-driven, a growing number of platforms are designed to help Sponsors connect with large audiences of stakeholders (1,000+ participants) to elicit their values and beliefs about AI and its development, deployment, and overall governance. These platforms are typically applied to governance questions (e.g., the broader vision of AI that people believe will make it safe or broadly beneficial to society). They are optimized for large populations, may focus on representative sampling approaches, can pose open-ended prompts (e.g., "I want to see…"), and may use real-time (algorithmically processed) review of responses to facilitate the discussion.

A distinct thread of participatory AI research and governance is found in "AI safety" and "alignment" research and tooling. Many AI research labs and technical researchers are concerned with the possibility that future powerful "superintelligent" systems (those with intelligence far beyond humans and networked into physical infrastructures) pose the risk of massive, or even existential, harm to the well-being of humanity and ecological conditions.

Strengths and Useful Applications

Potential Risks of Use


© 2024 Partnership on AI | All Rights Reserved