- OpenAI formed an internal Collective Alignment team to incorporate public input on AI safety into product guardrails.
- This builds on an open grants program funding experiments in democratically governing AI risks and ethics.
- Critics question whether corporate priorities will filter public contributions despite OpenAI’s stated commitment.
Collective Alignment team
OpenAI announced this week the creation of an internal Collective Alignment team focused on crowdsourcing ideas to ensure future AI systems “align to the values of humanity” and on implementing top suggestions directly into product guardrails.
The new team builds on OpenAI’s public grants program, launched last spring, which funded experiments in democratic processes for governing AI risks and ethics.
Grantees explored interfaces, auditing mechanisms, and mapping techniques to encode ethical principles into models.
Expansion of public grants program
OpenAI recapped the grant recipients’ proposals in a blog post and open-sourced their code to spur further research.
The company framed the initiative as pursuing industry oversight independent of commercial interests amid growing regulatory scrutiny.
Navigating skepticism and criticisms
Critics argue OpenAI promotes self-regulation to stall government intervention that might limit its breakneck pace of research.
Nonetheless, its CEO regularly warns that uncontrolled advanced AI could devastate society if steps aren’t taken.
Hence OpenAI is now moving theoretical governance work into practice, with engineers building a real-world funnel from public priorities to product direction.
But skepticism remains about whether corporate priorities will filter those contributions.