OpenAI and Google DeepMind Employees Warn About AI Risks

An open letter from a group of current and former employees at AI companies, including OpenAI and Google DeepMind, has raised concerns about the risks posed by emerging AI technology. This letter adds to the growing calls for addressing safety concerns around generative AI, which can rapidly produce human-like text, images, and audio at low cost.

The letter, signed by 11 current and former OpenAI employees and one current and one former Google DeepMind employee, asserts that the financial motives of AI companies hinder effective oversight. “We do not believe bespoke structures of corporate governance are sufficient to change this,” the letter states.

The signatories warn of risks associated with unregulated AI, including the spread of misinformation, the loss of control over autonomous AI systems, and the deepening of existing inequalities, potentially leading to “human extinction.”

Researchers have identified instances of image generators from companies like OpenAI and Microsoft producing voting-related disinformation despite policies against such content. The letter criticizes AI companies for having “weak obligations” to share information with governments about their systems’ capabilities and limitations, suggesting that these firms cannot be relied upon to voluntarily share such information.

The group urges AI firms to establish processes for current and former employees to raise risk-related concerns and to refrain from enforcing confidentiality agreements that prohibit criticism.

Separately, OpenAI reported that it disrupted five covert influence operations that attempted to use its AI models for deceptive activities online.
