Senate Democrats Seek Answers from OpenAI on Safety and Employment Practices

In a bid to ensure the safe and responsible development of artificial intelligence, five prominent Senate Democrats have sent a letter to OpenAI CEO Sam Altman. The letter, signed by Senators Brian Schatz, Ben Ray Luján, Peter Welch, Mark R. Warner, and Angus S. King, Jr., follows recent reports questioning OpenAI’s adherence to its stated goals of AI safety and ethical employment practices.

AI Safety and National Security

The senators highlighted the critical importance of AI safety for national economic competitiveness and geopolitical standing. Notably, OpenAI’s partnerships with the US government and national security agencies to develop cybersecurity tools underscore the necessity of secure AI systems. “National and economic security are among the most important responsibilities of the United States Government, and unsecure or otherwise vulnerable AI systems are not acceptable,” the letter states.

Key Areas of Inquiry

The lawmakers have requested detailed information on several key areas by August 13, 2024. These include:

  1. AI Safety Research Commitment: OpenAI’s dedication to allocating 20% of its computing resources to AI safety research.
  2. Non-Disparagement Agreements: The company’s stance on non-disparagement agreements for current and former employees.
  3. Employee Concern Procedures: Mechanisms for employees to raise cybersecurity and safety concerns.
  4. Security Protocols: Measures to prevent theft of AI models, research, or intellectual property.
  5. Supplier Code of Conduct Adherence: Compliance with the Supplier Code of Conduct, particularly regarding non-retaliation policies and whistleblower channels.
  6. Independent Expert Testing: Plans for pre-release testing and assessment of OpenAI’s systems by independent experts.
  7. Government Testing Commitment: Willingness to make future foundation models available to US Government agencies for pre-deployment testing.
  8. Post-Release Monitoring: Practices and learnings from monitoring deployed models.
  9. Public Impact Assessments: Plans for releasing retrospective impact assessments on deployed models.
  10. Voluntary Safety Commitments: Documentation of compliance with voluntary safety and security commitments to the Biden-Harris administration.

Context of the Inquiry

The inquiry by the Senate Democrats addresses recent controversies surrounding OpenAI, including reports of internal disputes over safety practices and alleged cybersecurity breaches. The senators specifically ask whether OpenAI will “commit to removing any other provisions from employment agreements that could be used to penalize employees who publicly raise concerns about company practices.”

AI Regulation and Future Implications

This congressional scrutiny comes at a time of increasing debate over AI regulation and safety measures. The letter references the voluntary commitments made by leading AI companies to the White House last year, framing them as “an important step towards building this trust” in AI safety and security. The response from OpenAI to these inquiries could significantly impact the future of AI governance and the relationship between tech companies and government oversight bodies.

Kamala Harris, who could become the next US president after the upcoming election, has emphasized the need for action against AI-enabled myths and disinformation. Chelsea Alves, a consultant with UNMiss, commented on Harris’ approach to AI and big tech regulation, noting its potential to set new standards for navigating the complexities of modern technology and individual privacy.
