Artificial intelligence (AI) has become a significant policy issue in the United States, with policymakers focusing on supporting and regulating AI platforms as they become more prevalent. The Biden Administration and Congress are working to strike a balance between promoting U.S. global leadership in AI and addressing the risks the technology poses. This article explores the legislative and regulatory developments surrounding AI in the U.S., including an upcoming House hearing on AI deepfakes.
Bipartisan interest in AI regulation
Over the past year, AI has garnered bipartisan interest and support in Congress. House and Senate committees have held nearly three dozen hearings on AI, and more than 30 AI-focused bills have been introduced. Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation. The Biden Administration has also taken steps to promote responsible AI development and deployment.
Challenges in passing comprehensive AI legislation
Despite the bipartisan interest in AI regulation, passing comprehensive AI legislation remains a challenge. No consensus has emerged on either the substance or the process of AI regulation, and different groups of members are developing their own versions of AI legislation through different procedures. In the House, a bipartisan bill proposes creating a blue-ribbon commission to study the issue and make recommendations, effectively deferring the question of comprehensive regulation until the commission reports back.
House hearing on AI deepfakes
One of the targeted AI bills introduced in Congress focuses on the impact of AI on U.S. elections, specifically deepfakes. Deepfakes are AI-generated audio, video, or image content that appropriates a person's voice or likeness without their consent. The rapid growth of AI has raised concerns about the use of deepfakes in elections and other contexts.
In response to these concerns, the House is holding a hearing on AI deepfakes. Representative Nancy Mace, a member of the House Committee on Oversight and Accountability, has previewed the hearing and emphasized the importance of AI guardrails to protect democracy. The hearing will explore the risks deepfakes pose and potential solutions, with the goal of informing future legislation and regulations.
Comprehensive bipartisan bills and frameworks
In addition to targeted AI legislation, three comprehensive AI regulatory proposals have emerged in Congress. Senate Majority Leader Chuck Schumer proposed the SAFE Innovation Framework, which aims to boost U.S. global competitiveness in AI while ensuring appropriate protections for consumers and workers. The framework's name reflects its policy principles: Security, Accountability, Foundations, and Explainability, paired with Innovation. Schumer also announced a new procedural approach, a series of closed-door sessions bringing Senators together with key stakeholders, to educate policymakers on AI.
Senators Richard Blumenthal and Josh Hawley introduced their own framework for AI regulation, focused on transparency and accountability. Their framework proposes specific policies, including the creation of an independent oversight body, clarifying that Section 230 immunity does not apply to AI-generated content, strengthened national security protections, and consumer protections.
A bipartisan group of House members introduced the National AI Commission Act, which would establish a bipartisan commission of experts to review the U.S.’s current approach to AI regulation and make recommendations for a risk-based AI regulatory framework.
AI policy will remain a significant issue in the U.S. as policymakers work to balance promoting U.S. global leadership in AI against addressing the technology's risks. While passing comprehensive AI legislation remains a challenge, targeted AI bills and broader frameworks continue to advance in Congress. The upcoming House hearing on AI deepfakes, previewed by Representative Nancy Mace, will contribute to these ongoing discussions and help inform future legislation and regulations.