Macarena Bazan – In the midst of a notable surge in AI development, President Biden issued an Executive Order intended to fortify safety, security, and trust in the development and use of artificial intelligence. Relying upon the Defense Production Act of 1950, President Biden created a national security-centered approach to AI regulation. While this 63-page Order establishes multiple directives, its primary areas of focus have been characterized as: setting rules and regulations for AI safety and security; advancing equity and civil rights; promoting innovation and competition; ensuring responsible government use of AI; and advancing American leadership abroad.
The Order defines artificial intelligence as any “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments,” pursuant to 15 U.S.C. § 9401(3). This inclusive definition expands the scope of the regulation beyond generative AI, creating a potential impact across all sectors of the economy. Additionally, President Biden characterized these efforts as an “all-of-government approach,” which will call upon multiple federal agencies, including but not limited to the Patent and Trademark Office, the Department of Justice, the Federal Trade Commission, and even agencies that did not previously have a regulatory role, like the National Institute of Standards and Technology.
While the Order was signed just a couple of weeks ago, it has already introduced a nuanced landscape of both advantages and challenges. On the positive side, the Order takes significant strides toward enhancing the safety and security of AI systems. By compelling developers of powerful AI models to share safety test results with the government, the administration seeks to ensure robust evaluation before public release, mitigating potential risks to national security and public health. Privacy protection emerges as another benefit, with the Order encouraging Congress to pass comprehensive data privacy legislation. The directive recognizes the heightened risks that AI systems pose to individuals’ privacy and advocates for privacy-preserving techniques and technologies. Furthermore, the Order acknowledges and addresses concerns related to algorithmic discrimination, providing clear guidance to prevent AI algorithms from exacerbating discrimination in areas such as housing and federal benefits programs.
However, the Order has encountered substantial criticism from industry leaders, lawyers, and other professionals. First, critics contend that the government may lack both the expertise and the capacity to create and manage such complex AI regulations, especially given the tight deadlines imposed on federal agencies and departments to implement the Executive Order’s directives. For example, the Order gives the Secretary of Homeland Security only 270 days to develop a plan for multilateral engagements to encourage the adoption of its AI security regulations. Additionally, concerns persist about the practical impact of these regulations on the public. Some scholars argue that the Order is a mere “Band-Aid” solution, insufficient to resolve the intricate underlying issues. At the forefront of this concern is that businesses that reap significant benefits from AI will be tasked with self-reporting vulnerabilities to manipulation and misuse, creating an unreliable reporting method. Furthermore, the Order directs federal agencies to establish guidelines for watermarking AI-generated content. However, scholars note the current absence of reliable watermarking applications in the market, highlighting concerns about susceptibility to misuse.
Lawyers express an additional concern regarding the potential overreach of the Executive Order. The Defense Production Act of 1950, invoked by President Biden to issue this directive, has not previously been utilized for situations of this nature. Typically reserved for emergencies or for prioritizing resources for national defense, the Act’s application to AI regulation raises potential separation-of-powers concerns. Comparisons to previous administrations and their handling of transformative technologies, such as the Internet, underscore the perceived novelty of, and potential overreach in, this executive action.
Now, what does this mean for businesses navigating the landscape shaped by this Order? First, companies engaged in AI development must be prepared for increased regulatory scrutiny and compliance requirements. The Order mandates that developers of powerful AI systems, particularly those with implications for national security and public safety, share safety test results and other critical information with the U.S. government. This calls for a heightened level of transparency and accountability in AI development processes. Businesses with existing AI risk management frameworks should assess whether those frameworks align with the standards outlined in the Order and prepare for additional guidance. Additionally, businesses should begin exploring options to take advantage of the incentives the Order provides, such as streamlined visa processing and additional funding for small businesses using and developing AI.
For small businesses, proactively crafting a plan to manage the newly imposed burdens may be most beneficial. Given the additional reporting obligations and regulatory measures, smaller enterprises will likely encounter legal complexities and heightened operational costs that could stifle their development. While the Order aims to promote competition and prevent unlawful collusion, larger corporations may be better equipped to navigate the intricacies of regulatory compliance, potentially gaining a head start in the industry. Moreover, businesses that provide AI services to federal agencies will likely face new contractual requirements stemming from the Order’s goal of ensuring the responsible governmental deployment of AI.
In navigating the delicate balance between regulatory oversight and innovation, President Biden has addressed the expectations of many Americans who sought an active approach to AI regulation. The directive’s emphasis on avoiding broad bans in favor of nuanced, case-by-case evaluation by federal agencies instills hope for a thoughtful and adaptive regulatory environment. Whether businesses are seasoned implementers or first-time adopters of AI, strategic consideration of their next moves has become paramount in the wake of this directive. As the AI industry steps into a new era with the introduction of this Order, it serves as a foundational milestone. However, the journey is just beginning. Ongoing dialogue and collaboration among government entities, businesses, and other stakeholders will be imperative to cultivate responsible and effective AI development in the years ahead.