By Brenda Sharton, Benjamin Sadun and James Smith
Such rapid advancement and deployment have raised significant concerns about the implications of this technology, including accountability, transparency and ethics.
The concerns range from the mundane — such as concern over loss of intellectual property rights and the dissemination of false information — to the more dramatic, specifically that AI will cause the end of civilization.
In response, there has been a worldwide push from legislators, regulators and industry leaders for government action. The voluntary code of conduct, propounded by the U.S.-EU Trade and Technology Council, is one response to these calls for action.[1]
On May 30, more than 350 researchers, executives, former government officials and public figures endorsed the view that AI may present an existential threat to humanity.[2] The very next day, on May 31, the U.S. and European Union announced that they were working together on a voluntary code of conduct for AI.
This article details the proposed voluntary code of conduct, addresses its effectiveness and concludes with forward-looking considerations about AI rulemaking.
The Voluntary Code of Conduct
Against the backdrop of these calls for swift action, the proposed voluntary code of conduct is intended as an interim measure providing temporary safeguards.
A draft, on which stakeholders will have an opportunity to comment, is expected in the coming weeks, and a final proposal will likely be presented to G7 leaders in the fall.
Requirements for digital watermarking and external audits are among the many ideas being discussed for inclusion in the voluntary code of conduct. Additionally, expert groups will be established to focus on terminology, taxonomy classification and risk management.
Undoubtedly, by the time a draft is proposed, there will be a proliferation of other ideas for curbing the risks associated with this nascent technology.
Effectiveness of the Voluntary Code of Conduct
The absence of enforceability and binding rules certainly will undercut the voluntary code of conduct's effectiveness in achieving the desired outcome of a responsible AI future.
Further, while the U.S.-EU initiative demonstrates a commitment to deliberate AI development, the differing views on AI regulation in the U.S., the EU and globally highlight the challenge of achieving a universal solution.
Frustrated with the slow pace inherent in consensus building, the EU has not waited for the world to act and has instead taken a proactive approach by advancing its own regulation: the EU AI Act.
The act has entered the final stage of the EU's legislative process and will likely take effect within the next three years.
The EU AI Act focuses on facial recognition and biometric surveillance, but will have broad applicability, covering anyone who develops or deploys AI systems in the EU, including companies located outside the bloc.
The level of regulation will be determined by the risks posed by specific applications, with outright bans on systems considered "unacceptable," such as real-time facial recognition in public spaces, social scoring systems like those used in China, and predictive policing tools.
The legislation also imposes tight restrictions on high-risk AI applications, including systems used for voter influence and large social media platforms that recommend content. The EU AI Act emphasizes transparency requirements, such as disclosing AI-generated content, distinguishing deepfakes and providing safeguards against illegal content generation.
In the U.S., various government actors have proposed or announced AI-related policies or initiatives, but, to date, no comprehensive federal law has advanced or even gained traction.
AI has been the subject of congressional committee hearings, including before the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law, which held a hearing on May 16 to explore oversight of and rules for AI.
In addition, numerous regulators, including the Federal Trade Commission, have announced that they will be monitoring the promises companies make about their AI products. The FTC in particular has made good on this promise.
As the FTC's May 31 enforcement actions against Amazon.com Inc., concerning its Alexa voice assistant, and Ring LLC demonstrate, companies that do not comply with the FTC's guidance and expectations regarding the use of AI could face "algorithmic disgorgement," a tool wielded by the FTC that requires companies to destroy not only tainted or ill-gotten personal information, but also any algorithms created using that information.
The FTC remains committed to championing consumer protection matters vis-à-vis new and evolving technologies. For example, its October 2022 advance notice of proposed rulemaking explored rulemaking for "automated decision-making systems."[3]
In addition, the FTC recently launched an Office of Technology to expand its expertise in this area. Furthermore, in early 2023, the FTC published guidelines on AI usage that provide insight into its expectations for organizations deploying AI tools.
Given the absence of a comprehensive federal law, various state legislatures have attempted to fill this gap by including AI measures in recently passed state privacy laws.
For example, the California Consumer Privacy Act — as amended by the California Privacy Rights Act — Colorado Privacy Act, Connecticut Data Privacy Act and Virginia Consumer Data Protection Act give residents who are covered by these laws the right to opt out of automated decision making, which generally includes technology that uses AI tools.
Further, during its June 14 meeting, the California Privacy Protection Agency's new rules subcommittee discussed potential language and key considerations addressing automated decision-making technology.
This could include an opt-out right covering not only automated or final decisions made using AI technologies, but also any technological process used "as whole or part of a system to make or execute a decision or facilitate human decision making."
In addition to regulators, and as is typical in the United States, the private plaintiffs bar is attempting to hold companies accountable for alleged dangers associated with AI. A consumer class action has been filed in federal court in California against Microsoft Corp. and OpenAI, the owners of ChatGPT, purportedly on behalf of an astronomically sized class: everyone who uses or has ever used the internet. We are unaware of similar litigation filed outside the United States.
Other countries have taken a variety of approaches, ranging from guidance to more binding legislative measures. The U.K., for example, unveiled a 10-year plan in 2022 through its National AI Strategy, and in March 2023, the Department for Science, Innovation and Technology and the Office for Artificial Intelligence released a white paper outlining a principles-based approach to AI regulation. U.K. regulators are expected to issue nonstatutory guidance within the next year.
On July 13, the Cyberspace Administration of China published "Interim Measures for the Management of Generative Artificial Intelligence Services," set to take effect on Aug. 15, with a focus on services available to the general public.
The Association of Southeast Asian Nations is developing governance and ethics guidelines for AI, aiming to establish "guardrails" for the rapidly advancing technology. Similar to their European and U.S. counterparts, policymakers in the region have voiced concern about AI's potential to facilitate the proliferation of misinformation. Drafting is expected to be completed by the end of this year.
The different policy approaches outlined above demonstrate that while Europe is progressing toward finalizing the EU AI Act, other governments are still in the early stages of determining the most appropriate regulatory approach.
This divergence has raised concerns: European companies, for example, argue that the proposed EU AI Act could impede Europe's competitiveness and technological sovereignty, potentially driving innovative companies abroad and discouraging investors from supporting the development of AI within Europe.
As AI transcends borders, it will be crucial to adopt a global approach and establish universal principles to ensure consistency, foster cooperation, and address potential challenges and ethical considerations posed by AI technologies on a worldwide scale — and importantly to create legal certainty for businesses. Experience has taught us this is much easier said than done.
Lawmakers in the U.S. face a host of political challenges in passing legislation to regulate burgeoning technology, as evidenced by the lack of even a federal privacy law, despite almost two decades of attempts to pass one.
The initiators of the voluntary code of conduct appear to at least recognize the divergence of concerns and the challenge of reaching a universal solution. When announcing the voluntary code of conduct, European Commission Executive Vice President Margrethe Vestager said:
We think it's really important that citizens see that democracies can deliver … and to do that in the broadest possible circle — with our friends in Canada, in the UK, in Japan, in India, bringing as many onboard as possible.
More voices, however, will mean even more division, which will in turn translate into a watered-down final set of guidelines.
Looking Over the Horizon
Continued scrutiny of regulatory initiatives in the AI domain is essential to ensure a carefully balanced framework that promotes innovation while addressing potential risks.
Achieving this delicate equilibrium requires continued examination and adaptation to effectively navigate the complexities of AI regulation.
Just as AI will constantly evolve and never slow down, so too must regulation. As the great Stephen Hawking said:
The real risk with AI isn't malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.
A proactive, coordinated and informed regulatory scheme that transcends borders will be important in ensuring that AI is ultimately a tool for good rather than a doomsday prophecy.
Brenda Sharton and Benjamin Sadun are partners, and James Smith is an associate, at Dechert LLP.
Dechert associates Marjolein De Backer and Isabella Egetenmeir contributed to this article.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
[1] The U.S. and EU established the TTC in 2021 to foster collaboration on trade and technology matters. Its mandate encompasses strengthening transatlantic ties and addressing common issues, including fostering trustworthy AI.
[2] The letter, hosted on the website of the Center for AI Safety, a nonprofit organization focused on AI safety research and advocacy for safety standards, emphasized the need for global prioritization in mitigating the risks associated with AI, equating them with other societal-scale risks such as pandemics and nuclear war.
[3] April J. Tabor, Trade Regulation Rule on Commercial Surveillance and Data Security, Fed. Trade Comm'n, 16 CFR Part 464, https://www.ftc.gov/system/files/ftc_gov/pdf/commercial_surveillance_and_data_security_anpr.pdf (last visited July 25, 2023).