As concerns about generative artificial intelligence (AI) continue to grow, so do calls for regulation to address potential harms such as the spread of disinformation, job losses, loss of control over creative works, and even the extinction of the human species. Some governments have responded by implementing regulations; others have taken a more hands-off approach.
In the United States, the White House issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which aims to reduce both immediate and long-term risks associated with AI. The order includes guidelines for AI vendors to share safety test results with the federal government, and it calls on Congress to enact consumer privacy legislation to address the vast amounts of data AI systems collect.
As the push for AI regulation intensifies, it is worth asking which approaches are actually feasible, both technologically and economically. Each approach can be examined at two points: the data used to train AI models and the output those models generate.
One approach is to restrict training data to public-domain material and copyrighted material for which the AI company has obtained permission. This is technologically feasible: companies can curate their training sets and use only approved samples. It is only partially economically feasible, however, because the quality of AI-generated content depends on the volume and richness of the training data. Some companies, such as Adobe with its Firefly image generator, market themselves on exactly this point, emphasizing that they train only on permitted content.
Another potential means of regulation is to attribute a model’s output to specific creators, or groups of creators, so they can be compensated. The complexity of AI models, however, makes it effectively impossible to determine which training samples contributed to a given output, let alone to what extent. Attribution matters because it will shape whether creators and license holders accept or resist the technology. The 148-day Hollywood screenwriters’ strike underscored the stakes, ending with concessions designed to protect writers from AI-generated content.
At the output end, regulation faces a different obstacle: there is currently no reliable way to attribute a given piece of content to a specific AI vendor’s technology after the fact. One feasible alternative is cryptographic signing, a well-understood and mature technology. AI vendors could sign all of their output, allowing anyone to verify its origin. This is both technologically and economically feasible, though it raises the question of whether relying solely on content from a few established vendors is desirable.
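To make the signing idea concrete, here is a minimal sketch of how a vendor-side sign and consumer-side verify step might look, using Ed25519 signatures. It assumes the third-party Python `cryptography` package; the function names and the question of how vendors would publish their public keys are illustrative assumptions, not part of any actual vendor's system.

```python
# Sketch: an AI vendor signs each generated output with a private key;
# anyone holding the vendor's published public key can verify provenance.
# Assumes the `cryptography` package; key distribution is out of scope.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature


def sign_output(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Vendor side: produce a detached signature over generated content."""
    return private_key.sign(content)


def verify_output(
    public_key: Ed25519PublicKey, content: bytes, signature: bytes
) -> bool:
    """Consumer side: check content against the vendor's public key."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False


# Example: generate a vendor key pair and sign one piece of output.
vendor_key = Ed25519PrivateKey.generate()
vendor_pub = vendor_key.public_key()

output = b"Generated article text ..."
signature = sign_output(vendor_key, output)

assert verify_output(vendor_pub, output, signature)            # authentic
assert not verify_output(vendor_pub, b"tampered", signature)   # altered fails
```

Note that this verifies only that content came unmodified from a given key holder; it says nothing about content whose signature has simply been stripped, which is why such schemes identify signed content rather than detect unsigned content.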
For policymakers, it will be crucial to carefully evaluate the costs and benefits of each regulatory approach. Understanding the technological and economic feasibility of these approaches will be essential in crafting effective and balanced regulations for AI technologies.