On Oct. 30, the White House released an executive order that would place limits on the development of Artificial Intelligence (AI). The order's aim is to regulate AI development so that it is safe, secure, and guided by good intentions.
AI has surged into public view over the past several years in the form of chatbots and text generators. ChatGPT made quite a splash when it was released, thanks to its ability to generate text for virtually any prompt, and even companies like Snapchat have rolled out AI chatbots for all users. In my experience, AI has mostly been used for entertainment: giving ChatGPT a silly prompt to write a short essay, or asking an art generator to produce a goofy image or alter an existing photo. That is fairly harmless so far, but the administration seeks to make AI development safe before it becomes genuinely harmful.
“Meeting this goal requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use,” reads the White House briefing room statement on President Joe Biden’s executive order.
Though the formal rules for this executive order have not yet been written, its main goal is to create systematic tests that ensure AI development is safe. One goal is to prevent AI systems from being used to produce biological or nuclear weapons or to otherwise threaten national security or the economy. The action primarily targets large-scale AI risks that could emerge in the future rather than the more immediate concern of AI-driven misinformation about world events. It also faces obstacles beyond merely writing the rules.
The New York Times reports, “Officials said that some of the steps in the order would require approval by independent agencies, like the Federal Trade Commission.” Additionally, “The order affects only American companies, but because software development happens around the world, the United States will face diplomatic challenges enforcing the regulations.”
The White House’s goal is to use the Federal Trade Commission as a sort of enforcer of the executive order and to establish basic safety guidelines for companies to follow. It should be noted, however, that some of these requests are unenforceable: the White House lacks the authority to direct independent agencies to create regulations.
AI is a complicated issue that we face. While it has not yet been used in bad faith on a very large scale, there is a looming threat that it could be, especially as deepfake technology develops. One of the proposed regulations recommends that a watermark be added to AI-generated content, which would reduce the spread of misinformation. I think this is a good initiative so far, as it could help indicate whether a piece of AI-generated content is meant for entertainment or for more dangerous purposes. Because the watermark is recommended rather than required, it may also draw less backlash; I can see mandatory watermarks being disputed as a violation of freedom of speech. This is the furthest any country has gone to propose AI regulations, and I believe the White House will have to tread carefully on this slippery slope to best serve the American people.