Artificial intelligence has long evoked nightmare scenarios of sentient machines destroying humanity. While this scenario is still primarily a concern for science fiction writers, recent advances in AI technology have policymakers questioning whether AI regulation is necessary to avert more mundane risks, such as copyright infringement, discrimination, and privacy violations. These emerging threats have prompted the U.S. Chamber of Commerce to call on lawmakers to make regulating AI technology a priority.
After initially opposing regulation, the U.S. Chamber of Commerce is now urging the federal government to enact regulations that minimize the potentially harmful impacts of AI technology without restricting the technology's growth.
The report focuses on six key points. Among them, the Chamber recommends avoiding a one-size-fits-all approach to regulation; instead, it suggests developing flexible, industry-specific guidance and best practices.
In a recent interview, one of the creators of ChatGPT suggested that regulation is necessary to shape how AI develops, rather than merely react to the technology after the fact. By regulating AI now, governments and industries can help ensure that people use the technology in ways that benefit humanity.
Other advocates for regulation point out that without it, companies may prioritize financial and commercial interests over safety. Additionally, evidence of flaws and biases in AI technology raises concerns about the potential for discrimination when companies use AI tools for housing, hiring, and credit decisions.
AI technology also exposes the companies that use it to various forms of liability. For example, search engines that incorporate AI could give users inaccurate information on critical topics, such as medical care. And because AI systems draw on existing sources, copyright infringement and data protection are also concerns.
There is no one governing body that controls all of the rules about the use of AI technology. Instead, national and local governments and industries must work together to create a regulatory framework.
The United States federal government has been slow to respond to calls to regulate AI, even though it has funded the technology's development for military applications since the 1960s. Legislators proposed bills in 2021 to regulate facial recognition software and discriminatory algorithms, but none of these bills passed.
The Federal Trade Commission has taken action against companies for using AI in ways that violate consumer protection rules and proposed regulations to restrict the collection of data for use in AI technology. The Consumer Financial Protection Bureau has warned credit agencies that their use of AI systems could violate anti-discrimination laws.
Though the Food and Drug Administration is monitoring the use of AI technology in medical devices and the White House issued a blueprint for rules on AI, no laws have resulted from these actions. While federal regulation has made little progress, a few states have passed laws to regulate specific technologies, such as the use of AI algorithms in credit decisions.
The European Union proposed the AI Act in 2021 to regulate the AI technologies it believed could cause the most harm, such as facial recognition and applications used in critical public infrastructure. The proposed regulation would require companies that create AI-driven products to perform risk assessments covering their products' impact on safety, health, freedom of expression, and individual rights.
Companies that violate the law could face fines of up to 6% of their total revenue. However, the rise of general-purpose AI technologies that have no single specific use has created new legislative challenges.
China, the U.K., Japan, and other national governments are also tackling regulation in various ways. As AI technology continues to gain prominence, more governments are likely to pass regulations.
Self-regulation is likely to play a large role as more businesses incorporate AI into their processes. Automakers, for example, are establishing safety programs to ensure that autonomous vehicles do not endanger customers and other drivers on the roads.
Companies such as Monitaur are developing AI governance solutions for industries such as insurance. These solutions seek to build trust between businesses that handle sensitive information and customers who may worry about AI's implications for privacy and security.
The wide reach of AI technology makes it particularly difficult to regulate because it touches the lives of people around the globe. Additionally, many government officials do not understand the technology or its risks.
This lack of understanding has caused regulatory efforts to stall. As a result, AI technology is developing at a pace that outstrips regulation.
The team at Cloudficient specializes in helping customers address compliance issues when migrating legacy systems. As Microsoft incorporates more AI technology into its products, we can help you navigate any compliance issues that AI regulation may create for your company when migrating to Microsoft 365. We remain focused on client needs and adjust our product offerings to match them. Contact us online to learn more.
With unmatched next generation migration technology, Cloudficient is revolutionizing the way businesses retire legacy systems and transform their organization into the cloud. Our business constantly remains focused on client needs and creating product offerings that match them. We provide affordable services that are scalable, fast and seamless.
If you would like to learn more about how to bring Cloudficiency to your migration project, visit our website, or contact us.