Joe Biden issues an executive order to impose more protections on AI

Under President Biden’s new executive order aimed at mitigating the risks of the new technology, AI companies must report their security test results to the US government.

The White House unveiled a series of steps Biden is taking amid concerns that uncontrolled AI systems could threaten safety and security and fuel disinformation.

A major concern is that AI will unleash a wave of "deepfakes": fabricated video and audio that can spread widely on social media even though they aren't real.

The executive order does not require AI-generated content to be labeled as such, but directs the Commerce Department to develop standards for authentication and watermarking. “Federal agencies will use these tools to give Americans confidence that the communications they receive from the government are authentic — and to set an example for the private sector and governments around the world,” the White House said.

Biden and Vice President Kamala Harris will appear at a White House ceremony today to unveil the executive order.

White House Deputy Chief of Staff Bruce Reed said in a statement that the executive order contains "the most significant actions any government in the world has ever taken on AI safety, security, and trust," calling it "the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks."

Other aspects of the Executive Order:

Test results. Citing the Defense Production Act, the administration said the order requires "companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety" to notify the federal government when training the model and to share the results of all red-team safety tests.

Safety standards. The National Institute of Standards and Technology will set standards for "red team testing" before an AI system is released. Federal agencies will apply these standards to critical infrastructure and national security. New standards are also being developed for biological synthesis screening.

Privacy. The order calls for federal support to be prioritized for the development of "privacy-preserving techniques," such as those that can train AI systems while "preserving the privacy of training data." The order also provides funding for a research coordination network to develop cryptographic tools. Federal agencies will also seek to strengthen their privacy protections given the risks posed by AI.

Biden is also expected to call on Congress to take further action, including in areas such as data privacy. Lawmakers have held hearings and proposed privacy legislation for years, but no progress has been made. However, Senate Majority Leader Chuck Schumer (D-NY) recently convened a series of “AI Insight Forums” to push for comprehensive legislation.

Civil rights. The order requires guidance for employers, state welfare programs and federal contractors to prevent AI algorithms from being “used to exacerbate discrimination.” The Department of Justice and federal civil rights agencies will also establish best practices for using AI in the criminal justice system.

Labor. The order calls for the creation of principles and best practices to "minimize the harms and maximize the benefits" of AI for workers. It includes guidance to prevent employers from "underpaying workers, evaluating applications unfairly, or harming workers' ability to organize." The order also requires a report on the potential impact of AI on the labor market, an issue that figured in the recent WGA strike and the ongoing SAG-AFTRA strike.

Competition. Small developers will have access to technical support and resources, while the Federal Trade Commission will be encouraged to “exercise its authority” if antitrust and competition concerns arise.

The executive order does not address copyright, even as a number of authors and publishers sue OpenAI and Meta over the use of protected works to train models. That leaves open the possibility that a judge or jury will ultimately determine the parameters of "fair use" of copyrighted material.

Last summer, Biden gathered a number of AI leaders at the White House to sign a set of voluntary commitments, including a promise to develop watermarking tools. Because the commitments were voluntary, it is not entirely clear whether the federal government could enforce them through the FTC. Another promise was to have their systems evaluated by independent experts. Signatories include OpenAI, Amazon, Microsoft, Google, and Meta.

There is more to come.

Source: Deadline
