Who’s going to save us from bad AI?


About damn time. That’s been the response from AI policy and ethics circles to the news last week that the Office of Science and Technology Policy, the White House’s science and technology advisory body, unveiled an AI Bill of Rights. The document is the Biden administration’s vision of how the US government, tech companies, and citizens should work together to hold the AI sector accountable.

It’s a great initiative, and long overdue. To date, the US is one of the only Western countries without clear guidance on how to protect its citizens from the harmful effects of AI. (As a reminder, those harms include wrongful arrests, suicides, and entire cohorts of students being graded incorrectly by an algorithm. And that’s just for starters.)

Tech companies say they want to minimize these harms, but the harms are really hard to quantify.

The AI Bill of Rights outlines five protections Americans should have in the age of AI, including data privacy, the right to protection from unsafe systems, and assurances that algorithms shouldn’t discriminate and that there will always be a human alternative. Read more about it here.

So here’s the good news: the White House has shown it can think critically about the kinds of harm AI causes, and that should filter into how the federal government thinks about technology risks more broadly. The EU is pressing ahead with regulations that ambitiously try to mitigate all harmful effects of AI. That’s admirable, but extremely difficult to do, and it could take years before its AI law, known as the AI Act, is ready. The US, on the other hand, “can solve one problem at a time,” and individual agencies can learn to handle AI challenges as they arise, says Alex Engler, who studies AI governance at the Brookings Institution, a think tank in Washington, DC.

And the bad news: the AI Bill of Rights misses some pretty important areas of harm, such as law enforcement and worker surveillance. And unlike the actual US Bill of Rights, the AI Bill of Rights is more an enthusiastic recommendation than a binding law. “Principles frankly are not enough,” said Courtney Radsch, a technology policy expert at a US human rights organization.

America is in a bind. On the one hand, the US does not want to look weak on the global stage when it comes to this issue. And the US arguably has the most important role to play in mitigating AI harms, since most of the world’s biggest and richest AI companies are American. But that’s the problem: globally, the US lobbies against rules that would place limits on its tech giants, while domestically it is reluctant to introduce any regulation that could be seen as hindering innovation.

The next two years will be crucial for global AI policy. If the Democrats don’t win a second term in the 2024 presidential election, these efforts could well be abandoned. New people with new priorities could dramatically alter the progress made so far, or take things in a completely different direction. Anything is possible.
