
A few months ago, we talked about the open letter led by technology leaders calling for a pause in, and regulation of, the advancement of artificial intelligence before HR recruiters are replaced by AIs and Tinder is flooded with bots.

Unfortunately, that race has kept progressing slowly but steadily. At the time of writing, Sam Altman, CEO, co-founder, and evangelist of OpenAI, had just testified before a U.S. Senate Judiciary subcommittee in Washington about safety standards and the creation of a licensing regime for these systems. And just days ago, Bard, Google’s AI chatbot, launched with real-time internet connectivity. It’s mind-blowing!

In other words, the race to capitalize on the AI revolution as quickly as possible continues unrestricted, much like the conquest of the Wild West. After all, we are venturing into unknown territory, and whoever arrives first will go down in history.

On the roller coaster of artificial intelligence: attempts at regulation and a lack of information about potential dangers.


In this context, after his month-long world tour to speak with policymakers about technology, Sam Altman, along with his army of lawyers, was bombarded with questions about what we all want to know: Do artificial intelligences have the potential to change how people work, shop, and interact? What role should the government and companies play to benefit from the technology while also preventing potential threats? What safety nets do we have in a hypothetical worst-case scenario?

Ahead of the debate on the first tentative legislation, the United States Congress laid out some key points.


Legislation

“I believe that regulatory intervention by the government will be critical in mitigating the risks of increasingly powerful models. For example, the U.S. government could consider a combination of licensing and testing requirements for the development and deployment of AI models above a certain threshold of capabilities. What is happening in the open-source community is incredible, but there are only a few providers who can make unique contributions,” Altman stated.

Scope

“OpenAI was founded on the belief that artificial intelligence has the potential to improve almost every aspect of our lives, but also knowing that it creates serious risks that we must work together to manage. We are here because people love this technology. We believe it can be a moment similar to the invention of the printing press, and we must work together to achieve it. OpenAI is an unusual company, and we set it up that way because AI is an unusual technology. We are governed by a nonprofit organization, and our activities are driven by our mission and our charter, which commit us to work towards ensuring the broad distribution of AI’s benefits and maximizing the safety of its systems. We seek to build tools that can someday help us make new discoveries and address some of humanity’s biggest challenges, such as climate change and curing cancer. Our current systems are not yet capable of doing these things, but it has been immensely rewarding to see many people around the world derive so much value from what these systems can already do today,” said Sam Altman, CEO of OpenAI.

Potential dangers

Regarding the worst possible consequences of AI, Altman stated, “I believe that employment and what we will all do with our time really matter. I agree that when we reach very powerful systems, the landscape will change. I’m just more optimistic in the sense that we are incredibly creative and find new things to do with better tools. That will continue to happen. My worst fears are that we cause significant harm to the world, and I think that can happen in many different ways, which is why we started the company. It’s why I’m here today, and why we have been here in the past and have been able to spend time with you. I think if this technology goes wrong, it can go very wrong, and we want to be vocal about it. We want to work with the government to prevent that from happening. We try to have a very realistic view of what the worst danger is.”

The Biden administration has made artificial intelligence a priority. While Senate Majority Leader Charles E. Schumer (D-N.Y.) has been developing a new framework for artificial intelligence that delivers “transparent and responsible artificial intelligence without stifling critical and cutting-edge innovation,” the reality is that nothing concrete exists yet.
