China publishes code of ethics to regulate artificial intelligence: what would Isaac Asimov say?


This article was translated from our Spanish edition. Opinions expressed by the contributors are their own.

The Chinese Ministry of Science and Technology has published a code of ethics that aims to regulate existing and developing artificial intelligence (AI) models. With this, the Asian country gets ahead of Europe, which already had a draft regulation along the same lines. The Chinese guidelines give priority to humans' "full decision-making power" over machines, fully in line with the laws of robotics of writer Isaac Asimov.


Last April, the European Union presented draft regulations to ensure humans retain control over AI. However, those rules have not yet materialized, and China is now a pioneer in regulating these burgeoning technologies.

As reported by the South China Morning Post, the document, entitled Ethical Specifications for Next-Generation Artificial Intelligence, starts from a very clear premise: "ensuring that AI is always under the control of human beings" and that humans have "full decision-making power" over AI.

"At the end of the day, China opts for an authoritarian model, where the state thinks very seriously about the long-term social transformations AI will bring, from social distancing to existential risks, and actively tries to manage and guide these transformations," Rebecca Arcesati, an analyst with the German think tank Mercator Institute for China Studies, told the same outlet. "They have a remarkably progressive mindset," she added.

What does Isaac Asimov have to do with the Chinese code of ethics for AI?

In 1942, long before artificial intelligence was a common theme, science fiction writer Isaac Asimov first presented his famous Three Laws of Robotics in the short story "Runaround" (published in Spanish as "Círculo vicioso").

These "laws" were essential in Asimov's work and served as the basis for much of the genre that followed: novels, films, series, comics. They function as an ethical guide for androids to interact with humans without causing harm.

  • First Law. A robot may not harm a human being or, through inaction, allow a human being to come to harm.
  • Second Law. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • Third Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The guidelines proposed by Isaac Asimov are intended to protect humans in the hypothetical case where machines rebel and attack their creators. In his stories, if a robot, which today we would say runs on artificial intelligence, attempted to disobey these laws, its system would go into "self-destruct" mode.

In other words, the three laws of robotics represent the moral code of an AI and are so ingrained in popular culture that they probably inspired the new Chinese regulations.

The six points of the Chinese code of ethics for next-generation artificial intelligence

The document describes six basic principles for artificial intelligence systems, chief among them ensuring that they are "controllable and reliable." The other points call for using these technologies to improve human well-being; promote fairness, transparency, and justice; protect privacy and security; and strengthen ethics education.

According to the code, users will have the full right to accept or refuse an AI's services, as well as to stop interacting with these systems whenever they want.

The guidelines also raise the need to avoid risks by ensuring that AIs have no vulnerabilities or security holes, and that they are not used in illegal or illicit activities that could compromise "national security" or "the public interest."

This last point is the most controversial, as it aligns with policies imposed by the Chinese government last year aimed at tighter control over the country's technology sector.

In fact, China recently moved against content recommendation algorithms, the vast majority of which are based on AI systems. These algorithms collect and analyze user data to target advertising or to determine what content users see in their feeds, inboxes, or search results.

For this reason, Arcesati says the publication of the code is "a clear message" to tech giants such as Amazon, Facebook, and Google, and to all the companies that have "based their business model on recommendation algorithms."

It should be remembered that one of China's stated goals is to be a leader in artificial intelligence by 2030, the document notes. Perhaps that explains the urgency to ethically regulate all current and future types of AI.

Meanwhile, in the European Union, activist groups fear that artificial intelligence will be used for authoritarian ends or mass surveillance. In fact, one of their main demands is that facial recognition systems be banned under the bloc's future AI regulations.



