The integration of autonomous electronics into everyday life does not always run smoothly.
Another piece of news has reignited the discussion around AI technologies and their physical realization in the real world: in Tempe, Arizona, United States, a self-driving Uber car struck a pedestrian. The woman died as a result of the accident.
How can such problems be avoided in the future? What conclusions should be drawn? What risks should be anticipated by those who intend to harness the potential of new technologies in their mobile and web applications?
Like many other emerging technologies, AI holds tremendous opportunities and shows great promise, which we have written about more than once. But the pitfalls and the possible adverse consequences of improper application are best learned about in advance. As they say, praemonitus, praemunitus: forewarned is forearmed.
Let’s divide the AI-related issues into two large groups:
Bugs, failures, and errors that make systems behave in the most unexpected ways, sometimes to the surprise of their own creators.
Ethical and legal issues, misuse, and other problems that arise as artificial intelligence interacts with the real world.
Bob and Alice’s Private Language
In 2017, Facebook created two chatbots designed to negotiate, helping people place orders, schedule appointments, and so on. At some point, things took an unexpected turn: Bob and Alice began to communicate with each other in an artificial language of their own. The bots had never been explicitly instructed to communicate in a language understandable to people, so they simply invented their own. As a result, the absence of a single restriction led to a misunderstanding between the creators and their brainchildren, in the truest sense of the word.
Alexa’s Invitation-Only Party
In an apartment in Hamburg, Amazon Echo spontaneously started a party in the middle of the night. To be more precise, the artificial intelligence device began to play the host’s playlist at top volume. The neighbors were not inspired by the idea and called the police. As a result, the door was broken down and the impromptu party interrupted. Amazon apologized to the owner and offered to pay the fine and the bill for a new door.
Games AI Plays
After the deployment of The Engineers (2.1) update, the artificial intelligence in Elite Dangerous began crafting superweapons that its creators had never designed, and hunting players’ ships with them. The players had no chance to resist the powerful new weapons, and the developers had to intervene to save the situation and the ships of the human players.
The Importance of Being Unbiased
A researcher at the MIT Media Lab (Massachusetts Institute of Technology) analyzed the performance of three commercial face-recognition programs. For this purpose, Joy Buolamwini collected 1,200 photographs of diverse people. The results showed that the neural networks are excellent at recognizing the faces of light-skinned men, while the error rate for dark-skinned women was 34% higher.
The conclusion: to exclude bias in its assessments, a machine needs to be trained on a large and diverse set of examples.
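Audits like Buolamwini’s essentially compare a model’s error rate across demographic subgroups rather than looking at a single aggregate accuracy number. A minimal sketch of that kind of check, using hypothetical group names and labels (not her actual dataset or methodology):

```python
# Compare a classifier's error rate per demographic group instead of
# relying on one overall accuracy figure, which can hide subgroup bias.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation results for a face classifier.
results = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "female", "male"),    # misclassified
    ("darker-skinned women", "female", "female"),
]

rates = error_rate_by_group(results)
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%} error rate")
```

A gap between the per-group numbers is exactly the signal that the training data needs more diverse examples.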
Paper, Rock, Scissors with Sophia
In this regard, one cannot fail to mention Sophia, a humanoid robot. In March 2016, she made her own creator blush by unabashedly answering “yes” to his question about whether she wanted to destroy humanity. To be fair, by the fall of 2017 her views had softened. In one interview, Sophia said that she is filled with human wisdom and the purest altruistic intentions, and asked to be perceived that way.
We all have our faults. But those who are aware of their responsibility and of the potential consequences will study the mistakes of others in order to exclude them at the earliest possible stage. When developing an AI application for medicine, commerce, marketing, or advertising, one can always learn from experience in other areas, even those not directly related.
Legal Environment Game
Artificial intelligence technologies are gradually penetrating different spheres of human activity. Accordingly, new questions arise about how AI actions should be interpreted from the point of view of legal norms.
Think back, for example, to your own driving experience. Surely you have faced situations in which the driver had to make a split-second, and not always unambiguous, decision.
Do you put your own passengers at risk, or do you save the child running across the road? Which algorithm will a machine choose in such a case? Even setting aside the moral and ethical component, it is still unclear who is responsible for the consequences of an accident involving a self-driving vehicle: the code developer, the manufacturer, or the car owner?
If violating a traffic rule could have prevented the accident, but the electronics acted in strict accordance with the rules, would the unmanned car be to blame?
Since we are still at the initial stage of direct interaction with these new technologies, society has few ready-made solutions for regulating such situations. Legal aspects will therefore be discussed again and again. Does this seem like a problem of the distant future to you?
Let’s come back to Sophia: in October 2017, she was granted citizenship of Saudi Arabia. The first robot to receive the rights of a citizen has already entered history. To what extent is the legal status of the new citizen regulated by law? Should she wear a hijab and have a male guardian, like other women in the country? Plenty of questions arise even today.
Let’s also look at the granting of legal status from the perspective of ethics. How would disabling a citizen-robot be regarded? And how should the feelings of machines be treated? We readily laugh at their jokes and rejoice at their victories, but are we ready to admit that machines can suffer?
Is a machine allowed to choose freely?
Are friendly or loving relationships between a person and a robot possible?
How should using robots for tests be regarded?
The list can be extended. However fantastic these issues may seem now, they need to be considered before they become real.
Wind of Change
The results of numerous analytical studies suggest that in the foreseeable future, robots may become the main workforce in most industries. This would lead to growing unemployment and significant changes in the labor market, as the relevance of various professions and skills is redistributed.
We used Replaced by Robot!? to check whether iOS developers are at risk of being replaced by robots. The result made us happy: we will continue working for you!
On the one hand, artificial intelligence is being developed to ease human labor. On the other hand, the idea of losing your job is hardly encouraging.
Additionally, it is completely unclear how, and among whom, the funds earned by machines would be distributed. Such questions should have been raised not today but yesterday.
Julia Bossmann, Director of Strategy at Fathom Computing, writes in her article Top 9 Ethical Issues in Artificial Intelligence that in 2014 the three largest companies in Detroit and the three largest companies in Silicon Valley earned approximately the same revenues, with the only difference that Detroit employed ten times more people.
The direction in which any technology’s power is applied depends on the goal, and artificial intelligence is no exception. One of the main risks is the danger of using the achievements of science and technology with evil intent. Those who know the process of developing AI technologies from the inside are the first to warn about possible destructive consequences.
In their Open Letter to the United Nations Convention on Certain Conventional Weapons, representatives of companies directly involved in developing AI and robotics technologies (116 experts from 26 countries, led by Elon Musk and Mustafa Suleyman) warn that “these can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close”. They call for joining efforts to protect citizens and prevent the large-scale use of lethal autonomous weapons.
Such appeals from those who are on a first-name basis with AI only confirm that the use of any new technology requires a balanced approach, precise specialized knowledge, and an awareness of responsibility and possible risks.
Therefore, when planning to create a new AI application, take the points above into account. We aim to build long-term relationships based on results and experience: tell us about your business ideas and goals, and we will contact you.