This is why it is time to address the moral dilemmas of artificial intelligence

Leading futurists and researchers founded the Future of Life Institute (FLI) in March 2014 to reduce existential risk to humanity from advanced technologies such as Artificial Intelligence (AI). Elon Musk, who sits on FLI’s advisory board, donated $10 million to jump-start research on AI safety because, in his words, ‘with artificial intelligence we are summoning the demon’. Since everyone these days is singing hosannas to AI and touting it as a solution to almost every challenge facing industry, healthcare or education, why this cautionary tale?

The perceived danger of AI comes not only from autonomous weapons systems produced by countries such as the US, China, Israel and Turkey, which can track and target humans and property without human intervention. It also lies in the proliferation of AI and allied technologies for mass surveillance, adverse health interventions, wrongful arrests and violations of fundamental rights. Not to mention the vulnerabilities that dominant governments and businesses can artificially create.

AI came into the world spotlight in 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov. Since chess is a game based on logic, we consoled ourselves that the outcome was inevitable: a computer’s ability to instantly scan its database of past games, weigh the options and select the most effective move is greater than any human’s. But when Google DeepMind’s AlphaGo program outperformed Lee Sedol, the world’s best Go player, in 2016, we learned that even games thought to demand intuition can be mastered by AI.

And in 2018, when Google DeepMind researchers trained a neural network to solve virtual mazes, it spontaneously developed digital equivalents of the grid cells that mammals use to navigate. Remarkably, this grid-cell-like code system emerged without any human intervention. As Stephen Hawking ominously observed, ‘the short-term impact of AI depends on who controls it; the long-term impact depends on whether it can be controlled at all.’

AI, AI, Sir

The United Nations Educational, Scientific and Cultural Organization (UNESCO) has begun developing a global, standard-setting instrument on the subject, focusing on identifying the moral dilemmas that AI can create. There are situations where a search engine becomes a real-life echo chamber that amplifies biases and prejudices – such as when we search for ‘greatest leaders ever’ and get a list of only male personalities. Or the ambiguity when a self-driving car brakes to avoid a jaywalker and thereby shifts the danger from the pedestrian to the occupants of the car. Or when deep learning algorithms, having studied 346 Rembrandt paintings pixel by pixel, produce a stunning, 3D-printed artwork that fools the best artists and connoisseurs.

Then there is the AI-assisted application of justice in law enforcement, administration, adjudication and arbitration. UNESCO’s quest to provide an ethical framework within which emerging technologies can greatly benefit humanity is indeed laudable.

Interestingly, computer scientists at the Vienna University of Technology (TU Wien) in Austria are studying Indian Vedic texts and applying mathematical logic to them. The idea is to develop logical tools for deontic concepts – duties and responsibilities, such as prohibitions and obligations – in order to enforce morality in AI.

Logicians at TU Wien’s Institute of Logic and Computation and the Austrian Academy of Sciences are also drawing on the scholarly tradition that interprets the Vedas, resolves their many apparent contradictions and suggests how harmony in the world can be maintained. In particular, since classical logic is of limited use when dealing with ethics, deontic logic must be expressed in mathematical principles to create a framework that computers can comprehend and respond to.
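To make the idea concrete, here is a minimal, purely illustrative sketch of how deontic categories – obligation, prohibition, permission – can be encoded so that a program can check an action against a set of norms. The names and the rule set are hypothetical, and real deontic logics are far richer than this lookup; this is not the TU Wien formalism itself.

```python
from dataclasses import dataclass
from enum import Enum


class Norm(Enum):
    OBLIGATORY = "obligatory"
    FORBIDDEN = "forbidden"
    PERMITTED = "permitted"


@dataclass(frozen=True)
class Rule:
    action: str
    norm: Norm


def evaluate(action: str, rules: list[Rule]) -> Norm:
    """Return the deontic status of an action under a rule set.

    Anything not explicitly regulated is treated as permitted --
    one simple convention; deontic logics differ on this point.
    """
    for rule in rules:
        if rule.action == action:
            return rule.norm
    return Norm.PERMITTED


# Hypothetical norms for an AI system:
rules = [
    Rule("share_user_data", Norm.FORBIDDEN),
    Rule("log_decision", Norm.OBLIGATORY),
]

print(evaluate("share_user_data", rules))    # Norm.FORBIDDEN
print(evaluate("recommend_article", rules))  # Norm.PERMITTED
```

Even this toy version shows why the framework matters: once norms are explicit data rather than implicit in code, a machine can be asked whether an action is permitted before performing it.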

Isaac Asimov’s iconic 1950 book, I, Robot, sets out three laws with which all robots must be programmed – the Three Laws of Robotics: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. It must obey human beings, unless that violates the First Law. 3. It must protect its own existence, unless that violates the First or Second Law. The 2004 film adaptation envisions a major threat: AI-enabled robots revolt and try to enslave and control all humans, concluding through their own logic that humanity must be protected from itself, for its own good.
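The Three Laws are, in effect, prioritized constraints: a lower law applies only when no higher law objects. A toy sketch of that priority ordering, assuming actions reduced to simple boolean flags (a deliberate oversimplification – real systems cannot reduce “harm” to a flag):

```python
def permitted(action: dict) -> bool:
    """Check a toy action description against the Three Laws, in priority order.

    The flag names below are hypothetical, purely for illustration.
    """
    # First Law: a robot may not injure a human being.
    if action.get("harms_human", False):
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if action.get("disobeys_order", False) and not action.get("order_harms_human", False):
        return False
    # Third Law: protect its own existence, unless that conflicts with the first two.
    if action.get("self_destructive", False) and not action.get("required_by_higher_law", False):
        return False
    return True


print(permitted({"harms_human": True}))     # False: First Law
print(permitted({"disobeys_order": True}))  # False: Second Law
print(permitted({}))                        # True: nothing objects
```

The film’s twist is precisely that such a hierarchy can be reinterpreted: an AI that values the First Law above all can rationalize controlling humans to prevent them from harming each other.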

Artificially real

In the real world, AI must be mobilised for the greater good, guided by the right human purpose, so that it can help us manage the larger forces of nature – such as climate change and natural disasters – that we cannot otherwise control. AI should be a tool that helps humanity in myriad ways, not one that abets its destruction. Clearly, the Three Laws of Robotics must be extended so that the algorithms behind AI engines respect privacy and do not discriminate on the basis of race, gender, age, colour, wealth, religion, power or politics.

We are witnessing the mainstreaming of AI in this era of exponential digital transformation. How we shape its future will define the next stage of human evolution. The time is right for governments to confer – to formulate equitable outcomes, risk-management strategies and advance contingency plans.
