The great dangers associated with artificial intelligence

Every new technology offers great benefits, but it can also be misused. Today, artificial intelligence is used for good: improving the quality of medical diagnostics, searching for new ways to treat cancer, and making cars safer.

Unfortunately, as the capabilities of AI expand, it may become dangerous or be used with malicious intent.

A number of prominent figures, including the legendary physicist Stephen Hawking and the innovator Elon Musk, believe that artificial intelligence can be dangerous. Microsoft founder Bill Gates also believes there are reasons to be cautious about AI, though it can do a great deal of good if we handle it properly.

The Risks of Artificial Intelligence

Because the technologies we attribute to artificial intelligence and related fields are developing at a rapid pace, many people are already thinking about the potential dangers. That is why it is so important to discuss how to protect people from the actions of artificial intelligence and how to minimize its destructive potential.

Although most modern systems are designed to benefit mankind, experts believe that any powerful tool can be used to cause harm if it falls into the wrong hands.

Today, AI technology is already at a very high level. There are tools for face detection, natural language processing, and online ordering.

Researchers in the field are now working toward artificial general intelligence: systems that can perform any task a person can. In all likelihood, such systems will eventually surpass each of us.

In fact, the pace of AI development is truly remarkable. Many devices that rely on artificial intelligence already make our daily lives more comfortable and efficient.

Although a superintelligent machine has not yet been built, there are a number of legal, political, financial, and regulatory issues that need to be resolved so that we are ready for the safe operation of such a device.

Even without such a machine, artificial intelligence already carries certain risks.

Autonomous weapons

Artificial intelligence can be programmed to do something dangerous, such as controlling autonomous weapons designed to kill. This is one of the real risks associated with AI.

Experts fear that the nuclear arms race could eventually give way to a global race for autonomous weapons.

A more immediate threat is that such weapons could fall into the hands of governments that place little value on human life.

Once deployed, such weapons would be extremely difficult to shut down or keep under control.

Manipulation of public opinion

Thanks to automated algorithms, social networks are very effective at targeted marketing. They know who you are and what you like, and they are becoming very good at suggesting what you should think.

The investigation into Cambridge Analytica is still ongoing. The company is accused of using data from tens of millions of Facebook users to profile voters and influence the outcome of the 2016 US presidential election and the Brexit referendum in Britain. If the accusations are confirmed, it will demonstrate the vast possibilities of artificial intelligence for manipulating public opinion.

By spreading propaganda targeted at individuals identified through algorithms and personal data, artificial intelligence can disseminate whatever information is required, in the format each person finds most convincing, whether it is true or not.

Intrusion into private life

It is now possible to track and analyze a person's every step on the internet.

Cameras are almost everywhere, and face recognition mechanisms make it possible to identify you. These mechanisms underpin the social credit system being developed in China, which assigns each citizen a certain number of points based on their behavior.

Among the violations that lower the score are crossing the road at a red light, smoking in prohibited areas, spending too much time playing video games, and so on.

Analysts fear that Big Brother really is watching us and making decisions based on that data. This is not just an intrusion into private life; it can quickly lead to social oppression.

Mismatch between our goals and the machine's objectives

People value artificial intelligence machines for their efficiency in solving specific tasks.

But if we do not set clear goals, for example for a self-driving car, the result can be dangerous, because the car may pursue objectives different from ours.

For example, the command "take me to the airport as soon as possible" can lead to a serious crash because of excessive speed. Unless we program in advance that road rules must be respected in all cases, the vehicle may execute the request literally and get us to the airport as quickly as possible, leaving casualties in its wake. It may also fail to choose correctly whom to save in an emergency: a child crossing the road or the occupants of the car.
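The underlying problem is objective specification: the machine optimizes exactly what it was told to optimize and nothing else. The toy Python sketch below (purely illustrative, with made-up route plans and penalty values, not any real autopilot software) shows how a planner that minimizes travel time alone picks a reckless plan, while adding a heavy penalty for rule violations shifts it to the fastest legal one.

```python
# Hypothetical route plans: (name, travel_time_minutes, traffic_rule_violations)
plans = [
    ("reckless",   12, 7),  # runs red lights, ignores speed limits
    ("fast-legal", 18, 0),  # fastest plan that still respects road rules
    ("scenic",     35, 0),
]

def naive_objective(plan):
    """'As soon as possible', taken literally: only travel time matters."""
    _, minutes, _ = plan
    return minutes

def constrained_objective(plan, violation_penalty=1000):
    """Travel time plus a large penalty for every rule violation."""
    _, minutes, violations = plan
    return minutes + violation_penalty * violations

print(min(plans, key=naive_objective)[0])        # -> "reckless"
print(min(plans, key=constrained_objective)[0])  # -> "fast-legal"
```

The point of the sketch is only that the "safe" behavior appears because a constraint was added to the objective, not because the machine shares our intentions.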

Discrimination

Since machines can collect, track, and analyze data about us, it is quite possible that they will use the same data against us.

It is easy to imagine an insurance company refusing you coverage based on data about how many times a camera caught you talking on the phone while driving. Or a potential employer might refuse to hire you based on your "social rating".
