Artificial intelligence ensures cybersecurity

Every year, billions of records are exposed in data leaks around the world. These leaks cause enormous financial losses and reputational damage to the companies that let them happen. In most cases they are the work of criminals, and they represent just one type of cyber threat.

Other well-known cyber threats include malware (computer viruses), DDoS attacks that aim to disrupt critical components of a service’s infrastructure, and phishing attacks, which trick users into revealing confidential information, for example through fraudulent emails sent to victims.

Moreover, attacks in cyberspace become more complex and sophisticated every year. This is driven not only by the evolution of familiar attack methods but also by the introduction of new technologies into many areas of life. As a result, security teams sometimes cannot keep up with every threat.

How AI Tracks Cyber Threats

Artificial intelligence has become one of the most important technologies of our time; its emergence has already changed, and will continue to change, many areas of human activity. Machine learning models can solve many non-trivial problems that cannot be solved with a sequence of strictly defined instructions. Because a model observes a large amount of data, so-called “precedents”, during training, it is able to generalize and identify complex relationships among the features of the input data.

Cybersecurity specialists therefore train several models on large datasets, each solving its own narrow task: filtering spam emails, analyzing network traffic, or modeling user behavior. All of this rests on the complex relationships each model has learned from historical data; a minimal sketch of the spam-filtering case follows.
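To make the idea concrete, here is a minimal sketch of one such narrow-task model: a Naive Bayes spam classifier built with scikit-learn. The training messages and labels are invented for illustration; a production system would learn from millions of labeled emails.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training set (invented for illustration): 1 = spam, 0 = legitimate
emails = [
    "win a free prize now, click here",
    "urgent: verify your account password",
    "meeting moved to 3pm, see agenda attached",
    "quarterly report draft for your review",
]
labels = [1, 1, 0, 0]

# Bag-of-words features plus Naive Bayes: a classic minimal spam filter
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["click here to claim your free prize"]))  # likely [1]
print(model.predict(["draft agenda for tomorrow's meeting"]))  # likely [0]
```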

Cybersecurity specialists actively deploy artificial intelligence to automatically detect major threats such as spoofing and phishing. With machine learning, it has been possible to significantly reduce the number of false positives in such systems while maintaining a very high detection rate. This is especially valuable as data volumes grow and attack schemes become more complex.
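One standard way to balance false positives against detection is to tune the decision threshold on validation data. The sketch below uses invented scores and labels; it selects, among thresholds that keep precision at or above 0.9 (few false positives), the one with the highest recall (detection rate).

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Invented detector scores and ground-truth labels for validation traffic
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.10, 0.40, 0.35, 0.80, 0.65, 0.90, 0.20, 0.75, 0.05, 0.60])

precision, recall, thresholds = precision_recall_curve(y_true, scores)

# Keep only operating points with precision >= 0.9, then maximize recall
ok = precision[:-1] >= 0.9
best = np.argmax(recall[:-1][ok])
print("chosen threshold:", thresholds[ok][best])
```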

Another example of AI in cybersecurity is behavioral analysis: examining various signals about a user or employee, such as geographic location, the time at which an action is performed, and device identifiers, in order to detect anomalies in behavior and block suspicious actions.
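A minimal sketch of this idea, with invented feature values: train an Isolation Forest on a user’s historical logins (hour of day and an approximate location coordinate), then flag a login that deviates sharply from that history.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Invented history: logins around midday from roughly the same latitude
hours = rng.normal(loc=13, scale=2, size=(200, 1))
latitudes = rng.normal(loc=55.7, scale=0.05, size=(200, 1))
history = np.hstack([hours, latitudes])

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

usual_login = [[14.0, 55.71]]  # typical hour and location
odd_login = [[3.0, 48.85]]     # 3 a.m. from a distant location

print(detector.predict(usual_login))  # 1 = looks normal
print(detector.predict(odd_login))    # -1 = anomaly: block or review
```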

An important advantage of AI in cybersecurity is its ability to predict attacks before they fully unfold, giving defenders time to strengthen protections. Another advantage is the reduced human factor: artificial intelligence is not subject to psychological manipulation or fatigue.

In physical security, too, AI-based video surveillance is being deployed quite widely. Such systems can detect many objects in video in real time, recognize faces, and alert security services to illegal actions.

Prospects for the use of AI in security

The use of AI in security has a downside: the technology is wielded by attackers as well as by security professionals. There are already many cases of hackers using artificial intelligence to control large-scale botnets for bulk spam, DDoS attacks, and other well-known cyberattacks. In these cases, artificial intelligence increases the speed of an attack and makes it more complex and adaptive to possible defenses.

In addition, neural networks can themselves be attacked. An entire area of research is devoted to the robustness and security of neural networks. Back in 2013, it was shown that a small malicious change to the input data can make an AI model produce a completely incorrect result. Such slightly modified inputs, on which neural networks make mistakes, are called “adversarial examples”. They can be crafted for almost any data that neural networks work with: images, text, audio. Cybersecurity examples are easy to find: an attacker can target an email spam detector or a malware classifier. Since such systems usually contain AI models, deceiving them means a spam email or a virus slips through.
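As a concrete toy example in the spirit of the gradient-sign attack (FGSM), consider the invented linear “detector” below. For a linear model, the gradient of the score with respect to the input is proportional to the weights, so shifting the input against the sign of the weights lowers the score fastest. The perturbation is exaggerated here for a three-feature toy; on real high-dimensional inputs it can be nearly imperceptible.

```python
import numpy as np

# Invented toy detector: logistic regression over 3 features;
# a score near 1 means "malicious"
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def score(x):
    return 1 / (1 + np.exp(-(w @ x + b)))

x = np.array([0.5, -0.4, 0.3])  # a malicious sample, score ~0.86
print("clean score:", score(x))

# FGSM-style step: move against sign(w), the direction of steepest descent
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
print("adversarial score:", score(x_adv))  # ~0.35: slips past the detector
```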

As research into AI security has advanced, it has been shown that an attacker does not always need full knowledge of the model being attacked. Black-box attacks have been proposed that require only observing the neural network’s outputs, or approximate knowledge of the data on which the model was trained. Adversarial attacks are thus realistic even when the attacker has no direct access to the model; a sketch of the idea follows.
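Below is a minimal sketch of a score-based black-box attack using plain random search; the invented detector stands in for a remote API. The attacker never sees the weights, only queries the score and keeps perturbations that lower it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a remote detector the attacker can only query (black box)
def query(x):
    w, b = np.array([1.5, -2.0, 0.5]), 0.1  # hidden from the attacker
    return 1 / (1 + np.exp(-(w @ x + b)))

x = np.array([0.5, -0.4, 0.3])  # malicious sample, initially flagged
best = x.copy()

# Random search: propose small perturbations, keep those that lower the score
for _ in range(300):
    candidate = best + rng.normal(scale=0.05, size=3)
    if query(candidate) < query(best):
        best = candidate

print("score before:", query(x), "after:", query(best))
```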

Other attacks on AI systems have also been discovered, most notably poisoning attacks. Their essence is a slight, often humanly imperceptible, modification of the training data that instills properties the attacker wants in the trained model, typically making it misbehave on a certain subset of inputs. Because AI is trained on huge amounts of data that cannot be verified manually, much of it from untrusted sources, this type of attack is also a very real threat in cybersecurity.
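The sketch below shows the simplest form of poisoning, label flipping, on synthetic data: silently relabeling 20% of the malicious training samples as benign typically lowers the trained model’s detection rate. Real poisoning attacks are subtler and often target specific inputs.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic "threat detection" data: class 1 = malicious, class 0 = benign
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poisoning: silently relabel 20% of malicious training samples as benign
rng = np.random.default_rng(0)
malicious_idx = np.where(y_tr == 1)[0]
flip = rng.choice(malicious_idx, size=len(malicious_idx) // 5, replace=False)
poisoned_y = y_tr.copy()
poisoned_y[flip] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, poisoned_y)

# Detection rate (recall on the malicious class) typically drops
print("clean detection rate:   ", recall_score(y_te, clean_model.predict(X_te)))
print("poisoned detection rate:", recall_score(y_te, poisoned_model.predict(X_te)))
```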

To date, there are no reliable, universal methods of protection against attacks on artificial intelligence systems. So while the use of AI in cybersecurity brings great benefits, it is also fraught with new threats.

AI rivalry

None of the above gives a definitive answer to whether artificial intelligence will help defeat cybercrime. Everything will depend on whether cybersecurity specialists can build better AI than the attackers’ AI. And since, as noted, artificial intelligence is itself vulnerable to certain types of attack, solving the problem of building robust and secure AI is critically important.

Moreover, AI, as a new technology, has itself given rise to new cyber threats. For example, neural networks have made it possible to synthesize high-quality images, video, audio, and other content designed to mislead a person or a recognition system. This technique is called a deepfake, and it has already been used successfully for fraud and other illegal activities. In one recorded case, a company manager received a call from someone speaking in the voice of the company’s CEO who asked for a transfer of 220,000 euros; the money ended up with the scammer. The task of protecting against threats is therefore not getting any easier, because new technologies keep producing new ones.

Evgeny Ilyushin, teacher on the master’s program “Artificial Intelligence in Cybersecurity”, developed with the support of the non-profit foundation for the development of science and education “Intellect”, and an employee of the Department of Information Security at the Faculty of Computational Mathematics and Cybernetics (VMK) of Lomonosov Moscow State University

Published on 14.02.2023
