Tech News

How will AI protect Russians from fakes?

It is expected that over the next few years, AI-based systems will be integrated into social networks to monitor content in real time. For now, however, neural networks should not be fully trusted to fact-check.

Experts from Roskomnadzor, the Main Radio Frequency Center (GRFC), the MINDSMITH analytical center, and Rostelecom conducted a study, “AI Tools in the Hands of Malefactors: Threat Classification and Countermeasures.” In it they identified 12 key areas for the development of safer Internet technologies.

These include deepfake detection, determining the context of video footage, automated content monitoring and moderation, face recognition, extracting meaning from text, fact-checking support, symbol recognition, metadata extraction and analysis, emotion recognition, decision support during information attacks, and content generation and recommendation.

Combating disinformation has become one of the most pressing issues of our time. According to the analysts, information systems are already approaching real-time fact-checking and will subsequently be integrated into social networks. In the future, for example, such systems will be able to analyze speech and streaming video, verify what is said, and notify users when false news is found. They will combine a complex of neural networks and could be deployed in social networks within three to five years.
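The workflow described above — extract claims from a transcript, verify them, and notify users about false content — can be sketched in simplified form. This is a purely illustrative toy, not a description of any actual system mentioned in the article: the exact-match claim store and the naive sentence-based claim extraction are stand-ins for what would in practice be a knowledge base, retrieval, and trained verification models.

```python
# Toy sketch of a fact-checking pipeline: extract candidate claims
# from a transcript, check them against a trusted claim store, and
# flag contradicted claims for user notification. All data and logic
# here are illustrative stand-ins, not a real moderation system.
import re
from dataclasses import dataclass


@dataclass
class Verdict:
    claim: str
    label: str  # "supported", "contradicted", or "unverified"


# Hypothetical store of pre-verified statements (a real system would
# use retrieval plus a verification model, not exact-match lookup).
TRUSTED = {
    "water boils at 100 c at sea level": "supported",
    "the moon is made of cheese": "contradicted",
}


def extract_claims(transcript: str) -> list[str]:
    # Naive claim extraction: split the transcript into sentences.
    sentences = re.split(r"[.!?]+\s*", transcript)
    return [s.strip() for s in sentences if s.strip()]


def verify(claim: str) -> Verdict:
    # Look the normalized claim up in the trusted store.
    return Verdict(claim, TRUSTED.get(claim.lower(), "unverified"))


def moderate(transcript: str) -> list[Verdict]:
    # Notify only on contradicted claims; a real system would also
    # score confidence and route borderline cases to human analysts.
    return [v for v in map(verify, extract_claims(transcript))
            if v.label == "contradicted"]


if __name__ == "__main__":
    text = "The moon is made of cheese. Water boils at 100 C at sea level."
    for v in moderate(text):
        print(f"ALERT: {v.claim!r} -> {v.label}")
```

The point of the sketch is the pipeline shape — extraction, verification, and notification as separate stages — which matches the article's note that human analysts still review what the automated stages surface.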

“Fact-checking became especially relevant during the coronavirus pandemic and remains important as we fight the flood of fakes and misinformation raining down on our users. This area is already developing actively: for example, in the second half of 2023 Roskomnadzor is expected to launch the Vepr system, which will identify potential threats on the Internet. Experts will analyze the incoming information and predict the subsequent spread of destructive materials. In addition, another automated system, Oculus, is already in operation. It recognizes images and symbols, illegal scenes and actions, and analyzes text in photos and videos. The system makes it possible to identify extremist topics, calls for mass illegal events or suicide, pro-drug content, and LGBT propaganda, spending only three seconds per image, which greatly speeds up the detection of threats on the network. The creation of such systems is a full-fledged response to provocations and anti-Russian actions by foreign resources, which daily multiply fakes and materials containing all types of prohibited information online. The prospect of developing such technologies and bringing the capabilities of artificial intelligence into this work gives hope that the problem of fakes and illegal content will eventually be solved,” said State Duma deputy Anton Nemkin.

TelecomDaily CEO Denis Kuskov also emphasized the importance of developing legal regulations for AI-based systems that will filter content on the network. He also noted that the current level of AI development in Russia will allow full use of such systems to begin within the next five years. “We already have many similar systems — for example, at Sberbank, Yandex, and Roskomnadzor. However, the amount of audiovisual content on the Internet is growing very rapidly. There is already a lot of false information out there, and given the existing technological capabilities for faking a voice or an image, this will create even more problems in terms of legislation. I believe it is imperative to work in this direction, and integrating AI technologies into social networks should be one of the steps. It certainly won’t lead to collapse, because approaches will be developed that allow content to be moderated in a matter of minutes. In any case, at the initial stage — a year or more — it is impossible to rely on an exclusively automatic system; the work of human analysts will also be necessary,” he concluded.
