HSE University Makes Image-Generating Neural Networks Train More Efficiently

Researchers have made training the StyleGAN2 image-generation neural network more efficient, cutting the number of trainable parameters by four orders of magnitude. This was reported by the press service of the National Research University Higher School of Economics.

Modern neural networks can generate fake images that are almost indistinguishable from real ones. In particular, they can synthesize faces of people who have never existed. One of the most successful architectures for these tasks is the generative adversarial network (GAN), in which one network generates an image while another tries to distinguish it from a real one. Through this iterative contest, the generated picture gradually takes on a form in which the differences become minimal. A significant difficulty with this scheme, however, is the need to collect a large number of high-quality images for training: correct generation of random faces, for example, requires a database of at least 100,000 real photographs. There are ways to partially circumvent this limitation.
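The adversarial setup described above can be sketched with toy models. Here the "generator" is a single affine map from 1-D noise and the "discriminator" is a logistic classifier; both are illustrative stand-ins, not the actual StyleGAN2 architecture, and only show how the two opposing losses are formed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "networks" (hypothetical, for illustration only):
g_w, g_b = rng.normal(), 0.0          # generator parameters
d_w, d_b = rng.normal(), 0.0          # discriminator parameters

def generate(z):
    """Generator: maps 1-D noise to a sample."""
    return g_w * z + g_b

def discriminate(x):
    """Discriminator: estimated probability that x is real."""
    return 1.0 / (1.0 + np.exp(-(d_w * x + d_b)))

real = rng.normal(loc=3.0, size=256)   # "real" data drawn from N(3, 1)
fake = generate(rng.normal(size=256))  # generated samples

# Discriminator objective: classify real as 1 and fake as 0
# (binary cross-entropy over both batches).
d_loss = (-np.mean(np.log(discriminate(real) + 1e-8))
          - np.mean(np.log(1.0 - discriminate(fake) + 1e-8)))

# Generator objective: fool the discriminator into calling fakes real.
g_loss = -np.mean(np.log(discriminate(fake) + 1e-8))
```

In real training these two losses are minimized in alternation with gradient descent, which is the "contest" that drives the generated samples toward the data distribution.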

Experts from the HSE Center for Deep Learning and Bayesian Methods have described a new approach to retraining the StyleGAN2 generative model, a neural network that converts random noise into a realistic picture. By training only an additional domain vector, the researchers reduced the number of trainable parameters (weights) by four orders of magnitude.

The StyleGAN2 architecture contains special transformations (modulations) through which the input random vector controls the semantic features of the output image, such as gender and age. The scientists proposed training an additional vector that defines the domain of the output image through similar modulations.
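The modulation mechanism mentioned above can be sketched for a single convolutional layer. In StyleGAN2, a style vector derived from the latent code scales the layer's weights per input channel, and a "demodulation" step renormalizes each output filter; the shapes and variable names below are illustrative, not the actual StyleGAN2 code.

```python
import numpy as np

rng = np.random.default_rng(1)

out_ch, in_ch, k = 8, 4, 3
weight = rng.normal(size=(out_ch, in_ch, k, k))  # conv weights
style = rng.normal(size=in_ch)                   # per-channel style scales

# Modulation: scale each input channel of the weights by the style.
w_mod = weight * style[None, :, None, None]

# Demodulation: rescale so every output filter has unit L2 norm,
# keeping activation magnitudes stable regardless of the style.
demod = 1.0 / np.sqrt((w_mod ** 2).sum(axis=(1, 2, 3)) + 1e-8)
w_final = w_mod * demod[:, None, None, None]

# Per-output-filter norms after demodulation (all approximately 1).
norms = np.sqrt((w_final ** 2).sum(axis=(1, 2, 3)))
```

Because the style enters only through these per-channel scales, an extra vector applied in the same place can steer the output without touching the frozen weights themselves.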

“If we additionally train only such a domain vector, then the domain of the generated images changes just as well as if we retrained all the parameters of the neural network. This drastically reduces the number of optimized parameters, since the dimension of such a domain vector is only 6,000, which is orders of magnitude fewer than the 30 million weights of our generator,” explained Aibek Alanov, one of the creators of the new algorithm.
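A minimal sketch of this idea, under the assumption (consistent with the description above) that the domain vector multiplies into the same modulation path as the style: the pretrained weights stay frozen, and only the domain vector would receive gradient updates. Initialized at ones, it leaves the generator's behavior unchanged, so fine-tuning starts exactly from the pretrained model.

```python
import numpy as np

rng = np.random.default_rng(2)

in_ch = 4
frozen_weight = rng.normal(size=(8, in_ch, 3, 3))  # pretrained, not updated
style = rng.normal(size=in_ch)                     # comes from the latent
domain = np.ones(in_ch)                            # trainable domain vector

def modulate(weight, scales):
    """Scale each input channel of the weights (illustrative helper)."""
    return weight * scales[None, :, None, None]

# The domain vector multiplies into the same modulation path, so with
# domain == 1 the layer behaves exactly like the pretrained model.
w_base = modulate(frozen_weight, style)
w_domain = modulate(frozen_weight, style * domain)

# Parameter-count comparison using the figures quoted in the article:
reduction = 30_000_000 / 6_000  # full generator weights vs. domain vector
```

Only the 6,000 entries of the domain vector would be optimized during fine-tuning, a roughly 5,000-fold reduction in trainable parameters compared with retraining the full generator.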

The authors hope that their method will significantly speed up the training of generative neural networks and simplify their use.

