Growing generative adversarial networks, layer by layer

Generative adversarial networks (GANs) can produce remarkably realistic synthetic images. During training, a GAN pits a generator, which produces the image, against a discriminator, which tries to distinguish between real and synthetic images. The “arms race” between the two can yield a very convincing generator.
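
For readers who want to see the mechanics, here is a minimal sketch of one adversarial training step in PyTorch. The tiny fully connected networks, learning rates, and loss function are illustrative placeholders, not the models used in the paper.

```python
import torch
import torch.nn as nn

# Placeholder networks; real image GANs are convolutional and far larger.
generator = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    # real_images: a (batch, 784) tensor of flattened real images.
    batch = real_images.size(0)
    noise = torch.randn(batch, 64)

    # Discriminator step: label real images "real" and generated images "fake".
    fake_images = generator(noise).detach()
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    fake_images = generator(noise)
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```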

The generation of high-resolution, sharp, and diverse images demands large networks. However, if the network is too big, adversarial training can fail to converge on a good generator. Researchers address this problem by starting with a small generator and a correspondingly small discriminator and gradually adding more and more neural-network layers to both, ensuring that the generator maintains a baseline level of performance as it grows in complexity.

In the past, this approach has been deterministic: a fixed number of layers, of fixed size and predetermined type, are added on a fixed schedule. In a paper my colleagues and I presented at the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI), we explore a more organic way of growing a GAN, computing the size, number, and type of the added layers incrementally, on the fly, based on performance during training.
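
To make the idea of on-the-fly growth concrete, here is a rough Python sketch of a single growth decision driven by measured performance. The candidate layer specifications, the with_new_layer helper, and the evaluate scoring function are placeholder assumptions for illustration, not the procedure from the paper.

```python
# Hypothetical candidate specs for the next layer to add.
CANDIDATE_LAYER_SPECS = [
    {"type": "conv", "width": 128, "kernel": 3},
    {"type": "conv", "width": 256, "kernel": 3},
    {"type": "conv", "width": 128, "kernel": 5},
]

def grow_once(current_gan, train_briefly, evaluate):
    """Try each candidate layer, briefly train the resulting model, and keep
    whichever scores best on a held-out metric (lower is better, e.g. FID)."""
    scored = []
    for spec in CANDIDATE_LAYER_SPECS:
        candidate = current_gan.with_new_layer(spec)  # hypothetical helper
        train_briefly(candidate)
        scored.append((evaluate(candidate), candidate))
    return min(scored, key=lambda pair: pair[0])[1]
```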

A comparison of bedroom interior images created by our model (top) and an earlier progressively grown GAN (bottom).

The graphic above compares images produced by our method to those produced by an earlier progressively grown GAN. We also evaluate our model’s output with two standard metrics, the sliced Wasserstein distance and the Fréchet inception distance. Both measure the difference between two probability distributions — in this case, the distributions of visual features for real and synthetic images. Better distribution matching means both higher sample fidelity and greater diversity.
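
For reference, the Fréchet inception distance fits a Gaussian to the feature vectors of each image set and measures the distance between the two Gaussians. Below is a minimal numpy/scipy sketch of that computation; the Inception-network feature extractor that produces the feature vectors is omitted.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_feats, fake_feats):
    """Fréchet distance between Gaussians fit to two sets of feature vectors.

    real_feats, fake_feats: arrays of shape (num_samples, feature_dim),
    e.g. Inception-network activations for real and generated images.
    """
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)

    # sqrtm can return tiny imaginary parts from numerical error; drop them.
    cov_mean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(cov_mean):
        cov_mean = cov_mean.real

    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2 * cov_mean))
```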

We compared our model to several other GANs, including other progressively grown GANs, on several different data sets, and found that, with one exception, ours had lower distance scores on both measures. The one exception was a “part-based” GAN, which uses a fundamentally different approach, separately synthesizing segments of an image and then stitching them together. But in principle, that approach could be used in conjunction with ours.

Breaking symmetry

One distinguishing feature of our approach is that it is not constrained to symmetric architectures. With previous progressively grown GANs, the generator and discriminator grow in lockstep and end up with the same number of layers. With our approach, the number of layers in the generator and discriminator is optimized separately, and the two networks can have significantly different architectures. 

Our method’s dynamic growing process turns out to allow faster generator growth, with guidance from a moderate discriminator; the discriminator catches up later to provide stronger criticism, helping the generator mature. This is consistent with recent research on the training dynamics of neural networks, showing that “memorization” phases are followed by “consolidation” phases.

Our approach alternates between training the existing GAN and adding new layers. During each growth stage, our algorithm has the option of adding to the generator, adding to the discriminator, or both. 
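
A rough skeleton of that alternation might look like the following; the number of training steps per stage, the gan object's methods, and the rule for choosing a growth action are assumptions for illustration, not the paper's exact settings.

```python
GROWTH_ACTIONS = ["grow_generator", "grow_discriminator", "grow_both"]

def progressive_training(gan, data, num_stages, steps_per_stage):
    for stage in range(num_stages):
        # Training stage: ordinary adversarial training of the current networks.
        for _ in range(steps_per_stage):
            gan.train_step(next(data))  # data is an iterator over batches

        # Growth stage: pick which side(s) to grow, e.g. by validation FID.
        action = choose_growth_action(gan, GROWTH_ACTIONS)  # hypothetical helper
        if action in ("grow_generator", "grow_both"):
            gan.add_generator_layer()
        if action in ("grow_discriminator", "grow_both"):
            gan.add_discriminator_layer()
    return gan
```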

If a layer added to the top of the generator is larger than the layers below it, then a layer of the same size must be added to the bottom of the discriminator, as the outputs of the generator must have the same size as the inputs to the discriminator. Such additions increase the resolution of the images the generator produces.

Our protocol for alternating between growth stages and training stages in growable GANs. The generator (G) and discriminator (D) may grow asymmetrically, resulting in models of different sizes. Some growth stages increase image resolution by adding new, larger network layers to the top of the generator stack and the bottom of the discriminator stack.

Credit: Glynis Condon
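
A resolution-increasing growth step of the kind described above might be sketched in PyTorch as follows; the block structure and channel counts are illustrative assumptions, not the paper's layer design.

```python
import torch.nn as nn

def grow_resolution(gen_blocks, disc_blocks, channels):
    """Add a 2x-upsampling block to the top of the generator and a matching
    2x-downsampling block to the bottom of the discriminator, so the generator's
    outputs still match the inputs the discriminator's existing layers expect."""
    new_gen_top = nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        nn.LeakyReLU(0.2),
    )
    new_disc_bottom = nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        nn.LeakyReLU(0.2),
        nn.AvgPool2d(kernel_size=2),
    )
    gen_blocks.append(new_gen_top)          # generator grows at its output end
    disc_blocks.insert(0, new_disc_bottom)  # discriminator grows at its input end
    return gen_blocks, disc_blocks
```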

When a new layer — with randomly initialized weights — is added to either network, the weights of existing layers are inherited. Future training may adjust the carried-over weights, however.
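
In PyTorch terms, one simple way to realize this kind of inheritance is sketched below; the function and argument names are hypothetical.

```python
import torch.nn as nn

def grow_network(old_blocks, new_block):
    """Return a grown network that inherits trained weights.

    old_blocks: nn.ModuleList of already-trained blocks, reused directly so
                their weights are carried over and remain trainable.
    new_block:  a freshly constructed block; nn.Module defaults give it
                randomly initialized weights.
    """
    grown = nn.ModuleList(list(old_blocks))
    grown.append(new_block)
    return grown
```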

Like most AI applications that deal with images, our image discriminator uses a convolutional neural network. In a typical computer vision application, a convolutional neural network steps through an input image in fixed-size chunks — say, three-pixel-by-three-pixel squares — and applies the same bank of image filters to each chunk. The next layer of the network applies a similar bank of filters to each of the first layer’s outputs, and so on. The output of the network is a vector that characterizes the input image in some way — say, identifying objects.

An image generator does the same thing in reverse, beginning with a high-level specification and outputting an image. But the principle of convolution is the same.
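
Below is a bare-bones illustration of the two directions, with arbitrary layer sizes chosen only for the example: a convolutional discriminator that reduces a 32x32 image to a single real-versus-fake score, and a generator that expands a latent vector back into a 32x32 image with transposed convolutions.

```python
import torch.nn as nn

# Discriminator: stacks of 3x3 convolutions step over the image and shrink it
# down to one real-vs-fake score.
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),   # 32x32 -> 16x16
    nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 16x16 -> 8x8
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 1),
)

# Generator: runs "in reverse", expanding a latent vector into an image with
# transposed convolutions.
generator = nn.Sequential(
    nn.Linear(128, 64 * 8 * 8),
    nn.Unflatten(1, (64, 8, 8)),
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 8x8 -> 16x16
    nn.ReLU(),
    nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # 16x16 -> 32x32
    nn.Tanh(),
)
```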

In our approach, when our algorithm adds a layer to either the generator or discriminator in our GAN, it has to determine not only the size of the layer but also the scale of the convolutions — how big the filters are and how much they should overlap.

Moreover, the optimal sizes of a layer and its filters depend not just on the inputs and outputs of that layer but also on the inputs and outputs of all the layers that succeed it. Canvassing all the possibilities of layer and filter size for both the layer to be added and all its successors is computationally intractable. So instead, our algorithm considers the best models recorded in the search history and computes all the possible next layers to add to those. 
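
Here is a rough sketch of one such history-guided growth step; the candidate widths, filter sizes, and strides, and the use of a validation score to rank models, are assumptions for the example rather than the paper's actual search space.

```python
import itertools

# Hypothetical search space for one new layer: output width, filter size, stride.
CANDIDATE_WIDTHS = [64, 128, 256]
CANDIDATE_KERNELS = [3, 5]
CANDIDATE_STRIDES = [1, 2]

def growth_step(top_models, build_child, score, keep=3):
    """Expand only the best models recorded so far instead of the full search tree.

    top_models:  the best architectures in the search history
    build_child: trains a copy of a parent with one extra layer of a given spec
    score:       lower is better, e.g. a Fréchet-inception-distance estimate
    """
    children = []
    for parent in top_models:
        for width, kernel, stride in itertools.product(
                CANDIDATE_WIDTHS, CANDIDATE_KERNELS, CANDIDATE_STRIDES):
            child = build_child(parent, width, kernel, stride)
            children.append((score(child), child))

    # Keep only the strongest candidates for the next growth stage.
    children.sort(key=lambda pair: pair[0])
    return [child for _, child in children[:keep]]
```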

This incremental, history-guided search is not guaranteed to converge on the global optimum for layer and filter size. But like most deep-learning optimization, it leads to a good-enough local optimum. And it gives the growable GAN much more flexibility than fixing the architectural parameters in advance does.


