GANs and Snoozes: On Amazing Parallels between GANs and Brains

Hooman Shayani
4 min readFeb 26, 2018

Lessons we can learn from symptoms of sleep deprivation and unbalanced training of a Generative Adversarial Network

I am always amazed, and at the same time inspired, by analogies between technological systems and their natural counterparts, to the point that I start wondering whether it is only my imagination or whether there really are that many parallels between the two. The similarities between brains and artificial neural networks extend even to the symptoms they show when something goes pathologically wrong. In this case, we can learn a lot from the similarities between sleep-deprived human brains and poorly trained GANs (Generative Adversarial Networks).

Apart from artistic applications that generate new works of art from a dataset of examples, GANs can play an important role in artificial intelligence because they can use unlabelled data alongside labelled data in a semi-supervised manner. For example, a GAN can first learn from a large set of unlabelled pictures (which are pretty cheap these days, compared to labelled ones) what those pictures generally look like and what their usual features are (horizontal, vertical, and diagonal edges, and so on), and then learn to distinguish pictures of particular objects from only a small dataset of labelled pictures of those objects.
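To make this a bit more concrete, here is a minimal sketch (my own toy example in PyTorch, not the method of any particular paper) of one common recipe from the semi-supervised GAN literature: give the discriminator K class outputs plus one extra “fake” output, so the same network can learn from unlabelled real-vs-fake examples and from the small labelled set. The layer sizes and class count below are made up for illustration.

```python
import torch
import torch.nn as nn

# A toy discriminator for semi-supervised learning on flattened 28x28 images.
# Instead of a single real/fake output, it has K class outputs plus one extra
# "fake" class, so it can be trained on unlabelled images (real vs. fake)
# and on a small labelled set (which of the K classes) at the same time.
class SemiSupervisedDiscriminator(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 128), nn.LeakyReLU(0.2),
        )
        # K real classes + 1 "fake" class
        self.classifier = nn.Linear(128, num_classes + 1)

    def forward(self, x):
        return self.classifier(self.features(x))

logits = SemiSupervisedDiscriminator()(torch.rand(8, 1, 28, 28))  # shape: (8, 11)
```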

This is due to the ingenious structure of GANs, which consists of a generator network and a discriminator network. The generator is used to generate new data similar to what is already available, and the discriminator to distinguish between real data and the data forged by the generator. This creates an arms race between the two networks, to the point that each becomes pretty good at its own craft. Up to this point, everything has been done without the GAN looking at a single labelled example.

Yet both networks have managed to build up representations of features and concepts purely from the locality of phenomena in time and space. Spatial and temporal locality means that certain things usually happen together, close to each other (in space, in time, or both). These patterns are what allow our brains, and neural networks such as GANs, to predict what might happen next, or what the other half of that car we can’t quite see would look like. Both neural networks and our brains capture these patterns at different levels of abstraction in different layers of their networks; recently, deep neural networks have been able to capture much more abstract concepts, such as the “catness” or “dogness” of a patch of an image. What happens in the unsupervised training of a GAN is that some of these abstract concepts form in the “head” of the network without any labels being assigned to them. Then, when labelled data becomes available, a few data points (labelled pictures) are sufficient to label these consolidated concepts and their combinations.
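As a rough illustration of this two-network structure, here is a toy PyTorch sketch (sizes and layers invented purely for illustration): a small generator that turns random noise into flattened 28x28 “images”, and a discriminator that scores how real an image looks.

```python
import torch
import torch.nn as nn

# The generator maps random noise vectors to fake "images"
# (here flattened 28x28 = 784-dimensional vectors).
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

# The discriminator maps an image to a single score:
# high = "looks real", low = "looks forged".
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

noise = torch.randn(16, 100)          # a batch of 16 random noise vectors
fake_images = generator(noise)        # forged data
scores = discriminator(fake_images)   # discriminator's real/fake judgement
```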

Now the interesting parallel between GANs and the brain begins. During unsupervised training, a GAN periodically switches between two modes: first, seeing real data while the discriminator learns to recognise it as real; and second, generating fake data while the generator learns to better fool the discriminator and the discriminator learns to detect the forgery. In this unsupervised setting, the labels for the discriminator are “real” and “fake”, which produce its gradient. The labels for the generator are “discriminator fooled” and “not fooled”; these produce the error gradients that backpropagate through the generator network and train it. Essentially, when performing unsupervised learning, the GAN works in two main modes: seeing the real world, and dreaming an imagination generated by itself.
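In code, one alternating “seeing” and “dreaming” step might look roughly like this (again a toy PyTorch sketch with made-up sizes and random stand-in data, not a full training script):

```python
import torch
import torch.nn as nn

# Toy generator/discriminator pair, as in the previous sketch.
generator = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(64, 784) * 2 - 1   # stand-in for a batch of real data

# Mode 1: "seeing" -- the discriminator learns to call real data "real"
# and the generator's current forgeries "fake".
fake_batch = generator(torch.randn(64, 100)).detach()
d_loss = (bce(discriminator(real_batch), torch.ones(64, 1))
          + bce(discriminator(fake_batch), torch.zeros(64, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Mode 2: "dreaming" -- the generator is rewarded when the discriminator is
# fooled into calling its forgeries "real"; this "fooled / not fooled" error
# backpropagates through the generator and trains it.
g_loss = bce(discriminator(generator(torch.randn(64, 100))), torch.ones(64, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```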

These two modes of operation are very similar to the awake-asleep cycles of animals. At least, what we humans experience during sleep is a world amazingly realistic in many aspects, yet comically off and illogical in a few others. Studies have shown that both the levels of certain hormones and the patterns of brainwaves differ significantly between these two states. It is as if the sleeping brain were generating imaginary sensory data (and possibly higher-level abstract data) and learning to distinguish it from the reality of wakefulness. By going through these cycles, the brain prepares a discriminator with all the conceptual features needed to detect patterns and to learn from a minimum of labels, instructions, supervision, and experiments. It also builds a generator that is a good enough model of reality to let the brain predict what may happen next (both next in time and next in space).

But what happens if we don’t give a GAN enough chance to dream? Its performance drops significantly. First, the number of fake data points decreases relative to the real data points, which creates an unbalanced dataset. The discriminator sees far more real data than fake, which pushes it to simply bet on “real”, since its statistical experience says most of the data is real. What happens to the generator is even more interesting and dramatic: it misses most of its chances to be trained and almost completely loses its ability to model reality. These two deficiencies make the GAN lose its grip on reality; it can no longer tell things apart with its discriminator, nor correctly predict the future or the rest of its sensory data. It sees a cute cat but thinks it is an angry dog, or thinks that if someone keeps walking off the edge of a cliff, they will fly smoothly into the sky. Sounds familiar? These symptoms are known in psychology as hallucination, impaired cognition, and psychosis, all well-known symptoms of long-term sleep deprivation.
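To simulate this kind of “sleep deprivation” in the toy setup above, we could let the fake-data (“dreaming”) phase run only once every fifty steps; the exact ratio is invented purely for illustration.

```python
import torch
import torch.nn as nn

# Same toy setup as before.
generator = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

DREAM_EVERY = 50  # the GAN only gets to "dream" once every 50 steps

for step in range(1000):
    real_batch = torch.rand(64, 784) * 2 - 1   # stand-in for real data

    if step % DREAM_EVERY == 0:
        # Rare "dreaming" phase: fake data is generated and both networks learn from it.
        fake_batch = generator(torch.randn(64, 100)).detach()
        d_loss = (bce(discriminator(real_batch), torch.ones(64, 1))
                  + bce(discriminator(fake_batch), torch.zeros(64, 1)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        g_loss = bce(discriminator(generator(torch.randn(64, 100))), torch.ones(64, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
    else:
        # The common case: the discriminator only ever sees real data,
        # so answering "real" for everything becomes a winning strategy.
        d_loss = bce(discriminator(real_batch), torch.ones(64, 1))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()
```

In this regime the discriminator’s gradient pushes it toward calling almost everything “real”, and the generator, updated only on those rare dreaming steps, barely moves away from its random initial state.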

Looking at these similarities, analogies, and parallels not only helps us understand both biological brains and today’s artificial systems better, but also points us toward ways of building better artificially intelligent systems in the future. More than anything, though, it tells me that somehow, we are on the right path.

Written by Hooman Shayani

I'm an AI Research Scientist with a passion for Generative and Creative AI and a better understanding of the Reality.
