and I am so, so sorry.

I’ll understand if you don’t want to catch ’em all. (Image by Author)

Growing up in the nineties and the noughties, I loved Pokemon. Pokemon Red was my first and favourite game, and I easily spent hundreds of hours playing it as a kid despite being too dumb to actually beat it. When I wasn’t trying to catch ’em all, I would make up my own little monsters and draw them with all the artistic talent a five-year-old could muster. Despite this obsession, I had a few qualms with the games. Like, how does one Pokemon suddenly evolve into another; shouldn’t it be a gradual process as they get stronger? And…

How a Statistician Reacts to Life on Venus

Photo by Philippe Donn from Pexels

If you’re the sort of person who reads data science blogs (well hello there!) then you probably already know about Bayes’ Law. Heck, you’ve probably even used it yourself. Unfortunately, in my experience, many people only know Bayes’ Law in a technical, academic context, and don’t actually understand it in a way that’s useful in daily life. And that’s a shame, because Bayes’ Law is a beautiful, powerful, rational way of organizing your beliefs. Here’s the thing, though: you already use it in daily life; you just might not know it. …

A Jump-Start GAN Tutorial

Image Source: Pixabay

In this tutorial, we’ll build a simple DCGAN in PyTorch and train it to generate handwritten digits. Along the way, we’ll discuss the PyTorch DataLoader and how to use it to feed real image data into a PyTorch neural network for training. PyTorch is the focus of this tutorial, so I’ll assume you’re familiar with how GANs work.
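To preview the DataLoader mechanics the tutorial covers, here’s a minimal sketch. It uses a random TensorDataset as a stand-in for the real MNIST download; the shapes and batch size here are illustrative assumptions, not the tutorial’s actual code:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for MNIST: 256 random 1x28x28 "images" scaled to [-1, 1],
# the usual range for a tanh-output generator
images = torch.rand(256, 1, 28, 28) * 2 - 1
labels = torch.randint(0, 10, (256,))
dataset = TensorDataset(images, labels)

# shuffle=True reorders the data each epoch, which helps GAN training
loader = DataLoader(dataset, batch_size=64, shuffle=True)

for real_batch, _ in loader:
    # each real_batch has shape (64, 1, 28, 28), ready for a conv net
    break
```

In the tutorial proper, the TensorDataset would be swapped for torchvision’s MNIST dataset with a normalizing transform; the DataLoader usage stays the same.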


  1. Python 3.7 or higher. Any lower and you’ll have to refactor the f-strings.
  2. PyTorch 1.5. Not sure how to install it? This might help.
  3. Matplotlib 3.1 or higher.
  4. Twenty-two minutes or so of your time.

It’s not…

Building the Simplest of GANs in PyTorch

Image Source: Pixabay

I spent a long time making GANs in TensorFlow/Keras. Too long, honestly, because change is hard. It took some convincing, but I eventually bit the bullet and swapped over to PyTorch. Unfortunately, most of the PyTorch GAN tutorials I’ve come across were overly complex, focused more on GAN theory than application, or oddly unpythonic. To remedy this, I wrote this micro tutorial for making a vanilla GAN in PyTorch, with emphasis on the PyTorch. The code itself is available here (note that the GitHub code and the gists in this tutorial differ slightly). …
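For a sense of what a “vanilla” GAN training step boils down to, here is a toy, single-step sketch on one-dimensional data. The architecture, learning rates, and target distribution are all illustrative assumptions, not the code from the tutorial:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 16

# Toy 1-D GAN: G maps noise to scalars, D scores how "real" a scalar looks
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real = torch.randn(64, 1) * 2 + 5   # "real" data drawn from N(5, 2)
noise = torch.randn(64, latent_dim)

# Discriminator step: push D(real) toward 1 and D(fake) toward 0;
# detach() keeps this step from updating the generator
opt_d.zero_grad()
d_loss = (loss_fn(D(real), torch.ones(64, 1))
          + loss_fn(D(G(noise).detach()), torch.zeros(64, 1)))
d_loss.backward()
opt_d.step()

# Generator step: try to fool D into scoring fakes as real
opt_g.zero_grad()
g_loss = loss_fn(D(G(noise)), torch.ones(64, 1))
g_loss.backward()
opt_g.step()
```

A full training loop repeats these two steps over many batches; the tutorial’s version uses proper networks and real data, but the alternation is the same.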

How the AEGAN architecture stabilizes GAN training and prevents mode collapse

AEGAN, a two-way street (Image source: Pixabay)

GANs are hard to train. When they work, they work wonders, but anyone who’s tried to train one themselves knows they’re damn finicky bastards. Two of the most common problems in GAN training are mode collapse and lack of convergence. In mode collapse, the generator learns to only generate a handful of samples; in generating “handwritten” digits, a GAN undergoing mode collapse might only learn to draw sevens, albeit highly realistic sevens. …

and other dead-giveaways that you’re a fake data scientist

Like most news outlets writing about “hackers”, fake data scientists happily use any random code they find online (image source: Picography)

These days it seems like everyone and their dog is marketing themselves as a data scientist, and you can hardly blame them, with “data scientist” being declared the Sexiest Job of the Century and carrying the salary to boot. Still, blame them we will, since many of these posers grift their way from company to company despite having little or no practical experience and even less of a theoretical foundation. In my experience interviewing and collaborating with current and prospective data scientists, I’ve found a handful of tells that separate the posers from the genuine articles. I don’t mean to belittle self-taught…

Visualizing how GANs learn in low-dimensional latent spaces

Image Source: Pexels

Generative Adversarial Networks (GANs) are a tool for generating new, “fake” samples given a set of old, “real” samples. These samples can be practically anything: hand-drawn digits, photographs of faces, expressionist paintings, you name it. To do this, GANs learn the underlying distribution behind the original dataset. Throughout training, the generator approximates this distribution while the discriminator tells it what it got wrong, and the two improve alternately through an arms race. In order to draw random samples from the distribution, the generator is given random noise as input. But have you ever wondered why GANs need random input? …
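As a small illustration of the question the article poses, consider the sampling mechanics of a toy, untrained generator (the layer sizes here are arbitrary assumptions): each random latent vector maps to a different sample, while a fixed input can only ever produce one output.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy generator; in the articles above this would be a trained network,
# but the mechanics of sampling are the same
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))

# Random latent vectors: each one maps to a different generated sample
z = torch.randn(5, 8)
samples = G(z)

# A fixed input, by contrast, always yields the same single output; the
# randomness of z is what spreads samples across the learned distribution
fixed = torch.zeros(1, 8)
same_sample_twice = torch.equal(G(fixed), G(fixed))
```

Without the random input, the generator would be a deterministic function producing one point, not a sampler over a distribution.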

How GANs tie themselves in knots and why that impairs both training and quality

Image source: Wikimedia Commons

A warning to mobile users: this article has some chunky gifs in it.

Generative Adversarial Networks (GANs) are being hailed as the Next Big Thing™️ in generative art, and with good reason. New technology has always been a driving factor in art — from the invention of paints to the camera to Photoshop — and GANs are a natural next step. For instance, consider the following images, published in a 2017 paper by Elgammal et al.

Visualizing the Very Basics of Generative Adversarial Networks

In the original Generative Adversarial Network paper, Ian Goodfellow describes a simple GAN which, when trained, is able to generate samples indiscernible from those sampled from the normal distribution. This process is illustrated here:

Figure 1: the first figure ever published in a GAN paper, illustrating a GAN learning to map uniform noise to the normal distribution. Black dots are the real data points, the green curve is the distribution generated by the GAN, and the blue curve is the discriminator’s confidence that a sample in that region is real. Here x denotes the sample space and z the latent space. (Source: Goodfellow 2014)

The simplest solution for this task is for the GAN to approximate the inverse CDF of the normal distribution, x = Φ⁻¹(z). This is an intuitive and appealing view of GANs: they are tools for randomly sampling from an unknown distribution given a set of samples. In the above example, this is the normal distribution. Others include the distribution of all handwritten digits and the distribution of all…
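The Φ⁻¹ mapping described above can be checked directly with Python’s standard library (a sketch of the math, not code from the article): pushing uniform noise through the standard normal’s inverse CDF yields normally distributed samples, which is exactly the function the GAN in Figure 1 is implicitly learning.

```python
import random
import statistics

# Phi^{-1}: the inverse CDF of the standard normal distribution
phi_inv = statistics.NormalDist().inv_cdf

random.seed(0)
z = [random.random() for _ in range(10_000)]  # uniform latent samples
x = [phi_inv(zi) for zi in z]                 # mapped into the sample space

mean = statistics.fmean(x)
stdev = statistics.stdev(x)
# mean should land near 0 and stdev near 1, matching N(0, 1)
```

A GAN, of course, never sees Φ⁻¹ explicitly; it has to discover an equivalent mapping from samples alone.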

Real-time visualizations of GAN learning and mode collapse

Illustration by Sian Molloy

A warning to mobile users: this article has some chunky gifs in it.

A Forger and a Detective Walk into a Bar

Unless you’ve been living under a rock these past few years, you’ve doubtless heard the fanfare surrounding generative adversarial networks (GANs). In particular, their ability to create new, photo-realistic images is astounding. Consider the following images:

Conor Lazarou

Data science and ML consultant, generative artist, writer.
