Generating Art With StyleGAN

5 min read · May 13, 2021

Make sure you can fit the requirements for this project:

  • A Google Account
  • At least 50GB of Google Drive Space Free ($1+ per month)
  • (RECOMMENDED) A Google Colab Pro plan ($10 a month)
  • (BONUS) A brain cell

Don't know what StyleGAN is? You can see my basic explanation here

What is Google Colab?

If you’re new to the cloud computing or AI scene, you’re in for a treat. Google Colab is a cloud computing service that you can use for free. It gives you server-grade hardware to train your models on, so your potato computer doesn’t need to suffer any more than it already does. Colab Pro, which costs $10 a month, gives you longer runtimes, better hardware, and more space, which is helpful for large projects like this.

Creating a custom dataset (OPTIONAL)

So, do you want to do your own dataset? Want to be your own special little snowflake? Fine. Here are some pointers on how to do so.

  • Decide on what kind of photos/art you want
  • You need at least 1,000 images to create a decent dataset, so don’t make your target subject too specific.
  • Create a folder with a fitting name on your PC

Resources to scrape images:

  • bulk Imgur dumps and downloading them using this handy downloader
  • Subreddits
  • Instagram profiles
  • Existing datasets on dataset-hosting sites

There are myriad ways to scrape images, so find the ones that work for you. With a little hard work and a lot of time, you should have a raw dataset to your liking. Zip your folder and continue on to open Colab.
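Before zipping, it’s worth sanity-checking that you actually hit the 1,000-image mark. Here’s a quick sketch (the helper names `count_dataset_images` and `dataset_ready` are mine, not part of the Colab):

```python
from pathlib import Path

# Extensions StyleGAN tooling typically accepts.
VALID_EXTS = {".png", ".jpg", ".jpeg"}

def count_dataset_images(folder):
    """Count image files (by extension) anywhere under the dataset folder."""
    return sum(1 for p in Path(folder).rglob("*") if p.suffix.lower() in VALID_EXTS)

def dataset_ready(folder, minimum=1000):
    """A decent StyleGAN dataset needs at least roughly 1,000 images."""
    return count_dataset_images(folder) >= minimum
```

Run it on your folder before zipping; if you’re well under the minimum, go scrape some more.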

Dataset Creation

Let's get this party started: open Google Colab

Connecting Google Drive

To save all the files during training, we will connect our Google Drive account to the session. Click the play button on the left, follow the link to sign in to your Google account, and copy the authorization code back into the cell. Once Drive is connected, we are free to continue to installing the libraries.
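For reference, the Drive-connection cell boils down to Colab’s standard mount call (a sketch of what that cell does, not necessarily its literal contents; it only runs inside a Colab runtime):

```python
# Standard Colab Drive mount. Opens an auth link; paste the
# authorization code back into the cell when prompted.
from google.colab import drive

drive.mount('/content/drive')
# After this, your Drive files live under /content/drive/MyDrive/
```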

Installing Libraries

Next up is installing all of the code, libraries, and configurations needed to train our model. On the first run, this will create a folder and copy the GitHub repository into your Google Drive; if the folder already exists, it will simply install the necessary libraries.
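Assuming the notebook is built on NVIDIA’s stylegan2-ada-pytorch repository (the repo URL and paths here are my guesses, not the notebook’s literal cell), the install step does something roughly like:

```shell
# First run only: copy the repo into Drive so it survives between sessions.
REPO_DIR="/content/drive/MyDrive/stylegan2-ada-pytorch"
if [ ! -d "$REPO_DIR" ]; then
  git clone https://github.com/NVlabs/stylegan2-ada-pytorch.git "$REPO_DIR"
fi

# Every run: install the build dependency the repo needs for its custom CUDA ops.
pip install ninja
```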

Processing Dataset

Custom Dataset

Remember that zip file you made? Well, you need to upload it. Uploading directly through Google Colab can be painfully slow, so I recommend uploading to the Google Drive website directly and then copying the path for the next step.

Next, you will preprocess the dataset. This is vital for the AI to be able to read your images. Depending on the size of your dataset, this could take a few minutes, so I’d suggest looking at the training parameter explanation down below while you wait. Once this is done, we can move on to training.
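If the Colab wraps stylegan2-ada-pytorch’s dataset_tool.py (an assumption; the paths below are placeholders), a hand-run equivalent of the preprocessing cell would look roughly like:

```shell
# Convert the raw zip into the packaged dataset format the trainer expects,
# resizing every image to the model's 1024x1024 resolution.
python dataset_tool.py \
  --source=/content/drive/MyDrive/my-art.zip \
  --dest=/content/datasets/my-art.zip \
  --width=1024 --height=1024
```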

Preprocessed Dataset

Don’t want to go through all the pain and monotony of creating your own dataset, or feel overwhelmed by all the steps? No worries! I’ve made some preprocessed datasets that can be used to train new models. This is a great option if it’s your first time and you want to understand the basics before doing everything yourself.

Right below “Custom Dataset” there is a dropdown menu with a selection of datasets you can download. Feel free to look up examples of the different genres. Once you’ve found one you like, press the play button and it will download and unzip for you.


Dataset preprocessed, Colab set up: you’re almost there! Well, not really, because the training process will take a few days, but we can pretend everything is just fine :)

This is a basic rundown of the parameters used for the training process, so we can tweak them if deemed necessary.

You can change these to your liking, and you can find more by using the --help flag or by referring to the original GitHub repository.

  • dataset_path: This is for custom dataset users only. Copy the path of the zip (you can do this with the file explorer on the right) and enter it between the quotation marks.
  • snapshot_count: How often a snapshot and its sample images are saved during training. A lower value saves snapshots more often, giving you more samples but eating more storage, and vice versa. I’d keep it between 1 and 8.
  • Metric_List: This is used to measure the quality of the samples and, at a cost to speed, can help you judge the quality of the final output. I tend to keep this at “none”, especially the first time running a new dataset, to speed things up, but the GitHub repository goes into depth on the options available.
  • Augs: ADA is a newer way of training that seriously accelerates the training process and improves the quality of the output, especially on smaller datasets. I would recommend leaving this variable as is.
  • Augpipes: Controls which augmentations are enabled. This can be helpful when you want to generate media from your model.
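For orientation, here is roughly how the parameters above map onto stylegan2-ada-pytorch’s train.py flags (assuming that repo; the paths and values are illustrative, not the notebook’s defaults):

```shell
# Flag-to-parameter mapping:
#   --data    -> dataset_path
#   --snap    -> snapshot_count
#   --metrics -> Metric_List
#   --aug     -> Augs
#   --augpipe -> Augpipes
python train.py \
  --outdir=/content/drive/MyDrive/results \
  --data=/content/datasets/my-art.zip \
  --snap=4 \
  --metrics=none \
  --aug=ada \
  --augpipe=bgc
```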

Now that we have that out of the way, you can start. Assuming you set everything up correctly, it will begin to train. You can check its progress visually under the results folder. Training will take at least a full day. If you get kicked out of your session, you can reload your dataset and resume_from your last .pkl file. (Now you see why we want Google Drive connected.)

When you feel like it’s done, you can stop the training. The last .pkl file in your results folder is the one you want to download for use in other projects.
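If you’d rather not eyeball the results folder, a small helper can pick out the newest snapshot for you (a sketch; `latest_snapshot` is my own helper name, and it relies on the repo’s convention of zero-padded kimg counts in snapshot filenames):

```python
from pathlib import Path

def latest_snapshot(results_dir):
    """Return the newest network-snapshot-*.pkl under results_dir, or None.

    StyleGAN names snapshots with a zero-padded kimg count
    (e.g. network-snapshot-000200.pkl), so a plain sort is chronological.
    """
    snaps = sorted(Path(results_dir).rglob("network-snapshot-*.pkl"))
    return snaps[-1] if snaps else None
```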

Generating Media

In the Colab I have included some basic applications for your newly-trained model.

Generating Images

Pretty self-explanatory: it will generate PNG files of art at 1024×1024 resolution. First link the .pkl file mentioned previously, then choose how many images you want generated. If you want 100 images, for instance, you could generate seeds 101–200. Click play and the images will appear in the out folder on Google Drive.
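Under the hood (again assuming stylegan2-ada-pytorch; the snapshot path and seed range are examples), the image-generation cell corresponds to:

```shell
# Generate 100 images (seeds 101 through 200) from the trained model.
python generate.py \
  --network=/content/drive/MyDrive/results/network-snapshot-000400.pkl \
  --seeds=101-200 \
  --outdir=/content/drive/MyDrive/out
```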

If you want to go above and beyond, I would recommend upscaling your images to a higher detail and resolution. I use the paid Gigapixel AI, but there is other open-source software, like the unfortunately named waifu2x, that will produce similar results.

Zoomed-in example of 6× upscaling in Gigapixel AI
Full image, 6× upscale, Gigapixel AI

How to continue

Well, pretentious author, I did everything here, yet there is still a hole in my heart that I have yet to fill. Good news! There is much, much more to do! Here I’ll link some fun projects that I couldn’t include but that are well worth trying:

Audio-reactive Latent Interpolations

Interpolations, Style Mixing, Truncation




A high school senior trying to make the AI process at least 10% less homicidal