Fire Emblem GBA style portrait generator using StyleGAN2

Most of this writeup is taken from my GitHub repo:

Seeing GANs create fake faces from real ones made me wonder: could I create fake GBA Fire Emblem faces from the real ones?

Generated Fakes Example

Video of the generated fakes over training (from 0 kimg to 480 kimg) on YouTube:


How do I use this?

  1. Have a Google account.
  2. Click on this link and follow the instructions.
    Alternatively, you could do it the long way: click on the file Demo_FE_GBA_Portraits.ipynb here on the GitHub repo and then press the Open in Colab button when it shows up. If the steps are slightly confusing, check out this tutorial video (it’s sped up to 150%).

Why is it on Google Colab?

StyleGAN2 requires a CUDA-enabled GPU and I don’t have one. Plus, CUDA GPU hosting is costly (~$0.90/hr on AWS, AFAIK), so I’d rather have it on a free service like Colab, which is available around the clock unless you overuse/abuse it.

Does anything get downloaded on my computer?

No, unless you save the images. All operations happen on the virtual machine offered by Google Colab.

How many different images can it generate?

Well, theoretically, there are 5 models and each model can generate 2^32 images (one per 32-bit seed), which works out to 5 * 2^32 = 21,474,836,480. Practically, the number of usable images is much lower, since a lot of images are near-duplicates or just too bad (helmet heads or double-facing heads).
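The arithmetic above is easy to sanity-check; this sketch just assumes each model draws from the full 32-bit seed range, with one portrait per seed:

```python
# Upper bound on distinct outputs: each of the 5 models accepts a
# 32-bit RNG seed, and each seed maps to one generated portrait.
NUM_MODELS = 5
SEEDS_PER_MODEL = 2 ** 32  # 4,294,967,296

total = NUM_MODELS * SEEDS_PER_MODEL
print(total)  # 21474836480
```

Of course this counts every seed, including the helmet heads.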

Because of how StyleGAN/StyleGAN2 works, the input and output images have to be squares with height and width that are powers of 2 (think 32x32, 64x64). Since the portraits were 96x80, I resized them to 124x124, so the output images come out at 128x128 and you may have to crop and resize them down. There is also no guarantee that the colors will fall in the GBA range, but many of the images can be made suitable for ROM hacks with a simple resize and indexing of the colors down to 16. So yes, some post-processing is involved, but the generated images are in the same style as FEGBA portraits.
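As a rough sketch of that post-processing (this is not part of the repo; the function name, the center-crop offsets, and the adaptive-palette choice are my own assumptions), using Pillow:

```python
from PIL import Image

def gba_postprocess(path_in: str, path_out: str) -> None:
    """Crop a 128x128 generated portrait down to the 96x80 GBA
    portrait size and index its colors to a 16-color palette."""
    img = Image.open(path_in).convert("RGB")

    # Center-crop 128x128 -> 96x80. The offsets are a guess; you
    # may want to nudge the crop box per image to frame the face.
    left = (img.width - 96) // 2
    top = (img.height - 80) // 2
    img = img.crop((left, top, left + 96, top + 80))

    # Quantize to an adaptive 16-color palette, since GBA portraits
    # are limited to 16 colors (including the background color).
    img = img.convert("P", palette=Image.ADAPTIVE, colors=16)
    img.save(path_out)
```

A real GBA insertion would still need the palette reordered (background color first), which is better done in a dedicated sprite tool.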

Thanks to spriters for providing the training data:

  1. Aegius (Zach) who gave ~30 portraits from his project (Necrosis Among the Living).
  2. Fire Emblem Mugs by Atey, who has posted the linked portraits for free use.
  3. Servants from FateGO in the GBA portrait style, Collection of Fire Emblem Fates GBA Mugs, and Nohr in GBA Style by u/Toaomr on Reddit (Twitter: @toaomr)
  4. Fire Emblem Custom mug GBA spread sheet by caringcarrot
  5. Free to use NICKT collection by NICKT


This project was made purely for educational/research purposes and the code base is strictly non-commercial, as it is licensed under the Nvidia Source Code License-NC due to the use of StyleGAN2. You are free to use this tool (credit appreciated but not required) as long as you use it responsibly and non-commercially, but we are not responsible for any uses (read the license for more details). Do not ask us for the training dataset; we do not have permission to redistribute the artists’ works.


Credits:

  1. Steam StyleGAN2 by woctezuma
  2. The original StyleGAN2 repository
  3. @woctezuma’s fork of StyleGAN2, for easily saving results to Google Drive.

StyleGAN2 citation:

@article{Karras2019stylegan2,
  title   = {Analyzing and Improving the Image Quality of {StyleGAN}},
  author  = {Tero Karras and Samuli Laine and Miika Aittala and Janne Hellsten and Jaakko Lehtinen and Timo Aila},
  journal = {CoRR},
  volume  = {abs/1912.04958},
  year    = {2019},
}

Most of these don’t look good, and I’m being polite


Yep, most of them are not good, but I’d say quite a few are usable. It’s a lot of quantity at low quality, so the average image tends to be quite bad. It was never meant to work without any post-processing, but I’m going to try to make the outputs better and usable out of the box eventually. A lot of it can be improved simply by having more training images, so I’m going to ask around for more whenever I get some time.


Out of curiosity, how does it reward results?


I mean, those look a bit weird, but for people who are not good at creating portraits, it might be easier to take one of those and fix it up instead of creating a completely new one. But I have no idea.

still better than mug generator, 10/10


This is really cool. While artists will always still be needed to touch up images and aid in special design, I think this sort of thing is going to become more and more popular in coming years. Neural network generated images can help save the artists time with their designing process, whether that’s with ideas for design, editing generated ones, or whatnot. Indie development in particular will benefit from this eventual trajectory of the industry.

I don’t really understand the finer details of NNs, but I read this related article recently:
Maybe somebody else will find it interesting too.

Thanks for making this. Here are a few of my results with trying it out.

More examples

[13 attached images of generated portraits]

Some of them would make for fantastic submissions to a horror FE contest, haha. While they won’t be putting artists out of work for a while, select results will certainly be useful to some.

I for one welcome our new neural network overlords.





I recommend training the network with the neutrally colored faces.

The standardized sheet might have better results. Probably not, but you might as well try.


When I tried a GAN a few years ago, it didn’t work at all.
My code spat out results far worse than yours.
I think your results are well done.

Besides portraits, there’s another problem that annoys game creators: support conversations.
A third of all the text in GBAFE is support conversations.
Some support conversations usefully supplement the story, but I think a lot of them are just chatter.
The problem is that writing three different conversations for each pair of characters is quite difficult, and it adds up to a huge amount of text.
Clever automatic text generation AI like GPT-3 is now available.
I think these technologies could help create support conversations.


This is one of the reasons I have no interest in Support conversations. I would rather remove supports and focus on adding Talk events during chapters, some of which give items and stuff. Talk events are more interesting, dynamic, and often relate to that specific chapter’s events. Getting items is just as tangible of a bonus as a support boost, but also doesn’t require you to have units stand next to each other while spamming ‘End Turn’.

This is exactly what Support Rework does lol


Did not even know that existed.