Stable diffusion gbafegirl gbafeboy on google colab

I think the ai might be able to recognise specific characters, but I’m not sure

I just put the character’s name alongside the other tags in the prompt. I think it only works if you use the name tag that Danbooru uses (hakurei reimu, yorha no. 2 type b, etc.)

reimu

sakuya

yuyuko

It doesn’t work for characters that aren’t very popular, but I’m sure it should work for most. I’ve mainly been trying it on Touhou characters, but I’m sure it should be able to generate characters from popular gacha games (Blue Archive, Genshin, etc.)

Also, thanks to 7743 for making this

5 Likes

First, let the AI generate a variety of data.

If it generates something interesting, you can share it online anyway.
Even if you don’t create a portrait, if you share the data, someone may be able to make it work in a game.

First, match the face size of the AI-generated picture to the game character.
Do this before reducing colors; if you reduce the colors first, you will lose color detail in the clothing parts.

In vanilla, the face width is 96 pixels, but resizing it to about 100 pixels should be just fine.
When centering the face, trim off the margins.

Once the face is sized, cut it out to the specified 96x80 size.
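The resize-and-crop step above can be sketched with Pillow. The 100-pixel target and the crop offsets are assumptions you would eyeball per picture, and `fit_face` is a name made up for this sketch:

```python
from PIL import Image

def fit_face(img: Image.Image, target_width: int = 100,
             left: int = 2, top: int = 0) -> Image.Image:
    """Shrink an AI output so the face is roughly GBAFE-sized,
    then crop the standard 96x80 face window.
    target_width, left and top are placeholders to adjust per picture."""
    scale = target_width / img.width
    # NEAREST (no interpolation) keeps the pixels crisp instead of blurring.
    small = img.resize((round(img.width * scale), round(img.height * scale)),
                       Image.NEAREST)
    return small.crop((left, top, left + 96, top + 80))
```

Usage would be something like `fit_face(Image.open("ai_output.png")).save("face_96x80.png")`.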

Reduce the number of colors to 16.

It may be more efficient to reduce the number of colors to 15 and repaint the background color.
This is because the background color may be used for eyes or clothing.

I got pretty good results when I used padie to reduce the colors.
https://www.vector.co.jp/soft/win95/art/se063024.html

This software is very old, developed in the Windows 95 era.
The UI can be switched to English.
By default it supports BMP only (PNG can be loaded with an extension plugin).

If you have reduced the number of colors to 15, repaint the background with the 16th palette color.
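A minimal Pillow sketch of the 15-color trick. The green background color and the `reduce_colors` name are placeholders, and actually repainting the background pixels to the reserved slot is still a manual step:

```python
from PIL import Image

def reduce_colors(face: Image.Image, bg=(0, 255, 0)) -> Image.Image:
    """Quantize to 15 colors, then park a flat background color in the
    16th palette slot (index 15) so the background cannot steal a color
    that the eyes or clothing need. bg is an arbitrary placeholder."""
    pal = face.convert("RGB").quantize(colors=15)
    # Keep the 15 quantized colors and append the background color.
    palette = pal.getpalette()[:45] + list(bg)
    pal.putpalette(palette)
    return pal
```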

Repaint any imperfections in the black border.
It is easier to do this with a tool that has an outlining function, such as EDGE.
Pixel art looks better when outlined in black, creating a sharp contrast between light and dark areas.

From here, we will create a 128x112 sheet, which is the original size.
We will need a mouth frame and eye frames, but for now, import the sheet into the game and check the size of the frames.
If you make a mistake in the size of the face, you will have to redo the following steps.
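Starting the 128x112 sheet could look like the sketch below. Pasting the face at (0, 0) is an assumption; check an existing vanilla-format sheet in FEBuilderGBA for the real frame layout, since the mouth frames, eye frames, and minimug still have to be drawn into the remaining space:

```python
from PIL import Image

def make_sheet(face: Image.Image) -> Image.Image:
    """Start a 128x112 portrait sheet from a 96x80 face.
    The (0, 0) paste position and the green fill are assumptions;
    the frame layout should be checked against a vanilla-format sheet."""
    sheet = Image.new("RGB", (128, 112), (0, 255, 0))  # placeholder background
    sheet.paste(face, (0, 0))
    return sheet
```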

First, please check if the size of the face is correct by inserting the current data into the game and having Seth talk to it.

Create a frame for the mouth and eyes.
It would be easier to draw them by referring to existing portrait frames that are similar to vanilla.
(I wish I could create these in AI as well.)

The correct mouth and eye frames can be checked in FEBuilderGBA, though you can also check them in-game.

When you are satisfied with the product, make a minimug to complete the process.

Please contribute to the community while showing off your work by submitting it to the REPO.
Better to share than to monopolize!

It is very tedious to match the face size, the 16 color limit, and to frame the eyes and mouth.
It would be much easier if a tool or system could be developed to assist in this area.

Apparently Klokinator is making something better.

AI also excels at painting landscapes.
I am color reducing some BGs so that they can be used in GBAFE.

The AI will also allow you to create nice single-picture cut-in scenes.
However, to seriously stabilize the drawing of the characters, it will take some time and effort to pin them down by training a LoRA.

If you can draw, you can use the AI-generated data as a basis for reworking it to make it better, or you can use it as a reference for composition.
It is good to think of AI as an assistant, not an enemy.

This AI technology has been advancing at several times the rate of dog years since it was released last summer.
New technology is being released to the public every month.
I believe that by using technology well, we can do more interesting things.

9 Likes

Got around to doing one of them up (Obviously the first was the foxgirl). Still need to put more effort into making her actually ‘fit’ a hackbox but if I do that I’ll probably toss it at the repo as well.

Anji
Tossed it into Paint and resized it to about the size of a hackbox, tossed that into FEBuilder to truncate the colour count, then scribbled lines via Paint.NET, using different colours for generally different sections just as a self-reminder. Copied the lineart back into Paint (yes, I’m old and stubborn and I will use my archaic MS Paint to sprite), split my screen between the Paint window and the original for reference, and went from there. That was fun.

If you’d just like to use it as is, feel free to do so. We’ll have to see when I fix it up for game use.

Anyways! Thanks for doing this! Maybe it’ll make doing characters a bit faster~
Rather excited to see it improve so I can actually make out details properly.

11 Likes

This is a really cool tool, and I’m interested in the implications of what the community can do with it. I toyed around with it for about an hour, testing several things, and got some varying results. I will say of the 3 AI generators I’ve attempted to use, this is super easy to use in comparison to the other two, especially with the directions, so bless you for that.

The fewer tags you use, the more it preserves the pixel-art look, off as it is.


The more specific you get with your tags, the less pixelated the result, which does kind of defeat the purpose of the generator, so that’s something to note. I’m assuming it’s because it’s drawing information from more images to get the specific aspects.

For anyone wondering, yes, you technically can make NSFW with this. I only did one batch to test because I was only curious if it could do it in the first place, but it seems it avoids genitals. Bare boobies and fluids are still possible, but that seems to be it thankfully.

Lastly, a question for @7743: is it possible to train the AI on the works of artists from the community, given you have their permission to do so? Giving it more information to work off of would help further solidify the pixel style and general aesthetic.

If you increase the 0.6 value in network_mul, you will get a more GBAFE-like picture.
For example, if you increase the value to 1.0, the correction is applied at 100%, which is quite strong.
It can be pushed above 100%, but if the value is raised too far, the output will look strange.

So the current network_mul of 0.6 means you are mixing 60% GBAFE with 40% Anything3.1 model data (maybe).
Maybe including more tags makes Anything3.1 stronger.
If so, please increase network_mul and see what happens.
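Conceptually, a LoRA is a learned correction added on top of the base model’s weights, and network_mul scales that correction. A toy sketch of the idea, not the actual Colab code:

```python
import numpy as np

def apply_lora(w_base: np.ndarray, delta: np.ndarray,
               network_mul: float = 0.6) -> np.ndarray:
    """network_mul scales the LoRA correction (delta) applied on top of
    the base model weights: 0.6 keeps 60% of the GBAFE correction,
    1.0 applies it in full, and values above 1.0 over-apply it."""
    return w_base + network_mul * delta
```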

3 Likes

is it possible to train the AI on the works of artists from the community, given you have their permission to do so? Giving it more information to work off of would help further solidify the pixel style and general aesthetic.

If they have a distinct painting style, it could be mixed in.
However, I think that even in its current state, it is diverse enough.

2 Likes

Thank you, I’ll take a look at that tomorrow!

Oh, my bad, I meant specifically from the community’s pixel art. But yeah, I figured that might be the case.

I modified the picture drawn by AI.

However, I think it could use a little more work to make it look better.

2 Likes

Had fun generating a lot of images. I thought that this one was one of the coolest ones for me:
image
I think the prompts were “old, beard, king”, but I’m not sure since it was one of the first attempts.
I liked it so I formatted it for GBA, and will submit it to the repository.
image

2 Likes

This picture needs a mugexceed patch because the left arm is outside the hackbox.

1 Like

You are very good at erasing jaggies.
When I reduce the colors, I have to clean up the jaggy noise afterwards, but I’m not good at it yet.

1 Like

I forgot to ping you in my thread, so I’ll repost here.
Changed the sprite quite a bit from the original output.
image

10 Likes

jeez that’s good

Mmh… resizing these, reducing colors, and fixing blurry pixels one by one could be a valid solution.
Need to make eye and mouth frames though; otherwise it’s just pixel art:





I tried doing that, and the amount of blurry/abnormal pixels drastically decreased, but I don’t think the tags make any difference, so that might just be the network_mul setup.
Still, the results are pretty good even for being upscaled.

I tried typing mechs, androids, and cybernetics out of curiosity, but I guess that kind of stuff isn’t in the base pool of images. Well, that’s unfortunate :sweat_smile:

[edit] And I was right, this is most certainly doable.
It just requires a bit of polish here and there. Increasing saturation by 25% or more might help as well:

test

I’ve set that up with Photoshop to reduce the size by 85% and fix it at 16 colors. Not too shabby :+1:

2 Likes

Getting to 16 colors was a real challenge. I did put “16 colors” into the prompt, and that seemed to help a bit, as the AI would make the image a bit blockier. Scaling the image in Paint set me back several hours until I started using GIMP’s scale function with no interpolation. After about 2 hours of corrections after that, I was able to get a half body. I can’t seem to get portraits small enough that they would fit a normal portrait. I was able to get this with 1/5 scaling. I’m not sure if additional scaling would make the image blurrier or uglier.
Shantae Halfbody
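For what it’s worth, GIMP’s “no interpolation” scaling is nearest-neighbor sampling, so the same 1/5 downscale can be scripted with Pillow (a sketch, not part of the Colab):

```python
from PIL import Image

def downscale(img: Image.Image, factor: int = 5) -> Image.Image:
    """Integer downscale with no interpolation (Image.NEAREST): every
    output pixel is copied from a single source pixel, so nothing gets
    blurred. factor=5 matches the 1/5 scaling mentioned above."""
    return img.resize((img.width // factor, img.height // factor),
                      Image.NEAREST)
```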

2 Likes

Previously the Colab worked fine, but now whenever I try to set it up I get the error “ERROR: xformers-0.0.15.dev0+189828c.d20221207-cp38-cp38-linux_x86_64.whl is not a supported wheel on this platform.”

1 Like

Fixed.

This problem occurred because some idiot at Google took the liberty of bumping the Python and pip versions.
This caused xformers to fail to install.

However, I think the Google idiots also did a good job, because now pip can install it automatically, whereas before it was necessary to pin a specific version.

old
!pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.16/xformers-0.0.16%2B814314d.d20230118-cp38-cp38-linux_x86_64.whl
!pip install -q --pre triton


new
!pip install -q --pre xformers
!pip install -q --pre triton

This is the scary part of the cloud service.
It is not a static disk; a VM is created and the programs are downloaded again each time.
Therefore, the data can suddenly change.

The shape of the cloud changes every day.
Clouds do not always stay in place because they are swept away by the wind.

4 Likes

Hello, I’ve been using the Colab for a while now, but recently I keep getting this error during the set-up cell:

ERROR: pip’s dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
xformers 0.0.17rc482 requires torch==2.0.0, but you have torch 1.13.1 which is incompatible.
ERROR: pip’s dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchvision 0.14.1+cu116 requires torch==1.13.1, but you have torch 2.0.0 which is incompatible.
torchtext 0.14.1 requires torch==1.13.1, but you have torch 2.0.0 which is incompatible.
torchaudio 0.13.1+cu116 requires torch==1.13.1, but you have torch 2.0.0 which is incompatible.
fastai 2.7.11 requires torch<1.14,>=1.7, but you have torch 2.0.0 which is incompatible.

The pip error is shown, but the picture is generated properly, so I think it can be ignored.
I am not sure how to resolve this error, although it would be nice to be error-free.

1 Like

Hi, I find this generator really inspiring. I’ve used it a few times, but now I’m receiving an error I don’t know how to fix. It only happened once I changed the network_weights to boy instead of girl. Is there a way to fix this?

FileNotFoundError: No such file or directory (os error 2)
/content/tmp
zip warning: name not matched: *.png

zip error: Nothing to do! (/content/tmp/imall202304280000.zip)

1 Like