AI GBA Portrait Discussion

Discussion of AI aside, I’d probably use something like this as a base and then make my own edits to fix mouths, eyes, ears, or whatever else isn’t how I want it. I’m not an artist, so this gives someone like me a good alternative to vanilla sprites, splices, or using someone else’s work.

EDIT: For example, with the elf girl in the final example I’d adjust her mouth (the gap between her lower lip and the bottom of her chin is too big), define the bridge of her nose slightly better (the region between her nose and her left eye (the one on our right) looks odd), and fix her ears to curve up slightly and look less like something I’d rather not say.

I think it would be best for those who can draw to let the AI do the preliminary drawings and let the humans take care of the finishing touches, or to use the AI like an assistant.
I was given the following link in October 2022.

In this demo, the AI is used to complement an existing picture and complete a new work.

StableDiffusion is a new technology released in August 2022, and I think it is a great technology with many possibilities.
I don’t see why we should be conservative about it just because it is a disruptive innovation to be afraid of; I think we can do many more interesting things with this technology.

I think it’s like meeting electric lighting in the age of oil lamps.
The people who sold lamps back then must have been surprised and terrified when they first saw electric light.
However, they would not have been able to stop the spread of electricity, no matter how much they tried to inhibit it.
On the contrary, if we had kept relying on lamps and bonfires, civilization would not have progressed, and computers and the Internet would not have been born.
I think we should keep on adopting things that are convenient.

By the way, GPT-3 is a text-generating AI; since EA Script is just text, could it generate EA Script from natural language?
GPT-3 has been shown to be able to create program code fragments and simple tools from natural language.
Perhaps it learned by crawling GitHub.
If it can understand the EA syntax, it might be able to create an EA script from natural language.
“In turn 3 to 4, have the heroes and swordmasters appear north of the village with reinforcements.”
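As a rough illustration of the idea, here is a minimal sketch of how such a request might be packaged for a text model. The function name and prompt wording are my own assumptions, no real API call is made, and whether GPT-3 actually emits valid EA syntax is exactly the open question above.

```python
# Hypothetical sketch: wrap a natural-language request in a plain-text
# prompt asking a text model such as GPT-3 to emit EA (Event Assembler)
# script. The prompt wording is an assumption for illustration only.

def build_ea_prompt(request: str) -> str:
    """Build a prompt asking the model to translate a request into EA Script."""
    return (
        "Translate the following request into EA (Event Assembler) script "
        "for a GBA Fire Emblem hack.\n\n"
        f"Request: {request}\n"
        "EA Script:\n"
    )

prompt = build_ea_prompt(
    "In turn 3 to 4, have the heroes and swordmasters appear "
    "north of the village with reinforcements."
)
print(prompt)
```

The completion the model returns for such a prompt would then need to be checked by a human before being assembled, for the same reasons discussed above for art.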


“AI art” as it currently trends seems odd to me. I’m just dumping my own perspective, thoughts, and concerns.


As far as I understand, “AI art” generates 2D art based on what it’s learned through processing a combination of algorithms and datasets of raster images that’ve been manually tagged by a human. I’ve played around with a few, and the output seems (to me) fairly limited to what’s been popular or digitized into images in the last decade or so.

I’d have to learn to use the machine; it will not provide me with anything outside its dataset, and it will not learn anything ‘new’ or adapt without an update or my own input.
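To make the "limited to its dataset" point concrete, here is a toy illustration (my own, deliberately simplified; a real diffusion model recombines learned features far more subtly than a lookup table): a "generator" that can only return features whose human-assigned tags exist in its training data.

```python
# Toy illustration, NOT a real image model: a generator that can only
# recombine features present in its tagged training set. A tag outside
# the dataset's vocabulary simply cannot produce anything.

TAGGED_DATASET = {
    "castle": ["stone walls", "towers"],
    "elf": ["pointed ears", "green cloak"],
}

def generate(prompt_tags):
    """Return learned features for known tags; unknown tags yield nothing."""
    features = []
    for tag in prompt_tags:
        features.extend(TAGGED_DATASET.get(tag, []))
    return features

print(generate(["elf", "castle"]))  # recombines learned features
print(generate(["spaceship"]))      # -> [] : outside the dataset
```

Without an update to `TAGGED_DATASET` (i.e. retraining on new data), no prompt can get "spaceship" out of it, which is the limitation described above.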

I’m curious to know by whom, or how, the human-machine interface develops, as I think the most recent ones have taken advantage of crowd-sourcing (not necessarily with the consent of the participants nor acknowledgement of the distribution licenses behind the work). Text prompts feel like they will eventually be inadequate for the desired complexity/simplicity balance. I’d rather see a tool that provides a human with an understanding of the logic behind a thing, or facilitates generating a logical representation of said thing, than an algorithm that replaces human involvement entirely (unless it’s a task where human involvement has zero benefit), but that is personal taste.

In the case of learning machines, I’m not convinced yet that there are any mechanisms barring a user from narrowing the output into a decompressed reconstruction of an item in the learned dataset. I do, however, think that the user still has liability over their production, and the claim that AI sufficiently math-washes/transforms source data to make its origin irrelevant is tricky at best. A human artist may be influenced by an innumerable amount of works, but they are usually also aware of which parts are considered open-domain and which parts/quantities are legally protected for any given memory of a work.

Personally, I don’t like the idea of my work being harvested or reinterpreted by a machine (or person) that has little regard for my well-being and for what I feel I own or should be owed for the manipulation of a work. I can’t really change that, though in response to the environment I’m now more conservative when presenting art I want to protect. I wouldn’t be surprised if, should AI art continue to thrive in its current form, digital art creators continue to gravitate towards pay-walls, watermarks, new licensing agreements, bot countermeasures, shifting away from pure digital, etc.

Anyways, again just my thoughts.


Cautiously upvotes after only understanding about 40% of what was said

In the StableDiffusion example @7743 gave, the issue is that the end result is an entirely illogical picture. And not just the center part either; all over the picture are things which make no sense. I understand the point was to show how quickly working with AI could create something that would take an artist hours, but after the first couple (1-3) AI picture segments, a real artist needs to step in and create the piece using that as a springboard. Trying to create a full picture like that is like clicking the “randomize” button on a character creator and hoping to get pieces that fit with each other. You may lock certain pieces from being randomized, but you still end up with something that doesn’t have the same feel as if a trained human had done the same task.

I also agree with feels that one of the issues is the AI is working within a dataset so it’s not going to bring in an element of a new art style. One of the things human artists do is create new styles. We may get an “AI Art” Aesthetic in the next few years (between now and 2030?), but that will be a style inspired by the nature of primitive AI art - not something the AI created itself.

I do think you can tell AI-generated and human-created art apart by quality. Most of these pictures still need manual polishing before they can be used properly.

My biggest question with these would be: what resources have been used to create them?


A .pt file allows the AI to learn additional painting styles and subjects.
A number of .pt files are already public.

Do you have the same opinion about text-generating AIs such as GPT-3?
Also, do you have the same opinion about crawlers such as googlebot, even though they are not AIs?

Aside from the “am I able to recreate items in the dataset with this specific AI” being more specific to the current climate for digital art AIs, yes, my impressions are the same regardless of AI type.

And no; I feel there are not many similarities in end function to be found when comparing AI to crawlers.

I’m glad Stable Diffusion made it to court, if only to see how it all develops and hopefully get a clear answer.

AI and crawlers are the same in that they use other people’s content without permission.
Since you seem to be particularly concerned about that, I think that if you are negative about text AI like GPT-3, you must also be negative about crawlers like googlebot to be consistent.

If you had been positive about GPT-3, I was going to ask why you are not positive about DrawAI as well.

You seem to be negative on both GPT-3 and DrawAI.
If so, I would like you to explain why you are not also negative about crawlers.
This is to check the consistency of your position.

Crawlers scan other people’s content on their own and create different content from it.
That is similar to AI, and I think it overlaps with the issue you raised.

Excuse me, but… What are “crawlers”? I’ve never heard of that before


As far as I understand, a (web) crawler is a bot/program that ‘crawls’ through webpages and collects/indexes information.

The ones I’m (vaguely) familiar with are the ones made to provide search engines with site/page information, and also internet archivers. I’m fine with those since they have a beneficial purpose and direct users back to the original source.

I guess crawler purposes have since developed and diversified, so you can have ones that go through sites with the purpose of collecting specific data in the name of market, academic, or personal interest. I’m more iffy on these because there isn’t always a mutual benefit, while it still costs your server.

Admittedly I had to look up the term, so if my take is wrong hopefully someone can correct me.
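For what it’s worth, the collecting/indexing half of a crawler can be sketched in a few lines with only the standard library. This is my own minimal illustration (a real crawler would also fetch pages over HTTP, respect robots.txt, and recurse into the links it finds):

```python
# Minimal sketch of a crawler's indexing step: parse a page and collect
# the links a bot would follow next, using only Python's standard library.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects every href from the <a> tags in a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<html><body><a href="/gallery">Art</a><a href="/about">About</a></body></html>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # -> ['/gallery', '/about']
```

Whether the collected data is then used to point people back to the source (a search index) or harvested for something else is exactly the distinction I was drawing above.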

@7743 If you could give me a specific example of a crawler and how it takes other’s content and creates different content, that might help me clarify my stance.

I think my impression of various AIs/crawlers depends on the specific AI/crawler and is complicated; I don’t think I can sustain a discussion where it’s reduced to a total negative/positive. If you’d like to discuss specifics to get a better idea of where I stand, I can try, but again, I would like to keep it within a reasonable scope.


Don’t mean to interrupt the discussion but I’m just gonna pop in here real quick to put it in writing that I do not consent to the use of any sprites I create, splice, or edit in the training of image generation algorithms.


A crawler is something like googlebot.
It is a program that patrols websites and collects data.
It is thanks to this kind of program that you can search on Google.

It collects data from various sites, processes it, and reuses it for something.

Have you never used Google?
Google is one of the companies that use programs to scan, analyze, and index other people’s websites without permission.

And they make a lot of money by scanning other people’s content without their permission. lol
In the early days there were directory-type search engines like Yahoo, where you had to apply and get registered, but no one would use them now.

In other words, if you consider AI learning from other people’s content without their permission to be a problem, you must complain about web crawlers that scan other people’s content without their permission as well, or your position will not be consistent.

I’m just gonna walk away at this point. I’ve already said what I meant to.

Okay, goodbye.
After all, isn’t the real reason you are so negative about AI simply that you don’t like it?
It seems to me that there is no other theoretical explanation than this.