It was interesting to see how often the OpenAI model changed the face of the child. Often the other two models wouldn't, but OpenAI would alter the structure of their head (making it rounder), eyes (making them rounder), or altering the position and facing of the children in the background.
It's like OpenAI is reducing to some sort of median face a little on all of these, whereas the other two models seemed to reproduce the face.
For some things, exactly reproducing the face is a problem -- for example in making them a glass etching, Gemini seemed unwilling to give up the specific details of the child's face, even though that would make sense in that context.
https://www.reddit.com/r/ChatGPT/comments/1n8dung/chatgpt_pr...
I've noticed that OpenAI modifies faces on a regular basis. I was using it to try and create examples of different haircuts and the face would randomly turn into a different face -- similar but noticeably changed. Even when I prompted to not modify the face, it would do it regardless. Perhaps part of their "safety" for modifying pictures of people?
It's crazy that the 'piss filter' of OpenAI image generation hasn't been fixed yet. I wonder if it's on purpose for some reason?
It's interesting to me that the models often have their "quirks". GPT has the orange tint, but it also is much worse at being consistent with details. Gemini has a problem where it often returns the image unchanged or almost unchanged, to the point where I gave up on using it for editing anything. Not sure if Seedream has a similar defining "feature".
They noted the Gemini issue too:
> Especially with photos of people, Gemini seems to refuse to apply any edits at all
Nano Banana in general cannot do style transfer effectively unless the source image/subject is in a similar style to the target style, which is an interesting and unexpected model quirk. Even the documentation examples unintentionally demonstrate this.
Seedream will always alter the global color balance with edits.
Found OpenAI too often heavy-handed. On balance, I'd probably pick Gemini narrowly over Seedream and just learn that sometimes Gemini needs a more specific prompt.
Seedream is the only one that outputs 4K. Last time I checked, that is.
> If you made it all the way down here you probably don’t need a summary
Love the optimism
I skipped to the end to see if they did any local models. spoilers: they didn't.
Honestly, I think it was unfounded. As a photographer and artist myself, I find the OpenAI results head and shoulders above the others. It's not perfect, and in a few cases one or the other alternative did better, but if I had to pick one, it would be OpenAI for sure. The gap between their aesthetics and mine makes me question ever using their other products (which is purely academic since I'm not an Apple person).
Interesting experiment, though I'm not certain quite how the models are usefully compared.
You can always identify the OpenAI result because it's yellow.
And Midjourney because it's cel shading :)
Also because it’s mid :)
We built our sandbox just for this use case: fal.ai/sandbox. Take the same image/prompt, and compare across tens of models.
Every day I generate more than 600 images and compare them; it takes me 5 hours.
Is it just me, or does ChatGPT change subtle, or sometimes more prominent, things? Like the position of the hand holding the ball, facial features and head shape, the background trees, and the like?
It's not you. The model seems to refuse to accurately reproduce details. It changes things and leaves stuff out every time.
Are artists and illustrators going the way of the horse and buggy?
It's a cliche but "AI won't replace you, but someone who knows how to use AI will replace you" appears to be true.
There is no better example recently than
> AI comedy made by a professional comedian
www.reddit.com/r/ChatGPT/comments/1oqnwvt/ai_comedy_made_by_a_professional_comedian/
No, but this is the beginning of a new generation of tools to accelerate productivity. What surprises me is that the AI companies are not market-savvy enough to build those tools yet. Adobe seems to have gotten the memo though.
In testing some local image-gen software, it takes about 10 seconds to generate a high-quality image on my relatively old computer. I have no idea what the latency is on a current high-end computer, but I expect it's probably near instantaneous.
Right now, though, the software for local generation is horrible. It's a mish-mash of open-source stuff with varying compatibility, loaded with casually excessive vernacular and acronyms, to say nothing of the awkwardness of it mostly being done in Python scripts.
But once it inevitably gets cleaned up, I expect people in the future are going to take being able to generate unlimited, near-instantaneous images, locally, for free, for granted.
I've been waiting for solutions that integrate into the artistic process instead of replacing it. Right now a lot of the focus is on generating a complete image, but if I were in Photoshop (or another editor) and could use AI tooling to create layers and other modifications that fit into a workflow, that would help with consistency and productivity.
I haven't seen the latest from Adobe over the last three months, but last I saw, the Firefly engine was still focused on "magically" creating complete elements.
> Adobe seems to have gotten the memo though.
So far Adobe AI tools are pretty useless, according to many professional illustrators. With Firefly you can use other (non-Adobe) image generators. The output is usually barely usable at this point in time.
Yes and no. IKEA and co. didn't replace custom-made tables; they just reduced the number of people needing a custom table.
The same will happen to musicians, artists, etc. They won't vanish, but only a few per city will be left.
Artists no; illustrators and graphic designers yes. They'll mostly become redundant within the next 50 years. With these kinds of technologies, people tend to overestimate the short-term effects and severely underestimate the long-term effects.
I like that they call OpenAI's image generator groundbreaking, then explain that it's prone to taking eight times longer to generate an image, before showing it add a third cat over and over and over again.
Using gen AI for filters is stupid: a filter guarantees the same object, just filtered, while a gen-AI version of the same thing guarantees nothing, plus an expensive AI bill.
It's like using gen AI to do math instead of extracting the numbers from a story and just doing the math with +, -, / and *.
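To make the analogy concrete, here's a minimal sketch (the function name and regex are just illustrative): pull the numbers out of the free text, then let ordinary code do the arithmetic, which is deterministic and free.

```python
import re

def sum_numbers_in_story(story: str) -> float:
    """Extract every number from free text with a regex, then do the
    arithmetic deterministically instead of asking a generative model."""
    numbers = [float(n) for n in re.findall(r"-?\d+(?:\.\d+)?", story)]
    return sum(numbers)

print(sum_numbers_in_story("Alice had 3 apples and bought 4.5 more."))  # prints 7.5
```

The same split applies to image filters: a fixed transform gives you the same pixels every run, while a generative model may quietly redraw the subject.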
This seems to imply that the capabilities being tested correspond to the descriptive words used in the prompts, but as a category, random words would be just as valid for exercising the extents of the underlying math. When I consider that, I wonder why a list of tests like this should be interesting, and to what end. The repeated iteration implies that some control or better quality is being sought, but the mechanism of exploration is just trial and error, and its discoveries don't tell anyone else what would count as repeatable success in any other circumstance.