AI Generated Graphics

Although Cheetah 3D doesn't have AI, PS does,
and I've been wondering if I'll have any use for it.

My first question is: how good are the images produced?
Second: does it violate some personal creative standard?

Needing an image of a red apple,
I ran through the possible ways I could get one.
• Look on the internet
• Take a picture myself
• 3D model it
• Use Generative AI in PS

I don't like to take images from the web unless there's clearly no copyright violation.
I could take the picture myself, but it would take time, and I don't have any apples.
3D modeling it would be fun, but it would take way too much time.

That leaves me the PS AI process, and here's how that goes:

Make a selection and type in a description: "Whole red apple".
PS gives you three versions to choose from, but you can keep regenerating.
You can apparently "train" the AI in depth if you want.

Done in no time!

Here's the project I needed the apple image for:

[Attached image: a_is_for_apple.gif]



I also wanted a blue bowl of blueberries:
[Attached image: Blueberries 2.jpg]


I'm very impressed with the images I finally chose, but I did have to regenerate a number of times.
 
Hello,

What I believe you are seeing returned to you from PS in your experiments are actually photographs. They are not really generative works.

Adobe has thousands of photographs of thousands of common objects/items/critters, etc. Your request for an apple was a request for a single, very common object. It was delivered to you with the background knocked out. Your request for blueberries in a bowl… again, a single request. It is certainly possible/probable that the alpha channels for the images returned to you were determined using machine learning tools.
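As an illustration, a knocked-out background is essentially just an alpha channel. A minimal Python sketch, using Pillow, with hypothetical filenames, of compositing such a cut-out over a background:

# Composite an RGBA cut-out (e.g., an apple with its background knocked
# out) over a backdrop, using the cut-out's own alpha channel as the mask.
# Filenames are hypothetical.
from PIL import Image

background = Image.open("table.jpg").convert("RGBA")
apple = Image.open("apple_cutout.png").convert("RGBA")

# The third argument tells paste() to use the apple's alpha channel,
# so only the opaque pixels land on the background.
background.paste(apple, (200, 150), apple)
background.convert("RGB").save("composite.jpg")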

If you were to request an apple floating in the air over a bowl of blueberries that is sitting on a weathered wooden table on a beach on the edge of a jungle, with waves breaking on the outside edge of an azure lagoon… that resulting image would be what the kids are calling generative.

Thanks for all your non-artificial contributions to using C3D!

cheers,
gsb
 
If you really want to see AI at work in PS, use it for extending pictures, or for removing subjects in a pic and replacing them with something else.

The other way to use it would be for things like "dinosaurs at a tea party." It can be fun.

Bob
 
If I use Generative Fill in PS, it would be disappointing if it were
someone's photographic work, and it might constitute false advertising.

I would hope that the Stock Images would be a separate thing.

Does PS use Firefly for AI?
I read that if AI images are generated by a beta version
of Firefly, they can't be used for commercial purposes.
 
After a little more research, I found that it is Firefly that is used,
and an internet connection is required, as it is server based.

It also supposedly creates an entirely new image based on their stock images, but who knows.
I might use this for an apple or a baseball, but dinosaurs at a tea party I would have to create myself.

I don't want to use Generative AI to make creative decisions.
But for the mundane processes, like an image gopher, it seems very helpful.
 
You might try out Diffusion Bee. It's free and works on your local machine. No need for an Internet connection or having the image processed on an outside server. https://diffusionbee.com/

It comes with a few different training models. For a while you could download different AI training models from Civitai by signing up, but they may have locked some of that down. I've mostly used it for brainstorming ideas for graphic design projects that I then draw the way I always have. Most of my stuff is vector artwork done in Illustrator. I could see where these would be helpful in generating UV maps, especially something like a non-repeating wood texture. https://civitai.com/
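If you'd rather script the same kind of local generation Diffusion Bee does, here's a rough sketch with the Hugging Face diffusers library (an assumption about a comparable workflow, not Diffusion Bee's own API; the model name and prompt are just examples):

# Local text-to-image generation with Stable Diffusion via diffusers.
# Everything runs on your own machine; no server round-trip.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("mps")  # "mps" on Apple Silicon; use "cuda" on an NVIDIA GPU

image = pipe("a whole red apple on a white background").images[0]
image.save("apple.png")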
 
Thanks @Swizl, that sounds interesting, given that Adobe
PS AI uses credits and you get 500 with a PS license.

Which means you have to pay when the credits run out.
 
For 3D models, you can try Luma AI Genie. (This is the AI I used to make Dumpling the bear and the dancing rabbits.) It's not always perfect, but often pretty good, for example for background objects you need quickly, without much effort. You can also generate the mesh in quads at low, medium, and high resolution. It's free, but I don't know how many tries you get at the moment.
I had it running via Discord, but it no longer works there - at least for me. But it still works directly on the site.

The models only look good once you have created a hi-res version. But that can take a while.

I'm currently planning something with a circus theme. For that, the characters don't have to be so perfect. I was thinking of an acrobat squirrel, a harlequin dog, and a ringmaster rabbit.
On closer inspection, you can see that they're not that great, but they're enough for me in this case:

[Attached images: harlekin2.png, ringmaster.png, manege.jpg]
 
Ah, OK, I didn't know that the PS thing had a number of credits. I've only tested the AI functions in PS and Illustrator. But they aren't super helpful for the technical type of illustrations I do anyway.
 
Interesting. I'll have a look. Cute animals too. :D
 
Thanks @Lydia, looks pretty cool despite the drawbacks you mentioned.
Might be really useful for storyboarding/brainstorming.

@Swizl So many AI options all of a sudden, and how long will the free ones stay free?

I doubt I'll use all 500 PS AI credits anyway, even if they are not renewed every month (I don't know if they are).
 
I agree. Some business models are set up to get people hooked on a service, and then eventually the subscription gets rolled out. At least with Diffusion Bee, you'd still be able to use the last version that worked on your machine if that ever happens.

I'm curious whether the 500 credits roll over every month. I don't use it enough to even come close to finding out. I'm still mostly using Content-Aware Fill and then cleaning things up manually with the brush and stamp tools to remove background items. (shakes fist at clouds)
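For those manual-cleanup jobs, classical (non-generative) inpainting can sometimes save a few brush strokes. A small Python/OpenCV sketch, with placeholder filenames, as a rough analogue of what Content-Aware Fill does:

# Remove an object by painting it white in a mask image first, then
# letting OpenCV fill the region from the surrounding pixels.
import cv2

img = cv2.imread("photo.jpg")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # white = pixels to remove

# Telea's algorithm propagates nearby texture/color into the masked area.
result = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("cleaned.jpg", result)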
 
Just looked it up: it's 250 credits, and they renew every month.

Adobe says: "The consumption of generative credits depends on the generated
output's computational cost and the value of the generative AI feature used."
 
You raise a good point about the nature of what we're seeing in Photoshop. It sounds like the images you're referring to are indeed stock photography that's been processed to fit specific requests (e.g., with backgrounds removed), rather than fully generative works created from scratch. When you make simple requests for well-known objects, such as an apple or blueberries, it's likely pulling from a vast library of pre-existing photos and then doing some AI-driven manipulation like background removal.

On the other hand, as you mentioned, when you ask for something more complex and unique, like the apple floating over blueberries on a beach, that's when you truly start to see the generative capabilities come into play. Creating an entirely new scene from scratch with those specific elements is where AI really shines and can create something that's never been seen before.

Thanks for sharing your thoughts, and I really appreciate the insights you've contributed to the community as well. Cheers!
 
This guy uses Photoshop generative backgrounds like traditional matte painting for video. That is, he extends his video canvas/backgrounds using generative AI to make them epic/cinematic. It's an interesting use case and worth watching. It may be handy for extending your own 3D-modelled scene.
 
Yes, if the background is relatively static, you can do this by creating a single new or modified frame, which then becomes the new background. The person is then copied in with a soft mask so you don't notice where it blends with the background. The trick is to track the original footage beforehand and then apply this movement to the newly created/retouched frame. Then it looks like it's moving, because the camera movement is imitated.
However, it becomes (a lot) more difficult and time consuming when there are dynamic movements in the scene, or when there are objects in the image that obscure the person in some places or at some moments: the masking is then much more complex.
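A rough Python/OpenCV sketch of that idea, with placeholder filenames: track the camera move from the reference frame to each frame, warp the single retouched background by the same motion, and blend the person back in through a feathered mask. (A static mask is assumed here, which is exactly the simplification that breaks down once the movement gets dynamic.)

import cv2
import numpy as np

cap = cv2.VideoCapture("original_footage.mp4")
ok, ref = cap.read()                                 # the frame that was retouched/extended
ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)

background = cv2.imread("retouched_frame.png")       # AI-extended version of that frame
mask = cv2.imread("person_mask.png", cv2.IMREAD_GRAYSCALE)  # white = person
soft = cv2.GaussianBlur(mask, (31, 31), 0) / 255.0   # feathered edge hides the seam
soft = soft[..., None]                               # broadcast over the color channels

frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_no += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track features from the reference frame into this frame. (For big
    # camera moves you'd track frame-to-frame and chain the homographies.)
    pts = cv2.goodFeaturesToTrack(ref_gray, maxCorners=400,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, gray, pts, None)
    good_old = pts[status.flatten() == 1]
    good_new = nxt[status.flatten() == 1]
    H, _ = cv2.findHomography(good_old, good_new, cv2.RANSAC)

    # Apply the same camera move to the retouched background.
    warped_bg = cv2.warpPerspective(background, H,
                                    (frame.shape[1], frame.shape[0]))

    # Copy the person back in through the soft mask.
    comp = (soft * frame + (1.0 - soft) * warped_bg).astype(np.uint8)
    cv2.imwrite("out_%05d.png" % frame_no, comp)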
 