I have been experimenting with using my own photography as a source image to influence image generation with Stable Diffusion.
The rationale behind this is straightforward — I have taken lots of photos over the years, but the conditions under which they were taken were not always ideal. Photography is subject to a myriad of influencing factors, and even with a specific vision in mind, nature may not cooperate to allow that vision to be captured perfectly. Similarly, the ideal setting we imagine might simply not exist in reality.
That’s where the power of AI comes in. I’ve discovered its phenomenal ability to materialize the concepts I feed into it. A potential point of contention is that since these images are AI-generated, they might be perceived as the product of AI rather than my own creativity. This viewpoint occasionally arises among those unfamiliar with AI work. I personally disagree, but I’ll reserve that discussion for a separate post.
Considering these images are the fusion of my original photographs and my descriptive text prompts, the notion that they’re devoid of human creative input seems baseless. As such, in my opinion, they are completely entitled to copyright protection.
I plan on experimenting with this further, exploring diverse series and themes. My archive is filled with photographs that I find intriguing but that seem to be missing the final touch needed to elevate them. I’m optimistic that by leveraging text prompts, I’ll be able to address these shortcomings and conceive new creations that blend my photography with the power of imaginative prompting.
Stable Diffusion with SDXL, using my own photography as the source image.
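For readers curious about the mechanics: this kind of image-to-image workflow starts from an existing photo rather than pure noise, and SDXL's base model works best around 1024×1024, so the photo typically needs a square crop and resize first. Below is a minimal sketch of that preparation step, assuming Pillow is installed; the function name and sizes are my own choices, not part of any library API.

```python
from PIL import Image

def prepare_source(photo: Image.Image, size: int = 1024) -> Image.Image:
    """Center-crop a photo to a square, then resize to SDXL's native resolution."""
    w, h = photo.size
    side = min(w, h)                       # largest centered square that fits
    left, top = (w - side) // 2, (h - side) // 2
    square = photo.crop((left, top, left + side, top + side))
    return square.resize((size, size), Image.LANCZOS)

# Example: a 3000x2000 landscape frame becomes a 1024x1024 square.
src = prepare_source(Image.new("RGB", (3000, 2000), "gray"))
```

The prepared image can then be fed to an img2img pipeline such as diffusers' `StableDiffusionXLImg2ImgPipeline`, where the `strength` parameter (between 0 and 1) controls how far the generation is allowed to drift from the original photograph: lower values preserve more of the source composition, higher values give the prompt more influence.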