When I say “enhance” photography using AI, I don’t mean upscaling, color correction, or anything like that.
No, in this case I’m using it to help me rescue an old photo that was perfect, aside from two failures when shooting.
Eight years ago I came up with a promotional concept for my next blues record: A fake bourbon centered around the record’s title, One Bottle Down, Life To Go.
Six and a half years ago I shot the photo for it. It took a lot of work, and a lot of compositing, to get it right, and even then..
..I hated it.
It was summer. It was hot in the studio. I had limited people available to model for me, and I figured I could “wing it” in my classic guerrilla style.
Let me tell you.. What I wanted did NOT work for my guerrilla style.
We didn’t spend any time on makeup or hair, figuring I’d just fix it in post. The sand I was using to create the dust/crumbling effect just wasn’t “blowing” right.. And, thanks to the heat, we didn’t spend enough time adjusting the lighting on the model’s face, so it just came out wrong.
So, I shelved the piece, and also eventually shelved the record due to similar issues. Heat, a major power outage due to storms, and limited time meant the record was rushed and started on the wrong foot to begin with.
Today, I finally found my footing for both.
A couple of days ago I decided to task my AI server with revitalizing the aforementioned photo. I experimented a bit, but ended up settling on using the AI to generate two parts: A new face, and better dust.
The rest of the photo came out perfect based on my goals, so there was no need to reinvent the whole thing.
Anyway, it took about 900 iterations to generate enough parts of the face for me to be satisfied; I selected 6 images from those 900 and composited them to create the face you see above.
I only did 200 iterations on the sand portion; that was easy, and I ended up using about 7 parts to create the composite dust.
Both were done using img2img on the Stable Diffusion server I run locally. I’m still running the 1.4 model for now; I’ve got 1.5 ready, I just haven’t switched over to it yet.

Also, when I say “iterations,” I should clarify: I don’t literally run one image, look at it, and run another. Instead, I set it up to run 60 batches of 5 and let it churn through the day/night, then dig through the results later. For the sand, I got what I needed while it was still generating, so I was able to cancel the run and save some GPU cycles.
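If you’re curious what the unattended-batch approach boils down to, here’s a minimal Python sketch of the idea. To be clear, this is not my actual setup (I use my server’s own batch settings rather than a script), and `generate_batch` is a hypothetical stand-in for whatever img2img call your tooling exposes:

```python
def plan_batches(total_images, batch_size):
    """Split a big overnight run into batch sizes, e.g. 300 images as 60 batches of 5."""
    full, rem = divmod(total_images, batch_size)
    return [batch_size] * full + ([rem] if rem else [])

def run_unattended(total_images, batch_size, generate_batch, keep=None):
    """Run batches without supervision.

    `generate_batch(n)` is assumed to return n generated images;
    `keep` is an optional filter so only promising results pile up.
    """
    kept = []
    for n in plan_batches(total_images, batch_size):
        for img in generate_batch(n):
            if keep is None or keep(img):
                kept.append(img)
    return kept

# Example with a fake generator standing in for the img2img call:
fake_generate = lambda n: [f"image_{i}" for i in range(n)]
results = run_unattended(15, 5, fake_generate)
```

The point is just that the generation loop runs by itself; the human judgment (digging through hundreds of outputs for the 6 or 7 usable parts) happens afterward.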
Once I got everything put together, I did some minor cleanup, merged everything, and did some post-processing to match the colors to my current style and add that classic high-ISO grain I love to use in my photography. (It actually worked out that I’d shot it at a low ISO originally and cleaned up the noise, since that let me apply a uniform grain across the whole piece.)
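That uniform-grain step is easy to approximate in code, if you want to try it yourself. I did mine in my photo editor, but here’s a hedged NumPy sketch of the same idea: overlay monochrome Gaussian noise so the grain is consistent across the whole composite, which is exactly why denoised, low-ISO source material helps.

```python
import numpy as np

def add_grain(image, strength=12.0, seed=None):
    """Overlay uniform monochrome 'high-ISO' grain on an 8-bit RGB array.

    `strength` is the standard deviation of the noise in 8-bit levels.
    """
    rng = np.random.default_rng(seed)
    # One noise plane shared by all channels, so the grain reads as film-like
    # luminance grain rather than color speckle.
    noise = rng.normal(0.0, strength, size=image.shape[:2])
    grainy = image.astype(np.float64) + noise[..., None]
    return np.clip(grainy, 0, 255).astype(np.uint8)
```

A quick usage example: `add_grain(photo, strength=12.0)` on a `(H, W, 3)` uint8 array gives a visibly grainy but tonally identical image.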
Then I added the typography and a design element (rainbow circles around the hand), aiming for the vintage style I had in mind, and.. Boom.
The title of the new album, “Aged to Dust,” is born.
Aged to Dust will feature numerous songs from my days as a street performer, recorded in a stripped-down, dirty style with minimal instrumentation. It will also feature a couple of never-before-heard songs that I’ve been writing.
I expect to complete recording late December, if not sooner, and publish late January/early February.