Combining forces for artificial art

TL:DR – It is possible to use ChatGPT to generate a novel prompt that can be used to prompt an AI image generator to make novel images.


I asked ChatGPT to act as an expert user of AI image generation bots and told it to write an evocative prompt depicting music and science flowing together, with multiple artist and style references. I added that it should also incorporate commands to ensure the image is photorealistic and high-definition.

“Imagine a futuristic symphony where the notes are composed not just by a human maestro, but by an AI conductor merging the musical influences of Beethoven, Bach, and Skrillex. The performance takes place in a shimmering, high-tech concert hall that blends elements of the Sistine Chapel and the inside of a particle accelerator. Visualize a fusion of classical and electronic music, where the instruments themselves seem to be made of light and energy. The musicians are performing in harmony with a mesmerizing light show, creating a cosmic ballet of sound and color. This is the art of music and science intertwined, a celebration of the beauty of both human and machine creativity.

Please generate the most photorealistic, high-definition, high-resolution image possible, using state-of-the-art AI image generation techniques. Pay close attention to the details, including the intricate textures of the instruments and the concert hall, the colors and patterns of the light show, and the expressions of the musicians as they perform this futuristic symphony. Make sure the image captures the essence of music and science merging together in perfect harmony.”

These are the first-pass results:

Artificial Art

TL:DR – I have pinned examples of the images I have generated with the Mid Journey tool to my Pinterest page.


Regular readers will know that I have been somewhat distracted with what people colloquially refer to as AI bots recently, mainly the language model ChatGPT and the image generator Mid Journey. Neither of these tools actually represents artificial intelligence; they are just text and image manipulators.

I have used output from both in various articles here and have pinned the best images on my Pinterest page. A select few are displayed below. I’ve included the prompts I used for each as a comment on each entry in the Pinterest gallery.

Escher’s Octopus
The Death of AI
Incoming Comet
Off-roading
Not Peter Gabriel Masked
Classical Masks
Marilyn Masked

Ironically, I asked ChatGPT to tell me about generative art; this is the generative text it came up with in response:

Generative art is a form of art that is created with the use of algorithms, machine learning, and artificial intelligence. It involves using a computer program to generate or manipulate visual, auditory, or textual content.

Generative art can take many forms, including digital graphics, animations, music, and even physical installations. The process of creating generative art involves defining a set of rules or parameters that the computer program will use to generate the artwork. These rules can be very simple or very complex, depending on the desired outcome.

One of the main benefits of generative art is that it allows artists to create complex, intricate patterns and designs that would be difficult or impossible to create by hand. Additionally, generative art can create unique, one-of-a-kind pieces that are impossible to replicate exactly.

There are many tools and software available to create generative art, including Processing, OpenFrameworks, and Max/MSP. These programs provide artists with a wide range of tools and features to experiment with, allowing them to create truly unique and innovative pieces of art.

Overall, generative art is a fascinating and rapidly evolving field that combines art, technology, and artificial intelligence. As more artists explore the possibilities of generative art, we can expect to see even more exciting and innovative creations in the years to come.

There are several AI tools that can be used for generative art. Here are a few examples:

  1. Neural Style Transfer: This is a technique that uses deep neural networks to apply the style of one image to another. Artists can use this tool to create new artworks by combining different styles and images.
  2. GANs (Generative Adversarial Networks): GANs are a type of deep learning model that can generate new data that is similar to a training dataset. Artists can use GANs to create new and unique images, videos, and even 3D models.
  3. Evolutionary Algorithms: Evolutionary algorithms are a type of optimization algorithm that simulates natural selection to find optimal solutions to a problem. Artists can use evolutionary algorithms to generate new designs or patterns by iteratively selecting and mutating the most promising candidates.
  4. Processing: Processing is an open-source programming language and development environment that artists can use to create generative art. It provides a simple yet powerful set of tools for creating graphics, animations, and interactive installations.
  5. OpenFrameworks: OpenFrameworks is another open-source toolkit for creative coding that is widely used in generative art. It provides a wide range of features for working with audio, video, and graphics, and can be extended with add-ons for machine learning and other advanced features.

However, while MidJourney and DALL-E are examples of AI-powered tools that are used for creative purposes, they have specific applications and use cases that differ from generative art.

MidJourney is an AI-powered tool that allows users to create unique, custom illustrations using natural language inputs. It is primarily designed for use in marketing and branding, where businesses may need to create custom illustrations for their products or services.

DALL-E is an AI model developed by OpenAI that can generate images from textual descriptions. It is similar to other GANs but is specifically designed for creating unique and original images based on textual inputs. Its primary use case is in generating images for research, design, and storytelling purposes.

While these tools may use some of the same underlying AI technologies as generative art, they are designed for specific purposes and have different features and capabilities. Generative art, on the other hand, is a broader field that encompasses a wide range of creative expressions and applications, from visual art to music and beyond.
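To make that “set of rules or parameters” idea a little more concrete, here is a tiny rule-based sketch of my own in plain Python (standard library only, nothing to do with the sophisticated tools the bot lists above) that writes a generative pattern out to an SVG file; every run produces a different image from the same simple rules:

```python
import random

# The "rules": a few random walks of overlapping circles whose position,
# size and colour drift step by step. Tweak the parameters and a new,
# unique piece emerges every time the script runs.
WIDTH, HEIGHT, WALKS, STEPS = 800, 600, 5, 200

shapes = []
for _ in range(WALKS):
    x, y, r = WIDTH / 2, HEIGHT / 2, 20.0
    hue = random.randint(0, 360)
    for _ in range(STEPS):
        x += random.uniform(-15, 15)
        y += random.uniform(-15, 15)
        r = max(2.0, r + random.uniform(-2, 2))
        hue = (hue + random.uniform(-5, 5)) % 360
        shapes.append(
            f'<circle cx="{x:.1f}" cy="{y:.1f}" r="{r:.1f}" '
            f'fill="hsl({hue:.0f}, 70%, 50%)" fill-opacity="0.3"/>'
        )

svg = (f'<svg xmlns="http://www.w3.org/2000/svg" '
       f'width="{WIDTH}" height="{HEIGHT}">' + "".join(shapes) + "</svg>")

with open("generative.svg", "w") as f:
    f.write(svg)
```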

ChatGPT as literary editor

TL:DR – I asked ChatGPT, the language model, to critique and edit one of my short stories with some success.


A couple of years ago I wrote a little clutch of gothic short stories and published them on Sciencebase. Some of them were read by reasonable numbers of visitors to the site. But it occurred to me this morning that the AI language model, ChatGPT, might be able to offer me some insights into my creative writing and whether the short stories might be worth more effort on my part, with a view to getting them published commercially.

As such, I offered up one of the short stories (Off-roading) with the following prompt to the bot:

“You are to act as an expert literary editor compiling a book of short stories with a gothic twist. You should analyse the story presented to you for literary style, reading level, narrative and nuance and then make a decision, which you will explain, as to whether or not the short story should be included in the book.”

This is what ChatGPT had to say about the short story:

This short story by David Bradley is an evocative narrative about a road trip, described with vivid and sensory language. The writing style is engaging and leans towards a more informal, conversational tone. The story has a melancholic, existential undercurrent and is punctuated with moments of wonder and humor.

The narrative structure of the story is chronological, with a clear beginning, middle, and end. The setting and scenery are well described, giving the reader a strong sense of place. The protagonist’s experiences and emotions are central to the story and are described in a manner that is relatable and engaging.

While the story is not gothic in the traditional sense, its themes of existentialism and the transitory nature of life can be seen as having a gothic nuance. The final scene of the story, with the car’s deviation into boulders, can be seen as a symbol for the unpredictability of life and the uncertain future.

In conclusion, the short story “Off-roading” by David Bradley would be a suitable inclusion in a book of short stories with a gothic twist. The writing style is accessible, the narrative is well crafted, and the themes explored are both evocative and thought-provoking.

I then prompted the bot to suggest edits to tighten the text, but the majority of its suggestions simply removed any of the poetry in the writing. I changed the prompt to make it act as an editor with a more poetic inclination. It still didn’t work. However, I can see in my own re-reading of the story that some of the sentences could be broken down a little and a few typos corrected. So, I’ll do those manually before I think about submitting to a real literary editor…

Meanwhile, the image above was generated by Mid Journey with the following prompt, aimed at creating an illustration for my short story:

A slightly battered mid-price car, careering off a dirt track at night into a boulder-strewn field lined with trees somewhere in the American mid-west. Detailed sketch in the style of 1950s Americana magazines.

How to get real advice from your artificial careers advisor

TL:DR – It is possible to prompt ChatGPT to provide careers advice, and while its output seems staid and obvious it could be used as the basis for further exploration or discussion with a careers advisor.


Bots, such as the now well-known ChatGPT, are really just sophisticated autocomplete machines…instead of suggesting the next word for your text message, though, they generate the most statistically likely set of words to follow your prompt. Indeed, even ChatGPT agrees: “Yes, that’s a simple way to describe what language models like ChatGPT do. They use statistical patterns learned from large amounts of text data to generate text that is coherent and consistent with the input prompt. The goal is to generate text that resembles human-like language and can be used for various natural language processing tasks, such as text generation, conversation, translation, and more.”
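To make that “statistical autocomplete” idea concrete, here is a toy sketch of my own (counting which word follows which in a tiny corpus, nothing like the scale or sophistication of a real language model) that repeatedly picks the most likely next word:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast training data a real model uses
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which (a simple bigram model)
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word, steps=5):
    """Repeatedly append the statistically most likely next word."""
    out = [word]
    for _ in range(steps):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # prints: the cat sat on the cat
```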

So, how do you get the best autocomplete from a prompt? Well, here’s an example. I prompted ChatGPT with the following phrase:

“You are to act as an expert life coach and to advise me on work-life balance, adding details about what I do for a living and my two main creative hobbies, photography and music.”

AI Careers Advisor, as imagined by MidJourney

This is what my AI careers advisor suggested:

Set clear goals: Determine what you want to achieve and set specific, measurable, and achievable goals.

Prioritize: Make a schedule and prioritize your time to ensure that you are dedicating enough time to the pursuits that interest you.

Be efficient: Utilize tools and techniques to make your workflow more efficient, so you can get more done in less time.

Network: Connect with other creatives in your field, attend events and workshops, and share your work on social media to build a following and gain exposure.

Take care of yourself: Maintaining a healthy work-life balance is essential to avoiding burnout and staying motivated. Make sure to take breaks, exercise, and get enough sleep.

Be patient: Building a successful creative career takes time and effort. Stay motivated and keep working towards your goals, even when progress seems slow.

I then asked it how I might set my goals:

When setting goals, it’s important to start with a clear understanding of what you want to achieve. Here are a few tips for determining your goals:

Reflect on your passion and interests: What do you enjoy doing in your free time? What are you passionate about? Your goals should align with your passions and interests.

Assess your current skills: What are you currently good at? What do you need to improve? Your goals should challenge you to develop new skills and improve your existing ones.

Consider your long-term vision: Where do you want to be in 5 or 10 years? Your goals should help you work towards that vision.

Research your field: Look at what other successful creatives in your field are doing and what they have accomplished. Use that as a guide to set realistic goals for yourself.

Be Specific, Measurable and Achievable: Make sure your goals are specific, measurable and achievable, so you know exactly what you need to do to achieve them.

Make them time-bound: Give yourself a deadline to achieve your goal, so you have a sense of urgency and stay motivated.

Remember that your goals don’t have to be set in stone, and you can always adjust them as you learn more about yourself and your interests. It’s important to have a clear idea of what you want to accomplish but also be open to new opportunities and be flexible with your plans.

Some people pay consultants and advisers a lot of money to come up with stuff like this…

How real is artificial mindfulness?

TL:DR – In a perhaps contradictory experiment, I have used ChatGPT to offer readers some advice on mindfulness.


The chat about ChatGPT is endless and intriguing. I caught sight of a debate on social media involving ministers and other churchy people about whether they could use the AI language model to write a sermon…or even a prayer. There was recognition that it was almost certainly happening. So, I thought…I wonder if ChatGPT could produce something on mindfulness, a guide for those who wish to find some kind of enlightenment.

Mid Journey image on being mindful

This is the prompt I used:

You are an experienced mindfulness practitioner with a lot of wisdom and knowledge about the human condition. Give me some line by line advice for a novice who could use that advice as a guide on their life’s journey

This is how the bot replied:

Start small and be consistent: Begin with simple mindfulness practices, such as focusing on your breath for a few minutes each day, and make sure to stick with it. Consistency is key.

Notice your thoughts and feelings without judgement: Observe your thoughts and feelings without getting caught up in them or reacting to them. Simply acknowledge them and let them pass.

Find a balance between acceptance and change: Accept the present moment as it is, but also strive to make positive changes in your life.

Cultivate self-compassion: Treat yourself with the same kindness and understanding that you would offer to a good friend.

Connect with others: Nurture positive relationships and build a support system.

Practice non-attachment: Let go of attachments to material things and focus on what truly matters in life.

Find meaning and purpose: Reflect on what gives your life meaning and strive to align your actions with those values.

Cultivate gratitude: Regularly practice gratitude and appreciation for the good things in your life.

Learn to live in the present moment: Let go of regrets about the past and worries about the future.

Remember that growth is a process: Mindfulness is a journey, not a destination. Be patient and kind to yourself as you continue to learn and grow.

I then asked the bot for an executive summary of the above:

Mindfulness is a journey of self-discovery and growth. By practicing consistency, non-judgement, balance, self-compassion, connection with others, non-attachment, meaning and purpose, gratitude, living in the present moment, and being patient and kind to yourself, you can cultivate mindfulness, gain insight into your thoughts and emotions, and improve your overall well-being.

The image in this article was created by the Mid Journey AI using the following /imagine prompt:

“Photorealistic cartoon of a seated woman in yoga attire in a peaceful natural, sunny setting by the sea surrounded by flowers and being mindful”

This post almost ties in with a recent post I wrote without the assistance of the ChatGPT tool about using one’s internal patience app to improve one’s wildlife photography.

Picturing new art with the artificial paintbox

TL:DR – Some of my experiments with so-called AI for creating a series of interesting images with various themes and styles.


I have recently been using ChatGPT, the language model, to help me come up with different types of headlines for articles. I haven’t used any that it has suggested, but I have prompted it with the text of my articles and asked it to act as a sub-editor on a particular type of publication.

Masked Marilyn Monroe, in the style of Andy Warhol, made by MidJourney

The headlines it devises are always interesting; I usually ask it to tweak them in a particular way and then re-word the result to my own taste. So, for this article, it suggested: “Exploring the Possibilities of AI-Generated Photorealistic Images”, which reflects the content that follows. I asked it to make the headline shorter and use a pun and it replied: “Painting a New Picture with AI-Generated Art”. That’s not bad. Another tweak: “AI Canvas for Next-Generation Photorealistic Painting”.

Its suggestions led me to my own headline: “Picturing new art with the AI paintbox”.

Frida Kahlo masked by MidJourney AI

Okay. So, like I say, I’ve been experimenting with AI. A while back I did some Wombo Art, which you may have seen on here: photorealistic output, but surreal. The AI world has moved on apace with the likes of Dall-E (pronounced like Salvador’s surname), which generates weird images too. I’ve not delved into Stable Diffusion yet, but I did take another look at MidJourney this week, which functions via a server on Discord. I was inspired to try it by a popular social media update that showed four portraits, purportedly of people painted during the Renaissance, but who had more than a passing resemblance to the actors from the BBC comedy Blackadder. Indeed, they were created with prompts that mentioned Rowan Atkinson, Tony Robinson, Stephen Fry, and Hugh Laurie. They were very good.

I thought I’d like to create something not dissimilar but with a different twist. I prompted MidJourney to generate various well-known, classic portraits of women but where they are all wearing a covid facemask.

The first prompt I tried was to recreate Vermeer’s Girl with a Pearl Earring but wearing a mask, then Leonardo’s Mona Lisa, Botticelli’s Birth of Venus, Whistler’s Mother, Warhol’s Marilyn Monroe, and an AI self-portrait of Frida Kahlo (my personal favourites).

AI generated versions of classic female portraits where the subjects are wearing covid face masks: Girl with a Pearl Earring, Frida Kahlo, Birth of Venus closeup, Mona Lisa.

Whistler’s Mother was not too happy in her care home

The thing to remember is that none of these images existed anywhere in the world before I gave the AI a text prompt to trigger it to generate them. It has been “trained” on a large data set of images and “knows” about artists, their paintings, and many different aspects of culture and the world around us, and draws on these [pardon the pun] to generate an entirely new image.

I also came up with various other prompts, such as asking for a Renaissance portrait of David Bowie. This was the best of several attempts:

Renaissance Bowie

You can also prompt it with an actual image. So I did, and asked it to create a cartoon caricature from my website photo. The first image in this AI triptych could almost be a portrait of me as a young man after I first cut my hair short, the middle one could’ve been me in my late 40s, and the last one is perhaps me in 20 years’ time!

AI portrait of the writer as a young, older, and old man

I also fed MidJourney a photo of my band C5 performing live and asked it to generate a photorealistic cartoon image…not entirely sure what its “thought” processes were in this case.

Weird AI interpretation of a photo of my band C5

There are endless possibilities. Going back to the mask theme, I prompted the AI to create a picture of rock star Peter Gabriel in a fantastical covid mask. The results, as reader Keith Walker suggested, are very Silence of the Lambs on Broadway.

Peter Gabriel or Hannibal Lecter on stage

For a serious article about assistive technology on smartphones for the partially sighted, I prompted the AI to generate an appropriate image:

AI sketch of a young, partially sighted woman using assistive tech on her smartphone, in a busy city street

I did so much messing around with MidJourney this week that the software told me I’d used up all my free trial credits and I must now subscribe to carry on using it…

How to add an emoji to your Mastodon username and why you shouldn’t

TL:DR – It is relatively easy to add an emoji to your username on some Mastodon instances, but such silliness reduces the experience for some people who use screen readers.


UPDATE: It’s worth considering the fact that many people use screen readers and that emoji and other such silliness are “read” out sequentially by this type of software. So, pretty as the emoji might look, you have to also consider how it will sound. See also hashtags (which should always be put at the end of any text so that the person using a screen reader gets them after the main point rather than within a block of text).

If you’re on Mastodon and noticed that several people you follow have an emoji next to their username when you view their profile or their updates, you might be wondering how to add one yourself. At the time of writing, I have the Mastodon icon next to my username on my account.

David Bradley's Mastodon header, showing photo of David and his name with the sciencebase.com URL displayed

Well, to make this work you need to check what emoji are enabled on your instance/server. On mine, mastodon.social, I can open the following URL in a browser – https://emojos.in/mastodon.social – and it displays all the available emoji. Alongside each emoji is a keyword flanked by two colons. It is this bit of code that you need to add next to your username in your profile settings. If you’re on scicomm.xyz the link to use would obviously be https://emojos.in/scicomm.xyz to display the available emoji code.

Once you have your chosen code, go to the edit profile page for your account and paste or type the appropriate bit of code to display the emoji you want, and save your settings.
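If you prefer the command line to a browser, most Mastodon servers also expose their custom emoji through a public API endpoint, so a few lines of Python (a quick sketch, assuming your instance hasn’t restricted that endpoint) will list the same shortcodes:

```python
import requests

INSTANCE = "mastodon.social"  # swap in your own instance

# Public endpoint listing an instance's custom emoji (shortcode, image URL, etc.)
response = requests.get(f"https://{INSTANCE}/api/v1/custom_emojis", timeout=10)
response.raise_for_status()

for emoji in response.json():
    # The shortcode is the bit you wrap in colons, e.g. :mastodon:
    print(f":{emoji['shortcode']}:  ->  {emoji['url']}")
```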

Various emoji and the codes needed to use them on Mastodon

Some Mastodon instances allow more emoji than others, and their lists do vary; mastodon.science does not make any available. Some instances, such as med-mastodon.com, allow you to paste the following – :verified: – next to your name and it then displays a “blue tick” to show you’re verified. Of course, you’re not actually verified; there is no verification of this sort on Mastodon. The only genuine verification is to include a link to your own website in your profile and to add a link back to your Mastodon profile, marked with rel="me", in that website’s HTML, which proves that you have access to the back-end of the website and so can edit it.

You can find other Mastodon instances ranked by popularity in my sphere here.

Mastodon logo

Saving us from artificial intelligence

TL:DR – Artificial intelligence (AI) ain’t what it used to be. Back in the day, the notion of AI meant more than algorithms and language models trained on data sets; it encompassed the idea of machine sentience. Now, it seems, it is simply used to refer to various tools, such as ChatGPT and MidJourney. These tools have no intelligence, artificial or otherwise.


If you’ve been following developments in technology over the last few months, you can’t fail to have noticed the advent of the buzz phrase artificial intelligence. There are tools that can take a text prompt and respond with a realistic face, a painting in the style of a well-known artist, or a Shakespearean poem.

Some of these tools, such as ChatGPT, aren’t AI; they’re simply generative language models which, as I have mentioned several times in recent weeks, can translate and edit your text, convert computer code from one language into another, and even generate working computer code given an appropriate prompt. They can write news articles, essays, even research papers, given suitable input.

There are concerns in many quarters that people will use such tools to cheat at school, in their jobs, and to generate digital content that is not strictly authentic. There is also the question of who owns the copyright to content generated by AI.

I would suggest that educators, publishers, gallery owners, and others need to quickly catch up with the technology. It is here, it is now. It is not going away. We all need to adapt to these new tools and recognise that we cannot ignore them just as we could not ignore the invention of the world wide web back in the late 1980s and its introduction to the world in the early 1990s. (See also the internet before it, the television, radio, the telegraph, the printing press, cave paintings, the hand axe).

At the moment, a keen eye can quite readily detect AI output, but the tools are being refined and becoming more sophisticated with every iteration. We need to address the concerns about plagiarism and copyright especially in education and research, but also in the creative industries and other realms of human endeavour.

New tools are needed that recognise AI output and these are in development. But, as with any new tool, we will continue to use it and society will ultimately accept that some content is generated this way rather than handwritten off-the-cuff with pen and ink.

It is likely that search engines will be quick to incorporate technology that recognises AI content on websites and perhaps allows them to lower the rank of such content or otherwise penalise it. Similar tools will become available to educators, just as plagiarism-checking software was developed to reveal where students had lifted content from a website without citation. Conversely, the AI developers could incorporate systems into their tools that “watermark” the output in some way, so that it might look authentic, but a quick scan of the text or whatever would reveal the watermark and betray the user who claimed the content as their own.

On that point, at the moment, I’d see AI tools like ChatGPT and Stable Diffusion as being text manipulation tools. One has to craft a very specific prompt to generate the particular output one receives. A different, but similar, prompt will not get the same output from the AI. The creativity, the authenticity, is in crafting one’s prompt and then in the further processing, editing, and manipulation of that output to make your final product.

Of course, it is then down to the creative to decide whether to declare the tools they used to generate their content. Does every writer declare which word processor they use, which search engines and databases they trawled for information? Does every photographer mention which photo editor they used to adjust curves and levels and to crop their photos? No, they generally don’t, unless they’re offering advice or a tutorial. Of course, photos and documents can carry metadata (the EXIF data in a digital photo, for instance) and so perhaps a similar chunk of metadata could be incorporated into AI output so that interested third parties could check the processes used to create the content. The metadata could include the original prompt, the specific AI tools used, and details of subsequent edits made by the creative.
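Purely as an illustration of that idea (this is a hypothetical structure of my own, not an existing standard or any tool’s actual output), such an embedded provenance block might look something like this:

```python
import json
from datetime import datetime, timezone

# Hypothetical provenance metadata an AI tool could embed in its output,
# along the lines of the EXIF data in a digital photo
provenance = {
    "tool": "ExampleImageGenerator",  # placeholder name, not a real product
    "model_version": "v4",
    "original_prompt": "A renaissance portrait of David Bowie",
    "generated_at": datetime.now(timezone.utc).isoformat(),
    "subsequent_edits": [
        {"editor": "human", "action": "cropped and adjusted levels"},
    ],
}

# Serialised, this could travel inside the file or be published alongside it
print(json.dumps(provenance, indent=2))
```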

Now, the question remains…did I use AI to generate the content above…or was it all off-the-cuff and handwritten with pen and ink?

Spoiler alert: It’s entirely original; it was written off-the-cuff and on-the-fly, but on a laptop keyboard in WordPress on the sciencebase.com site.

Algo Bait

TL:DR – I tried to coin a term for the kind of content that is designed to game algorithms.


First, there were years of keyword stuffing by spammy websites trying to get up the search engine results pages (SERPs), then came text-link advertising and guest posts that attempted to game the search engines in the name of search engine optimisation (SEO). Then it was more about clickbait: make a headline attention-grabbing enough (regardless of the actual quality of the content to which it points), people visit and share those pages, and the spammers attempt to go viral.

Now, it’s Algo Bait, algobait: material created, headlined, and linked in order to game the social media algorithms. It should all be avoided; it’s all BS.

I asked ChatGPT to explain the term algo bait; this is what it had to say about a term I thought I’d only just coined!

Algo bait is a neologism used in the context of online marketing and advertising. It refers to a technique used to attract algorithmic attention and improve a website’s ranking on search engines, social media, and other online platforms.

Similar to “click bait,” which refers to sensational or provocative headlines designed to attract clicks, “algo bait” refers to content that is designed to attract algorithmic attention. This can include specific keywords, meta tags, and other optimization techniques that are intended to increase the visibility and ranking of a website on search engine results pages (SERPs).

In short, algo bait is a technique used to optimize online content to attract the attention of algorithms and improve a website’s ranking, visibility, and ultimately drive more traffic to a website.

Ticking clocks

TL:DR – My songwriting process for a recent composition entitled Ticking Clocks. Stream or download it here.


About a week ago, I mentioned that I was putting together a new song. I’d recorded a so-called 1+1 demo (just me singing with Martin guitar). But, earlier this week I hooked up with Adam, the drummer from my band, and we worked on the arrangement in his ad hoc studio, with me on Taylor guitar (DI, direct injection, into the desk, U-phoria UMC404HD) and him on drums (electronic drums with an acoustic kit setting, also DI into my desk).

Once we had things just about right and had recorded a decent take of the drums and guitar, Adam added some nice retro 70s synth (the synth sound is called “Chick Corea”). I then returned to my home “studio” to record vocals (MXR mic), to add some electric guitar (Ibanez RG, EQ’d to a jazzy tone), Yamaha bass, and even a bit of tuned percussion in the form of MIDI glockenspiel played on my Akai keyboard (MPK mini).

I then mixed down the multiple tracks into the final song using one of the less well-known, but just as good digital audio workstations (DAWs), Acoustica Mixcraft. There’s really no need to break the bank on Pro Tools, Logic or whatever at this level. After all, most of the heavy lifting is done by the VST (virtual studio technology) plugins that even the professionals use in their pricey software. Mixcraft is basically GarageBand for Windows.

I then worked up a photo of a broken clock taken by Adam to create this montage with the title of the song and our names. Oh, and I recorded the ticking of a working clock for the very end of the track.

Artwork for Ticking Clocks song showing a clock with broken hands

Lyrically, I am still thinking this is about the young woman in The Beatles’ song She’s Leaving Home. We never knew for sure why she left, but it seems there were issues. My song is almost a sequel to that song and perhaps we can imagine she is called Luka, as in the Suzanne Vega song; perhaps even the suitcase in my song is the very same suitcase that was ‘pulled from under the bed’ in the Squeeze song Another Nail in my Heart. Now she’s back at her childhood home and they are gone. However you take it, The Beatles, Suzanne Vega, Squeeze and perhaps (as Adam pointed out) even The Style Council were influences on the final song, and a touch of Rush with the intro, I confess.

Ticking Clocks

The key turns in the lock
she pushed the door wide open
with her suitcase. It’s a shock
to find herself back here once again

She locked the secrets deep inside
Unanswered prayers her woe betide

She finds the clocks
They stopped the day she stepped away
From the cruellest of hard knocks
and all the fear that it revealed

The silent screams, the sound that mocks
Those echoes drowned by ticking clocks

No need to knock
She grasps the letters from the floor
You know she’s taking stock
And gasps to know the secrets
they don’t hold her any more

The key’s inside the box.
She didn’t need to worry
About secrets locked up deep inside
Unanswered prayers her woe betide

She winds the clocks
And knows the secrets they can’t hold her
The cruellest of hard knocks
and all the fears that they concealed

She locked the secrets deep inside
Unanswered prayers her woe betide

She faced the shocks
And knows the secrets they can’t hold her
The hardest of hard knocks
And all her fears now gone forever

The latest iteration of my song Ticking Clocks is available to stream for free on BandCamp, or name your price and you can download it.

You will hopefully listen all the way through and hear the proggie addition of a ticking clock at the end that has a growing reverb bloom and pans away from centre…well, a friend mentioned that he liked that addition to the song and assumed it was some grand long-case clock. Unfortunately, the reality of my home studio is much more mundane…

This screengrab shows the tools I used to make the clock sound much better: compression and reverb.
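In Mixcraft that was just a couple of stock plugins on the clock track. Purely to illustrate the effect rather than recreate those plugins, here is a rough numpy sketch of the same idea: a feedback echo whose wet level grows over the clip (the “bloom”) while the sound pans away from centre:

```python
import numpy as np

def clock_tail(mono: np.ndarray, sr: int = 44100,
               delay_s: float = 0.25, feedback: float = 0.6) -> np.ndarray:
    """Crude echo-style 'reverb bloom' plus a centre-to-right pan sweep.

    A stand-in for the DAW's compressor/reverb plugins, not a recreation.
    `mono` is a 1-D float array of samples at sample rate `sr`.
    """
    delay = int(delay_s * sr)
    wet = mono.astype(np.float64)
    # Simple feedback delay line: each repeat is quieter than the last
    for i in range(delay, len(wet)):
        wet[i] += feedback * wet[i - delay]

    # The wet level grows over the clip, so the tail "blooms"
    bloom = np.linspace(0.2, 1.0, len(wet))
    mixed = (1 - bloom) * mono + bloom * wet

    # Pan from centre out to the right over the length of the clip
    pan = np.linspace(0.5, 1.0, len(mixed))
    stereo = np.stack([mixed * np.cos(pan * np.pi / 2),
                       mixed * np.sin(pan * np.pi / 2)], axis=1)

    # Normalise to avoid clipping on export
    return stereo / np.max(np.abs(stereo))
```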

I had tried to use ChatGPT to help with the lyrics and the title, but none of its suggestions worked in any way. I also used MidJourney AI to generate some poignant artwork, but none of that really fit either. Top-left works best; top-right is spoiled by a hovering suitcase; bottom-left is too old; bottom-right too young.

None of the versions in this generation work, although bottom-left could inspire another iteration, where the girl is standing looking into the house and it’s raining indoors!