
How I Create AI Visuals for Better Videos

Here's my process for generating AI visuals that fit the story being told

Welcome back to The Signal, a weekly letter where I share stories, trends, strategies and insights to help you level up as a creator or entrepreneur and win on the internet.

Today’s topics:

  • How to use AI to tastefully create visuals for your videos

  • My workflow and process for making videos

How to Create Tasteful AI Visuals for Better Videos

I use AI to create images and videos that help tell stories, and the content always performs really well.

The reason is that AI lets you create very compelling visuals that are perfectly contextual to the story being told.

This beats stock footage every time.

Here’s how I do it.

Step 1: Gathering Inspiration

So the first thing we need to do is figure out the style of visuals we want to create.

When I make my cinematic editorial videos, I often try to build a ‘world’ around them. The music, fonts, visuals, etc. all need to fit cohesively into the micro-universe of that specific video.

So to start, I’m generally looking for visuals I’ve been inspired by lately.

For example, check out this tweet by Techbimbo. This lo-res party girl aesthetic is trending right now, and I think it looks great.

Let’s use it.

Step 2: Prompting Process

Okay, now that we know how we want the visuals to “feel,” we are ready to prompt.

Now, we can channel our inner art director and design the perfect prompt to match the visuals we’re after… or we can use an LLM – a much easier and more effective route.

Let’s simply upload the visual reference to ChatGPT and ask it to give us a text-to-image prompt for this lo-res party girl aesthetic.

Voila. It even gives us some nice optional add-ons to try.
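If you ever want to script this step instead of using the ChatGPT web app, the same request can be made through OpenAI’s API. Here’s a minimal sketch of the payload-building half; the helper name, model name, and prompt wording are my assumptions, not part of the workflow above:

```python
# Sketch: the "upload a reference, ask for a text-to-image prompt" step,
# scripted against OpenAI's chat API. build_messages() is a hypothetical
# helper; only the payload shape comes from the real API.
import base64

def build_messages(image_bytes: bytes, aesthetic: str) -> list:
    """Pair an instruction with a base64-encoded reference image."""
    b64 = base64.b64encode(image_bytes).decode()
    return [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": f"Write a text-to-image prompt that recreates this {aesthetic}."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }]

# Usage (needs the `openai` package and an API key; the model name is a guess):
# from openai import OpenAI
# resp = OpenAI().chat.completions.create(
#     model="gpt-4o",
#     messages=build_messages(open("reference.jpg", "rb").read(),
#                             "lo-res party girl aesthetic"))
# print(resp.choices[0].message.content)
```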

So now, we bring it into Freepik.

I use Freepik because it has every model available, and in my opinion, the UI/UX is best-in-class.

Now, we can just enter the above prompt as is… not bad:

OR – we can upload our visual reference as a STYLE reference:

Much better.

Now, we want to create a ton of other visuals from the same world.

We go back to ChatGPT and ask it to give us many more prompts of different people and shots from around the party.

It gives us endless prompts to copy and paste to build a cohesive lo-res party universe.

That’s dope.

Now we have a ton of visuals that feel as if they were shot from the same camera roll.
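What keeps a batch like that cohesive is that every prompt shares the same style wording. A tiny sketch of the idea in code – the shot list and style text here are illustrative stand-ins, not the exact prompts ChatGPT produced:

```python
# Sketch: one shared style block appended to every shot description keeps
# the whole batch in the same visual "world". All wording is illustrative.
STYLE = ("lo-res party aesthetic, harsh on-camera flash, grainy, "
         "early-2000s digicam, candid framing")

SHOTS = [
    "two friends laughing on a crowded dance floor",
    "close-up of hands holding sparklers",
    "DJ silhouetted against strobe lights",
    "group selfie in a dim hallway",
]

def build_prompts(shots, style=STYLE):
    """Append the shared style block to each shot description."""
    return [f"{shot}, {style}" for shot in shots]

for prompt in build_prompts(SHOTS):
    print(prompt)
```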

Step 3: Image to Video

Now it’s time to animate the images.

In Freepik, simply hover over the image you want to turn into a video and click on “Video.”

Here we are met with a ton of different video models to choose from.

I generally just use “AUTO” because Freepik seems to have a secret sauce that chooses the best model based on the image and prompt.

But different models do different things well. Some have audio, some don’t.

When I specify, I mostly use Kling 2.5 or Veo 3.1 at the moment, but you want to experiment as much as possible here.

As for the prompt, you can leave it blank and let Freepik’s AI choose for you, or you can enter your own.

Again, if you’re not a natural prompter, tell ChatGPT exactly what you want to see, and ask it for an image-to-video prompt to make it happen.

And there we go. We now have videos to insert into our content.

You can run this playbook for every video you make.

Now you see why I always say that TASTE is the moat in the next era, as the creation process becomes commoditized and the barriers to creation are removed.

Bonus: Animated Text

Another really cool thing that I sometimes use AI for is to animate text. This works well for titles and captions.

For example, in my recent Mike Tyson video, the word “PAIN” was animated at the 0:10 mark.

I wanted tears to drop down from the word. I could have spent hours in After Effects to achieve this, but instead I did it in seconds with AI.

I created a static image of the word PAIN in red using “las valles” font in Photoshop, brought it into Freepik, selected Veo 3 as the model, and simply typed “tears of pain drip out of the logo, which has a wet appearance” as the prompt.

My Process for Making My Videos

I shared my entire workflow for making my bread-and-butter video format – from ideation to scripting to editing to deployment – on my friend Greg’s podcast.

Check it out:

Things I’m Loving Lately

This section includes products, services, creators and more that I'm loving lately. Everything below is organic and non-sponsored unless indicated.

Brynn just launched Board, a digital/physical board game hybrid that looks like such a fun product. I am surprised no one built this sooner.

My friends at 1X made their flagship robot, Neo, available for pre-order. Here is my video covering the news. I plan on ordering one for the novelty of having it in my upcoming studio and featuring it in my videos.

I don’t expect it to be clearly useful for the average person for ~5 years.

Clearly useful and affordable enough for critical-mass adoption… 10+ years.

But still an incredible milestone – the culmination of decades of research and scientific breakthroughs.

Quick Hits

This section includes things I have found interesting and helpful this week.

  • Meta just launched Meta AI auto-translations for reels. I think this is a massive update. Here is the video I made in partnership with Meta detailing the feature.

  • Cursor just released Cursor 2.0. I haven’t tried it, but I’m curious how it compares to (the current goat) Claude Code.

  • Google introduced Pomelli, a tool to help you generate infinite, on-brand content. Here’s my video on it.

Roberto Nickson