How to Use AI as a Creator
Use AI to augment your creative process
Hey crew! I have been using generative AI to streamline my workflow and creative process, and to improve the quality of my output.
In today’s issue, I am going to cover the best ways I have found to leverage AI as a creator – ways you can hopefully integrate into your own creative workflow.
Ideating and Scripting
This is perhaps where AI has been most helpful and impactful in the creative process. Whether you use ChatGPT, Bard, Claude or others – being able to go back and forth with an LLM to help craft the perfect script or copy is incredibly useful.
We’ve integrated GPT-4 into Eluna so you can use it in conjunction with the ideation, writing and media generation tools. It is where I craft all of my scripts for my content.
There are many ways to leverage an LLM to write a great script or tell a great story. Here are 20 great prompts that my friend Eder posted to help craft a unique, high-retention story.
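If you want to script this kind of back-and-forth yourself rather than work in a chat window, here is a minimal sketch of an iterative refinement loop. It assumes the OpenAI Python SDK; the system prompt, model name, and helper function are illustrative placeholders, not a fixed recipe.

```python
# Sketch of an iterative script-refinement loop with an LLM.
# Assumes the OpenAI Python SDK (`pip install openai`); the prompts
# and model name below are illustrative placeholders.

def build_refinement_messages(draft, feedback_rounds):
    """Assemble a chat history that layers revision notes on top of a
    draft, so the model sees the full back-and-forth each turn."""
    messages = [
        {"role": "system",
         "content": "You are a script editor for short-form video. "
                    "Tighten hooks, pacing, and calls to action."},
        {"role": "user", "content": f"Here is my draft script:\n{draft}"},
    ]
    for note in feedback_rounds:
        messages.append({"role": "user", "content": f"Revision note: {note}"})
    return messages

messages = build_refinement_messages(
    draft="Today I'll show you three AI tools that save me hours...",
    feedback_rounds=["Make the hook punchier", "Cut it to 60 seconds"],
)

# To actually request a revision (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4", messages=messages)
# print(reply.choices[0].message.content)
```

The point of accumulating the revision notes in one message list is that each new pass keeps the full editing context, which is what makes the back-and-forth with an LLM feel like working with an editor rather than re-prompting from scratch.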
AI Video Generation
Runway’s text-to-video and image-to-video tools are my favorite.
For video-to-video, I prefer Kaiber’s intuitive UI and quality output.
Here’s a video of me using Runway to create a movie trailer.
We also offer a very unique video product on Eluna we call Motion Blend, built using Stable Diffusion Deforum. Using the iOS app, you upload an image or video (or use plain text) to create custom and compelling videos, like the one I just created here.
In my opinion, the best AI avatar generator is HeyGen (slightly ahead of Synthesia) – it can generate an unbelievably realistic avatar version of you.
I covered the software and its implications in the reel below:
One of the AI tools I use most is the generative fill available in the Photoshop Beta. Add details to a scene, expand a shot, and more – all while maintaining the image style and aesthetic.
Here’s the video I made showing it in action.
AI-powered captions are another tool I rely on heavily. All of the major editors now have them integrated – Premiere Pro, DaVinci Resolve, CapCut, etc.
I made a thread detailing how I create my captions here.
DaVinci Resolve (which is free) just added an AI-powered relight feature, which is very useful: you can make lighting adjustments to a scene without having to reshoot.
Generative AI can dramatically decrease editing time for podcasters:
Help with ideating
Automatically edit a multi-camera, multi-microphone podcast
Automatically create viral shorts from your long-form video
Autopod for Premiere Pro will automatically edit multi-camera sequences for you. It will save you hours of time in your post-production process.
There are also a lot of great AI-powered tools that will automatically cut your long-form content into bite-sized, short-form clips.
Opus is a great example. I’ve tried it and recommend it. It uses AI to analyze your video, pick the gold nuggets from different parts, and seamlessly rearrange them into viral short clips that stand on their own.
See a demo of it in action here.
Speaking of podcasting, Adobe has a great product suite dedicated to it.
The best feature is the audio enhancer, and it works remarkably well. You can take iPhone audio recorded in a relatively noisy room and clean it up to sound like it was recorded in a professional studio. It’s not perfect, of course – it sometimes produces odd octave shifts – but give it a try to clean up poor audio.
You can also generate voice narration, music and sound for your content as well.
For faceless media content, this is an easy way to create high-quality narration with a strong variety in voice, delivery and cadence styles.
It’s also a fantastic way to dub over a piece of content with your voice even if you don’t have access to professional audio equipment.
We’ve built Meta’s new music generator into Eluna with an easy-to-use UI.
This post explains how to use it. It’s super intuitive and is a great solution for achieving completely unique instrumentals for your content.
3D Modeling & NeRFs
Luma Labs is one of my favorite AI tools on the market. The app is incredible and easy to use. You can create a 3D model of any object or terrain (using a drone) with a simple capture system or by uploading a video directly from your phone.
This is a lot of fun to get creative with. You can use it for unique transitions (like I did in this video) and for other compelling visuals.
As an example, say you’re making a real estate video highlighting a home for sale. You could easily create a NeRF (neural radiance field) of the exterior and interior of the home to show it off in full detail and with context.
NeRFs unlock a lot of visual techniques that were very difficult to achieve before and required very specific and expensive equipment. Here’s KarenX showing off a NeRF dolly zoom effect, as an example.
Here are a couple of screenshots of me using Luma to capture a NeRF of my Model Y.
Runway also has a great 3D texture tool, which you can use to texture meshes in your 3D projects.
VFX and CGI
AI is dramatically improving creators’ ability to integrate VFX into their content. Wonder Studio by Wonder Dynamics is an incredible tool that automatically animates, lights and composites CG characters into a live-action scene. No need for expensive and complicated motion capture rigs.
Here is the video I made showing it in action.
Another very promising tool (currently in beta; I haven’t tried it yet) is Simulon.
🔦 Community Spotlight
In this section, I will be highlighting people who have been kind enough to share this newsletter.
☝️ Want to be featured here? Share this newsletter with your friends/audience (your referral link is at the bottom of this email) and I will include you, and whatever it is you’d like to share in a future issue of The Signal.
🎁 Share the Signal
When you refer new readers to this newsletter, you earn rewards (and soon, merch!).
Note: please do not use fake email addresses – they will not qualify as referrals. Thank you!