How to Create Engaging AI-Generated Storytelling Videos
Okay, so I created a short video using AI. It opens like this:

“A long, long time ago in Egypt, there was a baby named Moses. He was born at a time when things were hard for his people. All baby boys born to the Israelites were to be taken away.”

“Where’s the baby? Bring him out right now! I want him now!”

“I won’t let that happen to my baby. I have a brave plan. I’ll place Moses among the reeds along the Nile River and see what happens.”

“What a cute baby! I will take him home.”
This entire video was created with AI tools.
And the best part is that videos like this are great for storytelling.
Take a look at this channel that uses AI to create Bible stories, raking in hundreds of thousands of views per video.
But it doesn’t stop there.
This approach works wonders for any kind of story imaginable.
So, in this post, I’m going to show you exactly how you can tell engaging stories like these using AI tools.
But before we get into that, thank you for 1,000+ subscribers.
Your support has been incredible.
Now, let’s get started.
Generate Script with ChatGPT
First things first, let’s create a script for our video.
For this demonstration, I’m going to use this video as a template.
So, let’s quickly extract the transcript using the YouTube Summary with ChatGPT & Claude Chrome extension.
As you can see, the extension has given me a full transcript of the video.
So, I’ll go ahead and copy it to generate our script. I’ll ask ChatGPT to create a story of Moses using the transcript.
Next, we’ll break down our story into different character lines to make narrating easier.
We’ll also include prompts to generate close-up images for each scene.
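By the way, if you’d rather automate this step, the same request works through the OpenAI API. Here’s a minimal Python sketch, assuming the official openai SDK and an OPENAI_API_KEY in your environment; transcript.txt stands in for whatever transcript you extracted:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = open("transcript.txt").read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a scriptwriter for short storytelling videos."},
        {"role": "user", "content": (
            "Using the transcript below as a template, write a story of Moses. "
            "Break it into labeled character lines for narration, and include "
            "a close-up image prompt for each scene.\n\n" + transcript
        )},
    ],
)

print(response.choices[0].message.content)
```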
Create Consistent Characters with DALL-E
With our prompts ready, it’s time to bring our characters to life.
Consistency is key, so let’s begin by creating detailed descriptions of our characters.
Let’s kick things off with the Pharaoh.
Now, I’ll have ChatGPT generate a description for an ancient Pharaoh and create a prompt based on that description.
Remember, free users can’t generate images with ChatGPT, so I’ll demonstrate this separately for Plus users.
We’ll search for the Consistent Character GPT and start a chat with it.
Now, we will follow the prompt, add a name for our character, and also paste the description we copied.
As you can see, we’ve generated an initial image of our Pharaoh.
So, we can go ahead and include some actions and background details.
Our new Pharaoh appears more muscular than the first attempt, so let’s try again.
This new image looks similar to the previous one, so let’s try one more.
It looks like the original image was the only slightly odd one; the rest stay consistent with each other.
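If you’re on the API instead of ChatGPT Plus, the consistency trick boils down to reusing one fixed, detailed description in every prompt. A minimal sketch with the openai Python SDK and DALL-E 3; the Pharaoh description and scene text are just examples:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()

# Reusing one fixed, detailed description across prompts is what keeps
# the character looking the same from image to image.
PHARAOH = (
    "an ancient Egyptian pharaoh in his 40s with bronze skin, a strong jaw, "
    "a gold-and-blue nemes headdress, an ornate collar, and a stern expression"
)

def pharaoh_scene(action_and_background: str) -> str:
    """Generate one scene featuring the Pharaoh and return the image URL."""
    result = client.images.generate(
        model="dall-e-3",
        prompt=f"Cinematic close-up of {PHARAOH}, {action_and_background}.",
        size="1792x1024",  # widescreen, so we can skip the expand step later
    )
    return result.data[0].url

print(pharaoh_scene("standing on a palace balcony at sunset, arms crossed"))
```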
Create Consistent Characters with Copilot
To save time, I’ll skip the remaining characters and scenes and move on to the free workflow. I’m going to show you a simple trick to get consistent characters out of DALL-E without a subscription. Let’s start by creating a consistent character for Moses’s mother.
First, we need to craft a detailed description of her.
We’ll follow a similar process as we did for the Pharaoh earlier.
Since GPTs are now available to free users, let’s proceed with the same Consistent Character GPT we used earlier.
The first thing we’re going to do here is to paste this command.
I will include it in the video description.
It’s a simple instruction that also prevents the GPT from attempting to create an image, which isn’t possible on the free plan.
We can follow a similar process as before and paste our character description.
As you can see, the GPT has generated a prompt for us to use.
Now, let’s open Microsoft Copilot and paste our new prompt into Designer.
Our initial character has been generated. Next, let’s refine her with an action using prompts from GPT.
As you can see, our updated character closely resembles the original.
We’ll continue this process to ensure consistency across all scenes featuring our character.
As you can see, Copilot has generated images in a square aspect ratio, which doesn’t fit our widescreen format. So we’ll switch over to Adobe Firefly.
Using the generative fill feature, we’ll upload our image and then expand it to widescreen dimensions.
Firefly adds a logo at the bottom, so we’ll leave room for that before we generate our image.
Our image looks good, so let’s download one and import it into Adobe Image Resizer.
Let’s select YouTube dimensions and drag our image, making sure that the Firefly logo is no longer visible.
Now we can go ahead and download our image.
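If you’d rather handle the resize locally instead of using Adobe’s web tool, a plain crop and resize does the job (note this is a simple crop, not generative fill). A quick Pillow sketch, with filenames as placeholders:

```python
# pip install pillow
from PIL import Image

img = Image.open("scene_expanded.png")
w, h = img.size

# Crop to 16:9, trimming from the bottom where the Firefly logo sits.
target_h = round(w * 9 / 16)
if h > target_h:
    img = img.crop((0, 0, w, target_h))

# Resize to YouTube's standard 1920x1080.
img = img.resize((1920, 1080), Image.LANCZOS)
img.save("scene_1080p.png")
```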
Generate Voiceover with ElevenLabs
The next step is to generate our voiceovers.
We’ll start by copying the character lines we prepared earlier and paste them into ElevenLabs.
The first line belongs to our narrator, so let’s choose a voice from the library.
To find the right voice for your narration, check out my full video where I go over the 2,000+ voices in the library to select my favorites.
Let’s select a character and generate our voiceover.
All that is left is to repeat this process for our other characters.
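If you have many character lines, this is easy to batch through the ElevenLabs API. A minimal sketch assuming the official elevenlabs Python SDK; the voice IDs are placeholders you’d copy from the Voice Library:

```python
# pip install elevenlabs
import os
from elevenlabs import save
from elevenlabs.client import ElevenLabs

client = ElevenLabs(api_key=os.environ["ELEVENLABS_API_KEY"])

# One (voice_id, text) pair per character line we prepared earlier.
lines = [
    ("narrator_voice_id", "A long, long time ago in Egypt, there was a baby named Moses."),
    ("mother_voice_id", "I won't let that happen to my baby. I have a brave plan."),
]

for i, (voice_id, text) in enumerate(lines):
    audio = client.text_to_speech.convert(
        voice_id=voice_id,
        text=text,
        model_id="eleven_multilingual_v2",
    )
    save(audio, f"line_{i:02d}.mp3")  # one audio file per line
```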
Animate Images with Luma Dream Machine
Now that our voiceovers are set, we can go ahead and animate our images.
Depending on how you want your video to look, I can recommend two main tools.
The first one is Luma Dream Machine, and the second is Immersity AI.
Immersity can enhance your images with depth, creating a 3D video effect like this example.
While it doesn’t animate characters directly and free users may have watermarks on their videos, it can still produce impressive results like this zoom effect.
The primary tool we’re going to be focusing on is Luma Dream Machine.
It’s a freemium application that gives users 30 free generations each month, but this AI produces some of the best results I’ve seen on the market.
Take a look at this video I generated.
It’s almost lifelike, despite a few minor glitches in the hand, and this was only my second attempt.
Just imagine the potential with a bit more refinement.
Let’s go ahead and upload this image of a guard barging into someone’s home.
A quick tip: for better results, you should always include a version of the prompt that you used to generate the image, in addition to the actions that you want the character to perform.
The generation can also be very slow sometimes as the tool is in high demand at the moment.
However, I found that traffic tends to be lower around midnight Eastern Time, making it an ideal window for quicker processing.
But if you have the budget, the paid plan is worth every penny.
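Luma also offers a Dream Machine API if you’d rather queue generations programmatically. A rough sketch assuming the lumaai Python SDK; the prompt and image URL are placeholders, and note the tip above about combining the image prompt with the desired action:

```python
# pip install lumaai requests
import time

import requests
from lumaai import LumaAI

client = LumaAI()  # reads LUMAAI_API_KEY from the environment

# Tip from above: original image prompt + the action you want performed.
prompt = (
    "An Egyptian guard in bronze armor barging into a mud-brick home, "
    "pushing the door open and scanning the room, cinematic lighting"
)

generation = client.generations.create(
    prompt=prompt,
    keyframes={"frame0": {"type": "image", "url": "https://example.com/guard.jpg"}},
)

# Poll until the clip is ready (this can take a while at peak hours).
while generation.state not in ("completed", "failed"):
    time.sleep(15)
    generation = client.generations.get(id=generation.id)

if generation.state == "completed":
    clip = requests.get(generation.assets.video)
    open("guard_scene.mp4", "wb").write(clip.content)
```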
Edit Video in CapCut
Our video doesn’t look bad, so we can proceed to the next step.
But before that, if any of the generated clips has a glitch, like this one of our narrator, we can drop it into CapCut and trim the unwanted section.
Then we’ll duplicate the clip and apply a reverse effect to seamlessly extend its duration without disrupting the flow.
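The same trim-and-reverse trick works from the command line if you prefer. A quick sketch driving ffmpeg from Python, assuming ffmpeg is installed; the filenames and the four-second trim point are placeholders:

```python
import subprocess

# Keep the good first four seconds and drop the audio track.
subprocess.run(
    ["ffmpeg", "-y", "-i", "narrator.mp4", "-t", "4", "-an", "trimmed.mp4"],
    check=True,
)

# Append a reversed copy so the motion "bounces" and the clip runs
# twice as long without a visible cut.
subprocess.run(
    [
        "ffmpeg", "-y", "-i", "trimmed.mp4",
        "-filter_complex",
        "[0:v]split[a][b];[b]reverse[r];[a][r]concat=n=2:v=1[v]",
        "-map", "[v]", "extended.mp4",
    ],
    check=True,
)
```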
The next step is to create the lip sync for our videos.
For paid tools, I recommend a lip sync tool called Sync Labs.
Let’s create a project and upload our audio and video files.
We can select our preferred model and then generate our lip-synced video.
“A long, long time ago in Egypt, there was a baby named Moses.”
Our character’s lips are moving quite well, but it’s a pain to get rid of the huge watermark, so let’s use another tool for our lip sync.
The tool we’ll be using is Pika. So let’s head over to their website and upload our video.
We will select the lip-sync option and upload our audio file to start generating the lip-synced video.
“There was a baby named Moses.”
Although there’s a Pika logo at the bottom, we can address that with blurring in post-production.
Final Editing in CapCut
It’s time to start our final editing in CapCut.
Let’s drag our video clips and audio into our timeline and arrange them according to the flow of our story.
We will also add some transitions and sound effects to make the overall video engaging.
Next, we can generate the caption and add some animations and effects, or simply save time with one of these cool templates.
The Pika watermark is still showing on this clip, so let’s duplicate it, add a blur effect to the top layer, and finally add a mask to it.
This doesn’t remove the watermark outright, but it blurs the region enough to hide it.
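If you’d rather do this outside CapCut, ffmpeg can blur a fixed region the same way. A sketch assuming ffmpeg is installed; the box size and position are placeholders you’d adjust until they cover the watermark:

```python
import subprocess

# Crop the watermark region, blur it, and overlay it back in place.
# Here: a 260x80 box, 10 px from the left and 90 px up from the bottom.
subprocess.run(
    [
        "ffmpeg", "-y", "-i", "clip.mp4",
        "-filter_complex",
        "[0:v]crop=260:80:10:ih-90,boxblur=10[blur];"
        "[0:v][blur]overlay=10:main_h-90[v]",
        "-map", "[v]", "-map", "0:a?", "-c:a", "copy",
        "clip_blurred.mp4",
    ],
    check=True,
)
```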
For background music, let’s grab one from YouTube’s audio library to avoid any copyright issues.
We’ll add it to our timeline and decrease the volume.
The video looks good, so we can go ahead and export it in 720p.
Increase Resolution with TensorPix
The last step is to increase the resolution of our video. For that, we’ll go to TensorPix and add our exported video.
Next, let’s select 400% Ultra and enhance our video. Our 4K resolution video is now ready for download.
You can achieve even better results than this if you spend a little more time refining your stories and visuals.
And as you can see, this workflow isn’t limited to paid users.
That said, free users will need a lot of patience with Luma Dream Machine, as generations can sometimes take far longer than expected.
Conclusion
Anyway, I hope you found this post helpful. If you did, feel free to leave a comment below; your thoughts matter too. Thank you, and see you in the next post.