Jan 17, 2026

How-to Guide for Viral AI Character Swaps✨

Justine Moore (@venturetwins)

Want to make one of the viral AI character swaps that are blowing up right now? It's easier than you might think! You no longer need to run this kind of video-to-video model on your local machine - there are cheap tools to do it in the browser.

I've spent hours figuring out the perfect workflow for these videos. In this post, I'll walk through your options for models to use, how to prompt them, and things to keep in mind to get a successful result.

Let's get started with the most popular model 👇

Kling 2.6 Motion Control

Kling (@Kling_ai) is the 👑 right now for AI character swaps. You can upload a reference video of up to 30 seconds, and it handles some pretty complex motions! I'd recommend the workflow below:

1) Find the reference video. You want a video with a clear shot of a SINGLE character, and their entire upper body (or full body) should be visible. You can record your own or pull one from the Internet (there are reference animation videos on YouTube, and plenty of clips on places like TikTok).

2) Transform the first frame. Screenshot or extract the first frame and take it to an image edit model like Nano Banana Pro or Flux 2 (I use both on Krea - @krea_ai). Prompt your desired changes. You can change both the character and the background, but make sure you keep the character's position the same. E.g. "replace the man in image 1 with the woman in image 2 in exactly the same pose."
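If you'd rather not screenshot by hand, you can grab the first frame programmatically. Here's a minimal sketch that shells out to ffmpeg - it assumes ffmpeg is installed on your machine, and the file names are just examples:

```python
import subprocess

def first_frame_cmd(video_path: str, out_path: str) -> list[str]:
    # -frames:v 1 grabs exactly one video frame; -y overwrites the output file
    return ["ffmpeg", "-i", video_path, "-frames:v", "1", "-y", out_path]

def extract_first_frame(video_path: str, out_path: str) -> None:
    # Raises CalledProcessError if ffmpeg fails (e.g. file not found)
    subprocess.run(first_frame_cmd(video_path, out_path), check=True)

# extract_first_frame("reference.mp4", "first_frame.png")
```

The resulting image is what you take into Nano Banana Pro or Flux 2 for the edit.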

3) Take the video and new first frame to Kling. You can run the model on Kling's product, or somewhere that hosts the model (like Krea or Fal). I don't put anything in the prompt box; the reference video + starting image covers it. If you run the model on Kling, make sure you check the "character orientation matches video" box.
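If you'd rather script this step than click through a UI, hosts like Fal expose their models over an API. Here's a rough sketch using the fal-client Python package - note that the endpoint ID and argument names below are placeholders, not the real schema, so check the model's page on Fal for the actual ones:

```python
def build_swap_args(video_url: str, image_url: str) -> dict:
    # Argument names are illustrative placeholders -- the real schema
    # is documented on the specific endpoint's Fal model page.
    return {"video_url": video_url, "image_url": image_url}

def run_character_swap(endpoint_id: str, video_url: str, image_url: str):
    # Lazy import: `pip install fal-client`, and set the FAL_KEY env var first.
    import fal_client
    # subscribe() queues the job and blocks until the result is ready
    return fal_client.subscribe(
        endpoint_id,
        arguments=build_swap_args(video_url, image_url),
    )

# Hypothetical usage (endpoint ID is made up -- look it up on Fal):
# result = run_character_swap("<kling-endpoint-id>", my_video_url, my_frame_url)
```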

4) Change the voice (if needed) with ElevenLabs. The model will keep the audio that you upload in the reference video - which doesn't work if you're changing the age, ethnicity, or gender of the character. You can upload your video (or just the audio) to ElevenLabs' voice changer. It will keep the delivery and timing but change the voice itself!
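The voice changer is also reachable through ElevenLabs' speech-to-speech API if you want to automate this step. A rough sketch below - the endpoint path matches their public API at the time of writing, but treat the upload field name as an assumption and confirm against their docs:

```python
def voice_change_request(voice_id: str, api_key: str) -> tuple[str, dict]:
    # ElevenLabs' speech-to-speech ("voice changer") endpoint;
    # authentication is via the xi-api-key header.
    url = f"https://api.elevenlabs.io/v1/speech-to-speech/{voice_id}"
    headers = {"xi-api-key": api_key}
    return url, headers

def change_voice(audio_path: str, voice_id: str, api_key: str) -> bytes:
    import requests  # third-party: pip install requests
    url, headers = voice_change_request(voice_id, api_key)
    with open(audio_path, "rb") as f:
        # "audio" as the multipart field name is an assumption -- verify it
        resp = requests.post(url, headers=headers, files={"audio": f})
    resp.raise_for_status()
    return resp.content  # the converted audio
```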

Pro tip - most people don't yet realize that you can get more complex full-body movement with Kling Motion Control. But it can be really good. Something to explore further if you want to stand out.

If you're interested in testing alternative models or workflows, keep reading!

Wan 2.2 Animate

Wan 2.2 Animate is a slightly older model but you can still get strong character swap results. With this model, I've found it even more important that the new start frame is "clean" (no extra people, and the proportions match the reference video exactly).

The workflow is the same as what I described above, and you can also run it on either Krea or Fal. But it's important to know that there are two models here, so pick the right one for your use case:

Move - applies the character motion from a video into a new scene (you'd use this if you want to change the character AND the background / environment).

Replace - swaps out only the character in a video while keeping the rest of the scene consistent.
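The Move vs. Replace choice boils down to one question: are you changing the background too? As a tiny helper capturing that rule (model labels as described above):

```python
def pick_wan_model(changing_background: bool) -> str:
    # Move re-stages the motion in a new scene; Replace swaps only
    # the character and keeps the original scene intact.
    return "Move" if changing_background else "Replace"
```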

Other options

If you're interested in video editing more broadly, two other models to check out are Luma Ray 3 (select "Modify") and Runway Gen-4 (select "Aleph").

The clip lengths are a bit more limited - 10 seconds for Luma, and 5 seconds for Runway. And I've found that for close-up shots of a person talking, the lip sync typically isn't as good as with Kling and Wan.

However - these models are definitely worth a look if you want to do anything beyond a character swap or if you don't need much speech in the video. The clip below has some great examples of what Luma Ray 3 Modify can do.

Conclusion

Thanks for reading! As you can probably tell, I'm really bullish on AI character swaps and motion transfer as a more controllable way to generate video. Today, we have people turning themselves into influencers - in the future, it's hard to see how this doesn't transform all types of filmed content (e.g. ads, TV, and movies).

If you use this tutorial or have an alternative workflow, please share your results ⬇️! I'd love to see what others are making.

And tagging the companies mentioned in this post in case you want to check them out further: @Kling_ai, @krea_ai, @fal, @Alibaba_Wan, @LumaLabsAI, @runwayml

By Justine Moore