
Mastering Parody and Meme AI Animation: Step by Step ( パロディとミームAIアニメーションのマスタリング:ステップ・バイ・ステップ )

In this tutorial, you'll dive into the workflow I use to craft AI animation videos, featuring highly detailed footage for a 2D SD-character parody animation. While creating realistic and standard anime videos may appear simpler, this SD-character animation serves as an excellent foundation. By grasping the techniques shared in this tutorial, you'll be well equipped to recreate many of the videos showcased on my TikTok and YouTube channels.

--------

To accommodate our diverse community, I've decided to strike a balance between visual and voice-over content. Given that a majority of our subscribers (over 80%) are not native English speakers, I've focused on creating a comprehensive workflow document that explains the process in detail. This way, you can follow along with the video tutorial using the provided workflow, saving us a significant amount of time in video production; a fully detailed voice-over or subtitles would have taken a week or more to complete for this initial tutorial.

More to come: rest assured, this is just the beginning. I'll be creating additional videos to cover any missing parts that our community requests in the future, ensuring we make the most efficient use of our time and resources.

--------

CivitAI website for base AI models and specialized models (characters and others): https://civitai.com/login?ref_code=BEA-MVU

--------

Stable Diffusion cloud service:
RunDiffusion website: https://rundiffusion.com/?ref=vivid59 (my affiliate link)
RunDiffusion 15% discount code: beautyinbits15

--------

My Amazon affiliate links:
My graphics card on Amazon JP: https://amzn.to/45rNNcb
My graphics card on Amazon US: https://amzn.to/45qa5Lu
Buy a graphics card on Amazon JP: https://amzn.to/45uVgqK
Buy a graphics card on Amazon US: https://amzn.to/3QcImtf

Download my workflow and other files for this tutorial for free: https://ko-fi.com/s/5becd41420

--------

Video chapters
00:00:00 Introduction
00:00:25 Computer spec requirements
00:02:26 Installation guide
00:02:47 Installation step 1: follow the installation guide
00:02:59 Installation step 1.1 (optional): download the source code
00:03:23 Installation step 1.2 (optional): run webui.bat
00:03:34 Installation step 2: install all the extensions
00:07:42 Installation step 3: download AI models for the ControlNet extension
00:09:46 Installation step 3.1: download the TemporalNet AI models for the ControlNet extension
00:10:45 Installation step 4: download a base model
00:11:45 Installation step 4.1: download a specialized model
00:12:42 Installation step 4.2 (optional): download the CivitAI extension
00:13:06 Installation step 4.3 (optional): download an embedding
00:13:34 Installation step 5: set the multi-ControlNet number
00:14:07 Installation step 5.1 (optional): my other settings
00:14:18 Extensions required for this tutorial
00:14:28 How to watch this tutorial effectively
00:15:14 Tutorial begins, Step 0: background extraction
00:19:03 Step 1: extract video frames
00:19:47 Step 2: extract video key frames
00:20:35 Step 2.1 (optional): review all the keyframes
00:21:23 Step 2.2 (optional): separate the bad reference images
00:24:34 Step 2.3 (optional): organize a folder
00:28:54 Step 2.4 (optional): photo-edit / redraw the potentially problematic reference images
00:36:12 Step 2.5 (optional): finish categorizing
00:36:34 Steps 2.6-2.7: find a general prompt that works with the majority of the images / poses
00:38:32 Steps 2.8-3: check all the settings and start batch image generation
00:39:44 Step 4.1 (optional): manually edit the badly generated images
01:02:20 Step 4.1.1 (optional): try photo-editing the reference image by adding an accessory
01:18:38 Step 4.1.2 (optional): try inpainting to generate some small parts
01:22:01 Step 4.1.3 (optional): upscale
01:22:17 Step 5: create interpolation frames (EbSynth stage 5)
01:23:34 Step 6: generate the video by combining interpolation frames (EbSynth stage 7)
01:25:00 Step 7 (optional): apply FlowFrames to increase the video's FPS

I have opened a temporary Discord to help answer any questions. I don't have much free time, but I'll try to help you as much as I can: https://discord.gg/gzUcJ4udbX

Beauty in Bits (BBits) | AI experimental Journey

4 months ago

Hello everyone! Today I'll show you how to make cool AI animations like the ones on my channel, including Klee Loli Kami and others. There are six steps, but you can skip some for now. I won't talk too much, but remember: you can download all the detailed instructions for free in the video description. So let's begin our fun journey into the world of AI animation. Ready? Let's start.

Before we start the installation, I want to talk about your GPU. If your GPU has less than 8 GB of VRAM, it might not handle this tutorial well. (However, there's a cloud GPU option, so don't lose hope!) I recommend a GPU with at least 12 GB of VRAM for better results. Even with my powerful RTX 4070 Ti with 12 GB of VRAM, it takes over 1.5 minutes to generate each image for a 10-second video. I generate around 200 to 300 images, taking 6 to 15 hours for a 10-to-20-second clip. If you don't have a powerful GPU, you have options. You can use a cloud GPU like Google Colab, but it's a bit complicated.
Another option is RunDiffusion, a user-friendly cloud service. You can set it up in 10 to 15 minutes. The starting package is 99 cents per hour, and it's 1.5 to 3 times faster than my RTX 4070 Ti. If you're serious, the monthly Creator's Club package is $35.99. But here's the exciting part: I have a referral link with a 15% discount coupon, so it will be $35.99 minus 15%, which is $30.59. This package gives you more storage and features, and you get a $10 bonus, reducing the cost to $20.59. By the way, if you use my affiliate link to purchase this monthly package, I'll also receive a small commission. And if you plan to buy a new GPU, you can support our channel by using my Amazon affiliate links. It helps our channel grow and doesn't cost you anything extra. Thanks for your support. Now, let's start the installation guide.
Welcome to the Stable Diffusion and extensions installation guide. In this brief overview, we'll cover the essential steps for installing Stable Diffusion and its extensions, as well as configuring some settings that we'll use in this tutorial. If you're already experienced with these installations, feel free to skip ahead.

Step 1: install Stable Diffusion by following the instructions on its GitHub page. Step 1.1 (optional): if you followed the instructions and failed, try downloading the source code and extracting it manually. Step 1.2 (optional): run webui.bat and wait until the installation is finished. If you've installed it successfully, you should run webui-user.bat next time. These are my settings for NVIDIA cards to optimize performance.

Step 2: install all the extensions required for this tutorial. You can download the list of my extensions in the 'workflow and files' free-download item on my Ko-fi page.

Step 3: download the AI models for the ControlNet extension. You should download all the .pth files except for the ones containing 'XL'; those are for Stable Diffusion XL, which is not needed in this tutorial. Step 3.1: download the TemporalNet model for the ControlNet extension.
Step 4: download a base model for image generation (a checkpoint). Step 4.1: download a specialized model for specific-character image generation (a LoRA). Step 4.2 (optional): download the CivitAI extension for LoRA thumbnails and more. Step 4.3 (optional): download an embedding for hand-generation improvement. Step 5: set the multi-ControlNet number to 5-7. Step 5.1 (optional): my other settings; set the noise multiplier for img2img to 0. You need to modify some files, which I'll explain in a future video.
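The file edit behind Step 5.1 is deferred to a future video. As a hedged guess at what it involves: the WebUI's settings slider for this option may not go down to 0, so one way is to set the value directly in config.json. The key name initial_noise_multiplier is my assumption; verify it against your own config file:

```python
# A hedged sketch of the file edit the author alludes to: set the img2img
# noise multiplier to 0 directly in the WebUI's config.json. The key name
# "initial_noise_multiplier" is my assumption, so verify it in your own config.
import json

CONFIG = "stable-diffusion-webui/config.json"  # path assumes a default install

with open(CONFIG, encoding="utf-8") as f:
    cfg = json.load(f)

cfg["initial_noise_multiplier"] = 0.0  # the usual default is 1.0

with open(CONFIG, "w", encoding="utf-8") as f:
    json.dump(cfg, f, indent=4)
print("done; restart the WebUI for the change to take effect")
```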
Great job! You've finished installing Stable Diffusion and its extensions. That's it for our basic guide; if you have more questions, feel free to leave a comment.

Before we dive into the tutorial, allow me to quickly explain how to watch it effectively. One: download my workflow from the video description. Two: at the bottom of each scene there is a step indicator with two colors. Yellow is an important part that you cannot miss; white is an optional part that you can watch later.
If you want to know the details of how I generate each different type of image, you can come back and watch those parts after you understand the big picture of the workflow. These are the extensions that you need for this tutorial, and this is how you should set the ControlNet unit number. OK, now that you understand how to watch my tutorial, let's begin the AI animation guide.

Step 0 (optional): remove the background from the reference video.
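The video demonstrates background extraction on screen without prescribing a tool at this point. As one possible approach (my suggestion, not necessarily what the author uses), the rembg library can strip the background from each extracted frame:

```python
# One possible way to do Step 0 in Python (my suggestion; the video may use a
# different tool): remove the background from every extracted frame with the
# "rembg" library. The folder names follow the EbSynth convention used later
# in the tutorial, which is an assumption here.
from pathlib import Path

from PIL import Image
from rembg import remove  # pip install rembg

SRC = Path("video_frame")       # frames extracted from the reference video
DST = Path("video_frame_nobg")  # background-removed output
DST.mkdir(exist_ok=True)

for frame in sorted(SRC.glob("*.png")):
    # remove() returns an RGBA image with the background made transparent.
    cut = remove(Image.open(frame))
    cut.save(DST / frame.name)
    print("processed", frame.name)
```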
Step 1: extract all the video frames (EbSynth stage 1). You don't need the video_mask for this tutorial; you can skip it by editing the Python script, although that's a bit complex to explain. Alternatively, you can close the Stable Diffusion terminal after it has extracted all the video frames and then rerun the app.

Step 2: extract key frames (EbSynth stage 2). I usually extract at least half of the video frames, but in some cases one third is sufficient. On my PC, generating 200-300 images for a 10-to-20-second video takes about 6 to 10 hours, which is acceptable to me.
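The EbSynth-utility extension handles this stage for you, but the 'keep half, or a third' rule is simple enough to reproduce manually if you ever need to redo it. A minimal sketch, assuming the stage-1/stage-2 folder names video_frame and video_key:

```python
# A minimal manual equivalent of Step 2 (the extension normally does this):
# keep every Nth frame as a key frame. N=2 keeps half of the frames, N=3
# keeps a third, matching the ratios mentioned in the video.
import shutil
from pathlib import Path

SRC = Path("video_frame")  # all extracted frames (EbSynth stage 1 output)
DST = Path("video_key")    # selected key frames (EbSynth stage 2 output)
DST.mkdir(exist_ok=True)

N = 2  # 2 -> keep 1/2 of the frames, 3 -> keep 1/3

for i, frame in enumerate(sorted(SRC.glob("*.png"))):
    if i % N == 0:
        shutil.copy2(frame, DST / frame.name)
```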
Step 2.1 (optional): review all the key frames and identify any images that might lead to distorted or poorly generated visuals. Step 2.2 (optional): separate the bad reference images from the video_key folder for retouching. Step 2.3 (optional): organize a folder of reference images for the images that need to be fixed. Step 2.4 (optional): photo-edit or redraw the potentially problematic reference images. Step 2.5 (optional): finish categorizing all the potentially problematic images. Step 2.6: find a general prompt that works with the majority of the images and poses. Step 2.7 (optional): test the prompt with a sample of each unique pose. This step involves testing different prompts to find the best one for various poses, reducing the number of poorly generated images in batch image generation.

Step 2.8: check all the settings and place the first generated keyframe that you like in TemporalNet. Step 3: start batch image generation. This step is crucial: you need to ensure that all ControlNet units are in batch mode, except for TemporalNet, which uses single-image mode with the loopback option. It's important to keep the seed fixed.
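The tutorial drives this step entirely from the WebUI's batch mode. As an aside for script-minded readers, a similar loop can be run against the WebUI's optional HTTP API (launch it with the --api flag). This sketch covers only the bare img2img call with a fixed seed; it omits the ControlNet/TemporalNet units, which the video configures in the GUI:

```python
# NOT the author's method: the video runs this from the WebUI's batch mode.
# A bare sketch of the same idea over the WebUI's optional HTTP API (start
# the WebUI with --api). ControlNet/TemporalNet setup is omitted for brevity.
import base64
from pathlib import Path

import requests

URL = "http://127.0.0.1:7860/sdapi/v1/img2img"
PROMPT = "1girl, chibi, ..."  # your general prompt from step 2.6
OUT = Path("img2img_key")
OUT.mkdir(exist_ok=True)

for key in sorted(Path("video_key").glob("*.png")):
    payload = {
        "init_images": [base64.b64encode(key.read_bytes()).decode()],
        "prompt": PROMPT,
        "seed": 12345,               # keep the seed fixed, as the video stresses
        "denoising_strength": 0.75,  # illustrative value, not from the video
    }
    r = requests.post(URL, json=payload, timeout=600)
    r.raise_for_status()
    img_b64 = r.json()["images"][0]
    (OUT / key.name).write_bytes(base64.b64decode(img_b64))
    print("generated", key.name)
```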
After clicking Generate, make sure to wait until at least two or three images have been generated before leaving your PC; any mistake could waste 6 to 7 hours of work or even more.

Step 4.1 (optional): manually edit the badly generated images, or separately generate the difficult key frames that haven't been generated yet. This is an optional step, but it can significantly improve the video's quality by addressing poorly generated frames; many AI animation videos skip it. For the Klee Loli Kami character, it took me more than 12 hours to complete. It involves photo editing, adjusting ControlNet parameters, and manually adjusting OpenPose bones. Step 4.1.1 (optional): try photo-editing the reference image by adding an accessory that's important for the character you want to make easier to generate. Step 4.1.2 (optional): try inpainting to generate some small part that you think is important, such as a hand.
Step 4.1.3 (optional): upscale img2img_key and video_frames to get an upscaled video.

Step 5: create interpolation frames (EbSynth stage 5). Download the app from https://ebsynth.com, drag and drop each .ebs file into the app, and click Run All. If you haven't extracted the video_mask, please turn off the mask option. Step 6: generate the video by combining the interpolation frames (EbSynth stage 7). Simply input the project directory and the path to the original video, select EbSynth stage 7, and then click the Generate button.
I want to mention that I don't generate income from TikTok videos or YouTube Shorts. If you appreciate the content and wish to support my channel, you can do so by watching my longer videos, like this tutorial, on my YouTube channel. You can also show your support by making a donation on my Ko-fi page. Additionally, I accept various cryptocurrencies like BTC and ETH, including layer-2 solutions like OP, ARB, and ZK, as well as DOGE, BNB, XRP, or any other cryptocurrency you prefer. Your support goes a long way in helping me create more valuable content.

Step 7 (optional): apply FlowFrames to increase the video's FPS. Drag and drop the video file into the app, select RIFE (NCNN), set the FPS and speed, and click Interpolate.

Thank you so much for watching my first AI animation tutorial video. I'm sorry if I couldn't explain everything perfectly this time; it's my first try, after all, but I promise to get better in the future. If you have any questions or need more help, just leave a comment below. I'll try to answer your questions or make a new video to help you out. Let's keep learning and improving together. Thanks for being here with me. See you in the next video.

Comments

@Mart_J

💮 I admire and love the creativity and effort in making these wonderful videos!! 💮

@josemeruvia7442

Thank you very much for your video tutorial. I've already installed Stable Diffusion, but there's a lot to learn, especially handling the frames and putting everything together. I'm going to follow all the steps you taught in your first video, and I hope you keep uploading tutorial videos.

@user-ez5mm1he4g

This is my first time complimenting a YouTube video, and damn, you have nailed it. This is the best tutorial ever, with the resources, the way of managing them, and providing them for free. You have just dropped the best video ever. I have liked and subscribed, even though I may not watch future videos because I only need this tutorial to make a video for my client, but thank you a lot for this amazing tutorial!!

@datSilencer

This is AWESOME AF 😎😎💪💪thanks!!

@beautyinbits_official

I forgot to add video chapters, but I have added them now. However, sometimes the chapters just disappear; I'm trying to make sure it won't happen again in the future. If you know why the chapters on a video sometimes disappear, please let me know. I think it might be related to editing the description.

I just found out that I had been setting up ControlNet incorrectly for over 100 video generations 😂. I had thought it was necessary to load a new ControlNet model every time we changed the reference image. In reality, we can cache models by setting the Model cache size to X, where X is the total number of ControlNet units you are using. Normally, I would have to load 4-5 ControlNet models before starting the image generation process. This used to take 40 seconds to 1 minute for the ControlNet part and 30-40 seconds for the image generation part, making my average time to generate one frame around 1.5 minutes. Now the ControlNet part is reduced to less than 10 seconds, so I've cut the time needed for 200 images with 4 ControlNets from 5-6 hours to only 1.5 hours 😍. I've updated the settings image on my Ko-fi, but I haven't figured out how to update it in the video yet. The cache size is related to how many ControlNet models you are using; I typically use around 4-5, so I've set it to 5. I'm still unsure whether there are any negative effects from setting it high. If anyone knows, please let me know. 🙏

I have opened a temporary Discord to help answer any questions. I don't have much free time, but I'll try to help you as much as I can: https://discord.gg/gzUcJ4udbX. Feel free to leave a question in the comments!
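For anyone who prefers editing files over the settings UI, the same cache change can presumably be made directly in config.json, in the same way as the noise-multiplier tweak in the transcript above. The key name control_net_model_cache_size is my assumption, so verify it against your own config:

```python
# A small sketch of the cache tweak described above, done by editing the
# WebUI's config.json instead of the settings UI. The key name
# "control_net_model_cache_size" is my assumption; verify it in your config.
import json

CONFIG = "stable-diffusion-webui/config.json"

with open(CONFIG, encoding="utf-8") as f:
    cfg = json.load(f)

# Set it to the number of ControlNet units you actually use (4-5 here).
cfg["control_net_model_cache_size"] = 5

with open(CONFIG, "w", encoding="utf-8") as f:
    json.dump(cfg, f, indent=4)
```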

@jjohan40

Many thanks for sharing your work with all of us! However, it needs a powerful PC; we have no choice but to wait for an optimization, I believe. God bless you, sweetie! 🤗⚘🙏

@0bn0unc3

this is very interesting, thank you!

@DioSatyaloka

Finally!!! Thank you!!

@zildam.d.lannes2483

Loved it ! 🥰👍

@Obsidian_Alpha

Thanks for the information ✌🏼

@ambujmishra9149

This was cool, good job. My vid2vid flow is quite similar as well, but I did pick up some neat tricks. One note though: RIFE is better than FlowFrames, at least from what I have worked on.

@accelerator9145

Thanks for the tutorial, I just happened to find your channel randomly and now I'm interested in doing some AI shorts in the future. Btw, I'm currently picking parts for my new PC (my current one has a 1060) and I'm considering a 4070 Ti. Do you mind sharing your full PC specs?

@Zeronaito8951

Yes, godlike.

@GamuHunt

RTX 4090 😳 Btw, This is a very good video tutorial :3 Thankyouu^^

@krystalmae5557

Goddamn, I didn't know it was this complicated to make an AI video. I just thought an app does it for you. Wow.

@AIrtist-xk2dl

👌♥️🙏

@user-zi7zr2sc9c

That's amazing.

@josepplanas6691

Thank you very much for the tutorial. I have followed it, but I had a problem: when it finishes generating the batch, no images appear in the folder that I set as output; only some images appear in the preview. Do you know why this happens or how I can solve it? Thank you very much, and I hope you post more tutorials.

@Raizen-id

I'll be honest, I just enjoy watching your videos more than trying to make them myself. 😅