A Proper Guide to Installing AnimateDiff-Lightning Video Generation on Windows 11

This process involves several steps: setting up the environment, installing the ComfyUI interface, downloading the required AI models, and finally, generating a video.

Installation Quick-Check

  • Ensure your PC meets the hardware and software prerequisites.
  • Install the portable version of ComfyUI and the ComfyUI-Manager.
  • Install the "AnimateDiff-Evolved" custom node via the manager.
  • Download the required Checkpoint, Motion, and VAE models into their correct folders.
  • Load an existing workflow, link your models, and run your first prompt.

Part 0: Prerequisites

Before you begin, ensure your system meets these requirements:

  • NVIDIA GPU: A powerful, modern NVIDIA graphics card (RTX 20, 30, or 40 series) with at least 8 GB of VRAM is strongly recommended for a smooth experience.
  • Git: You need Git to clone repositories from GitHub. If you don't have it, download and install it from git-scm.com.
  • Sufficient Disk Space: The AI models are large. You will need at least 20-25 GB of free space.
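If you like, the checks above can be scripted. The following is a minimal Python sketch, not an exhaustive verifier: it assumes Git and the NVIDIA driver expose the `git` and `nvidia-smi` commands on your PATH, and it only measures free space on the drive you run it from.

```python
import shutil

def check_prerequisites(path="."):
    """Rough prerequisite check: returns a dict of named boolean results."""
    checks = {}
    # Git must be on PATH so we can clone repositories later.
    checks["git"] = shutil.which("git") is not None
    # nvidia-smi ships with the NVIDIA driver; its presence suggests a usable GPU.
    checks["nvidia_driver"] = shutil.which("nvidia-smi") is not None
    # The models alone need roughly 20-25 GB of free space.
    free_gb = shutil.disk_usage(path).free / 1024**3
    checks["disk_space"] = free_gb >= 25
    return checks

if __name__ == "__main__":
    for name, ok in check_prerequisites().items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```

Run it from the drive where you plan to create C:\AI\ so the disk-space check measures the right volume.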

Part 1: Installing ComfyUI

ComfyUI is a node-based graphical user interface for Stable Diffusion. It's the engine we will use to run AnimateDiff. We'll use the easy "portable" installation.

  • Create a Folder: Create a dedicated folder for your AI tools, for example, C:\AI\.
  • Download ComfyUI: Go to the official ComfyUI GitHub releases page: https://github.com/comfyanonymous/ComfyUI/releases.
  • Find the Download: Look for the latest release and find the "Direct link to download" in the assets list. It will be a .7z file. Download it.
  • Extract the File: Use a program like 7-Zip or WinRAR to extract the contents of the downloaded .7z file into your C:\AI\ folder. You will now have a folder named something like C:\AI\ComfyUI_windows_portable.
  • Run ComfyUI: Open the ComfyUI_windows_portable folder and run the run_nvidia_gpu.bat file.

A command prompt window will open and download some necessary files. This might take a few minutes. Once it's done, it will give you a local URL (e.g., http://127.0.0.1:8188). Your web browser should open to this address automatically, showing the ComfyUI interface. You now have the base interface running! You can close the command prompt and browser for now.
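If you ever script around ComfyUI, it helps to know when that local server is actually ready rather than watching the command prompt. A small standard-library Python sketch that polls the URL until it answers (the default address matches the one above):

```python
import time
import urllib.request
import urllib.error

def wait_for_comfyui(url="http://127.0.0.1:8188", timeout=60.0, interval=2.0):
    """Poll the ComfyUI URL until the server answers or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval):
                return True  # server responded; the UI is reachable
        except (urllib.error.URLError, OSError):
            time.sleep(interval)  # not up yet; retry shortly
    return False
```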

Part 2: Installing the AnimateDiff Manager & Nodes

The easiest way to add new features to ComfyUI is by using the ComfyUI Manager.

  • Locate the ComfyUI Directory: In the ComfyUI_windows_portable folder, open the ComfyUI sub-folder. Your path should look like C:\AI\ComfyUI_windows_portable\ComfyUI\.
  • Navigate to Custom Nodes: Now, open the custom_nodes folder inside it. The path will be C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\.
  • Clone the Manager: Open a command prompt in that folder (or right-click in the folder and select "Open Git Bash here" or "Open in Terminal"), then run the following command:
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
  • Restart ComfyUI: Go back to the main ComfyUI_windows_portable folder and run run_nvidia_gpu.bat again.

You will now see a new "Manager" button at the bottom of the node menu in the ComfyUI web interface. We will use this to install the required nodes for AnimateDiff.

Install AnimateDiff Evolved:

  • Click the new "Manager" button.
  • Click on "Install Custom Nodes".
  • Search for AnimateDiff.
  • Find "ComfyUI-AnimateDiff-Evolved" and click the "Install" button next to it.
  • Wait for it to install, then close the ComfyUI command prompt window and restart it by running run_nvidia_gpu.bat one more time.

Part 3: Downloading the AI Models

This is the most critical step. You need several types of models, and they must be placed in the correct folders. The base path for your models is: C:\AI\ComfyUI_windows_portable\ComfyUI\models\

  • Checkpoint Model (for image style)

    This is the main Stable Diffusion model that determines the visual style.

    • Model: v1-5-pruned-emaonly.safetensors (the standard Stable Diffusion 1.5 base).
    • Download from: Hugging Face (search for the Stable Diffusion v1.5 repository).
    • Place in: ...ComfyUI\models\checkpoints\

  • AnimateDiff-Lightning Motion Model

    This is the special "lightning" model that makes the animation fast.

    • Model: ADE_AnimateDiff-Lightning_8steps.safetensors (The 8-step model is a great starting point).
    • Download from: huggingface.co/ByteDance/AnimateDiff-Lightning
    • Place in: ...ComfyUI\models\animatediff_models\ (You may need to create the animatediff_models folder).
  • Motion Adapter Model (Legacy)

    Some AnimateDiff workflows still rely on a base motion model.

    • Model: mm_sd_v15_v2.ckpt
    • Download from: huggingface.co/guoyww/animatediff
    • Place in: ...ComfyUI\models\animatediff_models\

  • VAE (Optional but Recommended)

    This improves color and detail, preventing washed-out images.

    • Model: vae-ft-mse-840000-ema-pruned.safetensors
    • Download from: huggingface.co/stabilityai/sd-vae-ft-mse-original
    • Place in: ...ComfyUI\models\vae\

After downloading, your models directory should look something like this:

ComfyUI/
└── models/
    ├── checkpoints/
    │   └── v1-5-pruned-emaonly.safetensors
    ├── animatediff_models/
    │   ├── ADE_AnimateDiff-Lightning_8steps.safetensors
    │   └── mm_sd_v15_v2.ckpt
    └── vae/
        └── vae-ft-mse-840000-ema-pruned.safetensors
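If you would rather sanity-check this layout programmatically, here is a short Python sketch that creates any missing folders (such as animatediff_models) and lists which model files are still absent. The base path and filenames are the ones used throughout this guide; adjust them if you extracted ComfyUI elsewhere.

```python
from pathlib import Path

# Base models directory from Part 3; adjust if ComfyUI lives elsewhere.
MODELS_DIR = Path(r"C:\AI\ComfyUI_windows_portable\ComfyUI\models")

# Expected layout from the tree above: subfolder -> required files.
EXPECTED = {
    "checkpoints": ["v1-5-pruned-emaonly.safetensors"],
    "animatediff_models": [
        "ADE_AnimateDiff-Lightning_8steps.safetensors",
        "mm_sd_v15_v2.ckpt",
    ],
    "vae": ["vae-ft-mse-840000-ema-pruned.safetensors"],
}

def verify_models(base):
    """Create any missing model folders and return a list of missing files."""
    missing = []
    for folder, files in EXPECTED.items():
        sub = base / folder
        sub.mkdir(parents=True, exist_ok=True)  # animatediff_models may not exist yet
        for name in files:
            if not (sub / name).is_file():
                missing.append(str(sub / name))
    return missing

if __name__ == "__main__":
    for path in verify_models(MODELS_DIR):
        print("Missing:", path)
```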

Part 4: Generating Your First Video

  • Start ComfyUI: Run run_nvidia_gpu.bat.
  • Load a Workflow: The easiest way to start is by using a pre-made workflow. Go to the AnimateDiff-Lightning Hugging Face page and find an example image. Many ComfyUI images have the workflow embedded in them. Right-click and save the example image from their page to your computer, then drag and drop the saved image file directly onto the ComfyUI canvas. This should automatically load all the necessary nodes.
  • Configure the Nodes: The workflow will be loaded, but you need to check that it's pointing to your downloaded models.
  • Load Checkpoint: Make sure this node has v1-5-pruned-emaonly.safetensors selected.
  • AnimateDiff Loader: the Lightning file is a complete motion module, not a LoRA, so select ADE_AnimateDiff-Lightning_8steps.safetensors as the model_name. (mm_sd_v15_v2.ckpt is only needed as the motion model in non-Lightning workflows.)
  • Prompt Text: Find the nodes labeled "Positive Prompt" and "Negative Prompt". Change the text to describe the video you want to create (e.g., Positive: "a majestic lion walking in the savanna, cinematic").
  • KSampler: The settings here are crucial for lightning models. Ensure they are set to:
    • steps: 8
    • cfg: 1.5 to 2.0
    • sampler_name: euler
    • scheduler: sgm_uniform
  • Queue Prompt: Once everything is set, click the "Queue Prompt" button in the menu.
  • Watch it Generate: You will see the nodes light up with a green border as the process runs. The command prompt window will show a progress bar. Because we are using a lightning model, this should only take a few seconds!
  • View Output: When finished, look for a Save Animated WEBP/GIF node. You can preview your video there. The final file will be saved in the C:\AI\ComfyUI_windows_portable\ComfyUI\output folder.
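As an aside, the "Queue Prompt" button is backed by a small HTTP API: ComfyUI accepts a workflow (exported via "Save (API Format)" with dev mode enabled in the settings) as JSON POSTed to its /prompt endpoint. A minimal Python sketch that builds such a request; the workflow dict itself is whatever you exported from the UI:

```python
import json
import urllib.request

def queue_prompt(workflow, host="http://127.0.0.1:8188"):
    """Build the POST request ComfyUI expects on its /prompt endpoint.

    `workflow` is the node graph in API format (use "Save (API Format)"
    in the web UI, with dev mode enabled, to export one).
    """
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"{host}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Sending it is a one-liner once ComfyUI is running:
# with urllib.request.urlopen(queue_prompt(my_workflow)) as resp:
#     print(resp.read())
```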

Troubleshooting & Tips

  • CUDA out of memory Error: Your GPU doesn't have enough VRAM. Try lowering the video resolution (in the "Empty Latent Image" node) or close other programs using the GPU. You can also run ComfyUI in a low VRAM mode by editing the run_nvidia_gpu.bat file and adding the --lowvram flag to the command line.
  • Model Not Found Error: Double-check that you have placed all the models in the exact folders specified in Part 3. Refresh your ComfyUI browser page after adding new models.
  • Missing Nodes Error: If a workflow fails to load because of missing custom nodes, use the "Manager" -> "Install Missing Custom Nodes" feature. It will automatically detect and install what's needed.
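For the --lowvram tip above, you can edit run_nvidia_gpu.bat by hand, or patch it with a small Python helper. This sketch assumes the stock .bat launches ComfyUI's main.py on one line (which the portable build does); it appends the flag to that line and leaves everything else untouched.

```python
def add_lowvram_flag(bat_text):
    """Append --lowvram to the ComfyUI launch line of run_nvidia_gpu.bat.

    Only the line invoking main.py is modified, and the flag is not
    duplicated if it is already present.
    """
    lines = []
    for line in bat_text.splitlines():
        if "main.py" in line and "--lowvram" not in line:
            line = line.rstrip() + " --lowvram"
        lines.append(line)
    return "\n".join(lines)
```

To use it, read the .bat file, pass its text through add_lowvram_flag, and write the result back (keep a backup copy of the original first).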