This repository is the official implementation of AnimateDiff [ICLR2024 Spotlight].
It is a plug-and-play module that turns most community text-to-image models into animation generators, without the need for additional training.
**AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning**
Yuwei Guo,
Ceyuan Yang✝,
Anyi Rao,
Zhengyang Liang,
Yaohui Wang,
Yu Qiao,
Maneesh Agrawala,
Dahua Lin,
Bo Dai
(✝Corresponding Author)
**Note:** The main branch is for Stable Diffusion V1.5; for Stable Diffusion XL, please refer to the sdxl-beta branch.
More results can be found in the Gallery.
Some of them are contributed by the community.
Model: ToonYou
Model: Realistic Vision V2.0
**Note:** AnimateDiff is also officially supported by Diffusers.
Visit the AnimateDiff Diffusers Tutorial for more details.
The following instructions are for working with this repository.
**Note:** For all scripts, checkpoint downloading is handled automatically, so a script may take longer the first time it is executed.
```shell
git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff
pip install -r requirements.txt
```
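The automatic checkpoint downloading mentioned above amounts to a cache-on-first-use pattern. Below is a minimal sketch of that idea; `fetch_checkpoint` and its cache layout are hypothetical, not the repository's actual logic:

```python
import urllib.request
from pathlib import Path

def fetch_checkpoint(url, cache_dir="models"):
    """Download a checkpoint only if it is not already cached locally."""
    path = Path(cache_dir) / Path(url).name
    if not path.exists():  # first run: download; later runs: reuse the cache
        path.parent.mkdir(parents=True, exist_ok=True)
        urllib.request.urlretrieve(url, path)
    return path
```

This is why only the first execution of a script is slow: subsequent runs find the file on disk and skip the download.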
The generated samples can be found in the samples/ folder.
```shell
python -m scripts.animate --config configs/prompts/1_animate/1_1_animate_RealisticVision.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_2_animate_FilmVelvia.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_3_animate_ToonYou.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_4_animate_MajicMix.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_5_animate_RcnzCartoon.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_6_animate_Lyriel.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_7_animate_Tusun.yaml
python -m scripts.animate --config configs/prompts/2_motionlora/2_motionlora_RealisticVision.yaml
python -m scripts.animate --config configs/prompts/3_sparsectrl/3_1_sparsectrl_i2v.yaml
python -m scripts.animate --config configs/prompts/3_sparsectrl/3_2_sparsectrl_rgb_RealisticVision.yaml
python -m scripts.animate --config configs/prompts/3_sparsectrl/3_3_sparsectrl_sketch_RealisticVision.yaml
```
We created a Gradio demo to make AnimateDiff easier to use.
By default, the demo will run at localhost:7860.
```shell
python -u app.py
```
*AnimateDiff aims to learn transferable motion priors that can be applied to other variants of the Stable Diffusion family.*
To this end, we design the following training pipeline consisting of three stages.
In the 1. Alleviate Negative Effects stage, we train the domain adapter, e.g., v3_sd15_adapter.ckpt, to fit defective visual artifacts (e.g., watermarks) in the training dataset.
This also benefits the disentangled learning of motion and spatial appearance.
By default, the adapter can be removed at inference; it can also be integrated into the model, with its effect adjusted by a LoRA scale.
In the 2. Learn Motion Priors stage, we train the motion module, e.g., v3_sd15_mm.ckpt, to learn real-world motion patterns from videos.
In the 3. (optional) Adapt to New Patterns stage, we train MotionLoRA, e.g., v2_lora_ZoomIn.ckpt, to efficiently adapt the motion module to specific motion patterns (camera zooming, rolling, etc.).
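The adjustable LoRA scale used for the domain adapter (and, analogously, for MotionLoRA) can be sketched as a standard low-rank weight merge. `merge_lora` below is a hypothetical helper for illustration, not the repository's API:

```python
import numpy as np

def merge_lora(weight, lora_down, lora_up, scale=1.0):
    """Fold a low-rank LoRA update into a base weight matrix.

    weight:    (out, in)   base layer weight
    lora_down: (rank, in)  LoRA "down" projection
    lora_up:   (out, rank) LoRA "up" projection
    scale:     0.0 removes the adapter entirely, 1.0 applies it fully
    """
    return weight + scale * (lora_up @ lora_down)

# a scale of 0.0 reproduces the base weights exactly, which is why
# the adapter can simply be dropped at inference
w = np.random.randn(4, 4).astype(np.float32)
down = np.random.randn(2, 4).astype(np.float32)
up = np.random.randn(4, 2).astype(np.float32)
assert np.allclose(merge_lora(w, down, up, scale=0.0), w)
```

Intermediate scales interpolate between the base model and the fully merged adapter, which is what "adjusted by a LoRA scale" refers to.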
*SparseCtrl aims to add more control to text-to-video models by accepting sparse inputs (e.g., a few RGB images or sketches).*
Its technical details can be found in the following paper:
**SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models**
Yuwei Guo,
Ceyuan Yang✝,
Anyi Rao,
Maneesh Agrawala,
Dahua Lin,
Bo Dai
(✝Corresponding Author)
In this version, we use a Domain Adapter LoRA for image model finetuning, which provides more flexibility at inference.
We also implement two SparseCtrl encoders (RGB image and scribble), which can take an arbitrary number of condition maps to control the animation contents.
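One way to picture "an arbitrary number of condition maps" is to scatter the given maps into a full-length frame sequence, with a mask marking which frames are actually conditioned. This is a minimal sketch of that idea only; the helper name and the mask-channel layout are assumptions, not the actual SparseCtrl encoder:

```python
import numpy as np

def build_sparse_condition(cond_maps, cond_indices, num_frames):
    """Scatter a few condition maps into a full-length frame sequence.

    cond_maps:    list of (c, h, w) arrays (RGB images or scribbles)
    cond_indices: target frame index for each condition map
    Unconditioned frames stay zero; an extra mask channel marks which
    frames actually carry a condition.
    """
    c, h, w = cond_maps[0].shape
    cond = np.zeros((num_frames, c, h, w), dtype=np.float32)
    mask = np.zeros((num_frames, 1, h, w), dtype=np.float32)
    for m, idx in zip(cond_maps, cond_indices):
        cond[idx] = m
        mask[idx] = 1.0
    return np.concatenate([cond, mask], axis=1)  # (num_frames, c+1, h, w)
```

With a single map at frame 0 this reduces to image-to-video conditioning; with several maps it becomes keyframe interpolation.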
| Name | HuggingFace | Type | Storage | Description |
|---|---|---|---|---|
| `v3_adapter_sd_v15.ckpt` | Link | Domain Adapter | 97.4 MB | |
| `v3_sd15_mm.ckpt` | Link | Motion Module | 1.56 GB | |
| `v3_sd15_sparsectrl_scribble.ckpt` | Link | SparseCtrl Encoder | 1.86 GB | scribble condition |
| `v3_sd15_sparsectrl_rgb.ckpt` | Link | SparseCtrl Encoder | 1.85 GB | RGB image condition |
| Input (by RealisticVision) | Animation | Input | Animation |
| Input Scribble | Output | Input Scribbles | Output |
Release the Motion Module (beta version) on SDXL, available at Google Drive / HuggingFace / CivitAI. High-resolution videos (i.e., 1024x1024x16 frames with various aspect ratios) can be produced with or without personalized models. Inference usually requires ~13GB VRAM and tuned hyperparameters (e.g., sampling steps), depending on the chosen personalized models.
Check out the sdxl branch for more details on inference.
| Name | HuggingFace | Type | Storage Space |
|---|---|---|---|
| `mm_sdxl_v10_beta.ckpt` | Link | Motion Module | 950 MB |
| Original SDXL | Community SDXL | Community SDXL |
In this version, the motion module mm_sd_v15_v2.ckpt (Google Drive / HuggingFace / CivitAI) is trained at a larger resolution and batch size.
We found that this scaled-up training significantly improves motion quality and diversity.
We also support MotionLoRA for eight basic camera movements.
MotionLoRA checkpoints take up only 77 MB of storage per model, and are available at Google Drive / HuggingFace / CivitAI.
| Name | HuggingFace | Type | Parameter | Storage |
|---|---|---|---|---|
| `mm_sd_v15_v2.ckpt` | Link | Motion Module | 453 M | 1.7 GB |
| `v2_lora_ZoomIn.ckpt` | Link | MotionLoRA | 19 M | 74 MB |
| `v2_lora_ZoomOut.ckpt` | Link | MotionLoRA | 19 M | 74 MB |
| `v2_lora_PanLeft.ckpt` | Link | MotionLoRA | 19 M | 74 MB |
| `v2_lora_PanRight.ckpt` | Link | MotionLoRA | 19 M | 74 MB |
| `v2_lora_TiltUp.ckpt` | Link | MotionLoRA | 19 M | 74 MB |
| `v2_lora_TiltDown.ckpt` | Link | MotionLoRA | 19 M | 74 MB |
| `v2_lora_RollingClockwise.ckpt` | Link | MotionLoRA | 19 M | 74 MB |
| `v2_lora_RollingAnticlockwise.ckpt` | Link | MotionLoRA | 19 M | 74 MB |
| Zoom In | Zoom Out | Zoom Pan Left | Zoom Pan Right |
| Tilt Up | Tilt Down | Rolling Anti-Clockwise | Rolling Clockwise |
Here is a comparison between mm_sd_v15.ckpt (left) and the improved mm_sd_v15_v2.ckpt (right).
The first version of AnimateDiff!
Please check Steps for Training for details.
AnimateDiff for Stable Diffusion WebUI: sd-webui-animatediff (by @continue-revolution)
AnimateDiff for ComfyUI: ComfyUI-AnimateDiff-Evolved (by @Kosinkadink)
Google Colab: Colab (by @camenduru)
This project is released for academic use.
We disclaim responsibility for user-generated content.
Also, please be advised that our only official websites are https://github.com/guoyww/AnimateDiff and https://animatediff.github.io; all other websites are NOT associated with us at AnimateDiff.
Yuwei Guo: guoyw@ie.cuhk.edu.hk
Ceyuan Yang: limbo0066@gmail.com
Bo Dai: doubledaibo@gmail.com
```bibtex
@article{guo2023animatediff,
  title={AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
  author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Liang, Zhengyang and Wang, Yaohui and Qiao, Yu and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
  journal={International Conference on Learning Representations},
  year={2024}
}

@article{guo2023sparsectrl,
  title={SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models},
  author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
  journal={arXiv preprint arXiv:2311.16933},
  year={2023}
}
```
Codebase built upon Tune-a-Video.