MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
Zhongcong Xu
·
Jianfeng Zhang
·
Jun Hao Liew
·
Hanshu Yan
·
Jia-Wei Liu
·
Chenxu Zhang
·
Jiashi Feng
·
Mike Zheng Shou
National University of Singapore | ByteDance
📢 News
- [2023.12.4] Released the inference code and Gradio demo. We are working to improve MagicAnimate, stay tuned!
- [2023.11.23] Released the MagicAnimate paper and project page.
🏃‍♂️ Getting Started
Download the pretrained base models for Stable Diffusion V1.5 and the MSE-finetuned VAE.
Download our MagicAnimate checkpoints.
Please follow the Hugging Face download instructions to download the above models and checkpoints; git lfs is recommended.
Place the base models and checkpoints as follows:
```
magic-animate
|----pretrained_models
  |----MagicAnimate
    |----appearance_encoder
      |----diffusion_pytorch_model.safetensors
      |----config.json
    |----densepose_controlnet
      |----diffusion_pytorch_model.safetensors
      |----config.json
    |----temporal_attention
      |----temporal_attention.ckpt
  |----sd-vae-ft-mse
    |----config.json
    |----diffusion_pytorch_model.safetensors
  |----stable-diffusion-v1-5
    |----scheduler
      |----scheduler_config.json
    |----text_encoder
      |----config.json
      |----pytorch_model.bin
    |----tokenizer
    …
```
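After downloading, it is easy to miss a file or misplace a folder. The sanity-check script below is not part of the official repo; it is a minimal sketch that verifies only the files explicitly shown in the tree above (the tree is truncated, so the list is not exhaustive):

```python
from pathlib import Path

# Checkpoint files listed in the directory tree above, relative to
# pretrained_models/. Only the entries shown there are checked; the
# tree is truncated, so this list is intentionally incomplete.
REQUIRED = [
    "MagicAnimate/appearance_encoder/diffusion_pytorch_model.safetensors",
    "MagicAnimate/appearance_encoder/config.json",
    "MagicAnimate/densepose_controlnet/diffusion_pytorch_model.safetensors",
    "MagicAnimate/densepose_controlnet/config.json",
    "MagicAnimate/temporal_attention/temporal_attention.ckpt",
    "sd-vae-ft-mse/config.json",
    "sd-vae-ft-mse/diffusion_pytorch_model.safetensors",
    "stable-diffusion-v1-5/scheduler/scheduler_config.json",
    "stable-diffusion-v1-5/text_encoder/config.json",
    "stable-diffusion-v1-5/text_encoder/pytorch_model.bin",
]


def missing_files(root="pretrained_models"):
    """Return the required checkpoint files absent under ``root``."""
    base = Path(root)
    return [rel for rel in REQUIRED if not (base / rel).exists()]


if __name__ == "__main__":
    missing = missing_files()
    if missing:
        print("Missing files:")
        for rel in missing:
            print("  ", rel)
    else:
        print("All listed checkpoints found.")
```

Run it from the `magic-animate` root; an empty report means the layout matches the tree shown above for the listed files.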