HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation
If you develop or use HunyuanPortrait in your projects, please let us know or submit a PR!
git clone https://github.com/Tencent-Hunyuan/HunyuanPortrait
pip3 install torch torchvision torchaudio
pip3 install -r requirements.txt
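After installing, you can optionally sanity-check that the core dependencies resolved. This small helper is not part of the repo; the module list is an assumption based on the install commands above.

```python
import importlib.util

# Modules installed by the commands above; adjust the list for your setup.
required = ["torch", "torchvision", "torchaudio"]

missing = [name for name in required if importlib.util.find_spec(name) is None]
if missing:
    print(f"Missing packages: {', '.join(missing)} -- rerun the pip install steps")
else:
    print("All core dependencies found")
```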
All models are stored in pretrained_weights by default:
pip3 install "huggingface_hub[cli]"
cd pretrained_weights
huggingface-cli download --resume-download stabilityai/stable-video-diffusion-img2vid-xt --local-dir . --include "*.json"
wget -c https://huggingface.co/LeonJoe13/Sonic/resolve/main/yoloface_v5m.pt
wget -c https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/resolve/main/vae/diffusion_pytorch_model.fp16.safetensors -P vae
wget -c https://huggingface.co/FoivosPar/Arc2Face/resolve/da2f1e9aa3954dad093213acfc9ae75a68da6ffd/arcface.onnx
huggingface-cli download --resume-download tencent/HunyuanPortrait --local-dir hyportrait
The resulting file structure is as follows:
.
├── arcface.onnx
├── hyportrait
│   ├── dino.pth
│   ├── expression.pth
│   ├── headpose.pth
│   ├── image_proj.pth
│   ├── motion_proj.pth
│   ├── pose_guider.pth
│   └── unet.pth
├── scheduler
│   └── scheduler_config.json
├── unet
│   └── config.json
├── vae
│   ├── config.json
│   └── diffusion_pytorch_model.fp16.safetensors
└── yoloface_v5m.pt
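Before running inference, it can help to verify that every expected weight file actually landed in pretrained_weights. A stdlib-only checker of my own (not part of the repo), with the file list taken from the tree above:

```python
from pathlib import Path

# Expected files relative to pretrained_weights/, per the tree above.
EXPECTED = [
    "arcface.onnx",
    "yoloface_v5m.pt",
    "scheduler/scheduler_config.json",
    "unet/config.json",
    "vae/config.json",
    "vae/diffusion_pytorch_model.fp16.safetensors",
] + [f"hyportrait/{name}.pth" for name in (
    "dino", "expression", "headpose", "image_proj",
    "motion_proj", "pose_guider", "unet",
)]

def missing_weights(root: str) -> list:
    """Return the expected files that are absent under `root`."""
    base = Path(root)
    return [rel for rel in EXPECTED if not (base / rel).is_file()]

if __name__ == "__main__":
    gaps = missing_weights("pretrained_weights")
    print("All weights present" if not gaps else f"Missing: {gaps}")
```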
🔥 Bring your portrait to life by executing `bash demo.sh`:
video_path="your_video.mp4"
image_path="your_image.png"
python inference.py \
--config config/hunyuan-portrait.yaml \
--video_path $video_path \
--image_path $image_path
Or use a Gradio Server:
python gradio_app.py
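To animate several driving videos against one reference image, the inference command above can be wrapped in a small driver script. This is a sketch of my own; `build_cmd` and `animate_all` are hypothetical helpers, while `inference.py`, the config path, and the flags come from the command shown above.

```python
import subprocess
from pathlib import Path

def build_cmd(image_path: str, video_path, config: str = "config/hunyuan-portrait.yaml") -> list:
    """Assemble the inference.py command line shown above."""
    return [
        "python", "inference.py",
        "--config", config,
        "--video_path", str(video_path),
        "--image_path", image_path,
    ]

def animate_all(image_path: str, video_dir: str) -> None:
    """Run inference once per .mp4 driving video found in `video_dir`."""
    for video in sorted(Path(video_dir).glob("*.mp4")):
        subprocess.run(build_cmd(image_path, video), check=True)  # raise on failure
```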
HunyuanPortrait is a diffusion-based framework for generating lifelike, temporally consistent portrait animations, decoupling identity and motion with pre-trained encoders. It encodes the driving video's expressions and poses into implicit control signals and injects them through attention-based adapters into a stabilized diffusion backbone, enabling detailed, style-flexible animation from a single reference image. The method outperforms existing approaches in controllability and coherence.
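To make the "inject via attention-based adapters" idea concrete, here is a toy NumPy sketch of cross-attention where frame features attend to a few implicit motion tokens. All names, shapes, and projections are illustrative assumptions, not HunyuanPortrait's actual implementation.

```python
import numpy as np

def cross_attention(frame_tokens, motion_tokens, d=64, seed=0):
    """Toy cross-attention: frame features query implicit motion tokens.
    Random projections stand in for learned adapter weights."""
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((frame_tokens.shape[-1], d)) / np.sqrt(d)
    Wk = rng.standard_normal((motion_tokens.shape[-1], d)) / np.sqrt(d)
    Wv = rng.standard_normal((motion_tokens.shape[-1], d)) / np.sqrt(d)
    Q, K, V = frame_tokens @ Wq, motion_tokens @ Wk, motion_tokens @ Wv
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over motion tokens
    return weights @ V  # motion-conditioned features for the backbone

frames = np.random.default_rng(1).standard_normal((16, 128))  # 16 spatial tokens
motion = np.random.default_rng(2).standard_normal((4, 32))    # 4 implicit motion tokens
out = cross_attention(frames, motion)
print(out.shape)  # (16, 64)
```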
Some results of portrait animation using HunyuanPortrait.
More results can be found on our Project page.
https://github.com/user-attachments/assets/4b963f42-48b2-4190-8d8f-bbbe38f97ac6
https://github.com/user-attachments/assets/48c8c412-7ff9-48e3-ac02-48d4c5a0633a
https://github.com/user-attachments/assets/bdd4c1db-ed90-4a24-a3c6-3ea0b436c227
The code is based on SVD, DINOv2, Arc2Face, and YoloFace. We thank the authors for their open-sourced code and encourage users to cite their works when applicable.
Stable Video Diffusion is licensed under the Stable Video Diffusion Research License, Copyright Β© Stability AI Ltd. All Rights Reserved.
This codebase is intended solely for academic purposes.
If you find this project helpful, please feel free to leave a star ⭐ and cite our paper:
@article{xu2025hunyuanportrait,
title={HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation},
author={Xu, Zunnan and Yu, Zhentao and Zhou, Zixiang and Zhou, Jun and Jin, Xiaoyu and Hong, Fa-Ting and Ji, Xiaozhong and Zhu, Junwei and Cai, Chengfei and Tang, Shiyu and Lin, Qin and Li, Xiu and Lu, Qinglin},
journal={arXiv preprint arXiv:2503.18860},
year={2025}
}