Powered by ThinkDiffusion

LoRA Training Video with Hunyuan



Hunyuan is great at generating videos, but locking in a specific aesthetic or character is easier with a LoRA. Here's how to create your own.

Quick start recipe

  • Upload a Zip file of curated images + captions.

  • Enable is_style and include a unique trigger phrase.

  • Compare checkpoints with a range of steps to find the sweet spot.

Training overview

This workflow trains a Hunyuan LoRA using the Fal.ai API. Since it isn't training in ComfyUI directly, you can run this workflow on a Quick machine.

Processing takes 5-10 minutes with the default settings.

Open the ComfyUI directory and navigate to the ../input/ folder, then create a new folder there and upload your image dataset into it.

If you create a new folder named "Test", the path should look like:

../user_data/comfyui/input/Test

The default settings work very well. If you would like to add a trigger word for your LoRA, you can do that in the Trigger Word field.

Once training finishes, a new URL will appear in your Preview Text node. Copy the URL and paste it into your browser to download the LoRA.

Prepping your dataset

Fewer, ultra‑high‑res (≈1024×1024+) images beat many low‑quality ones.

Every image must clearly represent the style or individual and be artifact‑free.

For people, aim for at least 10-20 images with varied backgrounds (e.g., 5 headshots, 5 full-body, 5 half-body, 5 in other scenes).

Captioning

  • Give the style a unique trigger phrase (so it isn't confused with a regular word or term).

  • For better prompt control, add custom captions that describe content only—leave style cues to the trigger phrase. Create accompanying .txt files with the same name as the image each one describes.

  • If you do add custom captions, be sure to turn on is_style to skip auto‑captioning. It is set to off by default.

Training steps

  • The default is set to around 2000, but you can train multiple checkpoints (e.g., 500, 1000, 1500, 2000) and pick the one that balances style fidelity with prompt responsiveness.

  • Too few steps: the character becomes less realistic or the style fades.

  • Too many steps: model overfits and stops obeying prompts.
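One way to keep the checkpoint comparison honest is to render the same prompts at the same seed against every checkpoint, so any difference comes from the LoRA alone. This sketch only builds the job list—the checkpoint filenames and prompt are hypothetical, and the actual rendering happens in your normal Hunyuan workflow.

```python
from itertools import product


def comparison_grid(steps=(500, 1000, 1500, 2000),
                    prompts=("xqz_style, a woman walking through rain",),
                    seed=42):
    """Pair every checkpoint with every test prompt at one fixed seed."""
    return [{"lora": f"my_lora_{s:04d}.safetensors", "prompt": p, "seed": seed}
            for s, p in product(steps, prompts)]
```

Render each job in the grid, then pick the step count whose output still follows the prompt while holding the style.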

Output Path

The output path will be a URL in the Preview Text node.

Copy the URL and upload the file directly into ../comfyui/models/LoRA/ using the Upload by URL feature in the file browser.


Generates in about 5 mins 52 secs
