floyo (beta), powered by ThinkDiffusion

Image-to-Video with Reference Video (Prompt-Based Camera Rotation)


What this workflow does

Converts a single front-facing image into a short video that rotates the camera ~90° (clockwise or anticlockwise) to create an over-the-shoulder view. The motion comes from a reference video that is turned into a pose track (DWPose); that pose track drives the image-to-video model so the subject’s pose and likeness stay consistent. Pose-guided I2V with DWPose/ControlNet is a standard pattern.
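The camera path the pose track encodes is essentially an eased turn from 0° to ~90°. This is an illustrative sketch only (the actual workflow derives motion from the DWPose track, not from an explicit angle schedule); the function name and smoothstep easing are assumptions for illustration:

```python
def yaw_schedule(num_frames: int, total_deg: float = 90.0, clockwise: bool = True):
    """Per-frame camera yaw (degrees) for a smooth turn to ~total_deg."""
    sign = 1.0 if clockwise else -1.0
    angles = []
    for i in range(num_frames):
        # Normalized time in [0, 1] across the clip.
        t = i / (num_frames - 1) if num_frames > 1 else 1.0
        # Smoothstep easing: gentle start and stop, like a natural camera move.
        eased = t * t * (3.0 - 2.0 * t)
        angles.append(sign * total_deg * eased)
    return angles
```

A good reference video for this workflow should show roughly this kind of motion: slow at the start, fastest mid-turn, settling at the end.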

Inputs

  • Reference Image: your original front shot.

  • Reference Video or DWPose track: the rotation motion you want to follow (DWPose is commonly used for pose guidance).

  • Prompt: text that steers the style and look of the output.

  • Image-to-Video Model: e.g., WAN 2.2 Animate/I2V for motion transfer with good identity hold.
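The four inputs above can be thought of as a small manifest. All names and file paths here are hypothetical, not actual node or model identifiers from the workflow:

```python
# Hypothetical input manifest; keys mirror the four inputs listed above.
workflow_inputs = {
    "reference_image": "front_shot.png",   # the original front-facing still
    "reference_video": "rotate_90.mp4",    # or a precomputed DWPose track
    "prompt": "cinematic over-the-shoulder shot, soft studio light",
    "i2v_model": "wan2.2-animate",         # e.g. a WAN 2.2 Animate/I2V checkpoint
}

def validate_inputs(inputs: dict) -> list:
    """Return a sorted list of required keys that are missing."""
    required = {"reference_image", "reference_video", "prompt", "i2v_model"}
    return sorted(required - inputs.keys())
```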

How to use

  1. Load the image and the reference video (or its DWPose output).

  2. Set the rotation range (about 90° for an over-the-shoulder view).

  3. Run to generate a short clip; save the last frame if you need a clean still for dialogue setups.

Tips

  • Match the first pose of the reference to the image pose to reduce drift. (Common guidance in pose-driven ComfyUI tutorials.)

  • If identity slips, try a reference-friendly I2V (WAN 2.2 variants) or adjust seeds.

  • Short pose clips (e.g., ~81 frames) run faster while still giving a smooth turn; use 9:16 if you need vertical output.
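Two quick calculations behind the last tip, as a sketch. The 16 fps default is an assumption (WAN-family models commonly render near that rate), and the divisible-by-16 constraint is a common latent-video requirement, not something the workflow states:

```python
def clip_seconds(frames: int = 81, fps: float = 16.0) -> float:
    """Clip duration in seconds; 16 fps is an assumed WAN-style default."""
    return frames / fps

def vertical_dims(width: int = 480) -> tuple:
    """Nearest 9:16 (vertical) frame size with both sides divisible by 16."""
    w = round(width / 16) * 16
    h = round(w * 16 / 9 / 16) * 16
    return (w, h)
```

So an 81-frame clip at 16 fps is about a 5-second turn, and a 480-wide vertical render comes out near 480x848.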

Notes

This is pose-driven reframing, not full 3D reconstruction. As text/image-to-video models improve, multi-character placement and camera re-angling may need less control video. (Current ComfyUI ecosystems widely pair DWPose/ControlNet with I2V for this use.)

