Powered by ThinkDiffusion

Wan2.2 Fun Camera for Camera Control


Introduction

The Wan2.2 Fun model is a next-generation image-to-video AI system built on the Wan2.2 architecture, designed for creators who want precise, natural camera motion in their generated videos. With integrated "Camera Control Codes" and multimodal conditioning, the model turns a static image into a cinematic video sequence with programmable camera moves, with no manual animation or editing required.

1. Camera Motion Control

  • Multiple Camera Modes: Supports pan up, down, left, and right, zoom in/out, and combinations of these for custom camera trajectories.

  • Trajectory Control: Lets you guide the camera’s path and orchestrate smooth transitions between scenes.

  • Integration with Prompts: Combine descriptive prompts with movement codes for precise composition control.
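In ComfyUI, the camera move is selected on the WanCameraEmbedding node. As a rough illustration, here is a hypothetical API-format fragment for that node; the input names and the exact set of `camera_pose` values are assumptions, so check the node in your own ComfyUI install for the real options:

```python
import json

# Hypothetical ComfyUI API-format fragment selecting a camera move.
# "Zoom In" and the input names below are assumed, not canonical.
camera_node = {
    "class_type": "WanCameraEmbedding",
    "inputs": {
        "camera_pose": "Zoom In",  # e.g. "Pan Left", "Pan Up", ... (assumed labels)
        "width": 1280,
        "height": 720,
        "length": 81,              # frame count for the clip (assumed value)
    },
}

print(json.dumps(camera_node, indent=2))
```

The resulting camera embedding is fed into WanCameraImageToVideo alongside the text prompt, which is how descriptive prompts and movement codes combine.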

2. High-Quality Video Generation

  • Cinematic-Look Outputs: Trained on datasets curated for lighting, contrast, color tone, and scene composition, producing aesthetically rich videos.

  • Stable Motion Synthesis: A Mixture-of-Experts (MoE) architecture stabilizes video generation, keeping camera moves realistic and reducing jitter.

  • Resolution and Speed: Supports 720p video synthesis at 24 fps on consumer hardware.

Recommended Settings:

  • Use 30 steps and the full model (without the 4-step LoRA) when you need the highest video quality.
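The workflow effectively offers two presets: a fast pass using the lightx2v 4-step LoRAs, and a full-quality pass at 30 steps. A minimal sketch of that choice, with assumed (not canonical) cfg values:

```python
# Hypothetical sampler presets for this workflow. Step counts follow the
# recommendation above; the cfg values are illustrative assumptions.
fast_preset = {"steps": 4, "cfg": 1.0, "note": "with lightx2v 4-step LoRA"}
quality_preset = {"steps": 30, "cfg": 6.0, "note": "full model, no speed LoRA"}

def pick_preset(high_quality: bool) -> dict:
    """Return KSamplerAdvanced-style settings for the requested quality level."""
    return quality_preset if high_quality else fast_preset

print(pick_preset(True)["steps"])  # 30
```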

Generates in about 7 mins 24 secs

Nodes & Models

UNETLoader
  • wan2.2_fun_camera_high_noise_14B_fp8_scaled.safetensors
  • wan2.2_fun_camera_low_noise_14B_fp8_scaled.safetensors
LoadImage
WanCameraEmbedding
VAELoader
  • wan_2.1_vae.safetensors
CLIPLoader
  • umt5_xxl_fp8_e4m3fn_scaled.safetensors
  • umt5_xxl_fp16.safetensors
MarkdownNote (Note)
LoraLoaderModelOnly
  • wan2.2_i2v_lightx2v_4steps_lora_v1_high_noise.safetensors
  • wan2.2_i2v_lightx2v_4steps_lora_v1_low_noise.safetensors
CLIPTextEncode
ModelSamplingSD3
WanCameraImageToVideo
KSamplerAdvanced
VAEDecode
CreateVideo
SaveVideo
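Based on the node list above, the graph roughly chains together as follows. This ordering is an inference from the listed nodes, not the workflow's exact wiring:

```python
# Rough pipeline order inferred from the node list; the actual graph
# may connect nodes differently (e.g. LoRA and ModelSampling branches).
PIPELINE = [
    "LoadImage",             # source still image
    "WanCameraEmbedding",    # encodes the chosen camera move
    "CLIPTextEncode",        # prompt encoding via the umt5_xxl text encoder
    "WanCameraImageToVideo", # combines image, prompt, and camera conditioning
    "KSamplerAdvanced",      # two passes: high-noise then low-noise 14B UNet
    "VAEDecode",             # latents to frames via wan_2.1_vae
    "CreateVideo",
    "SaveVideo",
]

for step, node in enumerate(PIPELINE, 1):
    print(f"{step}. {node}")
```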
