floyo
Powered by ThinkDiffusion

Filmmaking & Animation FloPack

By floyoofficial
Overview

There are many possible paths from a character idea to consistent, controllable character renders, because everyone thinks and works differently and brings their own starting assets and output requirements.

So here's a curated selection of ways to get from A to B, designed to be as flexible as the creative process itself.

SECTION 1

Character design

Depending on what kind of ideas, sketches or images you're starting with, different workflows can turn them into final character images that you're happy with and ready to use for training.

Start with a written description, a sketch or an existing image. If you have a particular person in mind, add in their face with a face swap.
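The prompt-enhancer step above can be approximated without an LLM at all. The sketch below is a hypothetical template-based expander (the function name, style string and detail list are all illustrative assumptions, not part of any workflow in this pack); a real prompt-enhancer LLM would rewrite the description more freely, but the idea of padding a short character idea with framing, background and lighting cues is the same.

```python
def expand_character_prompt(base: str,
                            style: str = "cinematic concept art",
                            details: tuple = ("full body", "neutral background",
                                              "soft studio lighting")) -> str:
    """Expand a short character idea into a detailed text-to-image prompt.

    A fixed template stands in for a prompt-enhancer LLM, so results are
    predictable.  Tune the detail list per model; Flux tends to respond
    well to natural-language descriptions.
    """
    return f"{style} of {base}, " + ", ".join(details)

prompt = expand_character_prompt("a weathered sailor with a silver beard")
```

The fixed details ("full body", "neutral background") are chosen because they produce clean training images; swap them out when you want environment variety later.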
Multi-Image Flux Ultra, Pro, Dev, Recraft+
INPUTS: Text Prompt
OUTPUTS: Multiple Images

Text2Image + Prompt Enhancer LLM for Flux
INPUTS: Text Prompt
OUTPUTS: Image

Flux Text to Character Sheet
INPUTS: Text Prompt
OUTPUTS: Character Sheet

Flux Kontext - Sketch to Image
INPUTS: Text Prompt, Image
OUTPUTS: Image

Flux Kontext - Quick & Easy
INPUTS: Text Prompt, Image
OUTPUTS: Image

SMART FACE SWAPPER - Ace++ Flux Face swap
INPUTS: Text Prompt, Image
OUTPUTS: Image

Image to Character Spin
INPUTS: Text Prompt, Image
OUTPUTS: Video

SECTION 2

Training Set Creation & LoRA Training

Once you have a character image you like, it's time to create a variety of images based on it. A character sheet is a good start, but we recommend additional variations with different environments and lighting, so the LoRA doesn't associate the character with properties and elements of the image that will change around them.

In general, the more varied the inputs, the more flexible and controllable the outputs will be while retaining the true essence of what the character is and isn't. Don't worry if there are minor inconsistencies in the input images; that's why we're training a model in the first place!
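Before training, the image set usually needs captions. A common convention (used by kohya-style Flux LoRA trainers, among others; the exact format is an assumption here, so check your trainer's docs) is one .txt file per image, where each caption pairs a unique trigger token with a description of everything that should vary: pose, setting, lighting. A minimal sketch of that prep step:

```python
from pathlib import Path

def prepare_lora_dataset(image_dir: str, trigger: str, captions: dict) -> list:
    """Write one .txt caption per image, alongside each image file.

    Every caption starts with a unique trigger token followed by a
    per-image description of what SHOULD vary, so the trigger binds to
    the character rather than to the backdrop or lighting.  The folder
    layout and caption format are assumptions; verify against your
    trainer's documentation.
    """
    written = []
    for img in sorted(Path(image_dir).glob("*.png")):
        desc = captions.get(img.stem, "a photo of the character")
        img.with_suffix(".txt").write_text(f"{trigger}, {desc}")
        written.append(img.with_suffix(".txt").name)
    return written
```

Describing the changing elements in each caption, rather than the character's fixed features, is what lets the LoRA learn the character itself.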
Character Consistency - 1 image only. Flux Ace++
INPUTS: Image
OUTPUTS: Multiple Images

Flux Kontext - Single Image to Character LoRA
INPUTS: Image
OUTPUTS: Multiple Images

Fast LoRA Training for Flux via Floyo API
INPUTS: Multiple Images
OUTPUTS: LoRA

ComfyUI Flux LoRA Trainer
INPUTS: Multiple Images
OUTPUTS: LoRA

SECTION 3

Image Creation & Manipulation

Once your LoRA is trained, there are many ways to generate consistent images from that model and manipulate them to match your ideas and requirements.
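The multi-LoRA workflows below rest on a simple piece of linear algebra: each LoRA stores a low-rank weight update, and stacking LoRAs just sums those updates, each with its own strength. A numeric sketch of the standard LoRA formulation (this is the general technique, not any specific workflow's internals; the function name is illustrative):

```python
import numpy as np

def apply_loras(W, loras, scales):
    """Merge several LoRA updates into one base weight matrix.

    Each LoRA is a low-rank pair (A: r x d_in, B: d_out x r) plus an
    alpha; its contribution is scale * (alpha / r) * (B @ A).  Summing
    the contributions is why a character LoRA and a style LoRA can be
    mixed in a single generation, each with its own weight.
    """
    W_merged = W.astype(float).copy()
    for (A, B, alpha), s in zip(loras, scales):
        r = A.shape[0]  # the LoRA rank
        W_merged += s * (alpha / r) * (B @ A)
    return W_merged
```

Lowering a LoRA's scale weakens its influence continuously, which is why per-LoRA weight sliders work the way they do.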
Text to Image + LoRA model
INPUTS: Text Prompt, LoRA
OUTPUTS: Image

Text to Image with Multi-LoRA
INPUTS: Text Prompt, Multiple LoRAs
OUTPUTS: Image

Flux Inpaint - ULTIMATE workflow
INPUTS: Image, Brushed Mask
OUTPUTS: Image

Flux ControlNet 2.0 - All-in-one
INPUTS: Image, Brushed Mask
OUTPUTS: Image

Flux Kontext Inpainting
INPUTS: Image, Brushed Mask
OUTPUTS: Image

Image to 3D with Hunyuan3D w/ Texture Upscale
INPUTS: Image, Text Prompt
OUTPUTS: 3D Model, Texture Map, Normal Map

SECTION 4

Video & Animating

Bring your assets to life. These workflows add motion with precisely the level of control you want, ranging from a single text prompt to detailed image, video, and audio references.
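For the start/end-frame workflows below, it helps to see the simplest possible baseline: a linear crossfade between the two keyframes. A model like Wan2.1 synthesises actual motion rather than blending pixels, so this sketch only illustrates the input/output contract (two images in, a fixed-length stack of frames out); the function is a toy, not part of the workflow.

```python
import numpy as np

def crossfade(start, end, n_frames):
    """Naive start/end-frame baseline: linearly blend two keyframe
    images into n_frames intermediate frames.  Real video models
    generate motion between the keyframes instead of fading pixels.
    """
    # Blend weights 0.0 -> 1.0, broadcast over (height, width, channels).
    ts = np.linspace(0.0, 1.0, n_frames)[:, None, None, None]
    return (1.0 - ts) * start[None] + ts * end[None]
```

Comparing a model's output against this baseline is a quick way to judge how much genuine motion it adds between your keyframes.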
Text to Video and Wan with optional LoRA
INPUTS: Text Prompt, LoRA
OUTPUTS: Video

Image to Video with Wan
INPUTS: Text Prompt
OUTPUTS: Image, Video

Wan2.1 FusionX Image2Video
INPUTS: Text Prompt
OUTPUTS: Image, Video

Wan2.1 Start & End Frame Image to Video
INPUTS: Text Prompt, Start Image, End Image
OUTPUTS: Video

Start/End Frame Multi-Video via Floyo API
INPUTS: Text Prompt, Start Image, End Image
OUTPUTS: Multiple Videos

Video to Video Restyle with Wan
INPUTS: Text Prompt, Start Image, Video
OUTPUTS: Video

SECTION 5

Audio, LipSync and VFX

Elevate your footage with cinematic polish. These workflows layer in rich audio, precise lip-sync, dynamic effects, adaptive camera moves, smart masking, outpainting, and more: everything you need to edit or enhance existing scenes.
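The SAM2 masking workflow below takes tracking markers and returns a per-pixel mask for each frame. As a rough intuition for that contract, here is a toy stand-in (the function and its colour-threshold logic are assumptions for illustration only; SAM2 actually segments the object under the marker and tracks it across frames, which a threshold cannot do):

```python
import numpy as np

def mask_from_marker(frame, marker, tol=30.0):
    """Toy stand-in for marker-driven masking: select every pixel whose
    colour sits within `tol` of the colour under the tracking marker.
    Only meant to show the contract: a marker point in, a boolean
    per-pixel mask out.
    """
    y, x = marker
    ref = frame[y, x].astype(float)
    # Euclidean distance in colour space from the marker's colour.
    dist = np.linalg.norm(frame.astype(float) - ref, axis=-1)
    return dist <= tol
```

The resulting boolean mask is what downstream nodes consume for compositing, inpainting, or restricting effects to the tracked object.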
MMAudio: Video to Synced Audio
INPUTS: Video, Text Prompt
OUTPUTS: Audio Clip

Video Masking with Sam2 Comparison
INPUTS: Video, Tracking Markers
OUTPUTS: Video

Wan2.1 and VACE for Video to Video Outpainting
INPUTS: Video, Text Prompt
OUTPUTS: Video

Wan2.1 and RecamMaster for V2V Camera Control
INPUTS: Video, Camera Selection
OUTPUTS: Video

Wan2.1 and FantasyTalking - Image2Video Lipsync
INPUTS: Text Prompt, Image, Audio Clip
OUTPUTS: Video