
floyoofficial

Bio under construction. Expect wild opinions & mistakes. Always learning, iterating. Here for good prompts, great lighting & snacks.

OG badge

401 Total Likes · 55,350 Total Views

My Workflows (160)

Wan2.2 14b - Image to Video w/ Optional Last Frame

Image to Video

Game Development

Animation

Filmmaking

Wan2.2

First and last frame


Wan 2.5 Image to Video

Image2Video

Wan

API

Floyo API

Wan2.5 Preview


Fast LoRA Training for Flux via Floyo API

Flux

LoRA Training

API

FLUX is great at generating images, but locking in a specific aesthetic or character is easier with a LoRA. Here's how to create your own.


Flux Kontext Sketch to LineArt + Color Previz

Sketch to Image

Lineart

Previz

Flux Kontext

Quickly convert rough sketches into polished lineart and colorized concepts. Ideal for early storyboards, character designs, scene planning, and other visual explorations.


Image to Character Spin

Image2Video

Wan2.1

360

See an image of a character spin 360 degrees.
Key Inputs
Image reference: Use any JPG or PNG showing your subject clearly.
Width & height: The default image-resize resolution works best for portrait images; if the image is landscape, change from 480x832 to 832x480.
Prompt: Follow the example format: "The video shows (describe the subject), performs a r0t4tion 360 degrees rotation."
Denoise: The amount of variance in the new image. Higher has more variance.
File Format: H.264 and more.
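The portrait/landscape note above amounts to a simple orientation check. A minimal sketch of that choice (the helper name and file name are illustrative, not part of the workflow):

```python
# Pick the Wan2.1 resize resolution to match the input image's
# orientation, per the note above: portrait keeps the 480x832
# default, landscape swaps to 832x480.
from PIL import Image

def pick_resolution(image_path: str) -> tuple[int, int]:
    w, h = Image.open(image_path).size
    return (480, 832) if h >= w else (832, 480)

print(pick_resolution("character.png"))  # e.g. (480, 832)
```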


Flux Character LoRA Test and Compare

Flux

Text to Image

Game Development

Animation

Filmmaking

LoRA

Test and compare multiple epochs of a character LoRA side by side with preset prompts.
When training a LoRA, you'll usually have a few checkpoints throughout the process to test. This workflow lets you load up to 4 LoRAs to test side by side, making it easier to determine which one is right for you!
Key Inputs:
LoRA Loaders: Load each LoRA epoch for the same character, in up to 4 groups.
Groups Bypasser: Enable/disable groups as needed. If you only have 2 epochs to test, disable the back 2 groups!
Trigger word: Simply add the trigger word for your LoRA and it will auto-fill in the default prompts. Leave blank if you're using your own custom prompts that include the trigger word.
LoRA Testing Prompts: The default prompts work well to get an idea of how your character will look in different situations, but feel free to replace them with your own prompts (max 4).


Wan2.1 FusionX Image2Video

Wan

Image to Video

Created by @vrgamedevgirl on Civitai, please support the original creator!


Multi-Image Flux Ultra, Pro, Dev, Recraft+

Flux

API

Text to Image

Start with a prompt, and get a different render from a range of unique models at the same time.


Create Character LoRA Dataset using Qwen Image Edit 2509

Dataset

LoRA

Qwen

Qwen Image Edit 2509


ComfyUI Flux LoRA Trainer

Flux

LoRA Training

Created by @Kijai on Github, please support the original creator!


Flux Kontext - Sketch to Image

Flux

Kontext

Sketch to Image

Bring your sketches to life in full color with Flux Kontext! Key Inputs Load Image – Upload the sketch you want to transform. Prompt – Describe the desired output style, such as: “Render this sketch as a realistic photo” or “Turn this sketch into a watercolor painting.”


Flux Text to Character Sheet

Controlnet

Flux

Character Sheet

Create a character and a range of consistent outputs, suitable for establishing character consistency, training a model, and maintaining consistency across multiple scenes.
Key Inputs
Image reference: Use the included pose sheet to show a range of positions.
Prompt: As descriptive a prompt as possible.


Flux Dev - Text to Image w/ Optional Image Input

Flux Dev

text to image

image to image


Wan2.1 Fun Control and Flux for V2V Restyle

Controlnet

Flux

Wan2.1

Video2Video

Create a new video by restyling an existing video with a reference image.


Wan2.1 and VACE for Video to Video Outpainting

Wan

Video to Video

Outpainting

Wan VACE video outpainting invites you to break free from the limits of the frame and explore endless creative possibilities.


Qwen Image Edit 2509 Face Swap and Inpainting

Face Swap

Inpainting

Image2Image

Qwen

Qwen Image Edit 2509


Flux Text to Image

Flux

Text2Image

Create original images using only text prompts, which can be simple or elaborate.
Key Inputs
Prompt: As descriptive a prompt as possible.
Width & height: Optimal resolution settings are noted.


Wan2.1 and RecamMaster for V2V Camera Control

Wan

Video to Video

Recammaster

Adjust the camera angle of an existing video, like magic.


Qwen Image Edit 2509: Combine Multiple Images Into One Scene for Fashion, Products, Poses & more

Image2Image

Qwen

2509


Qwen Image Edit - Edit Image Easily

Image2Image

Qwen

Qwen Image Edit


Wan2.2 Animate Character

Video to Video

Animation

Filmmaking

Wan2.2

Animate


Wan 2.6 Reference to Video

Video2Video

VFX

Wan2.6

Video Production


Z-Image Turbo: Fast Image Generation in Seconds

Text2Image

Marketing

Z-Image Turbo

Production

Photography


Flux Kontext and HD360 LoRA for 360 Degree View

panorama

kontext

Flux

Flux Kontext

Image2Image

Flux Kontext 360° Workflow - Seamless Panorama Generation
Input: Simply upload an image in the "Load Image from Outputs" node.
Output: A 360° panoramic image.


SMART FACE SWAPPER - Ace++ Flux Face swap

faceswap

sebastian kamph

face swap

face

swap

ace

face swapper

Face swapper built with Flux and ACE++. Works with added details too, like hats and jewelry. Smart features; use natural language.


Text to Image + LoRA model

LoRA

Flux

Text2Image

Create an image from an AI model trained on something specific (a particular figure, outfit, art style, product, etc.) to ensure those specific details appear in the result.


VEO3 Future of Video Creation

Text2Video

API

Floyo API

VEO3


Image to Image with Flux ControlNet

Controlnet

Image

Flux

Transform your images into something completely new while retaining specific details and composition from the original, using flexible controls.
Key Inputs
Image reference: Use any JPG or PNG showing your subject clearly.
Prompt: As descriptive a prompt as possible.
Denoise Strength: The amount of variance in the new image. Higher has more variance.
Width & height: Try to match the aspect ratio of the original if possible.


Video to Video with Control Image

AnimateDiff

SDXL

HotshotXL

Video2Video

Control Image

Breathe life into a character from an image reference, using motion from a reference video.
Key Inputs
Image reference: Use any JPG or PNG showing your subject clearly and in the style of your shot.
Load Video: Use any MP4 that you would like to use for motion reference.


Text to Image with Multi-LoRA

LoRA

Flux

Text2Image

Create consistent images with multiple LoRA models.


Flux Outfit Transfer

Flux

Image to Image

Ace+

Virtual Try-on

Fashion

Virtual Outfit Try-On with Auto Segmentation
Try virtual clothing on any subject using Flux Dev, Ace Plus, and Redux, with automatic segmentation. Great for concept previews, fashion mockups, or character styling.
Key Inputs
Outfit: Load the outfit image you want to apply. Make sure it's high quality — visible artifacts or distortions may carry over into the final result.
Actor: Add the subject or character you want to dress. Ideally, use a clear, front-facing image.
Human Parts Ultra: Choose which parts of the body the clothing should apply to. For example, for a long-sleeve shirt, select torso, left arm, and right arm. This helps the model align the clothing properly during generation.
Prompt: The default value works for most outfits, but you can adjust it to describe the desired outfit.


LoRA Training Video with Hunyuan

Hunyuan

API

LoRA Training

Hunyuan is great at generating videos, but locking in a specific aesthetic or character is easier with a LoRA. Here's how to create your own.
Quick start recipe
Upload a Zip file of curated images + captions.
Enable is_style and include a unique trigger phrase.
Compare checkpoints across a range of steps to find the sweet spot.
Training overview
This workflow trains a Hunyuan LoRA using the Fal.ai API. Since it's not training in ComfyUI directly, you can run this workflow on a Quick machine. Processing takes 5-10 minutes with the default settings.
Open the ComfyUI directory and navigate to the ../input/ folder. From there, create a new folder and upload your image dataset to it. If you create a new folder named "Test", the path should look like: ../user_data/comfyui/input/Test
The default settings work very well. If you would like to add a trigger word for your LoRA, you can do that in the Trigger Word field. Once finished, a new URL will appear in your Preview Text node. Copy the URL and paste it directly into your ../comfyui/models/LoRA/ folder using the Upload by URL feature in the file browser.
Prepping your dataset
Fewer, ultra‑high‑res (≈1024×1024+) images beat many low‑quality ones. Every image must clearly represent the style or individual and be artifact‑free. For people, aim for at least 10-20 images with different backgrounds (5 headshots, 5 whole-body, 5 half-body, 5 in other scenes).
Captioning
Give the style a unique trigger phrase (so it is not confused with a regular word or term). For better prompt control, add custom captions that describe content only — leave style cues to the trigger phrase. Create accompanying .txt files with the same name as the image each one describes. If you do add custom captions, be sure to turn on is_style to skip auto‑captioning. It is set to off by default.
Training steps
The default is around 2000, but you can train multiple checkpoints (e.g., 500, 1000, 1500, 2000) and pick the one that balances style fidelity with prompt responsiveness. Too few steps: the character becomes less realistic or the style fades. Too many steps: the model overfits and stops obeying prompts.
Output Path
The output path will be a URL in the Preview Text node. Copy the URL and paste it directly into your ../comfyui/models/LoRA/ folder using the Upload by URL feature in the file browser.
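The captioning and packaging steps above are easy to script outside ComfyUI. A minimal sketch using only the standard library (the folder name, glob pattern, trigger phrase, and caption text are illustrative assumptions, not part of the workflow):

```python
# Write a same-named .txt caption next to each image, then zip the
# dataset folder for upload. Captions describe content only; the
# unique trigger phrase carries the style, per the guide above.
import zipfile
from pathlib import Path

dataset = Path("input/Test")   # e.g. ../user_data/comfyui/input/Test
trigger = "flt_style_01"       # example trigger phrase (assumption)

for img in sorted(dataset.glob("*.png")):
    caption = img.with_suffix(".txt")
    if not caption.exists():
        caption.write_text(f"{trigger}, a photo of the subject on a plain background")

with zipfile.ZipFile("dataset.zip", "w") as zf:
    for f in sorted(dataset.iterdir()):
        zf.write(f, arcname=f.name)
```

If you supply custom captions like this, remember to turn on is_style so auto-captioning is skipped.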


SeedVR2 Upscale: Upscale Your Image to Extreme Clarity

Upscale

API

Image2Image

Floyo API

SeedVR Upscale

SeedVR2


MMAudio: Video to Synced Audio

Video to Video

MMaudio

Generate synchronized audio from a given video input. It can be combined with video models to produce videos with audio.


Z-Image Turbo Controlnet 2.1 Image to Image

Controlnet

Image2Image

Photography

Z-Image-Turbo

Portrait


Video to Video with Camera Control with Wan


Adjust the camera angle of an existing video, like magic.


Z-Image Turbo - Text to Image w/ Optional Image Input (Image to Image)

Image to Image

Text to Image

Z-Image Turbo


Image Redux with Flux

Image

Flux

Redux

Create variations of a given image, or restyle it. Use it to refine, explore, or transform ideas and concepts.
Key Inputs
Image reference: Use any JPG or PNG showing your subject clearly.
Width & height: In pixels.
Prompt: As descriptive a prompt as possible.
Strength (step 5: value): Strength of the Redux model; play with the value to increase or decrease the amount of variation.


Wan2.6 Image to Video

Image2Video

Animation

VFX

Film

Wan2.6


Flux Image Upscaler with UltimateSD

Upscale

Image

Flux

UltimateSD

A simple workflow to enlarge and add detail to an existing image.
Key Inputs
Image: Use any JPG or PNG.
Upscale by: The factor of magnification.
Denoise: The amount of variance in the new image. Higher has more variance.


Image Inpainting with LoRA

Image

Inpaint

LoRA

Change specific details on just a portion of the image for inpainting or Erase & Replace, adding a LoRA for extra control.


 Qwen Image Edit 2509 + Flux Krea for Creating Next Scene

Flux

Qwen

Qwen Image Edit 2509

Flux Krea

Photography

Filmography


Vertical Video FX Inserter - Qwen + Wan 2.1 FunControl

qwen

image-to-image

wan21-funcontrol

video-conditioning

reference-image

upscaling

fx-integration


Kling Omni One Video to Video Edit

video to video

kling 2.5

film and animation

API

Floyo API

Omni One


Scribble to Image

Controlnet

SD1.5

Turn your scribbles into a beautiful image with only a drawing tool and a text prompt.
Key Inputs
Scribble: Create your scribble with the painting and design tools.
Prompt: As descriptive a prompt as possible.
Width & height: Optimal resolution settings are noted.
ControlNet Strength: The amount of adherence to the original image. Higher has more adherence.
Start Percent: The point in the generation process where the control starts exerting influence. (Have it start later to let the AI imagine first.)
End Percent: The point in the generation process where the control stops exerting influence. (Have it end sooner to let the AI finish with some variation.)
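Start Percent and End Percent simply gate which sampling steps the ControlNet influences. A minimal sketch of that idea (the function name and values are illustrative, not part of the workflow):

```python
# The control signal is only applied to sampling steps that fall
# inside [start_percent, end_percent) of the schedule.
def control_active_steps(total_steps: int, start: float, end: float) -> list[int]:
    first = int(total_steps * start)
    last = int(total_steps * end)
    return [s for s in range(total_steps) if first <= s < last]

# Start later / end sooner: the AI imagines first and finishes freely.
print(control_active_steps(20, 0.1, 0.8))  # [2, 3, ..., 15]
```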


FlashVSR Upscale Your Videos Instantly

Upscale

Video2Video

FlashVSR


 HunyuanVideo Foley: Create a Lifelike Sound

Video2Video

HunyuanVideo Foley


SeC Video Segmentation: Unleashing Adaptive, Semantic Object Tracking

Segmentation

Video2Video

SeC


Vertical Video Light & Mood Shift

qwen

image-to-image

reference-image

wan2.1 FunControl


Image-to-Video with Reference Video (Prompt-Based Camera Rotation)

Wan2.2

image to video

9:16

reference video

camera rotation

pose control

DWpose


Vertical Video Character Face & Actor Swap (Wan 2.2 Animate)

masking

image to video

vertical video

Wan2.2 Animate

character replacement

character swap

Points Editor

WanAnimateToVideo


Seedance I2V: Image to Video in Minutes

Image2Video

API

Floyo API

Seedance


Image to Character Sheet

SDXL

Character Sheet

Image to Image

Generate a character sheet with multiple angles from a single input image as reference.
Key Inputs
Image reference: Use any JPG or PNG showing your subject clearly. If you're trying to create a full-body output, a full-body input must be provided.


Start/End Frame Multi-Video via Floyo API

Image to Video

API

Compare Luma Dream Machine and Kling Pro 1.6 via the Fal API.


Text to Video and Wan with optional LoRA

LoRA

Wan2.1

Text2Video

Generate a high-quality video from a text prompt, adding a LoRA for extra control over character or style consistency.
Key Inputs
Prompt: As descriptive a prompt as possible.
Load LoRA: Load your reference model here.
Width & height: Optimal resolution settings are noted.
File Format: H.264 and more.


Wan2.1 Start & End Frame Image to Video

Image2Video

Wan2.1

Start and end frame

Used for image to video generation, defined by the first frame and end frame images.


Wan 2.1 Text2Image

Wan2.1

text2image

Created by @yanokusnir on Reddit, please support the original creator! https://www.reddit.com/r/StableDiffusion/comments/1lu7nxx/wan_21_txt2img_is_amazing/ If this is your workflow, please contact us at team@floyo.ai to claim it!
Original post from the creator: Hello. This may not be news to some of you, but Wan 2.1 can generate beautiful cinematic images. I was wondering how Wan would work if I generated only one frame, so as to use it as a txt2img model. I am honestly shocked by the results. All the attached images were generated in full HD (1920x1080px), and on my RTX 4080 graphics card (16GB VRAM) it took about 42s per image. I used the GGUF model Q5_K_S, but I also tried Q3_K_S and the quality was still great. The only postprocessing I did was adding film grain; it adds the right vibe to the images and they wouldn't be as good without it. Last thing: for the first 5 images I used the euler sampler with the beta scheduler - the images are beautiful with vibrant colors. For the last three I used ddim_uniform as the scheduler, and as you can see they are different, but I like the look even though it is not as striking. :) Enjoy.


Image to 3D with Hunyuan3D

Image to 3D

Hunyuan3D

Hunyuan 3D

Architecture

Game Development

Animation

3D View

Filmmaking

A simple workflow to create a detailed & textured 3D model from a reference image.


Nano Banana Pro for Multi Grid View of Product Ads

API

Image2Image

Nano Banana Pro

Ecommerce

Product Ads

Create grids of different angles for your ecommerce products.


Z-Image Turbo with Controlnet 2.1 and Qwen VLM for Creating Accurate Variety of Images

Controlnet

API

Image2Image

LoRA

Floyo API

Z-Image Turbo


SAM3 Image Segmentation

Segmentation

Image2Image

SAM3


FlatLogColor LoRA and Qwen Image Edit 2509

Qwen

Qwen Image Edit 2509

Photography

LoRA

FlatLogColor


Flux 2 Text-to-Image Generation

Text2Image

Marketing

Photography

Flux2


Vertical Video FX Inserter / Element Pass with Seedream + Wan

video-conditioning

reference-image

upscaling

seedream

wan2.1funControl


🔥Create Stunning 10 Second 3D Spin Shots in Seedance for Characters, Products, and Hero Scenes

3D

Image2Video

Floyo API

Seedance

3D Spin

Spin

Floyo


Wan2.1 and ATI for Control Video Motion: Draw Your Path, Get Your Video

Image2Video

Wan

ATI


Nano Banana Pro Text-to-Image: Gemini 3 Pro Image Model

API

Image2Image

Floyo API

Nano Banana Pro


Multi-Angle LoRA and Qwen Image Edit 2509: Unlocking Dynamic Camera Control for Your Images

Image2Image

Qwen

Qwen Image Edit 2509

Multiple Angles


SRPO Next-Gen Text-to-Image

Text2Image

SRPO


Chroma 1 Radiance Text to Image

Text2Image

Chroma1 Radiance

Macro Photography


AniSora 3.2 and Wan2.2: Best Practices for Generating Smooth Character 3D Spin

3D

Image2Video

AniSora

3D Spin

Character Spin


Wan2.2 Fun Camera for Camera Control

Image2Video

Wan2.2

Camera Control


Wan2.1 FusionX and MultiTalk - Image to Video

Wan2.1

Lipsync

Image to Video

Animation

Filmmaking

Multitalk

Marketing

Turn any portrait - artwork, photos, or digital characters - into speaking, expressive videos that sync perfectly with audio input. MultiTalk handles lip movements, facial expressions, and body motion automatically.


Text2Video with LTX

LTX

Text2Video

Quick video generation from prompts, trading quality for speed for fast previz and ideation.
Key Inputs
Prompt: Write only a simple prompt; the prompt enhancer will elaborate on it.
File Format: H.264 and more.
Width & height: Optimal resolution settings are noted; the maximum resolution is 768x512. Make sure not to exceed that.


Text to Video + Hunyuan LoRA

LoRA

Hunyuan

Text2Video

Integrate a custom model with your text prompt to create a video with a consistent character, style, or element.
Key Inputs
Prompt: As descriptive a prompt as possible. Make sure to include the trigger word from your LoRA below.
Load LoRA: Load your reference model here.
Width & height: Resolution settings are noted in pixels.
Guidance strength (CFG): Higher numbers adhere more to the prompt.
Flow Shift: Adjust to tweak video smoothness for temporal consistency.


Simple Self-Forcing Wan1.3B+Vace workflow

Video

Wan

Vace

Created by @davcha on Civitai, please support the original creator! https://civitai.com/models/1674121/simple-self-forcing-wan13bvace-workflow If this is your workflow, please contact us at team@floyo.ai to claim it!
Original guide from the creator: This is a very simple workflow to run Self-Forcing Wan 1.3B + Vace. It uses only a single custom node, which everyone making videos should have: Kosinkadink/ComfyUI-VideoHelperSuite. Everything else is pure ComfyUI core. You'll need to download the model of your choice from lym00/Wan2.1-T2V-1.3B-Self-Forcing-VACE on Hugging Face and put it inside your /path/to/models/diffusion_models folder. This workflow can be used as a very good starting point for experimenting. You can refer to [2503.07598] VACE: All-in-One Video Creation and Editing for how to use Vace. You don't need to read the paper, of course; the information you are interested in is mostly at the top of page 7, which I reproduce in the following.
Basically, in the WanVaceToVideo node, you have 3 optional inputs: control_video, control_masks, and reference_image.
control_video and control_masks are a little bit misleading: you don't have to provide a full video. You can in fact provide a variety of things to obtain various effects. For example:
If you provide a single image, it's basically more or less equivalent to image2video.
If you provide a sequence of images separated by empty images (img1, black, black, black, img2, black, black, black, img3, etc.), then it's equivalent to interpolating all those images, filling in the blacks. A special case, to make it clear: if you have img1, black, black, ..., black, img2, then it's equivalent to start_img/end_img to video.
control_masks controls where Wan should paint. Basically, wherever the mask is 1, the original image will be kept. So you can, for example, pad and/or mask an input image, use that image and mask as control_video and control_mask, and you'll basically do an image2video inpaint and outpaint. If you input a video in control_video, you can control where the changes should happen in the same way, using control_mask; you'll need to set one mask per frame of the video.
If you input an image preprocessed with OpenPose or a depth map, you can finely control the movement in the video output.
reference_image is basically an image fed to Wan+Vace that serves as a reference point. For example, if you put the image of someone's face here, there's a good chance you'll get a video with that person's face.
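The keyframes-separated-by-black-frames trick above can be sketched as plain arrays before they reach the WanVaceToVideo node. A minimal illustration (shapes and placeholder frames are assumptions; the mask convention follows the guide above, where 1 keeps the supplied frame):

```python
# Build a control_video of N frames: keyframes at the ends, black
# frames in between, plus per-frame masks marking what Wan may paint.
import numpy as np

H, W, N = 480, 832, 9
start_frame = np.zeros((H, W, 3), dtype=np.float32)   # placeholder keyframe
end_frame = np.ones((H, W, 3), dtype=np.float32)      # placeholder keyframe

control_video = np.zeros((N, H, W, 3), dtype=np.float32)  # black = fill in
control_masks = np.zeros((N, H, W), dtype=np.float32)     # 0 = Wan paints here

for idx, frame in ((0, start_frame), (N - 1, end_frame)):
    control_video[idx] = frame
    control_masks[idx] = 1.0   # 1 = keep the supplied keyframe

# Feeding these in approximates start-frame/end-frame interpolation.
```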


Sketch to Image

Controlnet

SD1.5

Turn your sketches into full-blown scenes.
Key Inputs
Image reference: Use any JPG or PNG showing your subject clearly.
Prompt: As descriptive a prompt as possible.
Width & height: In pixels.
ControlNet Strength: The amount of adherence to the original image. Higher has more adherence.
Start Percent: The point in the generation process where the control starts exerting influence. (Have it start later to let the AI imagine first.)
End Percent: The point in the generation process where the control stops exerting influence. (Have it end sooner to let the AI finish with some variation.)


Image Upscaler with LoRA

Upscale

Image

LoRA

Flux

Create a larger, more detailed image, with an extra AI model for fine-tuned guidance.
Key Inputs
Load Image: Use any JPG or PNG showing your subject clearly.
Load LoRA: Load your reference model here.
Prompt: As descriptive a prompt as possible.
Upscale by: The factor of magnification.
Denoise: The amount of variance in the new image. Higher has more variance.


Flux Fill Dev Image Outpainting

Image

Flux

Outpaint

Extend your images outward for a wider field of view, or just to see more of your subject. Expand compositions, change aspect ratios, or add creative elements while maintaining consistency in style, lighting, and detail, seamlessly blending with the existing artwork. Enhance visuals, create immersive scenes, and repurpose images for different formats without losing their original essence.
Key Inputs
Image reference: Use any JPG or PNG showing your subject clearly.
Prompt: As descriptive a prompt as possible, describing the area you want to extend into.
Left, Right, Top, Bottom: Amount of extension in pixels.
Feathering: Radius in pixels around the original image over which the AI-generated outpainting blends with the original.
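The Left/Right/Top/Bottom and Feathering inputs map directly onto a padded canvas and a blurred mask. A minimal sketch with PIL (the function name and default values are illustrative, not the workflow's internals):

```python
# Pad the canvas by the requested pixel amounts and build a mask that
# is white where new content is generated, feathered at the seam.
from PIL import Image, ImageFilter

def pad_for_outpaint(img: Image.Image, left=0, right=256, top=0,
                     bottom=0, feather=40):
    w, h = img.size
    canvas = Image.new("RGB", (w + left + right, h + top + bottom))
    canvas.paste(img, (left, top))

    mask = Image.new("L", canvas.size, 255)          # 255 = generate here
    mask.paste(0, (left, top, left + w, top + h))    # 0 = keep original
    # Feathering: soften the boundary so the outpainted region blends in.
    return canvas, mask.filter(ImageFilter.GaussianBlur(feather))
```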


Image Inpainting

Image

Inpaint

Flux

Change specific details on just a portion of the image, sometimes known as inpainting or Erase & Replace.
Key Inputs
Image reference: Use any JPG or PNG showing your subject clearly.
Masking tools: Right-click to reveal the masking tool option, and create a mask of the desired area to inpaint.
Prompt: As descriptive a prompt as possible, to help guide what you would like replaced in the masked area.


Video Masking with Sam2

Video

Masking

Segmentation

Use a video clip and visual markers to segment or create masks of the subject (or the inverse).
Key Inputs
Load Video: Use any MP4 that you would like to segment or create a mask from.
Select subject: Use 3 green selectors to identify your subject and one red selector to identify the space outside your subject.
Modify markers: Shift+Click to add markers, Shift+Right Click to remove markers.
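The marker scheme above is the standard point-prompt format that segmenters like SAM2 consume. A minimal sketch (the coordinates are illustrative):

```python
# Three positive (green) points on the subject, one negative (red)
# point outside it; labels 1 = include, 0 = exclude.
import numpy as np

points = np.array([[320, 180], [300, 260], [350, 220],  # on the subject
                   [60, 40]], dtype=np.float32)          # background
labels = np.array([1, 1, 1, 0], dtype=np.int32)
# A (points, labels) pair like this is what the tracker propagates
# across frames to produce the per-frame masks.
```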


Modify the Image using InstantX Union ControlNet

Controlnet

Image2Image

Qwen

InstantX Union Controlnet


SeedVR2 and TTP Toolset 8k Image Upscale

Upscale

Image2Image

SeedVR2

8k


Hunyuan Video LoRA Trainer

Hunyuan

API

LoRA

Trainer


3D Print LoRA and Flux Kontext for Image to 3D Print Mockup

Flux Kontext

3D Print

Mockup

Image to 3D


Change Product Shots in Ecommerce Ads with NanoBanana Pro

Image to Image

Reference image

NanoBanana

Ecommerce


Insert Products in Ecommerce Ads with NanoBanana Pro

Image to Image

NanoBanana

Reference Image

Ecommerce


Qwen Image Edit 2511 Restore Damaged Old Photographs

Image2Image

Qwen

Qwen Image Edit 2511


Wan2.1 + SCAIL for Animating Images for Movement

Image2Video

Wan

SCAIL


Wan2.1 + WanMOVE for Animating Movement using Trajectory Path

Image2Video

Wan2.1

Animation

Wan Move


360 Degree Product Video from One Image Using Nano Banana Pro & Veo2

Image to Video

NanoBanana

Veo2


Amazon Bedrock - Text to Multi-Image with SDXL, Titan and Nova Canvas

SDXL

API

Text to Image

Bedrock

Nova Canvas

Titan

Generate and compare images between 3 different models powered by Amazon Bedrock.
Key Inputs
Prompt: As descriptive a prompt as possible.
Models
SDXL: Solid all-around performer with strong prompt adherence and wide style range.
Titan: Versatile model with built-in editing features and customization flexibility.
Nova Canvas: Quick iterations with creative flair, ideal for brainstorming and concept exploration.


Qwen Image 2512 Text to Image

Text2Image

Qwen

Photography

Qwen Image 2512


LTX 2 Pro for Image to Video

Image2Video

Animation

Filmmaking

Video Editing

LTX 2 Pro


Wan2.1 and FantasyTalking - Image2Video Lipsync

Image2Video

Wan2.1

FantasyTalking

Lipsync

Create high-quality lipsync video from image inputs with Wan2.1 FantasyTalking.
Key Inputs
Load Image: Select an image of a person with their face in clear view.
Load Audio: Choose an audio file.
Frames: How many frames are generated.


Image to Video with Multiframe Control

Video

LTX

Image2Video

Used for image-to-video generation, including a first frame, end frame, or other multiple key frames.
Key Inputs
Load Image (Start Frame): Use any JPG or PNG showing your subject clearly to start your video.
Load Image (End Frame): Use any JPG or PNG showing your subject clearly to act as the last part of your video. Make sure it's the same resolution as the start image.
Width & height: Optimal resolution settings are noted. LTX maximum resolution is 768x512.
Prompt: As descriptive a prompt as possible.


Text to Character Sheet with a reference LoRA

Controlnet

Flux

Character Sheet

Generate a character sheet using a prompt and a LoRA model of a particular person for more accurate renders.
Key Inputs
Load Image: Use any JPG or PNG of your pose sheet.
Prompt: As descriptive a prompt as possible.
Width & height: Optimal resolution settings are noted at 1280px x 1280px.
Denoise: The amount of variance in the new image. Higher has more variance.
ControlNet Strength: The amount of adherence to the original image. Higher has more adherence.
Start Percent: The point in the generation process where the control starts exerting influence. (Have it start later to let the AI imagine first.)
End Percent: The point in the generation process where the control stops exerting influence. (Have it end sooner to let the AI finish with some variation.)
Flux Guidance: How much influence the prompt has over the image. Higher has more guidance.


Character + Outfit → High-End Editorial Shoot

character-to-photoshoot

studio-photoshoot

Nano banana


Create Images Using Qwen Image Edit 2511

Image to Image

Reference Image

Qwen2511


Vertical Video Scene Extension & Coverage Generator

wan2.2

qwen

reference-image

first-last frame


Image to 3D with Hunyuan3D w/ Texture Upscale

3D

Flux

Image to 3D

Upscaling

Hunyuan 3D

Architecture

Game Development

Animation

Create a 3D model from a reference image with Flux Dev texture upscaling.
Key Inputs
Image: Use any JPG or PNG. Load the image you want to generate a 3D asset from; if it has a background, this workflow will remove it and center the subject.
Prompt: As descriptive a prompt as possible.
Denoise: The amount of variance in the new image. Higher has more variance.
Notes: If you aren't satisfied with the initial mesh, simply cancel the workflow generation, preferably before the process reaches the SamplerCustomAdvanced node, because applying the textures to the model may take a bit more time, and you'll be unable to cancel the generation during that step. The seed is fixed for the mesh generation, so if you need to retry the texture upscale you don't need to also re-generate the mesh. If you would like to try a different seed for a better mesh, simply expand the node below and change the seed to another random number. Changing the seed can help in some cases, but ultimately the biggest factor is the input image. If the first mesh isn't showing, give it a moment; there are some additional post-processing steps going on in the background for de-lighting/multiview.


Image to Video with Seedance Pro API

film

animation


Topaz Video Upscaler: Turn Everyday Videos into Crisp, High-Resolution Content for Any Platform

Video2Video

API

Floyo API

Topaz

Video Upscale


Wan Alpha Create Transparent Videos

Wan

Alpha

Transparent

VFX

Text to Video

Video Editing


LTX 2 Fast for Text to Video

API

Filmmaking

Floyo API

Filmography

LTX 2 Fast


Veo 3.1 Image to Video - First Frame and Optional Last Frame

Image2Video

API

Floyo API

Veo 3.1


Vertical Video Scene Extension & Coverage Generator using Seedream +Wan

wan2.2

first-last frame

Seedream

reference image


LTX 2 Fast for Image to Video

Image2Video

API

Floyo API

Filmography

LTX 2 Fast

Filmmaking


Wan2.1 VACE and Ditto: Artistic Makeovers for Videos

Wan

Video2Video

VACE

Ditto


Qwen Image Edit 2509 and Grayscale to Color LoRA

Marketing

Ecommerce

3D Render

Product


Realistic Product or Props Replacement

wan2.2

Animate

Realistic

Qwen_2509


Kling Master 2.0 Create Engaging Video Content

Image2Video

API

Floyo API

Kling

Kling Master 2.0


Next-Level Motion from Images: MiniMax for Immersive Video Generation

Image2Video

API

Floyo API

Minimax


Light Restoration LoRA + Qwen Image Edit 2509 Image to Image

Image2Image

Qwen

Qwen Image Edit 2509

Light Restoration


Boost Your Creative Video: Comprehensive Solutions with Seedance Image to Video

Image2Video

API

Floyo API

Seedance


Multiple Angle Lighting LoRA + Qwen Image Edit 2509

Image2Image

LoRA

Qwen Image Edit 2509

Multiple Angle Lighting

Lighting

Image Editing


Remix, Restyle, Reimagine using the SeedEdit 4.0

API

Image2Image

Floyo API

SeedEdit


DyPe and Z-Image Turbo for High Quality Text to Image

Z-Image Turbo

Photography

Portrait

DyPE


 Seedance Text to Video: Create Stunning 1080p Videos Instantly

Text2Video

API

Floyo API

Seedance


Craft Stunning Edits Instantly with Nano Banana Edit

API

Image2Image

Nano Banana Edit


Multiple Angle LoRA and Qwen Image Edit 2509 for Dynamic Camera View

Image2Image

LoRA

Qwen

Qwen Image Edit 2509

Camera Control


Z-Image Turbo + DyPE + SeedVR2 2.5 + TTP 16k Resolution

Upscale

Text2Image

Z-Image Turbo

SeedVR2

8k

DyPE

16k


Wan2.2 Fun and RealismBoost LoRA for V2V

Wan

Video2Video

LoRA

Enhancer


LTX 2 Retake Video for Video Editing

Video2Video

API

Floyo API

Video Editing

LTX 2 Retake


Veo 3.1 Image to Video

Image2Video

API

Floyo API

Veo 3.1


VibeVoice Text to Speech Single Speaker

TTS

VibeVoice


VibeVoice Text to Speech Multi Speaker

TTS

VibeVoice

Multi Speaker


ElevenLabs Text to Speech

API

Floyo API

ElevenLabs

TTS


Vertical Video Background & Scene Rebuild

Upscale

Image to Image

Reactor

Qwen

Wan2.1 FunControl


Vertical Video Prop & Object Replacement Using Seedream + Wan 2.2

Wan2.2

Seedream

Image to image

Reference Video


Wan LoRA Trainer

Wan

Training

LoRA

API

Floyo


Flux LoRA Trainer

Flux

API

LoRA

Trainer


Krea Wan 14B Video to Video

Wan

Video2Video

API

VFX

Floyo

Krea

Videography


Dreamina 3.1 Text to Image

Text2Image

Marketing

Floyo API

Floyo

Photography

Dreamina


Kling 2.5 Image to Video

Image2Video

API

Animation

Floyo API

Floyo

Kling 2.5

Filmography


Chatterbox Text to Speech

TTS

Chatterbox

Text to speech workflow using Chatterbox


MiniMax Text-to-Video will Bring Your Creative Concepts to Life with Realistic Motion

Text2Video

API

Floyo API

Minimax


Wan2.2 and Bullet Time LoRA: Transform Static Shots into Product Spins

Image2Video

LoRA

Product Demo

Bullet Time

Ecommerce


HunyuanVideo 1.5 for Image to Video

Image2Video

Animation

Filmmaking

HunyuanVideo 1.5


Ovi: Create a Talking Portrait

Image2Video

Lip Sync

Ovi


Multi Model for Voice Conversion and Text to Speech

TTS

VibeVoice

Higgs

ChatterBox

Text to Speech

A TTS Audio Suite workflow that can use different types of audio models.


Create Cinematic Poster & Ad from Your Product

Seedream

product-ad

poster-design

VLM


Enjoy Effortless Image-to-Image Transformation to Jaw-Dropping Photorealism using Anime2Reality LoRA

Image2Image

LoRA

Qwen

Qwen Image Edit 2509


Image to Image Character Sheet Face Swap with Ace+

Image

Flux

Character Sheet

Face Swap

Take a character sheet and use a reference image to replace all the faces with the new person.
Key Inputs
Load Image: Use any JPG or PNG showing your pose sheet.
Load New Face: Use any JPG or PNG clearly showing the subject you would like to swap into the pose sheet.
Prompt: As descriptive a prompt as possible.
Width & height: Optimal resolution settings are noted at 1024px x 1024px.
Keep Proportion: Enable keep_proportion if you want the output to keep the same size as the input.
Denoise: The amount of variance in the new image. Higher has more variance.


 Create a Fashion Shoot (Character + Outfit + Object) - NanoBanana + Kling

Image to Video

Ecommerce

Multi Reference Image


Ovis Text to Image

Text2Image

Ovis

Typography


HunyuanImage 3.0 Text to Image

Text2Image

API

Floyo API

HunyuanImage 3.0


Wan 2.6 Video Generation

Wan

API

Marketing

Floyo

Ecommerce

Film


Kling Omni One Image Edit

Image2Image

Image Editing

Kling Omni One


Kling Omni One Image to Video

Image2Video

Kling

Omni One


Kling Omni 1 Reference to Video

API

Floyo

Kling Omni One

Reference2Vid


Nano Banana Pro Edit Image to Image

Image2Image

Image Editing

Nano Banana Pro Edit


Wan2.6 Text to Video

Text2Video

Animation

Film

Wan2.6


GPT Image 1.5 Text to Image

Text2Image

GPT Image 1.5

ECommerce


GPT Image 1.5 for Image Editing

Image2Image

Image Editing

GPT Image 1.5


Vertical Video Lighting & Mood Shift Using Seedream + Wan

Lipsync

image-to-image

reference-image

upscaling

Video-conditioning

wan2.1_funControl

seedream


Create Photorealistic Packaging from Dielines

product-packaging

packaging-materials

Nano banana


Seedream 4.5 Image Generation

API

Floyo API

Seedream 4.5

Fashion

Marketing

Product Demo

Ecommerce


Studio Relighting for Composited Products

studio relighting

product lighting

lighting lora

relight composite


LTX 2 Pro for Text to Video

API

Filmmaking

Floyo API

Videography

Text to Video

LTX 2 Pro
