floyo
Powered by ThinkDiffusion

Image Edit Flux Klein 9B Distilled


🔧 Workflow Description — Flux.2 Klein 9B Image Editing (Realistic Environment Transformation)

This workflow performs controlled, photorealistic image edits using FLUX.2 Klein 9B (Distilled) inside ComfyUI.
Its core goal is to transform the atmosphere and condition of a real photograph (e.g., flood, fire aftermath, post-crisis) without breaking scene structure or realism.


1️⃣ Input & Image Preparation

  • The original photograph is loaded via LoadImage.

  • The image is automatically resized using ImageScaleToTotalPixels, ensuring compatibility with the Flux latent resolution while preserving aspect ratio.

  • GetImageSize extracts width and height dynamically so the generation always matches the source image exactly.

Key outcome:
✔ Composition, perspective, and object placement are preserved.
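
The resize step above can be sketched in plain Python. This is a minimal approximation of what ImageScaleToTotalPixels computes (the node's exact rounding rules may differ): scale to a target pixel budget, keep the aspect ratio, and snap both sides to a multiple of 8 for the VAE.

```python
import math

def scale_to_total_pixels(width, height, megapixels=1.0):
    # Approximate ImageScaleToTotalPixels: scale so width * height is close
    # to the target pixel count, preserving aspect ratio, then snap each
    # side to a multiple of 8 so the VAE can encode it cleanly.
    target = megapixels * 1024 * 1024
    scale = math.sqrt(target / (width * height))
    new_w = max(8, round(width * scale / 8) * 8)
    new_h = max(8, round(height * scale / 8) * 8)
    return new_w, new_h

# A 12 MP phone photo lands near 1 MP with the same 4:3 aspect ratio.
print(scale_to_total_pixels(4032, 3024))
```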


2️⃣ Model Stack (Flux-Optimized)

The workflow uses a fully native Flux 2 stack:

  • UNet: flux-2-klein-9b

  • Text Encoder: qwen_3_8b_fp8mixed

  • VAE: flux2-vae

These are loaded explicitly via UNETLoader, CLIPLoader, and VAELoader, ensuring deterministic and stable behavior across runs.
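
In ComfyUI's API ("prompt") JSON, that loader trio can be written out roughly as below. The node ids, filenames, and input field names here are illustrative assumptions, not values copied from the workflow file.

```python
import json

# Illustrative sketch of the three explicit loader nodes in ComfyUI's
# API-format prompt JSON. Node ids and field values are assumptions.
model_stack = {
    "1": {"class_type": "UNETLoader",
          "inputs": {"unet_name": "flux-2-klein-9b.safetensors"}},
    "2": {"class_type": "CLIPLoader",
          "inputs": {"clip_name": "qwen_3_8b_fp8mixed.safetensors"}},
    "3": {"class_type": "VAELoader",
          "inputs": {"vae_name": "flux2-vae.safetensors"}},
}
print(json.dumps(model_stack, indent=2))
```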


3️⃣ Prompt & Conditioning Logic

  • The positive prompt uses natural-language instructions such as:
    “Make it look like the area has recently burned, leaving scorched surfaces and lingering smoke.”

  • This text is encoded through CLIPTextEncode.

  • A ConditioningZeroOut node neutralizes unwanted negative bias, keeping edits subtle and grounded.
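
The effect of ConditioningZeroOut can be sketched with plain nested lists standing in for tensors: it zeroes the encoded text features, producing a neutral branch instead of a hand-written negative prompt. A conceptual sketch, not the node's actual source:

```python
def zero_out(conditioning):
    # conditioning is a list of (embedding, options) pairs, the shape
    # ComfyUI passes between conditioning nodes; plain nested lists
    # stand in for tensors here.
    out = []
    for embed, opts in conditioning:
        zeroed = [[0.0] * len(row) for row in embed]
        out.append((zeroed, dict(opts)))
    return out

cond = [([[0.3, -1.2], [0.8, 0.1]], {"pooled_output": None})]
neutral = zero_out(cond)
```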

Reference Conditioning (Critical Detail)

The workflow uses a Reference Conditioning subgraph:

  • The original image is encoded into latent space.

  • That latent is injected back into both positive and negative conditioning via ReferenceLatent nodes.

Why this matters:
✔ Scene geometry, object identity, and layout remain intact
✔ Only state and atmosphere are modified
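
A rough sketch of what the ReferenceLatent injection amounts to, using plain (embedding, options) pairs as a stand-in for ComfyUI conditioning; the options key name is a hypothetical placeholder:

```python
def attach_reference_latent(conditioning, ref_latent):
    # Sketch of ReferenceLatent: copy each conditioning entry and record
    # the encoded source image in its options, so sampling stays anchored
    # to the original scene. The "reference_latents" key is hypothetical.
    out = []
    for embed, opts in conditioning:
        new_opts = dict(opts)
        refs = list(new_opts.get("reference_latents", []))
        refs.append(ref_latent)
        new_opts["reference_latents"] = refs
        out.append((embed, new_opts))
    return out
```

Applying this to both the positive and the zeroed-out negative branch is what keeps the two conditioning paths agreeing on scene content.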


4️⃣ Sampling Strategy

  • Euler sampler is selected for stability and realism.

  • Flux2Scheduler controls sigma progression, tuned for low-to-mid denoise behavior.

  • CFGGuider is intentionally kept low (≈1), preventing over-stylization.

Noise is introduced via RandomNoise, allowing variation while keeping structure locked.
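
Why a guidance scale near 1 matters: classifier-free guidance blends the unconditional and conditional predictions, and at scale 1 the formula collapses to the conditional prediction alone, so the prompt steers the edit without being amplified into stylization. A scalar sketch of the standard formula:

```python
def cfg_combine(uncond, cond, scale):
    # Classifier-free guidance, applied per element:
    #   out = uncond + scale * (cond - uncond)
    # scale = 1 returns cond unchanged; larger values exaggerate
    # the prompt's influence relative to the unconditional baseline.
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

baseline = cfg_combine([0.0, 2.0], [1.0, 4.0], 1.0)  # equals cond
pushed   = cfg_combine([0.0, 2.0], [1.0, 4.0], 3.0)  # over-amplified
```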


5️⃣ Latent Generation & Decode

  • An EmptyFlux2LatentImage initializes the latent canvas using the source image resolution.

  • Sampling is executed through SamplerCustomAdvanced.

  • Final output is decoded with VAEDecode and saved via SaveImage.
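
The sampling loop itself reduces to Euler integration over the scheduler's sigma sequence. A minimal scalar sketch of the role SamplerCustomAdvanced plays with the euler sampler (the real implementation operates on latent tensors and a Flux2Scheduler-generated sigma list):

```python
def euler_sample(x, sigmas, denoise):
    # Each step estimates the derivative from the model's denoised
    # prediction, then takes one Euler step toward the next (lower) sigma.
    for i in range(len(sigmas) - 1):
        d = (x - denoise(x, sigmas[i])) / sigmas[i]
        x = x + d * (sigmas[i + 1] - sigmas[i])
    return x

# With a toy model that always predicts a clean value of 0.0,
# the trajectory shrinks x toward 0 as sigma falls to 0.
result = euler_sample(2.0, [1.0, 0.5, 0.0], lambda x, s: 0.0)
```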


🎯 What This Workflow Is Optimized For

✔ Realistic disaster / aftermath scenarios
✔ Environmental storytelling (flood, fire, abandonment, crisis)
✔ Image-to-image edits that respect reality
✔ Cinematic but non-fantastical transformations

❌ Not intended for fantasy, sci-fi, or heavy hallucination


🧠 Conceptual Summary

This is a scene-preserving, state-transforming workflow.
It does not repaint the world — it changes what happened to it.

