
COMMUNITY PAGE

Run Anima now on Floyo


AI IMAGE GENERATION

Run Anima on Floyo

A 2 billion parameter text-to-image model for anime, illustration, and fantasy art, built on NVIDIA's Cosmos-Predict2 architecture. It accepts Danbooru tags, natural language, or a mix of both.

Run CircleStone Labs and Comfy Org's Anima through ComfyUI in your browser. No API key, no installs, no local GPU.

Parameters: 2B

Best Resolution: 1024x1024 (1MP)

Generation Time: ~30 seconds

Specialty: Anime + illustration

Try Anima Now → Browse All Models

No installation. Runs in browser. Updated April 2026.

What you get

Anima is a 2 billion parameter text-to-image model from CircleStone Labs and Comfy Org, built on NVIDIA's Cosmos-Predict2 architecture. It specializes in anime illustrations, character design, concept art, and non-photorealistic artistic content. Trained on several million anime images and ~800k artistic works with no synthetic data. Supports Danbooru-style tags, natural language prompts, or a combination of both. Lightweight at 7GB unquantized. The current release is the Preview 3 checkpoint. Available as a ComfyUI workflow on Floyo.

ANIMA WORKFLOWS ON FLOYO

Anima Preview 3 - Text to Image

What is Anima?

Anima is a 2 billion parameter text-to-image model from CircleStone Labs, built in collaboration with Comfy Org. It is built on NVIDIA's Cosmos-Predict2 architecture, pairing the diffusion transformer with a Qwen-3 0.6B text encoder and the Qwen Image VAE. The current release is the Preview 3 checkpoint. Training data has a cutoff of September 2025.

What makes Anima different from most anime models is its lineage. Popular anime models like NoobAI, Illustrious, and Animagine are SDXL-based derivatives. Anima is the first anime-focused model built on Cosmos-Predict2, which is an entirely different architecture. This gives it a different baseline for handling composition, anatomy, and style, rather than inheriting SDXL's strengths and weaknesses.

What are Anima's technical specifications?

Anima uses a 2 billion parameter diffusion transformer based on NVIDIA's Cosmos-Predict2 architecture. It pairs with a Qwen-3 0.6B text encoder and the Qwen Image VAE. Best output at 1024x1024 (1MP). Default inference uses 30 steps with CFG 4 and the er_sde sampler. Runs natively in ComfyUI. Lightweight at 7GB unquantized, making it accessible on consumer GPUs.

Developer: CircleStone Labs (in collaboration with Comfy Org)
Base Architecture: NVIDIA Cosmos-Predict2 (2B text-to-image)
Parameters: 2 billion
Text Encoder: Qwen-3 0.6B base
VAE: Qwen Image VAE
Best Resolution: 1024x1024 (1MP). Supports up to ~2MP before artifacts.
Default Steps: 30 (diminishing returns above this)
Default CFG: 4 (tune up to 6-7 for tighter prompt adherence)
Default Sampler: er_sde (flat colors, sharp lines)
Alt Samplers: euler_a (softer linework), dpmpp_2m_sde_gpu (more variety)
Model Size: ~7GB unquantized (runs on consumer GPUs)
Training Data: Millions of anime images + ~800k artistic images (LAION-POP ye-pop + DeviantArt, filtered)
Data Cutoff: September 2025
Synthetic Data: None used
Prompt Format: Danbooru tags, natural language, or mixed
License: CircleStone Labs Non-Commercial + NVIDIA Open Model License (Derivative)
ComfyUI Access: Native support on Floyo
Status: Preview 3 checkpoint (still in training)
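
The defaults above map directly onto a ComfyUI graph. Below is a minimal sketch of an Anima text-to-image workflow in ComfyUI's API (JSON) format. The ModelSamplingAuraFlow node, shift 3, er_sde sampler, 30 steps, and CFG 4 come from this page; the loader node choices and the checkpoint filename are assumptions — check the actual Floyo workflow for the nodes Anima ships with.

```python
# Sketch of an Anima text-to-image graph in ComfyUI API format.
# Node class names other than ModelSamplingAuraFlow are assumptions;
# the checkpoint filename is a placeholder.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",   # assumed loader node
          "inputs": {"ckpt_name": "anima_preview3.safetensors"}},
    "2": {"class_type": "ModelSamplingAuraFlow",    # noise schedule, default shift 3
          "inputs": {"model": ["1", 0], "shift": 3.0}},
    "3": {"class_type": "CLIPTextEncode",           # positive prompt (tags + natural language)
          "inputs": {"clip": ["1", 1],
                     "text": "masterpiece, best quality, 1girl, long hair, school uniform"}},
    "4": {"class_type": "CLIPTextEncode",           # negative prompt
          "inputs": {"clip": ["1", 1], "text": "low quality, worst quality"}},
    "5": {"class_type": "EmptyLatentImage",         # 1MP is the sweet spot
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "6": {"class_type": "KSampler",                 # Anima defaults: 30 steps, CFG 4, er_sde
          "inputs": {"model": ["2", 0], "positive": ["3", 0], "negative": ["4", 0],
                     "latent_image": ["5", 0], "seed": 0,
                     "steps": 30, "cfg": 4.0, "sampler_name": "er_sde",
                     "scheduler": "normal", "denoise": 1.0}},
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
}
```

On Floyo these values come pre-configured; the sketch is only useful if you want to reproduce or tweak the graph yourself.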

What can you create with Anima?

Anima covers anime character illustrations, concept art, fantasy scenes, multi-character compositions, and non-anime artistic styles including painterly looks and stylized illustrations. It is not built for photorealism or accurate text rendering. Best results come from detailed prompts (at least two sentences) with quality tags and clear subject descriptions.

Character Design: Generates detailed anime characters with consistent styling. Knows a large vocabulary of anime characters, series, and artist styles. Use cases: OC creation, fan art, visual novels, webtoons.

Concept Art: Creates fantasy scenes, environmental art, and stylized illustrations. Works well for mood boards and early-stage design exploration. Use cases: game pre-production, animation pitches, world-building.

Danbooru Tag Prompting: Accepts Danbooru-style tags (1girl, long hair, school uniform) for precise control. Mix with natural language for flexibility. Use cases: rapid iteration; familiar to anime illustrators.

Artist Style Emulation: Specify an artist with the @artist_name prefix. Dataset includes less common artists with 50-100 post counts. Use cases: style studies, reference matching, aesthetic exploration.

Multi-Character Scenes: Generate multiple characters in one image. Describe each character's appearance explicitly for best results (names alone can confuse the model). Use cases: group shots, team portraits, narrative scenes.

Non-Anime Illustration: Handles painterly looks, stylized portraits, and non-photorealistic artwork beyond the anime space. Trained on filtered DeviantArt and LAION-POP. Use cases: children's book art, editorial illustration, poster design.

How does Anima compare to other anime models?

Anima's feature set is tuned for illustrators, not generalists. The training data is curated, the prompt system is flexible, and the architecture is lightweight. Every design choice targets the same goal: making an anime-focused model that runs on consumer hardware and produces output that looks hand-drawn, not generated.

Cosmos-Predict2 Architecture

Anima is the first anime-focused model built on NVIDIA's Cosmos-Predict2, breaking from the SDXL-based lineage that dominates the anime model space. This gives it different strengths: stronger prompt coherence, better composition handling, and a different baseline for how anatomy and style are rendered.

Flexible Prompt Format

Use Danbooru-style tags (1girl, long hair, school uniform), natural language sentences, or mix both in the same prompt. The model understands quality tags like "masterpiece, best quality" and artist references via @artist_name. This flexibility means you can write prompts the way you think, not the way the model expects.

Lightweight Footprint

At 7GB unquantized and 2 billion parameters, Anima runs on consumer GPUs. This is significantly smaller than typical image models in its quality class. Fast to load, fast to iterate. On Floyo's H100 NVL GPUs, generation is fast enough for rapid prompt exploration.

Clean Training Data

No synthetic data. Training uses real images from anime sources plus curated artistic content from LAION-POP (ye-pop subset) and DeviantArt. Both non-anime datasets were filtered to exclude photos, keeping the model focused on illustration. This is a deliberate choice to avoid the common issue of synthetic training data causing style collapse.

Sampler Variety

Different samplers produce different artistic looks. The default er_sde gives flat colors and sharp lines (clean anime style). euler_a produces softer, thinner linework (closer to traditional illustration). dpmpp_2m_sde_gpu offers more creative variety per seed. Swapping samplers lets you shift aesthetic without changing your prompt.
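
The sampler swap described above is a one-field change in the KSampler settings. This hypothetical helper maps each of the three looks named on this page to a settings dict, keeping steps and CFG at the Anima defaults; the sampler names are as listed here (in stock ComfyUI, euler_a appears as euler_ancestral).

```python
# Sampler looks described above, as drop-in KSampler settings.
# Only sampler_name changes; steps/CFG stay at the Anima defaults.
BASE = {"steps": 30, "cfg": 4.0, "scheduler": "normal"}

SAMPLER_LOOKS = {
    "er_sde":           "flat colors, sharp lines (default, clean anime style)",
    "euler_a":          "softer, thinner linework",
    "dpmpp_2m_sde_gpu": "more creative variety per seed",
}

def sampler_config(name: str) -> dict:
    """Return KSampler settings for one of the documented looks."""
    if name not in SAMPLER_LOOKS:
        raise ValueError(f"unknown sampler: {name}")
    return {**BASE, "sampler_name": name}
```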

Extended Artist Vocabulary

Preview 3 expanded training to include less common artists (roughly 50-100 post count). This means the model can emulate a wider range of styles, not just the most popular ones. Niche aesthetics and lesser-known illustrators are better represented than in earlier previews.

Non-Photorealistic Focus

The model is deliberately not trained for realism. For illustrators this is a strength, not a limitation. Output stays consistently in illustration territory without drifting into the uncanny-valley look that generalist models produce when asked for anime styles.

How does Anima work?

Anima is a diffusion transformer built on NVIDIA's Cosmos-Predict2 architecture. It uses a Qwen-3 0.6B text encoder to process prompts and the Qwen Image VAE to encode and decode image latents. The ModelSamplingAuraFlow node controls the noise schedule during generation, with a default shift value of 3.

The architecture is a departure from SDXL derivatives that dominate anime models. Cosmos-Predict2 was originally designed by NVIDIA for general-purpose text-to-image. CircleStone Labs fine-tuned it heavily on anime and artistic data to specialize the output. Because the base is different from SDXL, Anima handles prompts and composition differently from its competitors.

Prompting is flexible. Quality tags like "masterpiece, best quality" set the output quality floor. Character descriptions use either anime-specific tags (1girl, long hair) or natural language sentences. Artist style can be invoked with the @artist_name prefix. Mixing formats in one prompt is supported and often produces the best results. The default negative prompt filters low-quality outputs; you can extend it with anything specific you want to avoid.

On Floyo, Anima runs through ComfyUI nodes on H100 NVL GPUs. The workflow is pre-configured with sensible defaults (CFG 4, 30 steps, er_sde sampler). You can chain it with other ComfyUI nodes to build complete illustration pipelines: generate with Anima, upscale with a dedicated upscaler, then apply post-processing like color adjustment.
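
If you self-host instead, a workflow in API format can be queued against ComfyUI's standard /prompt HTTP endpoint. The sketch below only builds the request; the host is a placeholder, and on Floyo you run the hosted workflow from the browser rather than calling an endpoint yourself.

```python
# Sketch: build a POST request for ComfyUI's /prompt endpoint.
# The endpoint path is standard ComfyUI; the host is a placeholder.
import json
import urllib.request

def build_request(workflow: dict, host: str = "127.0.0.1:8188") -> urllib.request.Request:
    """Wrap an API-format workflow dict in the JSON body /prompt expects."""
    return urllib.request.Request(
        f"http://{host}/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# To actually queue it: urllib.request.urlopen(build_request(workflow))
```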

Fair warning: This is a preview checkpoint. The model is still training. Fine details and overall aesthetics will improve in the final release. Resolutions above ~2MP can start breaking down. The Qwen-3 0.6B text encoder is small by current standards (most models use 4B+), which can limit complex prompt understanding. Inference is slower than SDXL-based anime alternatives. The current license is non-commercial only.

Frequently Asked Questions

Common questions about running Anima on Floyo.

Is Anima free to use on Floyo?

You can start on Floyo's free plan. To continue using the service beyond the free tier, upgrade to a paid Floyo plan. Anima is open-source, so there is no additional API cost beyond your Floyo plan. Note that the model license is non-commercial, so generated output cannot be used for commercial purposes until CircleStone Labs finalizes their commercial licensing.

How do I run Anima without installing anything?

Open Floyo in your browser, find the Anima Preview 3 Text to Image workflow (search "Anima" in the template library), and click Run. Write your prompt in the text field and hit generate. Floyo handles the GPU, ComfyUI environment, and model weights. No local install, no Python setup.

Who made Anima?

CircleStone Labs in collaboration with Comfy Org. It is built on NVIDIA's Cosmos-Predict2-2B-Text2Image base model. The Preview 3 checkpoint expanded training for less common artists and ran significantly longer at 1024 resolution than Preview 2.

What resolution should I use?

1024x1024 (1MP) is the sweet spot. The model was trained primarily at 1024. You can push to about 2MP (1536x1024 or similar) but expect some artifacts at higher resolutions since highres training is still in progress. For larger outputs, pair with an upscaler workflow.

How should I prompt Anima?

Use at least two sentences. Mix Danbooru tags (1girl, long hair, school uniform) with natural language descriptions. Add quality tags like "masterpiece, best quality" to the positive prompt. Reference artists with @artist_name. For multi-character scenes, describe each character's appearance instead of listing names only. Keep the default negative prompt, which filters low-quality output.
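
Putting those pieces together, here is a hypothetical helper that composes a prompt from the formats this page describes: quality tags first, then Danbooru tags, a natural-language description, and @artist references. The function name and ordering are illustrative, not an official API.

```python
def build_prompt(tags, description="", artists=(),
                 quality="masterpiece, best quality"):
    """Compose an Anima prompt from quality tags, Danbooru tags,
    natural language, and @artist_name references (illustrative helper)."""
    parts = [quality]
    if tags:
        parts.append(", ".join(tags))
    if description:
        parts.append(description)
    parts.extend(f"@{name}" for name in artists)
    return ", ".join(parts)
```

For example, `build_prompt(["1girl", "long hair", "school uniform"], "She stands on a rooftop at dusk, wind catching her hair.", ["example_artist"])` yields a mixed tag-plus-sentence prompt with the artist reference appended.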

How does Anima compare to NoobAI or Illustrious?

NoobAI and Illustrious are SDXL-based with mature LoRA ecosystems and faster inference. Anima is built on Cosmos-Predict2 (entirely different lineage) and is smaller at 2B parameters. For production use today, NoobAI and Illustrious have the edge due to ecosystem maturity. Anima is valuable as a pioneer on a new architecture. Try both and see which aesthetic fits your work.

Can I use Anima for photorealistic images?

No. The model is deliberately not trained for realism. Non-anime training data was filtered to exclude photos. For photorealism, use a different model like Wan 2.7, Z-Image Turbo, or Lumina Image 2.0. You can chain multiple models in one Floyo workflow if you need both styles in a project.

Can I use Anima output commercially?

Not currently. Anima is released under the CircleStone Labs Non-Commercial License, and as a derivative of NVIDIA's Cosmos-Predict2, it is also subject to the NVIDIA Open Model License Agreement. CircleStone Labs has stated that commercial licensing details are still being worked out. For commercial illustration work, use a model with clear commercial rights like Z-Image Turbo or Lumina Image 2.0.

Try Anima on Floyo

Anime illustrations, character design, and concept art from a 2B parameter model built on NVIDIA's Cosmos-Predict2. Run it in your browser.

Try Anima Now → Browse All Models

Related Reading

Character and Concept Design on Floyo

Film and Animation Workflows on Floyo

Top AI Models on Floyo

Last updated: April 2026. Specs from CircleStone Labs HuggingFace model card (circlestone-labs/Anima), Civitai Anima listing, NVIDIA Cosmos-Predict2 documentation, and third-party architecture reviews.

Anima Preview 3 - Text to Image

Anima2

character design

concept art

fantasy

Text2Image

Generate images with Anima 2, a model built for anime and fantasy art. Write a prompt, set your resolution, and get stylized results in one run. Free to try.
