product update

Adobe Firefly now learns custom visual styles from user-uploaded images

TL;DR

Adobe is rolling out custom models for Firefly, allowing creators to train the generative model on 10-30 of their own images to generate new content matching their specific visual style. The feature costs 500 credits per training session and supports three methods: photography style, illustration style, and character consistency.


Adobe Firefly Now Learns Your Visual Style With Custom Models

Adobe is launching custom models for Firefly in public beta today, enabling creators to train the generative model on their own aesthetic to produce images that consistently match their distinctive style.

How Custom Models Work

The feature allows users to upload between 10 and 30 JPG or PNG images (minimum 1024×1024 resolution, consistent aspect ratio) to train Firefly on their unique visual characteristics. The model learns and preserves specific details including stroke weight, color palettes, lighting, and character features across generated images.
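Adobe hasn't published a programmatic interface for custom-model training, but the stated upload constraints are concrete enough to sanity-check ahead of time. The sketch below is a hypothetical preflight check (the function names and message formats are illustrative, not an Adobe API) that validates a candidate set against the published requirements: 10-30 JPG or PNG images, each at least 1024×1024, all sharing one aspect ratio.

```python
# Hypothetical preflight check for a Firefly custom-model training set,
# based on Adobe's stated requirements. Images are passed as metadata
# tuples so the check needs no imaging library.
from math import gcd

MIN_IMAGES, MAX_IMAGES = 10, 30
MIN_SIDE = 1024
ALLOWED_FORMATS = {"jpg", "jpeg", "png"}

def aspect_ratio(width, height):
    """Reduce (width, height) to its simplest ratio, e.g. 2048x1536 -> (4, 3)."""
    d = gcd(width, height)
    return (width // d, height // d)

def validate_training_set(images):
    """images: list of (filename, width, height) tuples.
    Returns a list of problems; an empty list means the set looks eligible."""
    problems = []
    if not MIN_IMAGES <= len(images) <= MAX_IMAGES:
        problems.append(f"need {MIN_IMAGES}-{MAX_IMAGES} images, got {len(images)}")
    ratios = set()
    for name, w, h in images:
        ext = name.rsplit(".", 1)[-1].lower()
        if ext not in ALLOWED_FORMATS:
            problems.append(f"{name}: unsupported format .{ext}")
        if min(w, h) < MIN_SIDE:
            problems.append(f"{name}: {w}x{h} is below {MIN_SIDE}x{MIN_SIDE}")
        ratios.add(aspect_ratio(w, h))
    if len(ratios) > 1:
        problems.append(f"mixed aspect ratios: {sorted(ratios)}")
    return problems
```

Running a set through a check like this before spending the 500 credits avoids a failed or low-quality training session caused by an undersized or mixed-ratio upload.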

Adobe says this lets creators "generate new ideas aligned to your aesthetic, reuse the model across projects, briefs and campaigns and produce at scale without losing what makes your work distinctive."

Each training session costs 500 credits from a user's monthly generative balance. The feature is currently available only to premium subscribers.

Three Training Methods

Photography style: Train on lighting, color, and mood to generate new shots with the same visual feel.

Illustration style: Create fresh illustrations that maintain your signature aesthetic.

Characters: Generate characters consistently across different scenes and narratives.

Adobe's guidance recommends uploading images with consistent style and color palette while avoiding low-resolution or blurry images. The company has published a step-by-step guide for optimal training results.

Broader Firefly Expansions

Adobe is also expanding Firefly's model access to 30+ top AI image and video models, including Google's Nano Banana 2 and Veo 3.1, Runway's Gen-4.5, and Adobe's Firefly Image Model 5.

Separately, Adobe opened a private beta for Project Moonlight, an agentic assistant with a conversational interface that lets users describe creative goals in turn-by-turn chat. The assistant will execute actions across Photoshop, Express, and Acrobat, with users able to refine and iterate on the results.

Adobe is offering unlimited video and image generation through April 22, 2026.

What This Means

Custom models represent a meaningful shift toward personalization in generative AI, moving beyond one-size-fits-all outputs to tools that respect and replicate an individual creative voice. For freelancers and creative teams managing multiple projects, the ability to maintain a consistent style at scale without manual refinement could significantly reduce iteration cycles. The 500-credit cost per training session suggests Adobe is monetizing personalization rather than including it in base subscriptions, which may limit adoption among casual users. The parallel launch of Project Moonlight signals Adobe's broader pivot toward agentic workflows, positioning its tools as autonomous collaborators rather than simple prompt-response engines.
