product update

Adobe Firefly now learns custom visual styles from user-uploaded images

TL;DR

Adobe is rolling out custom models for Firefly, allowing creators to train the generative model on 10-30 of their own images to generate new content matching their specific visual style. The feature costs 500 credits per training session and supports three methods: photography style, illustration style, and character consistency.



Adobe is launching custom models for Firefly in public beta today, enabling creators to train the generative model on their own aesthetic to produce images that consistently match their distinctive style.

How Custom Models Work

The feature allows users to upload between 10 and 30 JPG or PNG images (minimum 1024×1024 resolution, consistent aspect ratio) to train Firefly on their unique visual characteristics. The model learns and preserves specific details including stroke weight, color palettes, lighting, and character features across generated images.
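The upload requirements above can be expressed as a simple pre-flight check. The sketch below is illustrative only, assuming image metadata is already known; the function name and input shape are hypothetical, not part of any Adobe API.

```python
def validate_training_set(images):
    """Check a candidate training set against Firefly's stated rules:
    10-30 images, JPG or PNG, at least 1024x1024, consistent aspect ratio.

    images: list of dicts with 'width', 'height', and 'format' keys.
    Returns a list of problem strings; an empty list means the set passes.
    """
    problems = []
    if not 10 <= len(images) <= 30:
        problems.append(f"need 10-30 images, got {len(images)}")
    ratios = set()
    for i, img in enumerate(images):
        if img["format"].upper() not in {"JPG", "JPEG", "PNG"}:
            problems.append(f"image {i}: unsupported format {img['format']}")
        if img["width"] < 1024 or img["height"] < 1024:
            problems.append(f"image {i}: below the 1024x1024 minimum")
        # Round so that e.g. 2048x2048 and 1024x1024 count as the same ratio.
        ratios.add(round(img["width"] / img["height"], 2))
    if len(ratios) > 1:
        problems.append("aspect ratios are inconsistent across the set")
    return problems
```

For example, twelve 1024×1024 PNGs pass cleanly, while a set of five 800×800 images fails on both count and resolution.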

Adobe says this lets creators "generate new ideas aligned to your aesthetic, reuse the model across projects, briefs and campaigns and produce at scale without losing what makes your work distinctive."

Each training session costs 500 credits from a user's monthly generative balance. The feature is currently available only to premium subscribers.

Three Training Methods

Photography style: Train on lighting, color, and mood to generate new shots with the same visual feel.

Illustration style: Create fresh illustrations that maintain your signature aesthetic.

Characters: Generate characters consistently across different scenes and narratives.

Adobe's guidance recommends uploading images with consistent style and color palette while avoiding low-resolution or blurry images. The company has published a step-by-step guide for optimal training results.

Broader Firefly Expansions

Adobe is also expanding Firefly's model access to 30+ top AI image and video models, including Google's Nano Banana 2 and Veo 3.1, Runway's Gen-4.5, and Adobe's Firefly Image Model 5.

Separately, Adobe opened a private beta for Project Moonlight, an agentic assistant with a conversational interface that lets users describe creative goals in turn-by-turn chat. These assistants will execute actions across Photoshop, Express, and Acrobat, with users able to refine and iterate on results.

Adobe is offering unlimited video and image generation through April 22, 2026.

What This Means

Custom models represent a meaningful shift toward personalization in generative AI: moving beyond one-size-fits-all outputs to tools that respect and replicate an individual creative voice. For freelancers and creative teams managing multiple projects, the ability to maintain a consistent style at scale without manual refinement could significantly reduce iteration cycles. The 500-credit cost per training session suggests Adobe is monetizing personalization rather than including it in base subscriptions, which may limit adoption among casual users. The parallel launch of Project Moonlight signals Adobe's broader pivot toward agentic workflows, positioning its tools as autonomous collaborators rather than simple prompt-response engines.

