Adobe’s message from the MAX 2025 keynote in Los Angeles was loud and clear: AI is no longer just a generative tool; it’s your new creative assistant. The company unveiled a sweeping set of updates across its Creative Cloud suite, centered on ‘agentic AI’ you can actually talk to (or at least text), a major new Firefly model, and deep integrations with third-party AI – including technology from Google and Topaz Labs.
The announcement comes as Adobe continues to reshape digital creativity, integrating AI in ways that promise to overhaul workflows for photographers, designers, and video editors alike. It also fits a broader industry shift towards more intuitive, AI-driven interfaces – a change some liken to the arrival of the computer mouse in the 1980s.
AI: The New Creative Assistant
Fast forward to 2025. AI may have gone mainstream in 2023, but its adoption has been blindingly fast compared to the two-decade crawl of the mouse. The mouse isn’t dead, of course, but now we can literally tell our software what we want it to do. Text prompting is now woven into the suite at every level, and it will only get better over the next few years – a direction worth applauding, because it cuts down our post-production time.
For photographers, the Adobe MAX keynote’s ‘wow’ headline was the introduction of AI-assisted culling in Lightroom and Lightroom Classic. This has been on photographers’ wishlists for a while, and it’s great to see it finally go live.
AI-Assisted Culling: A Game Changer
AI-assisted culling works like this: Lightroom analyses your shoot – say, 3,000 images – and then, using simple checkboxes and a slider, lets you whittle them down by automatically rejecting misfires, blinking shots, and unfocused or badly exposed frames. But what if all your images are technically good? That’s where the ‘stacking’ function comes into play. The AI identifies similar images – like a burst from a sporting play or a series of portraits in the same spot – chooses the ‘best’ one, and stacks the rest underneath it. This could be a godsend for sports or wedding photographers who shoot tonnes of frames and need to deliver hero shots in a rush. It’s also just a much nicer way to get an overview of a shoot without staring at a daunting grid of 3,000 thumbnails!
Innovations Across the Creative Suite
Another ‘wow’ moment came during the Adobe Illustrator segment – a feature stemming from the ‘Project Turntable’ sneak peek we saw previously. It lets you rotate a 2D vector image by having AI work out what the object looks like from different angles, effectively treating it as a 3D object. That same technology is now being applied to photography.
Demonstrating how third-party AI models are being integrated into the Adobe suite, photographer Terry White showed a portrait in which the sitter was facing slightly away from the camera. After sending the image to Photoshop, he selected his AI model (in this case, Google’s Gemini 2.5) and prompted it to ‘turn him forward and maintain his disposition’. The sitter was altered to face forward while looking exactly the same. This raises the question: is this still a real photographic portrait?
New Tools and Features
We also saw the addition of a ‘Colour Variance’ slider. If a portrait has uneven skin tone – razor rash (red blotches), uneven white balance, or mottled light – this slider quickly evens out the colour across the subject’s skin.
On a personal note, my big complaint/suggestion at last year’s Adobe MAX was that slideshows were still capped at 1080p after more than 10 years. Now you can create slideshow videos in full 4K, ready for a huge 4K TV. If you haven’t tried this, you should give it a go.
Other notable updates include Leica tethering, auto dust spot removal (great if you’re still shooting on a DSLR with a dirty sensor), more filters for Library and Smart Collections, and a number of performance enhancements.
The Future of AI in Creative Cloud
The standout feature of the keynote was the introduction of new AI Assistants in beta for Photoshop and Adobe Express. This moves Adobe’s AI from simple text-prompt generation into a conversational, ‘agentic’ experience. Instead of hunting through menus, users can now instruct the assistant in plain language to perform complex and repetitive tasks. During the demo, a user simply typed, ‘Rename all my layers based on their content,’ and the AI organized the file.
Other commands, like “make the background look like a sunset and harmonize the lighting,” were executed in seconds. This new assistant can also provide personalized recommendations and tutorials to help complete a project.
Firefly Evolves: Image 5 and Multimedia
Adobe’s core generative AI, Firefly, received its most significant update yet, expanding beyond still images into a full multimedia creation studio.

Firefly Image Model 5: Now in public beta, this new model is a major leap in quality. It produces more photorealistic images at a native 4-megapixel (4MP) resolution, offering far greater detail. It also powers a new “Prompt to Edit” feature, allowing users to make precise edits to existing images using text.
Generate Soundtrack: This new tool (in public beta) creates custom, studio-quality, royalty-free music. Users can tailor the soundtrack to match the mood and, crucially, the precise length of their video clips.
Generate Speech: In partnership with ElevenLabs, this feature (public beta) is a powerful text-to-speech generator for creating realistic, multilingual voiceovers directly within Firefly.

Firefly Video Editor: A new web-based, timeline video editor (in private beta) was announced, allowing creators to generate, organize, and sequence video clips in one place.
Photoshop and Premiere Pro Updates
Generative Upscale with Topaz: In a major third-party integration, Photoshop’s Generative Upscale feature is now powered by technology from Topaz Labs. This allows users to upscale low-resolution images to 4K with realistic detail, a huge benefit for restoring old photos or working with small source files.
Harmonize is Here: The popular Harmonize feature, which automatically matches the color, lighting, and shadows of a composited object to its new background, is now out of beta and in the full version of Photoshop.
AI Object Mask: This new (public beta) feature is a massive time-saver for video editors. It uses AI to automatically identify and isolate people or objects in a video, creating a trackable mask. This eliminates hours of manual rotoscoping needed for color grading, blurring, or applying effects to specific parts of a shot.
Premiere on iPhone & YouTube Shorts: Adobe launched a new, free, and watermark-free video editing app, Premiere on iPhone. Alongside this, a new partnership with YouTube creates a “Create for YouTube Shorts” space, allowing mobile creators to edit with pro-level tools and publish directly to the Shorts platform.
An Open Ecosystem and a Look at the Future
Underpinning all these announcements was Adobe’s new, more open strategy. By integrating partner models from Google, Topaz Labs, ElevenLabs, OpenAI, and Runway, Adobe is positioning Creative Cloud as a central hub for all the best AI tools, not just its own. The company also previewed future tech, including Project Moonlight, an AI assistant designed to work across all Adobe apps and learn from a user’s assets, and Project Graph, a node-based tool for building and automating complex creative workflows.
As Adobe continues to push the boundaries of what’s possible with AI, the implications for creative professionals are profound. The move represents a significant shift towards more personalized, efficient, and powerful creative processes, setting a new standard for the industry. More to come.