Luma AI has introduced Uni-1, its first model designed to combine image understanding and generation within a single architecture. The model is built on a decoder-only autoregressive transformer that processes text and images as one interleaved token sequence, allowing the system to reason about and generate visual content in the same pass.
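The interleaved layout can be pictured with a minimal sketch: text tokens and quantized image tokens are flattened into one stream that a decoder-only model consumes left to right. This is purely illustrative; Luma has not published Uni-1's tokenizer or implementation details, and all marker values and names below are hypothetical.

```python
# Hypothetical sketch of an interleaved text/image token stream.
# Marker values and segment format are illustrative assumptions,
# not Uni-1's actual tokenization scheme.

BOI, EOI = -1, -2  # assumed begin-of-image / end-of-image markers

def interleave(segments):
    """Flatten alternating text/image segments into one token stream.

    Each segment is a ("text", [ids]) or ("image", [ids]) tuple.
    Image segments are wrapped in BOI/EOI markers so a single
    decoder-only model can attend over both modalities in order.
    """
    stream = []
    for kind, ids in segments:
        if kind == "image":
            stream.append(BOI)
            stream.extend(ids)
            stream.append(EOI)
        else:
            stream.extend(ids)
    return stream

seq = interleave([
    ("text",  [101, 102]),   # e.g. a prompt
    ("image", [7, 8, 9]),    # e.g. quantized image patch tokens
    ("text",  [103]),        # e.g. a follow-up instruction
])
print(seq)  # → [101, 102, -1, 7, 8, 9, -2, 103]
```

Because understanding and generation share one sequence, the same autoregressive next-token objective covers both tasks, which is what allows reasoning and creation to happen in a single model rather than a pipeline of separate components.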

According to Luma, Uni-1 achieves state-of-the-art results on logic-based benchmarks like RISEBench, which evaluates reasoning-informed visual editing. The company claims the model narrowly surpasses competitor models from Google and OpenAI in this area. For object recognition tasks, Uni-1's performance is said to be nearly on par with Google's Gemini 3 Pro. The new model will power Luma Agents, a platform for creative workflows.