April 2026 AI Roundup: Sora Gone, Hailuo 2.3 and Qwen Image 2.0 Live, and What's Coming

April 2026 arrives with OpenAI's Sora app days from shutting down, Hailuo 2.3 and Qwen Image 2.0 now in production, MiniMax posting $790M in 2025 revenue, and the industry settling into a clearer picture of which models are here to stay.

April 3, 2026 · 9 min read

The first days of April 2026 arrived with one major product officially in its final weeks. OpenAI's Sora app closes April 26. The API runs until September 24. The Disney deal is dead. The team moves to robotics research. It is one of the most compressed product cycles in AI history — six months from launch to shutdown — and it has sharpened everyone's thinking about what AI video actually needs to be to survive as a category.

Here is where everything stands as April begins.


Sora: Export Your Content Now

The deadline is April 26. If you have content in your Sora library, download it before then. After the app closes, OpenAI has not confirmed whether data will remain accessible.

For anyone who had built workflows around Sora 2 Pro Storyboard or the Sora API, the migration window is longer — the API runs until September 24, 2026. But the time to evaluate alternatives is now, not in September.

The Sora shutdown story is covered in detail separately. The short version: roughly $1 million per day in compute costs, fewer than 500,000 active users by February, competitors matching quality at a fraction of the generation time.

Sora alternatives on Cliprise:

- Multi-shot narrative video with scene planning: Wan 2.6 (shot marker prompts, up to 15 seconds, native audio)
- Storyboard-style scene-by-scene control: Sora 2 Pro Storyboard, which remains on Cliprise until the API sunset
- Maximum visual quality: Kling 3.0 at native 4K
- Physics and environmental accuracy: Veo 3.1 Quality
- Video editing of existing footage: Runway Aleph


Hailuo 2.3: Now Live on Cliprise

MiniMax's Hailuo 2.3 — released October 2025 and now integrated into Cliprise — is the most capable Hailuo model to date. The improvements over Hailuo 02 are specific and significant: better full-body motion for complex choreography, improved micro-expressions for expressive character performance, and stable style output for anime and illustration aesthetics.

The model comes in Standard and Fast variants. Fast reduces batch creation costs by up to 50% for I2V workflows — making rapid prompt iteration realistic before committing to Standard for final delivery.

One note: Hailuo 2.3 does not support last-frame conditioning, which Hailuo 02 had. For workflows that depend on specifying both the opening and closing frame, Hailuo 02 remains available.

MiniMax also released financial results in March 2026. 2025 revenue reached $790 million — 158.9% growth year-over-year, with over 70% from international markets. The company's AI video tools generated over 370 million total videos globally through Hailuo. These are not startup metrics anymore. The Hailuo series is production infrastructure for a significant portion of the global creator economy.

Hailuo 2.3: Complete Guide


Qwen Image 2.0: Open-Source Image Generation at #1

Alibaba's Qwen Image 2.0, released February 10, 2026, is now integrated on Cliprise. The results from AI Arena — the blind human evaluation platform with over 10,000 comparison rounds — ranked it first in both text-to-image generation and image editing at launch.

The model runs on 7 billion parameters, down from 20B in the original, while improving performance across every major benchmark. Native 2K output. Professional typography handling for infographics, posters, and bilingual layouts. Unified generation and editing in one architecture.

The key differentiator on Cliprise is Chinese and bilingual text rendering. For content targeting Chinese-speaking markets, or any content that needs accurate non-Latin script generation within images — labels, packaging, signage — Qwen Image 2.0 is the right model in the lineup.

Qwen Image: Complete Guide


What Launched in March 2026

The last week of March 2026 was notable beyond just the Sora announcement.

LTX 2.3 from Lightricks was released as an open-source 4K video model with native audio: 22 billion parameters, Apache 2.0 license, generation at 50fps. Already covered in our March roundup.

Helios from ByteDance and Peking University demonstrated real-time 60-second video generation on a single H100 GPU — an infrastructure milestone that changes what local deployment of long-form video generation looks like. Covered in our Helios news piece.

GPT-5.4 from OpenAI launched March 5 with a 1 million token context window and 33% fewer factual errors compared to GPT-5.2. This is a language model, not image/video, but the compute freed from shutting down Sora is likely flowing into continued GPT-5.x development.


What to Watch in April

A few things are moving in April that will change the model landscape:

Meta Mango. Meta has been developing an image and video model internally codenamed "Mango" alongside a text model called "Avocado." December 2025 reporting targeted a first-half 2026 release. If Mango lands in April, it will be Meta's first major production entry into the image/video generation market.

Post-Sora consolidation. With Sora out of the market, developer workflows are consolidating toward the remaining APIs. Watch API usage data and developer community discussion for which models are absorbing Sora's migration traffic.

Kling V3 broader rollout. Kling 3.0 launched in February 2026 with capabilities that exceed Sora on most visual quality metrics. The broader rollout of V3 continues through April.

Nano Banana Pro in more enterprise tools. Adobe, Canva, and Figma announced integrations at Nano Banana Pro's November launch. Additional enterprise tool integrations were in progress — expect more announcements through Q2.


Cliprise Model Status: April 2026

All current models remain available on Cliprise. Here is a quick reference for the major categories as of this month:

Video — Best current options:

Use case | Model
Maximum quality, 4K/60fps | Kling 3.0
Native audio + multi-shot | Wan 2.6
Physics + environment | Veo 3.1 Quality
Character performance + micro-expressions | Hailuo 2.3
Fast iteration | Veo 3.1 Fast
Video editing of existing footage | Runway Aleph

Image — Best current options:

Use case | Model
Photorealism | Flux 2
Reasoning + 4K + Google Search | Nano Banana Pro
Speed + Pro quality | Nano Banana 2
Precise iterative editing | GPT Image 1.5
Bilingual / Chinese text | Qwen Image 2.0
Text-in-image / typography | Ideogram v3
Artistic | Midjourney

Ready to Create?

Put your new knowledge into practice with Cliprise.

Start Creating