March 24, 2026: OpenAI shut down Sora. March 26, 2026: ByteDance started rolling out Seedance 2.0 in CapCut.
Two days apart. The AI video market compressed its biggest departure and its biggest consumer launch into the same week.
ByteDance's rollout is phased and geographically deliberate: Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam first. The United States and Europe are not in the initial rollout. This is not a technical decision. It reflects the legal situation that preceded this launch and the regulatory environments that still surround it.
What Happened in February
Seedance 2.0 was originally scheduled for a broader global launch in February 2026. Within days of early access going live, the problems started.
Users generated AI videos depicting Tom Cruise and Brad Pitt in a fight scene. Others created clips featuring celebrities including Kim Kardashian and Ye. The model was producing realistic simulations of specific real people's faces and movements without any consent mechanism.
The Motion Picture Association's response was immediate. Chairman Charles Rivkin stated that in a single day, the Chinese AI service had engaged in unauthorized use of U.S. copyrighted works on a massive scale. Disney and Paramount Skydance sent ByteDance cease-and-desist letters accusing the company of using their characters to train the model. Warner Bros, Netflix, and SAG-AFTRA added their voices to the condemnation. Netflix described Seedance as "a high-speed piracy engine." One documented clip — a scene from the film F1 recreated at full visual quality — reportedly cost nine cents to generate.
ByteDance paused the global rollout on March 15 and began adding safeguards.
What Changed Before the Relaunch
The version of Seedance 2.0 in CapCut is different from the February version in several ways.
Real-face blocking. The model no longer generates videos from images or videos containing real faces. This directly removes the deepfake capability that triggered the MPA response. (A sketch of what this kind of input gate can look like follows this list.)
IP filters. Copyrighted characters are blocked from generation. The AI-rendered Disney, Marvel, and Warner Bros characters that the MPA had documented cannot be produced.
C2PA watermarking. All output carries both visible watermarks and embedded C2PA Content Credentials, the industry-standard protocol for identifying AI-generated content across platforms. The invisible watermark persists even after the content is shared or altered off-platform. (A credential-reading sketch also follows the list.)
Third-party red-teaming. ByteDance brought in an external red-team partner to test the safeguards before relaunch.
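ByteDance has not published how its real-face gate is implemented, so the following is purely an illustration of the category of safeguard, not the company's pipeline: run an off-the-shelf face detector over every reference image and refuse the generation request if anything fires. The detector choice, thresholds, and function names below are all assumptions.

```python
import cv2

# Illustrative only: ByteDance has not disclosed its actual face gate.
# This sketch rejects any reference image in which a face detector fires,
# using the Haar cascade bundled with opencv-python.
_FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def contains_real_face(image_path: str) -> bool:
    """Return True if the reference image appears to contain a human face."""
    image = cv2.imread(image_path)
    if image is None:
        raise ValueError(f"could not read image: {image_path}")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = _FACE_DETECTOR.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=(40, 40)
    )
    return len(faces) > 0

def gate_reference_image(image_path: str) -> None:
    """Block a generation request whose reference input includes a real face."""
    if contains_real_face(image_path):
        raise PermissionError("reference image contains a face; generation blocked")
```

A production gate would use a far stronger detector and err toward over-blocking, but the shape is the same: the check runs before any generation compute is spent.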
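The C2PA side is easier to ground, because Content Credentials are an open standard with a public SDK. Here is a minimal sketch of checking whether a downloaded clip still carries embedded credentials, using the Content Authenticity Initiative's c2pa Python package; the Reader calls match the SDK's documented usage, but the exact API surface varies across versions, so treat them as an approximation. The filename is hypothetical.

```python
import json

from c2pa import Reader  # Content Authenticity Initiative SDK (c2pa-python)

def read_content_credentials(path: str) -> dict | None:
    """Return the parsed C2PA manifest store, or None if none is embedded."""
    try:
        reader = Reader.from_file(path)
        return json.loads(reader.json())
    except Exception:
        # No manifest found, or the file format is unsupported.
        return None

manifest = read_content_credentials("clip.mp4")  # hypothetical filename
if manifest is None:
    print("no C2PA credentials found")
else:
    # The active manifest records the generator and its AI-generated assertions.
    print(manifest.get("active_manifest"))
```

One distinction worth keeping in mind: this check reads the embedded manifest, which is metadata and can be stripped by aggressive re-encoding. The off-platform persistence claim rests on the separate invisible watermark.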
The open question studios and regulators are still weighing: are these safeguards sufficient? Red-teaming documentation suggests that creative prompting can still produce what researchers describe as "likeness-adjacent" content: characters that evoke specific real people without technically reproducing them. ByteDance has committed to proactive monitoring and continued iteration.
What Seedance 2.0 Actually Does
Beyond the controversy, the model is technically significant. It is one of the strongest video generation models currently available in terms of multimodal control.
Seedance 2.0 accepts text, images, videos, and audio as inputs simultaneously — and generates a video that incorporates all of them in a single pass. Character reference from an image, camera movement reference from a video, audio synchronization from an audio file, all at once. No other publicly accessible model supports this combination of inputs at equivalent output quality.
At CapCut launch: clips up to 15 seconds, six aspect ratios (16:9, 9:16, 1:1, 4:3, 3:4, 4:5), native audio-visual synchronization. Text prompt adherence is strong enough that reference images are optional — a detailed text description alone produces usable results.
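There is no public Seedance API documentation to cite, so the following is a hypothetical sketch of what a combined-input request and its launch-constraint validation could look like. Every field name and the request shape are invented for illustration; only the 15-second cap, the six aspect ratios, and the text-prompt-only path come from the launch specs above.

```python
from dataclasses import dataclass

# Launch constraints from the CapCut rollout: clips up to 15 seconds and six
# supported aspect ratios. Everything else in this sketch is hypothetical.
MAX_DURATION_S = 15
ASPECT_RATIOS = {"16:9", "9:16", "1:1", "4:3", "3:4", "4:5"}

@dataclass
class GenerationRequest:
    """Hypothetical single-pass multimodal request. All reference inputs are
    optional except the text prompt, mirroring the note that text alone works."""
    prompt: str
    character_image: str | None = None   # character reference (image path)
    motion_video: str | None = None      # camera-movement reference (video path)
    audio_track: str | None = None       # audio-sync reference (audio path)
    duration_s: int = 10
    aspect_ratio: str = "16:9"

    def validate(self) -> None:
        if not self.prompt.strip():
            raise ValueError("a text prompt is required")
        if self.duration_s > MAX_DURATION_S:
            raise ValueError(f"clips are capped at {MAX_DURATION_S}s at launch")
        if self.aspect_ratio not in ASPECT_RATIOS:
            raise ValueError(f"unsupported aspect ratio: {self.aspect_ratio}")

# Example: character reference + motion reference + audio track, one request.
req = GenerationRequest(
    prompt="a chef torches a dessert tableside, handheld camera",
    character_image="chef.png",
    motion_video="handheld_pan.mp4",
    audio_track="kitchen_ambience.wav",
    duration_s=12,
    aspect_ratio="9:16",
)
req.validate()
```

The point of the sketch is the shape of the input, not the names: one request object carries text, image, video, and audio references together, rather than chaining separate single-modality calls.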
ByteDance specifically highlights categories the model handles well: cooking tutorials, fitness content, product overviews, action and motion sequences. These are categories where AI video has historically produced artifacts and inconsistency. Seedance 2.0's physics simulation handles motion-heavy content — a physical action, a stunt-style sequence — substantially better than most competing models at the same generation cost.
The Distribution Asymmetry
The Seedance 2.0 story is as much about distribution as it is about the model.
CapCut has over one billion monthly active users globally. Every CapCut user who opens the app in a supported market now has access to AI video generation without signing up for a separate service, without a new subscription, and without a learning curve. The workflow is: open CapCut, start editing, generate video with a text prompt.
Sora, by contrast, was a standalone app competing for downloads in a crowded market. It peaked at approximately 3.3 million monthly downloads in November 2025; CapCut's billion-plus monthly active users exceed that figure by a factor of more than 300.
The lesson the AI video market is absorbing in late March 2026: distribution-first wins. Seedance 2.0 meets users inside the editing tool they already use. The US and European markets remain excluded for now; ByteDance has not given a timeline, and the legal landscape in those markets remains significantly more complex than in Southeast Asia and Latin America.
