Releases

Runway Launches Aleph: AI Video Editing Through Text Prompts

On July 25, 2025, Runway released Aleph — the first AI model purpose-built for in-context video editing rather than generation. Add, remove, restyle, relight, and generate new camera angles from existing footage using text instructions.

July 25, 2025 · 5 min read

The AI video market spent 2024 and early 2025 competing on generation — how well each model could turn a text prompt or image into a new video clip. On July 25, 2025, Runway introduced a different category entirely: Aleph, a model built specifically for editing footage that already exists.

Aleph is not a generator. You do not give it a description and receive a new video. You give it footage — a clip you shot, a scene you generated elsewhere — and you tell it what to change. Add an object that was not there. Remove a person from the background. Change the time of day. Generate what the scene would look like from a different camera angle. Relight the whole scene with different light direction. Restyle the footage into a different visual aesthetic.

The result is a video with your changes applied, with everything else preserved — not redrawn from scratch with the change incorporated, but your original footage with surgical modifications.


What In-Context Editing Means

Runway calls Aleph an "in-context" model. The term describes what distinguishes it from earlier approaches to AI video editing.

Earlier AI editing tools — including Runway's own previous generation — applied changes by running the footage through a generation pass. The model would see the input and produce an output that reflected the requested change, but the output was effectively a new generation influenced by the input, not a modification of the input itself. This meant the output could drift from the original in ways beyond the requested change. A relight request might change subtle background details. An object removal might alter the texture of the floor around it.

Aleph's in-context approach analyzes the footage's spatial structure — depth relationships, lighting direction, object positions, surface properties — before making any change. The model understands what is in the scene before it modifies anything. When you remove an object, the background behind it is reconstructed based on what that area of the scene logically contains. When you change the camera angle, the new angle is generated from a 3D reconstruction of the scene's geometry, not a hallucinated interpretation of what a different angle might look like.

This 3D spatial understanding is what enables the camera angle generation capability — arguably Aleph's most distinctive feature.


Camera Angle Generation

Most video production involves multiple camera angles. A scene shot from a wide angle needs coverage from over-the-shoulder, or from a low angle, or from above. Getting that coverage requires having cameras in all those positions during filming. If you did not have a camera in a given position, you cannot get that shot in traditional post-production.

Aleph can generate a new camera angle from a single shot. Give it a wide angle and describe the angle you want — over-the-shoulder, aerial, reverse, three-quarter — and the model reconstructs the scene in 3D and renders what that angle would see. It is a post-production capability that previously required either a second physical camera or complex manual VFX reconstruction.

This alone changes the economics of video production for anyone who regularly wishes they had additional coverage of a scene.


Who Was Using Aleph at Launch

Runway noted at launch that the model was already in use by major studios, advertising agencies, architecture firms, gaming companies, and e-commerce teams through early enterprise access.

The enterprise positioning tracks with where the capability is most immediately valuable: productions that spent real money on a shoot and need post-production flexibility, brands that need to adapt footage for multiple markets without reshooting, game developers who need scene variations without rebuilding.

The launch positioned Aleph as a professional post-production tool, not a consumer generation toy. The specific use cases Runway highlighted at launch — reshooting without reshooting, environment transformation, period-appropriate visual treatment — are professional production problems where traditional solutions are either expensive or impossible.


Technical Constraints at Launch

Aleph operates on clips of up to 5 seconds per generation pass, with a maximum file size of 64MB. Supported resolutions include 720x1280 and 960x960; dimensions outside those options are automatically cropped. There is no audio generation — Aleph modifies visuals only.
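As a rough illustration, the launch limits above can be expressed as a pre-flight check. The limits come from this article; the function itself is a hypothetical helper sketch, not part of any Runway SDK.

```python
# Illustrative pre-flight check against Aleph's launch constraints.
# Limits are as described in the article; this helper is hypothetical.

MAX_DURATION_S = 5
MAX_FILE_BYTES = 64 * 1024 * 1024  # 64MB
SUPPORTED_RESOLUTIONS = {(720, 1280), (960, 960)}

def check_clip(duration_s: float, file_bytes: int, width: int, height: int) -> list[str]:
    """Return a list of constraint issues (empty if the clip passes as-is)."""
    problems = []
    if duration_s > MAX_DURATION_S:
        problems.append(f"clip is {duration_s}s; max is {MAX_DURATION_S}s per generation pass")
    if file_bytes > MAX_FILE_BYTES:
        problems.append("file exceeds the 64MB limit")
    if (width, height) not in SUPPORTED_RESOLUTIONS:
        problems.append(f"{width}x{height} is outside the supported options and will be auto-cropped")
    return problems
```

A clip within all three limits returns an empty list; anything else returns a human-readable list of what needs attention before upload.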

For longer footage, process in 5-second segments with consistent prompting across segments. The constraint is architectural — in-context video editing at Aleph's quality level requires intensive per-frame processing that scales with clip length.
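The segmenting workflow above amounts to simple window arithmetic. A minimal sketch, assuming only the 5-second-per-pass constraint stated in this article:

```python
# Sketch of planning 5-second processing windows for longer footage,
# per the segmenting guidance above. Illustrative only.

def plan_segments(total_s: float, segment_s: float = 5.0) -> list[tuple[float, float]]:
    """Split a clip duration into (start, end) windows of at most segment_s seconds."""
    segments = []
    start = 0.0
    while start < total_s:
        end = min(start + segment_s, total_s)
        segments.append((start, end))
        start = end
    return segments
```

Each window would then be run through the model with the same prompt, which is what keeps the edit consistent across segment boundaries.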

Prompts should start with an action verb (add, remove, change, replace, relight, restyle) and describe the scope of the change concisely. Simpler prompts targeting one change produce more coherent results than complex multi-element prompts.
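The prompting guidance above can be checked mechanically. The verb list comes from this article; the helper itself is a hypothetical sketch, not a Runway API.

```python
# Hypothetical lint for the prompting guidance: prompts should lead with
# one of the action verbs named in the article.

ACTION_VERBS = {"add", "remove", "change", "replace", "relight", "restyle"}

def starts_with_action_verb(prompt: str) -> bool:
    """True if the prompt's first word is one of the recommended action verbs."""
    words = prompt.strip().split()
    return bool(words) and words[0].lower() in ACTION_VERBS
```

So "Remove the pedestrian behind the car" passes, while "The scene should be darker at night" would be better rewritten as "Change the time of day to night".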


Aleph on Cliprise

Runway Aleph is available on Cliprise as the platform's dedicated video-to-video editing model — the only model in the lineup that takes existing footage as input rather than generating from scratch.

For the full workflow guide: Runway Aleph: Complete Guide →


Ready to Create?

Put your new knowledge into practice with Cliprise.

Start Creating