Having proven effective at generating both text and artwork, machine learning is now being developed to generate entire videos.
The software company Runway announced that it is working on a project called Gen-1, which will be able to produce entire videos in a matter of seconds from a user's prompt.
The company, which is also connected to the Stable Diffusion image generation model, said that as an interim step it will soon launch a tool for replacing the backgrounds of videos. In the demonstration footage the company published, a video of a person walking down the street is turned into a Lego version of the same clip at the click of a button, and alternatively the background or the characters appearing in the video can be swapped out. When generating a video, you can also specify whether it should be static or dynamic, as well as the type of shot.
The race to build a generative artificial intelligence model for video began several months ago, when Meta (Facebook's parent company) announced that it was developing such a tool for its users. Since then, Google has also announced a text-to-video application of its own, but so far none of these has been released to the general public.
Even though these creative tools have not yet reached the general public, that does not mean artificial intelligence cannot already be used to produce videos. Netflix published a short animated film yesterday and surprised viewers when, in the closing credits, the background art was credited to artificial intelligence. Netflix has yet to comment officially on the matter, but it looks like the future of video design and editing is about to change.