
THE FASHION TECH BRIEFING

Everybody’s talking about Meta’s Movie Gen
What’s cool about it and how does it compare to the latest? 

Newsletter #26 | Read time • 3 mins


Duncan McKay

Founder & CEO

LinkedIn

This month, Meta dropped Movie Gen—a shiny new set of GenAI models designed to make video generation more accessible in some pretty exciting ways. So, I thought I’d take a quick peek at what Movie Gen can actually do and what it means for the future of video creation. 


First, Movie Gen does text-prompted video generation, and it does it well. The model creates high-quality videos up to 16 seconds long, closely aligned with your text prompts, and can generate content in various sizes, resolutions, and lengths to suit your needs. The AI understands context, objects, and motion, delivering remarkably detailed realism. The output is 1080p HD video with different aspect ratios and synchronized audio. Use cases include collection launches, brand videos, and short-form social content in tune with your brand values; this could be your next Instagram reel. Meta trained the video generation model at a staggering 30B parameters!

Example of Meta's Movie Gen Personalization 

What makes these models even more intriguing? Their knack for personalization. The system can take a person's image alongside a text-based prompt and place them into the scene in a realistic and plausible way, maintaining the person's identity whilst rendering the rich visual details from the text prompt.

Example of Meta's Movie Gen Editing

The ability to edit is a potential game changer. One of the recurring challenges of GenAI is customisation - being able to adjust and fine-tune an output. You can find yourself having to start again from the beginning, which is irritating and time-consuming. Meta has introduced another level of precision in editing, offering more "localised" edits - the ability to add, remove, and adjust specific elements, as you can see here:

Example of Meta's Movie Gen Audio 

You can also generate audio that syncs to the video you have just created. Meta trained a separate 13B-parameter model to generate audio and synchronise it with the video, incorporating ambient noise and sound effects.


There are limitations. These models do not offer voice-overs (yet) and struggle with more complex scenes or elements that involve transformations (for example, the shattering of glass).


So how does Meta's Movie Gen compare?

 
To evaluate Movie Gen, Meta assessed the models with human raters against a benchmark of 1,000 text prompts. According to the published results, Movie Gen consistently delivers higher-quality videos, particularly in visual fidelity and natural motion. It outperforms Runway Gen3 and OpenAI's Sora in motion consistency, likely due to its training approach and extensive video data. Movie Gen also excels in realness and aesthetic appeal, surpassing Runway Gen3, LumaLabs, and Kling1.5. The full results are here.

 
Meta has set the new benchmark in GenAI video. 

 

It’s impressive, and it shows the rich possibilities of what’s coming for your video creativity and content.

 

The best part? 

 

It’s all within reach.

 

Imagine what you could create.


--


PS. When you are ready to dive into more readily available AI-powered fashion tech, please do check out our Virtual Fitting Room and reach out for a chat.

