OpenAI Releases Groundbreaking Video Model Sora
According to a recent report by Guotai Junan Research, OpenAI has released its first text-to-video model, Sora, which has stunned the industry. Sora can generate videos up to 60 seconds long from text descriptions, accurately interpreting elements such as color and style to produce video content featuring rich facial expressions and vivid emotion.
Sora’s release is seen as a milestone in the AIGC field, with three major highlights. First, it maintains a high degree of smoothness and stability in both the subject and the background throughout a video of up to 60 seconds. Second, it can present multiple camera angles within a single video, with logical and seamless shot transitions. Third, Sora demonstrates an understanding of the real world, handling details such as light and shadow, reflections, movement patterns, and camera motion, greatly enhancing the sense of realism.
This release is expected to accelerate development in the multi-modal AI field, bringing profound changes to AI-driven creation and related industries. The scope of AI applications will expand further, and multi-modal training and inference workloads will heighten the importance of computing infrastructure.
This groundbreaking development has the potential to transform how video content is created and consumed, and it remains to be seen how Sora’s capabilities will be leveraged across industries.
(Source: Financial Associated Press)