Imagine telling a computer to conjure up a scene, and it instantly creates a video! That’s the magic of OpenAI’s new AI model, called “Sora.”
Unlike other AI artists, Sora doesn’t just paint pictures; it brings them to life in moving images.
Before Sora, OpenAI's flagship text and image generation models, ChatGPT and DALL·E, had already been shaping this new market.
How does Sora work?
You simply give Sora a text description, like “a lunar lander mission to the moon,” and its diffusion-based model turns those words into a video up to a minute long.
It can even handle complex scenes with multiple characters and specific movements.
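To picture that workflow in code: Sora has no public API at the time of writing, so the short Python sketch below is purely illustrative. The endpoint URL, job fields, and polling flow are hypothetical placeholders, not real OpenAI routes; they simply show the shape of a prompt-in, video-out service.

```python
import time
import requests

# Hypothetical endpoint and key: Sora has no public API at the time of writing,
# so these names are illustrative placeholders, not real OpenAI routes.
API_URL = "https://api.example.com/v1/video/generations"
API_KEY = "YOUR_API_KEY"


def generate_video(prompt: str) -> bytes:
    """Submit a text prompt and poll until the rendered video is ready."""
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # 1. Submit the text description of the scene.
    job = requests.post(API_URL, json={"prompt": prompt}, headers=headers).json()

    # 2. Video generation takes time, so poll the job until it finishes.
    while True:
        status = requests.get(f"{API_URL}/{job['id']}", headers=headers).json()
        if status["status"] == "completed":
            break
        time.sleep(5)

    # 3. Download the finished video file as raw bytes.
    return requests.get(status["video_url"], headers=headers).content


if __name__ == "__main__":
    video = generate_video("a lunar lander mission to the moon")
    with open("lunar_lander.mp4", "wb") as f:
        f.write(video)
```

However the real interface ends up looking, the basic loop is the same: describe the scene in words, wait while the model renders, and collect the finished clip.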
Taking to X (formerly Twitter), OpenAI CEO Sam Altman invited users to reply to his post with prompts, offering to share personalized Sora-generated videos in return.
But is it perfect?
Not quite. Sora is impressive, but it still has limitations. It can struggle to simulate the physics of a complex scene accurately, and it sometimes loses track of spatial details like left and right.
However, some of the sample videos showcased on OpenAI’s website show jaw-droppingly realistic results.
OpenAI is working to make sure Sora doesn’t get into any trouble. The company is collaborating with experts who test the model for potential risks like fake news and biased content.
It is also developing tools to detect misleading videos and to make clear when a clip was created by AI.
Who can use Sora?
Right now, Sora is still under development and available only to a limited group. Experts, artists, and filmmakers are testing it and giving feedback. If all goes well, it could be available to the public soon, possibly by March or April!
Sora is a big step forward in AI video creation. It opens up exciting possibilities for storytelling, education, and even entertainment.
But it’s important to remember that AI is still learning, and we need to use it responsibly.