Personally, I think this is a very cool feature: it gives creative individuals a lot of new ways to develop videos that could be used as stock footage. However, it's important to note that it could also become a tool for NSFW content and people who may have more sinister intentions…
Well, this is very interesting. I just went through their website and it looks like a good tool, but now how are we supposed to tell the difference between a real video and an AI-generated one? It's already hard enough as it is, with special effects and everything.
It’s interesting. I wonder how copyright is going to work.
Will OpenAI own all content created by the AI? Will the people who wrote the prompt have rights to it? Will everybody have rights to the content it produces?
There is honestly so much happening, so quickly.
This has been discussed ever since AI image generation was invented. It is still not clear who has full ownership of images generated by AI/DALL-E, and now OpenAI is starting something new with the same problems all over again.
I think we can't do anything other than wait and avoid saying "I made the image." Anyway, I'm pretty sure OpenAI has at least made clear whether it's allowed to be used for commercial purposes, but I'm not sure right now whether they allow it or not.
it could become a tool for NSFW content and people who may have more sinister intentions…
That’s literally an issue with ANY sort of AI generator.
You can easily coax it into generating unethical or inappropriate content. Literally everything has the potential to be misused. I don't see how that's specific to OpenAI's Sora.
Personally, I think this is a very cool feature: it gives creative individuals a lot of new ways to develop videos that could be used as stock footage
I feel like OpenAI's Sora is going to be very similar in style to OpenAI's DALL-E engine, because I'm assuming it builds on DALL-E, except that it generates hundreds of individual video frames and stitches them together.
This part of their website does mention that Sora understands how items in a user's prompt behave in terms of their physical form and real-world application, which indeed makes the result more realistic.
I’d love to hear some of your insight though, could you elaborate?
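Just as a thought experiment, the "generate frames, then compound them" idea I described above might look something like this. To be clear, `generate_frame` and `generate_clip` are made-up names for illustration; nothing here calls a real Sora or DALL-E API, and this is not how OpenAI has said Sora actually works:

```python
# Hypothetical sketch of a frame-by-frame video pipeline.
# generate_frame() is a stand-in for a per-frame image model call.

def generate_frame(prompt: str, t: float) -> list:
    """Placeholder 'image model': returns a tiny 2x2 grayscale frame
    whose brightness just varies with time t (purely illustrative)."""
    value = int(255 * t) % 256
    return [[value, value], [value, value]]

def generate_clip(prompt: str, seconds: float = 2.0, fps: int = 24) -> list:
    """Generate fps * seconds individual frames, one per timestep."""
    n_frames = int(seconds * fps)
    frames = [generate_frame(prompt, i / fps) for i in range(n_frames)]
    # A real pipeline would now encode these frames into a video file.
    return frames

clip = generate_clip("a cat chasing a laser pointer")
print(len(clip))  # 48 frames: 2 seconds at 24 fps
```

The obvious weakness of this naive approach is that each frame is generated independently, so nothing enforces consistency between frames, which is exactly why real video models need some mechanism for temporal coherence.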
I mean, it's easy enough to tell the difference between an AI-generated image and a normal image, right?
Pretty confident the same would go for a video.