it could become a tool for the NSFW party and people who may have more sinister intentions…

That’s literally an issue with ANY sort of AI generator.

You can easily coax any of them into generating unethical or inappropriate content. Literally everything has the potential to be misused; I don’t see how that’s specific to OpenAI’s Sora.

Personally, I think this is a very cool feature, as it gives creative individuals a lot of new ways to develop videos that could be used as stock footage.

I feel like OpenAI’s Sora is going to be very similar in style to OpenAI’s DALL-E, because I’m assuming it works off of DALL-E except that it generates hundreds of individual video frames and compounds them together.
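
Just to illustrate what I mean by “compounds them together” (purely my assumption about the general idea, not how Sora actually works under the hood), here’s a rough Python sketch of stitching a folder of individually generated frames into a clip. The folder name and frame naming pattern are hypothetical:

```python
# Minimal sketch: turn a folder of generated frames into an mp4 clip.
# NOT Sora's actual pipeline -- just the "many frames -> one video" idea.
# Assumes PNG frames (e.g. from an image model) saved as frames/frame_0001.png, ...

from pathlib import Path
import imageio.v2 as imageio  # pip install imageio imageio-ffmpeg

frame_dir = Path("frames")  # hypothetical folder of generated frames
frame_paths = sorted(frame_dir.glob("frame_*.png"))

# Write all frames out at 24 fps so the result plays back as a short video.
with imageio.get_writer("clip.mp4", fps=24) as writer:
    for path in frame_paths:
        writer.append_data(imageio.imread(path))

print(f"Wrote {len(frame_paths)} frames to clip.mp4")
```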

That part of their website does mention that Sora understands how the items in a user’s prompt exist physically and behave in the real world, which does give the output a more realistic quality.

I’d love to hear some of your insight though, could you elaborate?

I mean, it’s easy enough to tell the difference between an AI-generated image and a normal image, right? :sweat_smile:
Pretty confident the same would go for a video.
