Greetings Forums, it’s been over a year since it was announced that GPT-4 was available to the public.
A little over a year later, OpenAI has released another spooky but wildly impressive demo of the new GPT-4o AI.
GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.
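For anyone curious about the API side of that, here’s a minimal sketch of calling GPT-4o with OpenAI’s Python SDK. This assumes the `openai` package (v1+) and an `OPENAI_API_KEY` environment variable, and note that at launch the public API only exposed GPT-4o’s text and vision input, not the audio features from the demo:

```python
# Minimal sketch: text + image input to GPT-4o via the chat completions endpoint.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    # placeholder URL; any publicly reachable image works
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The “50% cheaper in the API” part refers to per-token pricing versus GPT-4 Turbo, so a call like the one above also costs less than the equivalent GPT-4 Turbo request.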
Here is the introductory video, and boy is it scary:
pretty advanced, would love to hear more about GPT-4o as information comes out. it’s honestly crazy how fast all of this is being developed, excited for the tons of possibilities with this.
as we use AI more, it’s going to keep getting better, since these deep-learning models, specifically LLMs, can be retrained and fine-tuned on user interactions and feedback
it’s scary, but it’s a new revolution, and i’m here for it.
You shouldn’t be scared. So long as computers have to, well, compute, they won’t be a threat to humans, since they don’t have the capability of being sentient, and will always be predictable.
You have every right to be concerned; the effort going into safeguarding needs to match the effort going into development.
If I recall correctly, the head of safety at OpenAI recently resigned, which suggests there are already some strange things going on.
In addition, the idea that it can be “shut off” at any time only really works on paper: imagine the AI self-replicates. And shutting down the number of servers OpenAI runs at the moment would be a huge operation in itself.
I’m sure OpenAI has a goal of making an AI that is sentient, but even if an AI isn’t sentient, that doesn’t mean it can’t harm humans. Imagine that the AI’s task is to protect humans.
Well, humans themselves aren’t very good at protecting themselves or doing what’s best for them, so the AI may deem it suitable to seize control of us, because from its perspective that would be the better outcome: it would be saving us.
This topic is much more complicated once you look past the surface.