Okay, let’s be real: AI is taking over the world, and we need to be able to take advantage of it. One of the biggest providers of artificial intelligence is OpenAI, the creator of GPT-3, one of the most powerful text models around.
In its latest blog post, OpenAI has announced the release of the ChatGPT and Whisper APIs.
The release of these APIs means everyday developers like us can use ChatGPT and Whisper in our own apps and products, opening the door to lots of cool new projects.
Here’s how OpenAI describes Whisper:
Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. We show that the use of such a large and diverse dataset leads to improved robustness to accents, background noise and technical language. Moreover, it enables transcription in multiple languages, as well as translation from those languages into English. We are open-sourcing models and inference code to serve as a foundation for building useful applications and for further research on robust speech processing.
From experience, I can assure you Whisper is an extremely powerful tool: from a 30-minute video of me speaking, it only got 2 words wrong. Crazy.
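To give a feel for what a transcription call could look like, here’s a rough sketch of sending audio to the Whisper API over plain HTTP. The endpoint path and the "whisper-1" model name come from OpenAI’s announcement; the multipart helper, boundary string, and function names below are my own assumptions, since Python’s standard library has no multipart/form-data encoder.

```python
import json
import os
import urllib.request

# Endpoint and model name as documented in OpenAI's API announcement.
API_URL = "https://api.openai.com/v1/audio/transcriptions"
BOUNDARY = "whisper-sketch-boundary"  # arbitrary boundary, my own choice

def build_multipart(filename: str, audio_bytes: bytes, model: str = "whisper-1") -> bytes:
    """Encode the model name and audio file as a multipart/form-data body."""
    parts = [
        # The "model" form field.
        f"--{BOUNDARY}\r\n"
        'Content-Disposition: form-data; name="model"\r\n\r\n'
        f"{model}\r\n".encode(),
        # The "file" form field carrying the raw audio bytes.
        f"--{BOUNDARY}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n".encode()
        + audio_bytes + b"\r\n",
        # Closing boundary.
        f"--{BOUNDARY}--\r\n".encode(),
    ]
    return b"".join(parts)

def transcribe(path: str) -> dict:
    """POST an audio file to the Whisper endpoint and return the parsed JSON."""
    with open(path, "rb") as f:
        body = build_multipart(os.path.basename(path), f.read())
    request = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": f"multipart/form-data; boundary={BOUNDARY}",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Only hit the network if an API key is actually configured.
if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(transcribe("meeting.mp3")["text"])
```

In practice you’d more likely reach for OpenAI’s official client library, but seeing the raw request makes it clear there’s no magic: one form field for the model, one for the audio file.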
I’m sure most of you are aware of what ChatGPT is, but for those who aren’t:
ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.
We are excited to introduce ChatGPT to get users’ feedback and learn about its strengths and weaknesses. During the research preview, usage of ChatGPT is free.
On the ChatGPT API itself, OpenAI says:
We are constantly improving our ChatGPT models, and want to make these enhancements available to developers as well. Developers who use the gpt-3.5-turbo model will always get our recommended stable model, while still having the flexibility to opt for a specific model version. For example, today we’re releasing gpt-3.5-turbo-0301, which will be supported through at least June 1st, and we’ll update gpt-3.5-turbo to a new stable release in April. The models page will provide switchover updates.
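As a minimal sketch of what a gpt-3.5-turbo call could look like, here’s a direct HTTP version of a chat completion request. The endpoint URL, model name, and message/choices shape follow OpenAI’s chat completions API; the helper names and the prompt text are my own. Swapping in the pinned gpt-3.5-turbo-0301 snapshot is just a matter of changing the model string.

```python
import json
import os
import urllib.request

# Chat completions endpoint from OpenAI's API announcement.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(user_message: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON body for a chat completion request.

    Pass model="gpt-3.5-turbo-0301" to pin the dated snapshot instead of
    the floating stable alias.
    """
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }

def ask_chatgpt(user_message: str) -> str:
    """POST the payload and return the assistant's reply text."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(user_message)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    # The reply text lives in the first choice's message content.
    return body["choices"][0]["message"]["content"]

# Only hit the network if an API key is actually configured.
if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(ask_chatgpt("Summarize the Whisper API in one sentence."))
```

The nice part of the stable-alias scheme described above is that this exact code keeps working when OpenAI updates gpt-3.5-turbo in April; only developers who pinned a dated snapshot need to act.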
We are also now offering dedicated instances for users who want deeper control over the specific model version and system performance. By default, requests are run on compute infrastructure shared with other users, who pay per request. Our API runs on Azure, and with dedicated instances, developers will pay by time period for an allocation of compute infrastructure that’s reserved for serving their requests.
Developers get full control over the instance’s load (higher load improves throughput but makes each request slower), the option to enable features such as longer context limits, and the ability to pin the model snapshot.
Dedicated instances can make economic sense for developers running beyond ~450M tokens per day. Additionally, it enables directly optimizing a developer’s workload against hardware performance, which can dramatically reduce costs relative to shared infrastructure. For dedicated instance inquiries, contact us.
AI is going to become a part of our lives, whether we like it or not, and we’re going to have to learn how to adapt. You can expect to see AI assistants start to crop up everywhere.
I really hope you can find a place for AI. Let’s get creating!
Let’s watch the next generation unfold!