5 Crucial Things to Know About OpenAI's GPT-4
After a long wait, OpenAI has finally released GPT-4, a major update to the model underlying its wildly popular ChatGPT system and its GPT-3.5 API.
There’s a lot to unpack in the new GPT-4 release. Based on early examples, API documentation and code samples provided by OpenAI, here are five crucial things to know about the new model.
It’s Immediately Available to ChatGPT Plus Users
If you were among the first customers to pay $20 a month for OpenAI's ChatGPT Plus service, your investment is about to pay off in a significant way.
ChatGPT Plus subscribers are the first consumers to get the new GPT-4 platform. If you have ChatGPT Plus, you can start using GPT-4 right now.
Many people questioned the rationale for paying for Plus when ChatGPT is already free. Evidently, the answer is early access to new models!
It’s Multimodal
Before GPT-4 was released, rumors swirled about whether it would remain a text-only model, like ChatGPT, or become a multimodal one.
Multimodal models can handle a broad variety of media types, using text, images, and ultimately video as both input and output.
GPT-4 currently accepts images as input (its output is still text). Initially, only one outside partner helping OpenAI test image processing has access to this capability.
More users will get access to image input once the system is faster.
OpenAI has, however, shared examples of how the feature might eventually work. One shows a picture of eggs and flour along with a question about what could be cooked with them.

GPT-4 recommends recipes that could be made with the ingredients shown in the picture.

The model could also be used to caption images, or write amazing alt text for images on websites. Video isn’t available yet, but it’s likely on the way now that GPT-4 is multimodal.
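OpenAI hasn't published the image-input request format yet, so any code here is speculative. As a rough sketch, assuming image input eventually rides on the same chat-completions endpoint that powers ChatGPT, and that an image can be passed by URL alongside text (both the content-list shape and the image_url field are assumptions, not documented API), an alt-text request might look something like this:

```python
import openai  # openai Python package, v0.27+

openai.api_key = "sk-..."  # your API key

# Hypothetical: the content-list shape and the "image_url" part type
# are assumptions; OpenAI has not published the image-input format.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Write concise alt text for this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/eggs-and-flour.jpg"}},
            ],
        }
    ],
)
print(response["choices"][0]["message"]["content"])
```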
There’s an API
OpenAI is making API access to the new model available almost immediately. Alongside the GPT-4 launch, the company opened a waitlist and said some developers would be granted access on launch day.
Expect a lot of businesses to quickly begin incorporating GPT-4 into their products. Since many are already integrated with OpenAI's existing APIs, transitioning to GPT-4 is simple.
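For teams already using the chat-completions API, the switch is essentially a one-line change. Here's a minimal sketch using the openai Python package, assuming you've been granted access off the waitlist (the prompt text is just an illustration):

```python
import openai

openai.api_key = "sk-..."  # set via an environment variable in real code

# Same ChatCompletion call used for gpt-3.5-turbo; only the model name changes.
response = openai.ChatCompletion.create(
    model="gpt-4",  # previously "gpt-3.5-turbo"
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what changed in GPT-4 in two sentences."},
    ],
)
print(response["choices"][0]["message"]["content"])
```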
It Can Process Way More Data
The initial ChatGPT could process 4,096 tokens, which is roughly 3,000 words of text.
That limit covered both the prompt sent to ChatGPT and the system's output. Because of it, the system couldn't analyze long documents, write long blog posts, or draft books.
The limit was probably driven by cost and computation: the more tokens a large language model processes, the more computationally intensive, and therefore expensive, it becomes to run.
GPT-4 raises these limits significantly. Out of the box, it handles 8,000 tokens by default, and a separate variant accommodates 32,000 tokens, roughly 50 pages of text.
Processing more data at once will let the system follow far more detailed instructions, write longer articles, and perhaps even produce very long documents or full literary works.
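You can check whether a document fits before sending it by counting tokens locally. A minimal sketch using OpenAI's tiktoken package follows; the assumption that GPT-4 uses the cl100k_base encoding (the same one as gpt-3.5-turbo) is mine, and the limits in the comments reflect the announced 8k/32k variants:

```python
import tiktoken

# cl100k_base is the tokenizer used by gpt-3.5-turbo; GPT-4 using the
# same encoding is an assumption until OpenAI documents it.
enc = tiktoken.get_encoding("cl100k_base")

with open("long_document.txt") as f:
    n_tokens = len(enc.encode(f.read()))

print(f"Document is {n_tokens} tokens.")
if n_tokens > 8192:  # default GPT-4 context window
    print("Won't fit in the 8k context; chunk it or use the 32k variant.")
```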
It’s Better at Human-Like Tasks and Tests
ChatGPT running GPT-3.5 could pass human-oriented exams like the bar exam, but only just.
The system tended to score at the lower end of passing, a C- or thereabouts. GPT-4 has been trained to perform much better on these human-oriented tests and tasks.
The model now performs as well as a top student on many standard exams. On AP Environmental Science, the GRE, and even the LSAT, GPT-4 can score in the top 10% of test takers.
It's still terrible at English lit. But tasks like math exams, where GPT-3.5 struggled, are much improved with GPT-4.
