There’s a lot to unpack in the new GPT-4 release. Based on early examples, API documentation and code samples provided by OpenAI, here are five crucial things to know about the new model.
It’s Immediately Available to ChatGPT Plus Users
If you were among the first customers to pay $20 a month for OpenAI’s ChatGPT Plus service, you’re about to get a significant perk and a real return on your investment.
ChatGPT Plus subscribers are the first consumers of the new GPT-4 model. If you have ChatGPT Plus, you can go there right now and start using GPT-4.
Many people questioned the rationale behind paying for the Plus service when ChatGPT is already free. Evidently, it’s so you can get quick access to new models!
It’s Multimodal
Before GPT-4 came out, there was wild speculation about whether it would remain a text-only model like ChatGPT, or would become a multimodal model.
Multimodal models are able to work with a wide range of media types as both their output and input — everything from text to images and ultimately video.
At the moment, GPT-4 appears to support images as input. Initially, this capability is available only to a single third-party partner that is helping OpenAI test image processing.
Once the system is faster, images as input will become available to more users.
But OpenAI has some examples of how it could ultimately play out. One example includes a photo of eggs and flour, with a cooking-related query.

GPT-4 recommends recipes that could be made with the ingredients shown in the picture.

The model could also be used to caption images, or write amazing alt text for images on websites. Video isn’t available yet, but it’s likely on the way now that GPT-4 is multimodal.
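Image input isn’t broadly available yet, but here is a minimal sketch of what a text-plus-image request to a multimodal chat model might look like using OpenAI’s Python client. The model name, the image URL, and the exact request shape are assumptions for illustration only; OpenAI hasn’t published the final image-input API at launch.

```python
# Hypothetical sketch: asking an image-capable GPT-4 variant what to cook
# from a photo of ingredients. Image access is limited at launch, so the
# model name and image URL below are placeholders, not confirmed details.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumption: an image-capable variant once access opens up
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What could I cook with these ingredients?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/eggs-and-flour.jpg"}},  # hypothetical photo
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```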
There’s an API
OpenAI is making API access to the new model available almost immediately. When it launched GPT-4, the company published a waitlist and said that some developers would be granted access on launch day.
Expect a lot of businesses to quickly begin incorporating GPT-4 into their products. Since many are already integrated with OpenAI’s existing APIs, making the transition to GPT-4 is simple.
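For developers, the change is mostly a one-line swap. Here is a minimal sketch of a GPT-4 chat completion call using OpenAI’s Python client; it assumes the openai package is installed, an OPENAI_API_KEY environment variable is set, and your account has been granted GPT-4 API access via the waitlist.

```python
# Minimal sketch: calling GPT-4 through OpenAI's chat completions API.
# Assumes the openai Python package, an OPENAI_API_KEY environment variable,
# and GPT-4 API access granted through the waitlist.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # swap in "gpt-3.5-turbo" if you don't have GPT-4 access yet
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the GPT-4 launch in two sentences."},
    ],
    max_tokens=200,
)

print(response.choices[0].message.content)
```

For an existing integration, the upgrade often amounts to changing the model parameter, which is why adoption is expected to be fast.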
It Can Process Way More Data
The original ChatGPT could process 4,096 tokens. That is roughly 3,000 words of text.
This limit applied to both the prompt you gave ChatGPT and the output the system produced. Because of it, the system was unable to analyze lengthy documents, write long blog posts, or even write books.
The restriction was probably a cost- or computation-based one. A large language model becomes more computationally intensive, and consequently more expensive to operate, as you add more tokens to its context; in a transformer, attention cost grows quadratically with the number of tokens in play.
GPT-4 significantly raises these limits. GPT-4 can manage 8,000 tokens by default, straight out of the box, and an extended version can accommodate 32,000 tokens.
That’s about 50 pages of text.
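To make those numbers concrete, here is a rough back-of-the-envelope conversion from tokens to words and pages; the ratios (about 0.75 English words per token and about 500 words per page) are common heuristics, not exact figures.

```python
# Rough conversion from token counts to approximate words and pages.
# Both ratios are heuristics, not exact figures.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500

def describe(tokens: int) -> str:
    words = tokens * WORDS_PER_TOKEN
    pages = words / WORDS_PER_PAGE
    return f"{tokens:,} tokens ~= {words:,.0f} words ~= {pages:.0f} pages"

print(describe(4_096))   # original ChatGPT context window
print(describe(8_000))   # GPT-4 default
print(describe(32_000))  # GPT-4 extended: roughly 50 pages of text
```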
Processing more data will allow the system to follow far longer instructions, write longer articles, and perhaps even produce very long documents or full literary works.
It’s Better at Human-Like Tasks and Tests
ChatGPT running GPT-3.5 could pass human-oriented exams like the bar exam, but only just.
The system tended to get a score on the lower end of passing — a C- or thereabouts. GPT-4 has been trained to perform these human-like tests and tasks much better.
The model now performs as well as a top student on many standard exams. AP Environmental Science exams, GRE exams, and even the LSAT — GPT-4 can easily score in the top 10% of them all.
It’s still terrible at English lit. But tasks like math exams, where GPT-3.5 struggled, are much improved with GPT-4.
