

GPT-4: everything you need to know about ChatGPT’s standard AI model

A laptop screen shows the home page for ChatGPT, OpenAI's artificial intelligence chatbot.
Rolf van Root / Unsplash

People were in awe when ChatGPT came out, impressed by its natural language abilities as an AI chatbot originally powered by the GPT-3.5 large language model. But when the highly anticipated GPT-4 large language model came out, it blew the lid off what we thought was possible with AI, with some calling it the early glimpses of AGI (artificial general intelligence).

What is GPT-4?

GPT-4 is a large language model created by OpenAI that can generate text closely resembling human writing. It advances the technology behind ChatGPT, which was originally based on GPT-3.5 but has since been updated. GPT stands for Generative Pre-trained Transformer, a deep learning architecture that uses artificial neural networks to produce human-like text.

According to OpenAI, this next-generation language model is more advanced than ChatGPT in three key areas: creativity, visual input, and longer context. In terms of creativity, OpenAI says GPT-4 is much better at both creating and collaborating with users on creative projects. Examples of these include music, screenplays, technical writing, and even “learning a user’s writing style.”


The longer context plays into this as well. GPT-4 can now process up to 128k tokens of text from the user. You can even just send GPT-4 a web link and ask it to interact with the text from that page. OpenAI says this can be helpful for the creation of long-form content, as well as “extended conversations.”

GPT-4 can also now receive images as a basis for interaction. In the example provided on the GPT-4 website, the chatbot is given an image of a few baking ingredients and is asked what can be made with them. It is not currently known if video can also be used in this same way.


Lastly, OpenAI says GPT-4 is significantly safer to use than the previous generation. In OpenAI’s own internal testing, it was reportedly 40% more likely to produce factual responses and 82% less likely to “respond to requests for disallowed content.”

OpenAI says it’s been trained with human feedback to make these strides, claiming to have worked with “over 50 experts for early feedback in domains including AI safety and security.”

In the initial weeks after launch, users posted some of the amazing things they’d done with it, including inventing new languages, detailing how to escape into the real world, and creating complex animations for apps from scratch. One user apparently had GPT-4 create a working version of Pong in just 60 seconds, using a mix of HTML and JavaScript.

How to use GPT-4

Bing Chat shown on a laptop.
Jacob Roach / Digital Trends

GPT-4 is available to all users at every subscription tier OpenAI offers. Free-tier users have limited access to the full GPT-4 model (roughly 80 chats within a three-hour period) before being switched to the smaller, less capable GPT-4o mini until the cooldown timer resets. To gain additional access to GPT-4, as well as the ability to generate images with DALL-E, upgrade to ChatGPT Plus. To jump up to the $20 paid subscription, just click “Upgrade to Plus” in the sidebar in ChatGPT. Once you’ve entered your credit card information, you’ll be able to toggle between GPT-4 and older versions of the LLM.

If you don’t want to pay, there are some other ways to get a taste of how powerful GPT-4 is. First off, you can try it out as part of Microsoft’s Bing Chat. Microsoft revealed that it’s been using GPT-4 in Bing Chat, which is completely free to use. Some GPT-4 features are missing from Bing Chat, however, and it’s clearly been combined with some of Microsoft’s own proprietary technology. But you’ll still have access to that expanded LLM (large language model) and the advanced intelligence that comes with it. It should be noted that while Bing Chat is free, it is limited to 15 chats per session and 150 sessions per day.

There are lots of other applications that are currently using GPT-4, too, such as the question-answering site, Quora.

When was GPT-4 released?

A laptop opened to the ChatGPT website.
Shutterstock

GPT-4 was officially announced on March 13, 2023, as was confirmed ahead of time by Microsoft, and first became available to users through a ChatGPT Plus subscription and Microsoft Copilot. GPT-4 has also been made available as an API “for developers to build applications and services.” Some of the companies that have already integrated GPT-4 include Duolingo, Be My Eyes, Stripe, and Khan Academy. The first public demonstration of GPT-4 was livestreamed on YouTube, showing off its new capabilities.

What is GPT-4o mini?

GPT-4o mini is the newest iteration of OpenAI’s GPT-4 model line. It’s a streamlined version of the larger GPT-4o model that is better suited for simple but high-volume tasks that benefit more from a quick inference speed than they do from leveraging the power of the entire model.

GPT-4o mini was released in July 2024 and has replaced GPT-3.5 as the default model users interact with in ChatGPT once they hit their three-hour limit of queries with GPT-4o. Per data from Artificial Analysis, 4o mini significantly outperforms similarly sized small models like Google’s Gemini 1.5 Flash and Anthropic’s Claude 3 Haiku in the MMLU reasoning benchmark.

Is GPT-4 better than GPT-3.5?

The free version of ChatGPT was originally based on the GPT-3.5 model; however, as of July 2024, ChatGPT runs on GPT-4o mini. This streamlined version of the larger GPT-4o model is much better than even GPT-3.5 Turbo: it understands and responds to more inputs, has more safeguards in place, provides more concise answers, and is 60% less expensive to operate.

The GPT-4 API

As mentioned, GPT-4 is available as an API to developers who have made at least one successful payment to OpenAI in the past. The company offers several versions of GPT-4 for developers to use through its API, along with legacy GPT-3.5 models. Upon releasing GPT-4o mini, OpenAI noted that GPT-3.5 will remain available for use by developers, though it will eventually be taken offline. The company did not set a timeline for when that might actually happen.
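For a sense of what working with the API involves, here is a minimal sketch of a chat completion request built with only the Python standard library. The endpoint, model name, and `OPENAI_API_KEY` environment variable follow OpenAI’s public API documentation; the example builds the request without sending it, since a real call requires a funded developer account.

```python
import json
import os
import urllib.request

# Request body for OpenAI's chat completions endpoint, targeting GPT-4.
payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GPT-4 in one sentence."},
    ],
}

def build_request(api_key: str) -> urllib.request.Request:
    """Build (but do not send) the HTTP request for a chat completion."""
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request(os.environ.get("OPENAI_API_KEY", "sk-placeholder"))
print(req.full_url)  # prints the endpoint URL
```

Sending the request with `urllib.request.urlopen(req)` (or, more commonly, using OpenAI’s official `openai` Python package) returns a JSON response containing the model’s reply under `choices[0].message.content`.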

The API is mostly focused on developers making new apps, but it has caused some confusion for consumers, too. Plex allows you to integrate ChatGPT into the service’s Plexamp music player, which calls for a ChatGPT API key. This is a separate purchase from ChatGPT Plus, so you’ll need to sign up for a developer account to gain API access if you want it.

Is GPT-4 getting worse?

As much as GPT-4 impressed people when it first launched, some users noticed a degradation in its answers over the following months. The decline was noted by prominent figures in the developer community and even posted directly to OpenAI’s forums. It was all anecdotal, though, and an OpenAI executive took to Twitter to dispute the premise. According to OpenAI, it was all in our heads.

No, we haven't made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one.

Current hypothesis: When you use it more heavily, you start noticing issues you didn't see before.

— Peter Welinder (@npew) July 13, 2023

Then, a study was published showing that answer quality did, indeed, worsen with later updates of the model. By comparing GPT-4’s responses between March and June, the researchers found that on one task (identifying prime numbers), its accuracy fell from 97.6% to 2.4%.

It’s not a smoking gun, but it certainly seems like what users are noticing isn’t just being imagined.

Where is the visual input in GPT-4?

One of the most anticipated features in GPT-4 is visual input, which allows ChatGPT Plus users to interact with images, not just text, making the model truly multimodal. Uploading images for GPT-4 to analyze is just as easy as uploading documents: simply click the paperclip icon to the left of the context window, select the image source, and attach the image to your prompt.

What are GPT-4’s limitations?

While discussing the new capabilities of GPT-4, OpenAI also notes some of the limitations of the new language model. Like previous versions of GPT, OpenAI says the latest model still has problems with “social biases, hallucinations, and adversarial prompts.”

In other words, it’s not perfect. It’ll still get answers wrong, and there have been plenty of examples shown online that demonstrate its limitations. But OpenAI says these are all issues the company is working to address, and in general, GPT-4 is “less creative” with answers and therefore less likely to make up facts.

The other primary limitation is that the GPT-4 model was trained on internet data up until December 2023 (GPT-4o and GPT-4o mini have an October 2023 cutoff). However, since GPT-4 can conduct web searches rather than relying solely on its pretrained data set, it can easily track down more recent facts from the internet.

GPT-4o is the latest release, of course, and GPT-5 is still incoming.

Alan Truly
Alan is a Computing Writer living in Nova Scotia, Canada. A tech-enthusiast since his youth, Alan stays current on what is…