What’s New In GPT-4 And How Does It Compare To Its Predecessor

Yesterday, OpenAI, the company behind the groundbreaking and frankly epoch-changing large language model ChatGPT (built on GPT-3.5), announced the launch of its latest model, GPT-4.

During the launch ceremony, Sam Altman, the CEO of OpenAI, claimed that GPT-4, even though it is based on the same core technology, is a milestone in the latest advancements in machine and deep learning.

And Sam is not the only one stoking the hype. According to experts, ChatGPT, which has already impressed everyone who has experienced it and has sparked fears of job loss in everyone from writers to coders, was based on year-old technology that can hardly be called cutting edge.

GPT-4, meanwhile, is so cutting edge that, according to the Washington Post, it is all set to "blow ChatGPT out of the water."

Sadly, unlike its predecessor, which was freely available to users worldwide, access to GPT-4 is, hopefully only for the time being, limited to premium users willing to pay 20 dollars a month for GPT's assistance.

Even with ChatGPT, users of the free version were limited by the number of servers available to them. In other words, free users could not access ChatGPT at will and instead had to wait in long queues during peak usage hours to talk to OpenAI's hyper-intelligent bot.

Moreover, waiting in these long queues meant that if you lived in a rural area and relied on sluggish internet services like DSL connections instead of upgrading to more advanced satellite-based providers like HughesNet Internet, you were missing out on the AI revolution.

During the launch ceremony, company sources claimed that it took well over six months of effort and deep learning, supported by special supercomputers built in collaboration with Microsoft, a ten-billion-dollar investor in OpenAI, to bring GPT-4 to life.

So, let's take a look at what the super geniuses at OpenAI achieved in those six months.

  1. MULTIMODAL MODEL: THE INTRODUCTION OF IMAGE-BASED PROMPTS

Most of the changes introduced in GPT-4 are subtle and consist of updates to existing features of the chatbot. The introduction of image-based prompts is perhaps the only fundamental change, and the most significant update to the overall experience.

As the title suggests, GPT-4 can now read images as efficiently as it reads text prompts, making it a multimodal model or, as Sam, the CEO of OpenAI, put it, a vision model in addition to a language model.

And even though the feature is still in its beta stage and is being tested in collaboration with a partner called Be My Eyes, you can probably imagine the endless possibilities that arise from it.

In other words, GPT-4 can now read entire documents comprising both text and image-based content.

Consider this extremely cool example from OpenAI's own website: users can upload an image of a buffet food item to GPT-4 and ask what the possible ingredients are. And not just that; once GPT-4 has figured out the ingredients, the user can ask the bot to suggest other dishes that can be prepared using them.

And that is not all; the chatbot takes reading images one step further. Users can literally hand-draw a layout for an entire website and ask GPT-4 to convert it into front-end code. That makes a lot of entry-level programmers obsolete, right?

The bot can not only read but also interpret images. So, for instance, you can show it an image of a person sitting under an apple tree and ask it about possible scenarios that could occur if an apple were to fall.
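To make the image-prompt idea concrete, the request can be pictured as a chat message that mixes text and image parts. The payload shape below (a `content` list with `text` and `image_url` parts) is an assumption modeled on the format OpenAI later documented publicly for its vision-capable chat API; at GPT-4's launch, image input was limited to partners such as Be My Eyes, so treat this as an illustrative sketch rather than the exact launch-day interface.

```python
# Sketch of a multimodal (text + image) prompt payload. The field names
# follow the content-parts format OpenAI later documented for vision
# input; they are an assumption here, for illustration only.

def build_image_prompt(question: str, image_url: str) -> dict:
    """Build a chat-completion request mixing a text question with an image."""
    return {
        "model": "gpt-4",  # hypothetical vision-enabled model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# Example: the buffet scenario from the article -- ask for likely
# ingredients in a photographed dish (the URL is a placeholder).
request = build_image_prompt(
    "What ingredients might this dish contain?",
    "https://example.com/buffet-photo.jpg",
)
```

Sent with the official `openai` Python client, such a request would return the model's reading of the image, and a follow-up message could then ask for dishes built from the identified ingredients.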

I could go on and on about the implications, but I think you get the point.

So, let’s move on.

  2. BETTER, FASTER & SMARTER

As is the case with any new version, GPT-4 builds upon the capabilities of its predecessor and takes them to a whole new level.

  • INCREASED WORD PROCESSING CAPACITY

GPT-4 can now process prompts of up to twenty thousand words, roughly the size of a graduate-level thesis. In other words, you can upload half a book to the bot and ask it to summarize it for you in, let's say, a thousand words. For reference, this word-processing capacity is about eight times that of GPT-3.5.
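To put that capacity to work on something longer than a single prompt, like the half-a-book scenario above, the text first has to be cut into prompt-sized pieces. A minimal sketch of that splitting step in plain Python, taking the article's twenty-thousand-word figure as the budget (no API calls involved):

```python
def split_by_word_budget(text: str, budget: int = 20_000) -> list[str]:
    """Split text into chunks of at most `budget` words each.

    With GPT-4's roughly 20,000-word prompt capacity (the article's
    figure), half a book fits in a handful of chunks; GPT-3.5, at about
    one eighth of that, would need eight times as many.
    """
    words = text.split()
    return [
        " ".join(words[i:i + budget])
        for i in range(0, len(words), budget)
    ]

# A 50,000-word manuscript needs three GPT-4-sized chunks
# (20,000 + 20,000 + 10,000 words)...
manuscript = "word " * 50_000
chunks = split_by_word_budget(manuscript)
# ...but 20 chunks at a GPT-3.5-scale budget of 2,500 words.
legacy_chunks = split_by_word_budget(manuscript, budget=2_500)
```

Each chunk could then be summarized in turn, with the per-chunk summaries combined in a final pass.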

  • BETTER RESULTS ON COMPETITIVE EXAMS

GPT-4 is also smarter. OpenAI claims on its website that its newer creation can not only pass the bar exam, which is a very low bar to begin with, but can also score in the top 10 percent of test takers. GPT-3.5, on the other hand, scored in the bottom 10 percent.

  • FEWER HALLUCINATIONS

Sam's latest brainchild is also less of a liar than its predecessor. GPT-3.5 was famous among users for not only making up facts but also stubbornly sticking by them, something AI developers have called hallucination.

GPT-4 was specifically tested for these so-called hallucinations, or lies in simple words, and it scored a whopping 40 percent better. But there is still a great deal of room for improvement in this regard. Developers warn that the bot is still prone to lying owing to limitations in its training data, but at least it will do a lot less of it.

  3. NEW COLLABORATIONS

Microsoft was perhaps the first tech giant to realize the immense potential of ChatGPT and jumped on the bandwagon early on. The veteran corporation collaborated with the founders of ChatGPT to revamp its dying search engine, Bing. Sydney, as the artificially intelligent search engine is called, is going through beta testing, and a version is also available to a select number of users.

During the launch ceremony of GPT-4, OpenAI announced a new set of collaborations with other tech companies, starting with the language-learning service Duolingo, which will now incorporate the AI to enhance users' learning experience with the French and Spanish languages.

Moreover, the payment-processing service Stripe will now be using OpenAI technology to combat fraud on its platform.

And judging by Sam's tone during the launch event, it's obvious that OpenAI has its eyes set on several other joint projects in the future.

REFERENCES 

https://www.bbc.com/news/technology-64959346

https://www.theguardian.com/technology/2023/mar/14/chat-gpt-4-new-model

https://www.bemyeyes.com/blog/introducing-be-my-eyes-virtual-volunteer
