Understanding the ChatGPT Architecture: A Comprehensive Guide

Leading a new generation of AI-based tools, ChatGPT has emerged as the standout product of recent years. Outshining virtual assistants from major tech giants, such as Siri and Alexa, the conversational AI chatbot is built by OpenAI, a San Francisco-based startup. Elon Musk, Reid Hoffman, Peter Thiel, and Sam Altman were among the well-known tech figures who backed the company at its founding in 2015.

The AI chatbot is attracting even larger investments in 2023, with Microsoft pledging $10 billion to the startup and pushing its valuation to a whopping $29 billion. With more tech giants lining up to explore the AI tool, ChatGPT may pose a serious threat to Google’s $149 billion search engine business.

From generating code and creating original content to writing research paper abstracts and school essays, ChatGPT has taken almost every industry by storm. But before we get into its industry usage, let’s first understand the mechanism behind it. Read on to learn about the architectural details of this OpenAI-built tool.

Unleashing AI capabilities with ChatGPT

ChatGPT is built on the fundamentals of its sibling model, InstructGPT, also developed by OpenAI. InstructGPT, in turn, was built as an extension of the GPT-3 model.

GPT (Generative Pre-trained Transformer) is an LLM (Large Language Model) used to perform various NLP (Natural Language Processing) tasks such as translation, language modeling, question answering, code writing, and text summarization.

Initially launched as a free prototype on November 30, 2022, ChatGPT’s overnight success among content creators, students, and even Silicon Valley professionals led OpenAI to monetize the conversational chatbot’s immense popularity.

Although the free version is still available, the startup has launched ChatGPT Plus, based on the more advanced GPT-4 model, for a monthly subscription of $20. It offers a host of benefits to subscribers, including general access even during peak times, faster response times, and priority access to new features and improvements.

While InstructGPT is trained to follow instructions in prompts and give detailed responses, ChatGPT has been optimized to “answer follow-up questions, admit its mistakes, challenge incorrect premises, and even reject inappropriate requests.”

The ChatGPT build-up plan

Now evolved into its fourth generation of Large Language Model (LLM), the conversational AI chatbot was originally built on GPT-3, the third LLM generation. GPT-3 is a neural network, a model loosely inspired by how the human brain processes information. At the time of its release, it was one of the largest language models ever trained, with 175 billion parameters.

Understanding GPT

Generative Pre-trained Transformer, or GPT, is the brainchild of OpenAI, created to provide fast and detailed answers to user queries across virtually any topic or domain. To put it in simple terms:

G – ‘Generative’ means the model produces new text rather than merely classifying existing text.

P – ‘Pre-trained’ means the model has already been trained on a massive text corpus before it is put to use, so it does not need task-specific training from scratch.

T – ‘Transformer’ refers to the underlying neural network architecture, a machine learning (ML) model designed to process sequences of text.

The GPT-3 model relies on semi-supervised machine learning: it is pre-trained by combing through an immense data set of documents and resources written by humans over time.

Transformer – The “T” in ChatGPT

The transformer architecture was first introduced in a 2017 paper by Google researchers. It is built around self-attention and is a neural network architecture primarily used for NLP tasks such as text generation, translation, and summarization.

The original design comprises an encoder and a decoder, each consisting of multiple self-attention layers and feed-forward neural networks. Self-attention allows the model to weigh the importance of each token (roughly, a word or word fragment) in a sequence when generating outputs.
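To make the self-attention idea concrete, here is a minimal NumPy sketch (an illustrative toy with random weights, not OpenAI's implementation): each token's query is compared against every other token's key, and the resulting weights decide how much of each token's value flows into the output.

```python
# Minimal sketch of scaled dot-product self-attention with NumPy.
# Dimensions and weights are illustrative; real transformers use many
# attention heads and learned parameters.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; returns attended outputs."""
    q = x @ w_q                                   # queries
    k = x @ w_k                                   # keys
    v = x @ w_v                                   # values
    scores = q @ k.T / np.sqrt(k.shape[-1])       # pairwise token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                            # weighted mix of all tokens

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                           # e.g. a 4-token sentence
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)     # (4, 8)
```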

The ChatGPT working mechanism

The state-of-the-art AI program works differently from standard chatbots. The rule-based chatbots used in many platforms and applications match keywords or guess a user’s next words with little regard for context. ChatGPT, on the other hand, models the full input prompt and then generates, token by token, the words most likely to answer the user’s query based on its training data.

Pre-training the chatbot

Unlike models that rely solely on supervised training, where humans must label or verify every output, ChatGPT’s base model is pre-trained with a self-supervised method: it learns by predicting the next token in unlabeled text. This lets the AI program pick up a wide range of tasks from a single objective, and because the training is not tied to one specific input-output mapping, programmers can keep adding new text to the data set.

To hold and maintain human-like conversations, ChatGPT was initially trained on roughly 500 billion tokens, which lets the chatbot assign meaning to a text and predict the text that should follow.
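As a toy illustration of this next-token objective, the sketch below uses PyTorch with a hypothetical 100-word vocabulary and a single embedding-plus-linear layer standing in for a full transformer; the key point is that the targets are simply the input tokens shifted by one position.

```python
# Toy illustration of the self-supervised pre-training objective.
# The tiny vocabulary and stand-in model are assumptions for readability.
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32
token_ids = torch.tensor([5, 17, 42, 8, 99])       # a pretend 5-token sentence

embed = nn.Embedding(vocab_size, d_model)
lm_head = nn.Linear(d_model, vocab_size)            # stand-in for a full transformer

inputs, targets = token_ids[:-1], token_ids[1:]     # predict each next token
logits = lm_head(embed(inputs))                     # (4, vocab_size) scores
loss = nn.functional.cross_entropy(logits, targets)
print(loss.item())                                  # pre-training minimizes this loss
```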

Tokenization

This process breaks the input text, with all its words and sentences, into small tokens that the model can analyze numerically.

The training tokens come from a massive collection of human-written data, including documents, texts, books, essays, articles, archives, and more, sourced from the vast pool of human knowledge on the internet.
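For a quick, hands-on illustration, the snippet below uses OpenAI's open-source tiktoken library (our choice for this example; the article does not specify ChatGPT's exact tokenizer) to turn a sentence into token IDs and back.

```python
# Tokenization sketch using tiktoken's published "cl100k_base" encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "ChatGPT breaks text into tokens before processing it."
token_ids = enc.encode(text)

print(token_ids)                      # a list of integer token IDs
print(len(token_ids), "tokens")       # usually fewer tokens than characters
print(enc.decode(token_ids))          # decoding recovers the original text
```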

Transforming the input

The transformer model, as mentioned earlier, is a neural network consisting of multiple stacked layers. Each layer primarily has two sub-layers: a) the self-attention layer and b) the feed-forward layer.

While the self-attention layer weighs the importance of each word or token in a sequence, the feed-forward layer applies a position-wise, non-linear transformation to each token’s representation.
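Putting the two sub-layers together, here is a simplified, illustrative transformer block in PyTorch; the dimensions, activation choice, and layer-norm placement are assumptions for readability rather than GPT-3's exact configuration.

```python
# Illustrative transformer block: self-attention + feed-forward sub-layers,
# each wrapped in a residual connection and layer normalization.
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4, d_ff=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)     # self-attention: weigh every token
        x = self.norm1(x + attn_out)         # residual connection + layer norm
        x = self.norm2(x + self.ff(x))       # position-wise feed-forward sub-layer
        return x

block = TransformerBlock()
tokens = torch.randn(1, 10, 64)              # batch of 1, 10 tokens, 64-dim embeddings
print(block(tokens).shape)                   # torch.Size([1, 10, 64])
```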

The deep-learning neural network then identifies patterns and relationships in the text to create human-like responses.

RLHF – The backbone of ChatGPT

Reinforcement Learning from Human Feedback (RLHF) is a technique for refining and optimizing language models to deliver a better user experience. It involves continually training multiple models at different stages.

Apart from pre-training the model, it includes training a reward model and fine-tuning the LM.

Pre-training the LM: The pre-trained model serves as the starting point; it can optionally be fine-tuned on additional text so that it understands language structure and patterns better.

Training the reward model: In this stage, a separate reward model is trained on a set of prompt-response pairs. Human annotators grade or rank the responses based on coherence, relevance, fluency, and other criteria, and the reward model learns to predict those human preferences.

The feedback captured by the reward model is then incorporated into the learning process to optimize the original LM (language model), further improving its performance in line with human preferences.

Fine-tuning the LM: In this stage, the parameters of the initial LM are fine-tuned using a reinforcement learning algorithm such as PPO (Proximal Policy Optimization).

The policy (the language model being fine-tuned) generates an output for a given prompt, the reward model assigns that output a reward, and PPO then uses the reward to update the policy’s parameters. A simplified sketch of both stages follows below.
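To ground these stages, the sketch below (PyTorch, with tiny stand-in "models" and made-up numbers that are purely illustrative, not OpenAI's code) shows the pairwise ranking loss commonly used to train reward models, followed by a single clipped-surrogate update of the kind PPO performs.

```python
# Highly simplified sketch of the RLHF fine-tuning stages.
import torch
import torch.nn as nn

# --- Reward model: learn to score the response humans ranked higher.
reward_model = nn.Linear(16, 1)                        # stand-in scoring network
chosen, rejected = torch.randn(16), torch.randn(16)    # features of two responses
r_chosen, r_rejected = reward_model(chosen), reward_model(rejected)
reward_loss = -nn.functional.logsigmoid(r_chosen - r_rejected)   # pairwise ranking loss

# --- Policy fine-tuning: one clipped-surrogate (PPO-style) step on a single sample.
log_prob_new = torch.tensor(-1.8, requires_grad=True)  # log p(response) under current policy
log_prob_old = torch.tensor(-2.0)                      # log p(response) when it was sampled
advantage = torch.tensor(0.7)                          # reward-derived advantage estimate

ratio = torch.exp(log_prob_new - log_prob_old)
clipped = torch.clamp(ratio, 1 - 0.2, 1 + 0.2)
ppo_loss = -torch.min(ratio * advantage, clipped * advantage)    # maximize clipped objective

print(reward_loss.item(), ppo_loss.item())
```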


Source: https://openai.com/blog/chatgpt/

Wrapping up

Boasting 1 million users within the first week of its launch, ChatGPT is projected to earn a whopping $200 million in 2023 and $1 billion in 2024. The AI chatbot’s many strengths, including automation, personalization, multi-language support, and the ability to scale, pave the way for companies to turn the long-cherished dream of capable conversational AI into real returns.

However, its overnight fame, setting the record for reaching 100 million users faster than any prior application, comes with its own challenges. The breadth of its capabilities is still being explored, and their depth has yet to be tested. It remains to be seen whether ChatGPT will serve as a leading example for future advancements across industries.

We hope this blog helped you understand the mechanism behind ChatGPT. In our next blog, we will talk about the industry-specific uses of this tool. Till then, keep learning!
