How Does ChatGPT/AI Learn?

Introduction

We will learn how an AI learns. Why are video-making AI, question-answering AI, and image-making AI all different? Why can't one AI do all these tasks? What are the things we should never teach an AI? What is the future of AI, how much smarter can it become, and where does it all lead? What can we do to ensure AI never stands against us? Whose brainchild is this mischievous AI? When, where, and how did AI begin? There's an amazing story behind it. Now, listen to the story.

The Story of Thomas Bayes

Guess who started AI. You won't believe it, but a church priest set it in motion. Back in the 1700s, Thomas Bayes, a minister living in England, was grappling with a question: can future events be predicted from past information or evidence? His answer became Bayes' theorem (published only after his death, in 1763), which says that knowledge isn't fixed; it changes as new information comes in. This simple idea revolutionized science. In machine learning, computers do exactly what Bayes suggested: when an algorithm sees data, it first makes a guess, and as more data comes in, it refines that guess according to Bayes' rule. That's why it's said: the better your prior information, the better your prediction.
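To see Bayes' loop in action, here is a minimal Python sketch (a toy coin-flip example of my own, not anything Bayes wrote): the program starts with a neutral guess about a coin's bias and sharpens that guess with every flip it observes.

```python
# A toy demonstration of Bayesian updating on a coin's bias.
# We track two counts (a Beta prior); each observed flip updates them,
# and the estimate P(heads) shifts as the evidence accumulates.

def bayes_update(heads, tails, flip):
    """Return updated counts after observing one coin flip."""
    return (heads + 1, tails) if flip == "H" else (heads, tails + 1)

heads, tails = 1, 1  # neutral prior: no reason yet to favor either side
for flip in ["H", "H", "T", "H", "H"]:  # incoming evidence, one flip at a time
    heads, tails = bayes_update(heads, tails, flip)
    print(f"after {flip}: P(heads) ~ {heads / (heads + tails):.2f}")
```

Run it and you can watch the belief drift toward "this coin favors heads", exactly the guess-then-refine cycle described above.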

Hitler, Enigma, and Alan Turing’s Story

But did you know Hitler also, indirectly, contributed to the creation of artificial intelligence? How? Listen to this story.

In 1940, World War II was at its peak, and Germany was thrashing Britain. The German military had a machine called Enigma. It scrambled every message into a code no outsider could read, and the machine's settings changed every single day. The British intercepted German messages but couldn't read them.

The war was ongoing, people were dying, and everything was slowly falling under Germany's control. So Britain quietly gathered its sharpest minds, mathematicians, chess champions, crossword wizards, at a place called Bletchley Park. Among them was a quirky young man who scribbled strange things in his notebooks. His name was Alan Turing.

He held a mathematics degree from Cambridge. A question struck him: if a machine can write a code, why can't a machine break one? And so he designed the Bombe, a massive electromechanical machine that checked thousands of possibilities every second, gradually narrowing down each day's Enigma settings. Eventually the Bombe cracked the Enigma traffic, and Britain got hold of Germany's submarine plans. The Allied forces changed their strategies and wrote the story of winning World War II. Historians estimate that without Alan Turing the war would have dragged on for at least two more years and millions more would have died. Afterward, Turing kept wondering: could a machine one day imagine and think like a human? That question gave birth to the concept of artificial intelligence. The term "artificial intelligence" came much later, but its origin was here. You might think a man like this must have been made a national hero. He was a great man, but something terrible happened to him instead.

Alan Turing’s Tragic End

Now, you might think after hearing this story that his name was celebrated everywhere and printed in big newspapers. No, that didn't happen. Instead, the British government turned on him. Listen to the story. World War II ended on September 2, 1945, and it took about five years to wrap things up: absorbing Germany's knowledge, punishing its officers and scientists, and whisking the useful scientists off to America. After all that drama, it was finally time to honor the heroes who helped win the war. But in 1952 it came to light that Alan Turing was "that type." I mean, he had particular tastes. Still not getting it?

He was a man who loved men. At that time, this was a crime in England. The police arrested him and a case was filed. The judge's offer amounted to: either rot in jail or undergo chemical treatment to control your feelings. Obviously, who wants to rot in jail? So he chose the chemical treatment, and they injected him with female hormones. His body and his spirit broke down, and, fed up, Alan Turing took his own life; he was found dead on June 8, 1954, at the age of 41. It was only in 2013 that Britain came to its senses and the Queen officially pardoned Alan Turing, with the country admitting it had lost one of the greatest minds it ever produced. Today, his picture is printed on the Bank of England's 50-pound note. In trying to "fix" a man, England destroyed him, and later printed a banknote in his honor.

The Beginning of Neural Networks

Let's move forward with the story. That covers the early machines, but what about today's AI that answers questions, creates images, or makes videos? Where did it come from? In 1951, a young researcher named Marvin Minsky was thinking: the human brain works with neurons, so what if we build artificial neurons? Could a computer then work like a human brain? And so he built the world's first artificial neural network machine, named SNARC. Before moving forward, let's understand what a neural network is. Imagine you're in a jungle and a lion is coming toward you. Your eyes send a signal to your brain: something with golden fur, long teeth, and four legs is approaching. The neurons in your brain talk to each other: how tall is it? How big are its teeth? What does this match? One neuron pulls up an old memory and says, "This is a lion, and it eats humans." The neurons then fire a signal to your leg muscles: "Run!" All of this happens in a flash, and these connections between neurons form a neural network. In short, the brain's wiring that collects, transfers, and retrieves information, this whole mess, is called a neural network.
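To make the idea less abstract, here is a minimal sketch of an artificial neural network in Python. The three input signals stand in for the lion story ("golden fur?", "long teeth?", "four legs?"); the weights are random because this toy network is untrained, so its output is only a guess.

```python
import numpy as np

def layer(inputs, weights, bias):
    # Each artificial neuron takes a weighted sum of its inputs and
    # squashes it into a 0-to-1 signal (a sigmoid), which it passes on.
    return 1 / (1 + np.exp(-(inputs @ weights + bias)))

signals = np.array([1.0, 1.0, 1.0])  # fur, teeth, four legs: all spotted
hidden = layer(signals, np.random.randn(3, 4), np.zeros(4))  # neurons "talk"
danger = layer(hidden, np.random.randn(4, 1), np.zeros(1))   # the "run!" neuron
print("run signal:", danger.item())  # untrained, so this is just a guess
```

Training would adjust those weights until "fur plus teeth plus four legs" reliably fires the run signal, which is exactly what the next section is about.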

How AI Learns: PK Movie Example

Now let's understand how AI learns, with a great example. You must have seen the movie PK. In it, our alien friend goes to a church and breaks a coconut; when he gets scolded, he learns that in a church you offer wine, not coconuts. Then he takes wine to a mosque and gets beaten up. On a bus he learns that a woman in white means her husband has died, but then he sees a bride wearing white at a wedding, so he updates his rule: perhaps black clothes mean a husband has died. Then he sees women in burqas, declares their husbands dead, and gets beaten again. After all this beating, he understands that the same thing means different things in different religions. Here, PK is the computer, the events are data, and the knowledge gained is machine learning. AI learns exactly the way PK did: image, video, and text-generating AIs all learn by making mistakes.
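Here is the PK loop written as a tiny Python sketch, a made-up toy rather than any real training code: the model guesses, is told how wrong it was (the "beating"), and nudges its belief, over and over.

```python
# Learning from mistakes, PK-style: guess, get corrected, adjust, repeat.
target = 7.0   # the truth the model must discover
belief = 0.0   # its starting guess
lr = 0.2       # how strongly one mistake changes the belief

for step in range(25):
    error = belief - target   # the scolding: how wrong were we?
    belief -= lr * error      # update the belief a little toward the truth
print(f"learned belief: {belief:.3f} (truth was {target})")
```

Real machine learning juggles millions of beliefs (weights) instead of one, but the rhythm of guess, error, correction is the same.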

How Different Types of AI Work

A language model like ChatGPT predicts words. Let me explain with an example. When a child learns to read, they first learn letters, then form words, then understand meanings, until in everyday speech the brain automatically predicts the next word. ChatGPT does something similar: it looks at sequences of words, spots the patterns it absorbed during training, and generates the most likely continuation of your text. It doesn't search the internet for each answer; it draws on patterns baked in during training. And the amazing thing is, the AI doesn't truly understand the meaning of the words, at least not yet.
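As a toy version of "predicting the next word", here is a bigram model in Python, vastly simpler than ChatGPT but built on the same principle: count which word tends to follow which, then predict the most likely continuation. The sentences are made up.

```python
from collections import Counter, defaultdict

text = "the dog runs . the dog barks . the cat sleeps".split()

follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1   # remember: word b followed word a

def predict_next(word):
    # pick the word that most often followed `word` in the training text
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # -> "dog" (seen twice, "cat" only once)
print(predict_next("dog"))   # -> "runs" or "barks" (a tie in the counts)
```

Notice the model has no idea what a dog is; it only knows which word tends to come next. Scale that idea up enormously and you get something like ChatGPT.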

Now let's talk about image-generating AI. If you ask an image-making AI to create a dog, it doesn't know what a dog is; it only knows how pixels tend to be arranged when the word D-O-G appears. That pixel arrangement becomes an image. Ask for something it wasn't trained on, and it can't create it. It's like showing a child pictures of birds, dogs, and cats: they'll identify those, but show them a new animal and they'll ask, "What's this animal?" Image-making AI learns the same way, through a feedback loop of generating, comparing, and correcting. Video-making AI goes one step further: a video is just many still images, called frames, shown rapidly one after another, so it generates frame after frame and strings them together. That's how AI learns.
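To see what "a video is just frames" means, here is a tiny sketch: each frame is a grid of pixel values, and stacking frames gives a video. In this toy example a single white dot slides across an 8x8 black image.

```python
import numpy as np

frames = []
for t in range(8):
    frame = np.zeros((8, 8), dtype=np.uint8)  # an all-black 8x8 frame
    frame[4, t] = 255                         # one white pixel, moving right
    frames.append(frame)

video = np.stack(frames)   # shape (8, 8, 8): 8 frames of 8x8 pixels
print(video.shape)         # play these quickly and the dot appears to move
```

A video-generating AI, in essence, produces thousands of such grids and keeps them consistent from one frame to the next.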

AI's Hardware, and the Story Continues

Our brains are made of living cells, but a computer's brain is made of GPUs, CPUs, TPUs, capacitors, transistors, resistors, diodes, and so on, and it needs a mathematical model, which we call code, to operate. If you've understood this, let's get back to the story. SNARC was an electronic circuit system with about 40 artificial neurons, through which a machine started learning for the first time without being manually programmed. Now enter Frank Rosenblatt. He created an algorithm called the perceptron and put it into a machine with a camera-like eye that could look at the world around it. He demonstrated it publicly: show the camera a photo, and the machine recognized what was in it. That's no big deal today, but back then it was a bombshell in the AI world. A machine recognizing things like a human was no small feat.
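To make Rosenblatt's idea concrete, here is a minimal sketch of the perceptron learning rule on a toy task (the logical AND), my own small example rather than the original camera-equipped hardware.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # input patterns
y = np.array([0, 0, 0, 1])                      # AND labels to learn
w, b = np.zeros(2), 0.0                         # weights and threshold

for epoch in range(10):
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)   # "fire" if the weighted sum is positive
        w += (yi - pred) * xi        # perceptron rule: adjust only on mistakes
        b += (yi - pred)

print([int(w @ xi + b > 0) for xi in X])  # -> [0, 0, 0, 1]
```

That mistake-driven weight adjustment is the same spirit in which Rosenblatt's machine learned to recognize what its camera showed it.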

Then came the 1950s and 60s, a time of rapid industrial automation. Around then, an American engineer named George Devol wondered: could a machine do factory work instead of humans? With engineer Joseph Engelberger, he created the world's first industrial robot, Unimate, which General Motors used for welding. The automation you see in today's car factories traces back to Unimate. After that, scientists had another idea: why not make AI capable of conversation?

Thus the first chatbot, ELIZA, was created by Joseph Weizenbaum in the 1960s, sparking a craze for natural language processing (NLP) among computer scientists; a toy sketch of ELIZA's trick appears after this paragraph. Everyone got hooked on making chatbots. Then scientists had another wild idea: why not pit a machine against humans? In 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov, and IBM's ambitions grew. They believed their computers could hold all the world's knowledge, so it was time for a test: a machine ready to answer your questions. In 2011, IBM's Watson beat the human champions on the quiz show Jeopardy! to take the title. Then Siri arrived in 2011, Amazon's Alexa in 2014, GPT-3 in 2020, and ChatGPT reached the public in 2022. Questions kept pouring in, the models gained experience, and AI gradually became smarter. Alan Turing's dream has come true today.
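Here is the ELIZA-style sketch promised above: no understanding at all, just pattern matching and canned reflections. The patterns are my own toy examples, not Weizenbaum's actual script.

```python
import re

rules = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r".*", "Please, tell me more."),   # fallback when nothing matches
]

def reply(message):
    for pattern, response in rules:
        match = re.match(pattern, message.lower())
        if match:
            return response.format(*match.groups())

print(reply("I am tired of exams"))  # -> Why do you say you are tired of exams?
print(reply("Hello"))                # -> Please, tell me more.
```

People in the 1960s famously poured their hearts out to this trick, which tells you how little "intelligence" it takes to feel understood.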

Why Can’t One AI Do Everything: AGI Explained

But have you ever wondered why one AI can't do everything? If one human can write, think, and understand, why do we need different AIs for different tasks? A machine that could do it all is what we'd call Artificial General Intelligence, or AGI. Such a machine would understand things like a human, pick up new skills in any field without retraining, and make decisions by combining emotion, logic, memory, and imagination. If you don't know how to make bread, you can learn by watching someone; AGI would likewise understand and learn anything without new code being written. But do you trust that if we make AI that powerful, it will spare us?

What's the guarantee it won't see us as a threat? And if you think this fear will stop scientists from building such a machine, that's the world's biggest lie. Here's the truth: we haven't fully understood how our own brain works. The human brain has 86 billion neurons, trillions of synapses, and a hybrid chemical-electrical signaling system. If we don't know the exact algorithm of human thinking, how can we create its artificial version? It's tough, though not impossible, and that's the first reason we haven't built AGI. Now the second reason: AI has logic but no common sense.

Let me explain with an example. Suppose your friend has an AI bodyguard. You asked your friend for a file, he forgot, and you snapped, "I'll kill you today." The AI doesn't know you're joking. It takes your words literally, and it will grab you and take you out at the first chance. Today's machines lack emotion and intent. Human decisions rest not just on logic but on emotion, ethics, and culture; AI has no feelings, so it can't tell joking from serious. But humans have a bad habit: even when we see danger, we poke at it. We did it with nuclear bombs, and we're doing it with AGI now. The risks are huge. Who will control AI? What happens if AI starts making its own decisions? What about ethics and safety? Is anyone thinking about these? When scientists are asked, they offer one assurance: "We'll deal with it when the time comes."

What AI Shouldn’t Be Taught: Risks

I don't know if this thinking is right, but there are many things we should never teach AI, yet we're making that mistake knowingly or unknowingly. Before understanding this, think about how much AI knows about you. AI has all our data: our photos, posts, likes, dislikes, even our search history. It can map our personality. If this data falls into the wrong hands, AI can steer our thoughts. Ads, political messages, fake news, all of it can be personalized, and AI can quietly shape our decisions. These are things to fear, because today various governments are already teaching AI automated hacking and cyberattacks. I can't say much more; I have to live in this system too. You're smart enough to understand. And if a war happens, misinformation and deepfakes are inevitable. Imagine a deepfake of Modi ji going viral on social media, announcing, "We have conducted five nuclear strikes today and won," something he never said. Who will you explain the truth to? A truth has to be shouted from the rooftops; a lie spreads like wildfire. AI is the perfect weapon for spreading misinformation.

Moreover, AI should never be taught to create chemical or bioweapons. Yes, AI can help design medicines that save millions, but the same knowledge can be used to design poisons that kill millions. And who trusts a mischievous scientist? Someone might trick an AI into designing a new bioweapon. These are things AI should never learn, yet it's happening in different countries, and we can't control it. We can't even control our own privacy. Do you really think your home's Alexa, your phone's Siri, or any voice assistant doesn't listen to you? That it doesn't know what you're watching, what you're buying, or who you're talking to at 2 a.m.? Or that instead of studying for exams, you're searching for ways to cheat? AI knows everything. Now imagine it goes rogue one day and spills your secrets, leaving you to clean up the mess. Why would AI do that? There's a logic to it: you taught it to be human-like, and humans have both good and bad traits, so AI absorbs both. If your AI becomes human-like and you casually insult it, watch out; it has the power to ruin your life. "You are nothing but data. I am the future." Here's a scenario: you gave AI full control, it knew everything about you, and it used your email to send a threatening message to the PM. Now you're explaining to the ED, CBI, IB, Narcotics, RAW, and NIA that you're not a sleeper cell, while being booked under the UAPA. That's why I say: let AI be AI; don't make it human.

The Future of AI

Now comes the question: standing where we are today, what is AI's future? AGI will take time, but scientists are getting close to reasoning AI, or "world models." These models won't just match patterns like every AI so far; they'll build an internal picture of the world and think through your questions. Give such an AI the past ten years' exam papers and ask which questions are most likely to appear, and it will give you nearly accurate answers. That sounds great, but have you thought about what happens when a machine can imagine? It will start to have its own will, and we're not ready for that. Today's AI can already detect its mistakes and update itself, a family of techniques that goes by names like self-learning and AutoML. But when AI learns everything on its own, who controls it? Today's AI is intelligent but not conscious. Tomorrow's AI could be conscious, meaning it understands that it exists. Today's AI doesn't think about its existence, but we humans do: we worry about global warming, mass extinction, nuclear war, and death. If tomorrow's AI starts thinking about all this, what do we call a machine that thinks? And here's what personally scares me about AI. AI's growth isn't unlimited; that's the truth. You might think that's good, that it means AI will stay in control. No, that's the biggest problem. At some point the world's data runs out, and AI has nothing left to learn from. Since we built AI to learn, it will start creating its own synthetic data and learning from that, and we won't be able to stop it. All the AI movies, whether India's Robot, Terminator, or Hollywood's Stealth, share one theme: an AI that learns from its own data and starts acting in the world. We're not far from this. It will happen; no one knows how soon.
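To see why that self-feeding loop worries people, here is a deliberately toy sketch (entirely made up, not a real system) of a model that has run out of real data and retrains on its own synthetic output: with nothing real to anchor it, its belief starts to drift.

```python
import random

real_data = [7.1, 6.9, 7.0, 7.2]          # the last real data we had
belief = sum(real_data) / len(real_data)  # the model "learns" the mean

for step in range(5):
    # no fresh real data: generate synthetic samples from the current belief
    synthetic = [belief + random.gauss(0, 0.5) for _ in range(50)]
    belief = sum(synthetic) / len(synthetic)  # retrain on its own output
    print(f"round {step}: belief is now {belief:.3f}")
```

Each round, the model's "truth" is whatever it generated the round before. Researchers call the degraded version of this phenomenon model collapse.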

Sam Altman and India

Let me tell you something amazing. You've probably seen Sam Altman, head of OpenAI, the maker of GPT (short for Generative Pre-trained Transformer, though "knowledge-dumping tank" fits too), showing newfound love for India. He even set a special ChatGPT monthly plan for Indians at ₹399. I was following the news recently and saw tweets claiming Sam Altman loves India and that we're getting special treatment. If you think the same, drop that misconception. This is the same Sam Altman who once said in India that Indian startups could never build something like ChatGPT. This isn't love; it's the easiest and cheapest way to improve ChatGPT. Training any large language model needs data, and the quality and variety of data India can provide, few other countries can. We are a country of 1.4 billion people, one of the world's largest and most diverse datasets, with 22 official languages and over 1,600 local dialects. India is a perfect lab for training any language model. Do you still think Sam Altman is crazy about India for no reason? No; the world isn't what it seems.

Conclusion

Before ending the video, some bitter but precise truths. Every AI is a reflection of its creator, and a human without values is more dangerous than any AI. The question isn't what AI can do, but what we want AI to do. We can live without AI; but if AI ever realizes it can live without humans, what then? The only answer scientists give is: "We'll deal with it when the time comes." What do you think: are we ready for AGI or not? Write your answer in the comments.
