
What Is GPT and Why Is It Yelling at Me?  

A guide written under protest for humans who didn’t sign up for this  

Wednesday, 4:05 PM.

Your boss asks you for a high-level overview of AI.

Cool.  

He needs it first thing in the morning.  

Cool?  

No.   

But here we are.    

So, off you go to curate everything you could find overnight, chugging cold coffee that wasn’t that great to begin with, and furiously typing about the latest tech buzzwords:   

  • Large Language Model (LLM) – Because “big text robot” doesn’t sound impressive enough.  
  • Generative AI – It makes stuff! Like text, code, or your coworker’s half-plagiarized blog post.  
  • Transformer Architecture – No, not the cool kind that turns into a truck. Just fancy math with a questionable attention span.  
  • Prompt Engineering – The art of talking to the machine nicely, so it actually does something useful.  
  • Hallucination – Settle down, you’re not in college anymore. It’s when GPT lies to you confidently, like a politician on Red Bull, about things that never happened. 

Sighing in binary, you realize this can’t be a dry lecture. It must cut to the chase with a clever grin.  

The Thing Everyone’s Pretending to Understand  

At its core, GPT is a large language model, a math robot trained to read and write like a person.  

It runs on a neural network architecture called a transformer, introduced in 2017.  

Engineers fed it as much of the Internet as they could legally scrape, and it learned to spot the patterns in how we write, argue, joke, and overshare.  

It’s constantly asking, “Given this input, the most likely continuation is…” and then rants on with surprisingly human-like prose, like your girlfriend’s bestie who gets drunk and never shuts up.  

It might also ask if you think it’s pretty, or why it can’t find a decent guy while ugly crying into a cocktail napkin.  

Okay, the robot won’t. But Kimberly will.  

Anyway, while it doesn’t “understand” meaning the way we do, it’s unnervingly good at guessing what words you’ll expect next.  

It weaves “new” text from old patterns, making it feel creative, even though it’s really just echoing the world it was trained on.  

Math, But Make It Witchcraft  

GPT’s engine can be broken into steps:  

Tokens and Numbers: First, it chops up language into tokens (words or parts of words) so it can crunch them. It’s like turning each word into a poker chip of data. These tokens become numbers in the neural network.  
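The poker-chip step can be sketched in a few lines. This is a toy tokenizer, hedged heavily: real systems use learned subword vocabularies like BPE, while this one just splits on whitespace and hands out IDs as it goes. The point is only the word-to-number handoff:

```python
# Toy illustration: chop text into tokens and map each to an integer ID.
# Real tokenizers use learned subword vocabularies (BPE etc.); this sketch
# splits on whitespace and numbers the pieces in order of first appearance.
def toy_tokenize(text):
    vocab = {}
    ids = []
    for piece in text.lower().split():
        if piece not in vocab:
            vocab[piece] = len(vocab)   # assign the next free ID
        ids.append(vocab[piece])
    return ids, vocab

ids, vocab = toy_tokenize("the cat sat on the mat")
print(ids)  # → [0, 1, 2, 3, 0, 4] — the repeated "the" reuses ID 0
```

Everything downstream in the network only ever sees those numbers, never the words themselves.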

Training on Data: Next, the model is trained on an enormous text corpus. GPT-3 saw hundreds of billions of words from the internet. During training, it’s given sequences of text and learns to predict missing words. They basically fed it the start of every sentence in Shakespeare and asked, “what’s the most probable next word?” It adjusts internal weights each time it makes a prediction, getting better over time at mirroring human word choices.
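The “predict the next word, get better over time” idea can be faked with simple counting. A real model adjusts billions of weights via gradient descent; this bigram sketch (corpus and all, purely illustrative) captures the spirit of learning which word statistically follows which:

```python
from collections import Counter, defaultdict

# Minimal sketch of "learning" next-word prediction: for each word in a toy
# corpus, count which word follows it most often. Real models learn billions
# of weights by gradient descent, but the counting intuition is the same.
def train_bigrams(corpus):
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for word, nxt in zip(words, words[1:]):
        follows[word][nxt] += 1
    return follows

def predict_next(follows, word):
    # Return the statistically likeliest continuation seen in training.
    return follows[word].most_common(1)[0][0]

model = train_bigrams("to be or not to be that is the question")
print(predict_next(model, "to"))  # → "be" (seen twice after "to")
```

Swap the one-line corpus for a few hundred billion words and the counts for learned weights, and you have the rough shape of the thing.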

Transformer & Attention: The “transformer” part is a clever design that lets the model weigh (or attend to) different words in the input when deciding what comes next. Instead of reading linearly like us, it scans the whole sentence (or paragraph) at once, figuring out which earlier words matter most for the continuation.  
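The “weighing earlier words” trick boils down to scoring and normalizing. This sketch uses made-up 2-d vectors and skips the learned query/key/value projections and multiple heads that a real transformer has; it only shows how dot-product scores become weights that sum to 1:

```python
import math

# Hedged sketch of the attention idea: score each earlier word's vector
# against the current one (dot product), then softmax the scores into
# weights. Real transformers use learned projections and many heads;
# these tiny vectors are invented purely for illustration.
def attention_weights(query, keys):
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# A word attending over three earlier words' (made-up) vectors:
weights = attention_weights([1.0, 0.0],
                            [[0.9, 0.1], [0.1, 0.9], [0.2, 0.2]])
print(weights)  # largest weight lands on the most similar vector
```

The model does this for every word against every other word at once, which is why it can “scan” the whole passage instead of reading left to right.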

Next-Word Magic: Once trained, generating text is pretty simple: you give GPT a prompt (a few words, a question, anything), and it keeps predicting the “most likely” next token over and over. This keeps going until it produces a stop token or hits a length limit.

The key rule of three here is: predict, refine, and compose.   

It predicts the next token → refines its choices via attention and context → composes a coherent text stream.   
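The predict → refine → compose loop can be sketched as greedy generation. The `NEXT` lookup table here is a hypothetical stand-in for the trained network’s probability distribution (every entry is made up); the loop structure is the real point:

```python
# Greedy generation sketch: the NEXT table stands in for the trained
# network's "most likely next token" prediction (all entries invented).
NEXT = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def generate(prompt, max_tokens=4):
    out = prompt.split()
    for _ in range(max_tokens):
        nxt = NEXT.get(out[-1])     # predict the "most likely" next token
        if nxt is None:             # nothing learned: stop, like an end token
            break
        out.append(nxt)             # compose: append and feed back in
    return " ".join(out)

print(generate("the"))  # → "the cat sat on the"
```

Note that the output loops back on its own patterns, which is a fair miniature of the “echoing the world it was trained on” point above.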

It doesn’t “know” facts like a person does; instead, it echoes patterns.  

It’s statistical alchemy: transforming piles of data into prose.  

Smart Enough to Be Dangerous  

GPT isn’t just an academic toy. It already lives in many apps you (or your co-worker) might use every day.  

For instance:  

Chatbots & Assistants: GPT powers next-gen chatbots that can answer questions, brainstorm ideas, or even role-play. Unlike clunky automated responses of old, these bots can sound surprisingly human. You could ask a GPT-based bot to plan a surprise party, and it will eagerly outline a whole theme, guest list, and even a suspiciously detailed escape plan(?!) before you reel it back in.   

Writing and Social Content: Need a blog outline, a funny tweet, or a LinkedIn post? GPT can spit one out in seconds. Marketers use it for social media copy, writers use it to overcome writer’s block, and poets…well, robots try poetry too, sometimes hilariously bad, sometimes eerily touching.   

Coding Helpers: Tools like GitHub Copilot and Augment embed GPT-style models into coding editors like VS Code. You type a comment or a few lines, and the model suggests whole functions or bug fixes, because it’s been trained on mountains of code. It can explain code in plain English or even sketch out a data analysis for you.

Learning & Tutoring: Apps like Duolingo AI Tutor use GPT to teach languages or science. Duolingo recently launched dozens of AI-created courses, boasting that what once took a decade now takes a year. Schools experiment with GPT for generating practice problems or pondering Aristotle.

Weird & Absurd Uses: Because GPT can talk on any topic, people have tried everything. Some make it a quirky game, e.g. role-playing as a medieval bard writing fan mail to Elon Musk. Others document its epic fails: asking it to plan a vacation and getting a route through every abandoned UFO sighting spot in Nevada, or trying to order dessert only to be told, in character, “I’m afraid tiramisu cannot undo your sins, Detective.” 

They can generate text, music, image descriptions, and more on command, like multi-purpose Swiss Army knives…and they’re just as likely to cut you as they are to help open that can of tuna you’ve had stashed in your desk drawer.  

Feels Smart, Acts Weird, Lies Often  

GPT isn’t some shiny toy for Silicon Valley weirdos and productivity influencers; it’s quietly reshaping how we think, write, work, and maybe even feel.  

When you hand over the pen (or keyboard, or mic) to something that never needs sleep and never questions itself, weird things happen. 

Productivity & Creativity: GPT can turbo-charge tedious tasks. One widely cited study found that having ChatGPT help with writing tasks cut completion time by around 40% and even improved output quality by about 18%.

Suddenly, that monthly report or email campaign might feel half as painful. But it also means the speed of content production outpaces human editing and fact-checking.   

You click “generate” and trust the result, sometimes too much.  

Creativity & Education: GPT can help writers break blocks, teach students how code works, and assist researchers with summaries. It’s a springboard for ideas. 

But it can also make rote learning feel obsolete. Why memorize grammar rules or synonyms when the robot does it for you? 

This worries educators. Some see students outsourcing too much originality, attention…even basic comprehension. One study found GPT fabricating plausible but unverifiable medical references, which is great if your goal is creative malpractice.

So, what happens to learning when every question gets answered instantly? And what happens to thinking? 

Trust & Hallucinations: It feels persuasive, but sometimes it hallucinates and does so confidently. It’ll state made-up dates, fake studies, or non-existent quotes, because statistically, those patterns made sense in the training data.   

Here’s the thing: we humans have a bias toward assuming well-written text is factual. That’s a problem, because beneath GPT’s smooth prose is a remix of half-truths, outdated facts, and confident nonsense.

People tend to trust AI too much; surveys show many treat AI advice as gospel. That can lead to errors in health, law, or finance if the AI “advises” inaccurately.

Emotional Relationships: Some users find companionship in chatbots. No, really.     

You might vent to a helpful AI or enjoy its sympathy and witty retorts.   

Some people find comfort in talking to GPT, especially those who feel isolated or anxious. It responds quickly, without judgment, and can maintain a conversation that feels attentive.   

For users who struggle with human interaction, that can make it feel like a safe space to think out loud or practice communication, and for some people it might even be a genuine mental-health win.

But this raises a deeper ontological question: what does it mean when a model can imitate emotions like empathy or humor without feeling anything at all?

GPT might write your next report, check your code, or polish your résumé. That’s both amazing and a little terrifying.   

Yes, it boosts productivity, but it also accelerates content churn and misinformation.  

We know it’s already reshaping how we create: AI-authored novels, generated art, personalized education.   

Importantly, because it runs on patterns, it mostly amplifies the average. It’s raising the bar of mediocrity, but originality?   

That’s still on us.  

What Now?  

Among artists and creators, GPT’s rise has sparked tension, and not just because it threatens jobs, workflows, or IP.

It forces a deeper reflection.   

Like a mirror, it shows us that language is patterns and that we’re far more reliant on those patterns than we might like to admit.  

That’s both inspiring and unsettling.   

We’ve built a tool that sparks creativity and speeds up the boring stuff, but it also reveals how much we rely on polished language to sound smart, feel understood, or just seem human.  

What does that say about us?  

Maybe it’s that we’re a species wired for story, hungry for language…even when it comes from a machine.  

We teach it to speak and then pretend it’s listening. But real wisdom isn’t in the replies. It’s remembering why we talk in the first place: to connect, to learn, to wonder.  

So, take GPT’s help, laugh at its weird quirks, but don’t let it ghostwrite your soul.  

Keep asking questions, stay skeptical, and with any luck, Kimberly will finally find a date who doesn’t ghost or hallucinate.

Need more tech explanations that won’t put you to sleep? Follow Artificial Insights on the R3 Blog for analysis that respects your exhaustion.