AI in Plain English

You hear about Artificial Intelligence everywhere, but the jargon can be confusing. Here is a breakdown of the key concepts, explained simply.

What is AI?

Think of AI as a computer system that can do tasks that usually require human intelligence. This includes things like recognizing a face in a photo, understanding spoken commands, or playing chess.

Types of AI

Most AI we use today is called "Narrow AI". It is very good at one specific task (like recommending movies or filtering spam) but cannot do anything else. It isn't actually "smart" like a human.

The science fiction concept of AI that thinks and feels like a human is called "Artificial General Intelligence" (AGI), and it does not exist yet.

Machine Learning (ML)

Traditional computer programs follow a specific set of rules written by a human (e.g., "If user clicks button, open window").

Machine Learning is different. Instead of programming specific rules, we give the computer a lot of data (like thousands of pictures of cats) and let it figure out the patterns itself. It "learns" by example.
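To make the contrast concrete, here is a tiny Python sketch. Everything in it is invented for illustration, and real spam filters are far more sophisticated. The first filter follows rules a person wrote; the second builds its own notion of "spammy" words just by counting labelled examples.

    from collections import Counter

    # Traditional programming: a human writes the rule explicitly.
    def rule_based_is_spam(message):
        banned_words = {"winner", "free", "prize"}      # rules chosen by a person
        return any(word in message.lower() for word in banned_words)

    # Machine learning (toy version): derive the "rules" from labelled examples
    # by counting which words appear in spam versus normal messages.
    spam_examples = ["you are a winner", "claim your free prize now"]
    normal_examples = ["lunch at noon?", "see you at the meeting"]

    spam_words = Counter(word for msg in spam_examples for word in msg.split())
    normal_words = Counter(word for msg in normal_examples for word in msg.split())

    def learned_is_spam(message):
        words = message.lower().split()
        spam_score = sum(spam_words[w] for w in words)
        normal_score = sum(normal_words[w] for w in words)
        return spam_score > normal_score    # whichever kind of example it resembles more

    print(rule_based_is_spam("Claim your FREE prize"))   # True: matched a hand-written rule
    print(learned_is_spam("claim your prize today"))     # True: matched patterns seen in examples

Notice that nobody told the second filter which words are suspicious; it worked that out from the examples. That, in miniature, is what "learning" means here.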

Why does Data matter?

Since the computer learns from examples, the quality of those examples is critical. If you teach an AI using only pictures of orange cats, it won't recognize a black cat.

This is where Bias comes from. If the data used to teach the AI is incomplete or prejudiced, the AI will make prejudiced decisions. This is often summarized as "Garbage in, garbage out."
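Here is a toy illustration of that idea in Python. Everything in it is invented, and a simple colour check stands in for whatever pattern a real model might latch onto. Because the training examples only ever contained orange cats, the resulting "model" rejects a black one.

    # Toy example of biased training data.
    training_data = [
        {"colour": "orange", "label": "cat"},
        {"colour": "orange", "label": "cat"},
        {"colour": "orange", "label": "cat"},
    ]

    # "Training": remember which colours were labelled as cats.
    cat_colours = {example["colour"] for example in training_data if example["label"] == "cat"}

    def looks_like_a_cat(animal):
        # The model can only recognise patterns it has already seen.
        return animal["colour"] in cat_colours

    print(looks_like_a_cat({"colour": "orange"}))  # True
    print(looks_like_a_cat({"colour": "black"}))   # False: no black cats in the training data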

Generative AI

This is the type of AI behind tools like ChatGPT or Midjourney. While older AI could mainly analyze existing data (e.g., "Is this email spam?" or "Is there a stop sign in this photo?"), Generative AI creates new content.

It can write a poem, generate an image of a sunset, or write computer code. It does this by predicting what should come next, based on the patterns it has learned from the data it was given.

Think of it like a master chef: A chef creates a new dish not by copying a single recipe, but by understanding how ingredients work together based on thousands of recipes they have studied. The chef understands that sugar makes things sweet and lemon makes things sour. Generative AI does the same with data—it mixes the patterns it learned during training to create something "new" that follows the rules of what it has seen before.
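A real system like ChatGPT is enormously more complex, but the core "predict what comes next" idea can be shown with a toy text generator in Python. The sketch below reads a few invented sentences, remembers which word tends to follow which, and then chains those predictions together to produce a "new" sentence it never saw in one piece.

    import random
    from collections import defaultdict

    # Invented training text for illustration.
    training_text = (
        "the sun sets over the sea . "
        "the sun rises over the hills . "
        "the sea shines under the sun ."
    )

    # "Training": for every word, remember the words that followed it.
    next_words = defaultdict(list)
    words = training_text.split()
    for current, following in zip(words, words[1:]):
        next_words[current].append(following)

    def generate(start_word, length=8):
        word = start_word
        output = [word]
        for _ in range(length):
            if word not in next_words:               # nothing ever followed this word
                break
            word = random.choice(next_words[word])   # pick one of the likely next words
            output.append(word)
        return " ".join(output)

    print(generate("the"))   # e.g. "the sun sets over the hills . the sea"

The output recombines pieces of the training sentences into something new. That is the chef-and-recipes idea above, just at a much smaller scale.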

The Hallucination Problem

Generative AI works like a very advanced autocomplete on your phone. It predicts the next likely word.

Because it is just predicting words based on probability, it doesn't actually "know" facts. It can confidently state things that are completely false. When an AI makes things up, this is called a "Hallucination."
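A toy example makes the problem visible (the sentences and the question are invented). The "autocomplete" below has only ever read lines about France and Italy, yet when asked about Australia it still answers with the word it saw most often, and it sounds just as sure of itself.

    from collections import Counter

    # Invented training text: the model only knows how these lines ended.
    training_sentences = [
        "the capital of france is paris",
        "the capital of france is paris",
        "the capital of italy is rome",
    ]

    # Count which word ended each sentence.
    answers = Counter()
    for sentence in training_sentences:
        answers[sentence.split()[-1]] += 1

    def complete(prompt):
        # The model has no idea what the prompt means; it just returns
        # the word that most often filled this slot during training.
        best_guess, _ = answers.most_common(1)[0]
        return f"{prompt} {best_guess}"

    print(complete("the capital of france is"))     # "... paris" (right, because it saw this)
    print(complete("the capital of australia is"))  # "... paris" (confidently wrong)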

Always double-check important information, no matter how confident the AI sounds in its reply.

External Resource: AI for Everyone

Coursera offers a non-technical course by Andrew Ng that explains AI concepts for business and society.