AI Learning Hub

Foundations

What Is AI, Actually?

Mental models for understanding AI, demystifying LLMs, and mapping the capability landscape.

Deliverable

A personal AI capability map

Artificial intelligence is one of the most over-hyped and under-explained technologies in recent memory. Before you can use it well, you need a clear mental model — not the sci-fi version, not the breathless press-release version, but a grounded, practical picture of what these systems actually do.

This module gives you that foundation.


The Core Idea: Pattern Completion at Scale

At their heart, modern AI systems — particularly the large language models (LLMs) that power tools like ChatGPT, Claude, and Gemini — are extremely sophisticated pattern-completion engines.

They were trained on vast quantities of text: books, articles, code, conversations, websites. Through that training, they learned the statistical patterns of human language: what words follow other words, how arguments are structured, what a good analysis looks like, how to write an email.

When you send a message to an AI, it doesn't "look up" an answer in a database or follow a hard-coded script. It generates a response token by token, each one chosen based on what is most likely to continue coherently given everything that came before. The result often looks like reasoning, because it's been trained on so much human reasoning.

What 'Generative' Actually Means

When we say "generative AI," we mean the model generates new text (or images, audio, etc.) from scratch. It's not retrieving stored answers. It's producing novel output that fits the statistical patterns it learned during training. This is why the same prompt can produce different outputs each time — the generation process involves sampling from a probability distribution.
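That sampling step can be sketched in a few lines. The following is a toy illustration only, assuming we already have a score (logit) for each candidate token; real models do this over vocabularies of ~100,000 tokens, and the candidate words and scores below are made up:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick one token index by sampling from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    weights = [math.exp(l - m) for l in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]  # a probability distribution over tokens
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1                 # guard against floating-point rounding

# Hypothetical scores for four candidate next tokens.
candidates = ["reasoning", "magic", "statistics", "luck"]
logits = [2.0, 0.3, 1.4, 0.1]
print(candidates[sample_next_token(logits)])  # often "reasoning", but not always
```

Because the next token is sampled rather than looked up, the same prompt can yield different continuations, which is exactly the nondeterminism described above. Lowering the temperature sharpens the distribution (more repeatable output); raising it flattens the distribution (more varied output).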


Three Mental Models Worth Keeping

1. The Brilliant Intern

Think of an LLM as a brilliant, extremely well-read intern who has read millions of documents but has never actually held a job. They can draft a memo, summarize a 200-page report, or help structure an argument. But they can also confidently make things up, miss context that any insider would catch, and misunderstand your actual goal if you're not precise.

This framing is useful because it sets the right expectations: high potential, requires management.

2. The Compression of Expertise

Every expert in your industry — every analyst, strategist, writer, coder — has patterns in how they work. They know what to look for, how to structure outputs, what questions to ask. LLMs have compressed a massive amount of that expertise into a single interface.

You're not replacing the expert. You're getting fast, flexible access to patterns of expert-level output — with all the risks that come from working with a brilliant generalist who lacks deep domain accountability.

3. The Interface Layer

AI is increasingly becoming the interface through which you'll interact with software, data, and knowledge. Just as the command line gave way to the graphical UI, we're moving toward a world where natural language is how you query data, draft documents, orchestrate systems, and search for information.

The skill of working effectively with AI is becoming as foundational as knowing how to use Excel or Google.


What LLMs Are Good At

  • Writing and editing: drafting, rewriting, restructuring, adapting tone
  • Summarization: condensing long documents, extracting key points
  • Synthesis: pulling together information from multiple sources
  • Brainstorming: generating options, alternatives, framings
  • Structured analysis: breaking down problems with a given framework
  • Code: writing, explaining, debugging code across dozens of languages
  • Translation: across languages and across register/tone
  • Instruction-following: executing complex multi-step prompts reliably

The Best Use Cases Have One Thing in Common

AI shines on tasks with high information input, structured output expectations, and tolerance for iteration. If you'd pay someone smart to do it for a few hours, AI can probably help you do it in minutes — with your oversight.


What LLMs Are Not Good At

  • Real-time information: LLMs have training cutoffs. They don't know what happened last week.
  • Precise arithmetic: they can reason about math, but can make calculation errors without a code interpreter
  • Verified facts: they can hallucinate — confidently stating false information — especially about obscure or specific topics
  • Knowing what they don't know: unlike a human expert, they won't always say "I'm not sure." They'll often generate a plausible-sounding answer even when they should flag uncertainty
  • Long-horizon memory: in a single conversation, they can track context, but across sessions, they don't remember you
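The arithmetic limitation above has a simple workaround: rather than trusting a model's in-head math, delegate the calculation to actual code (a code interpreter, a spreadsheet, or a script). A trivial illustration with made-up figures:

```python
# An LLM predicts likely-looking digits; code computes exact ones.
# Delegating arithmetic to an interpreter removes this failure mode.
units = 48_193        # hypothetical quantity
price_cents = 7_207   # hypothetical unit price in cents
total_cents = units * price_cents
print(f"Total: ${total_cents / 100:,.2f}")
```

Many AI tools now do this automatically by writing and running code behind the scenes; when yours doesn't, treat any multi-digit arithmetic in the output as unverified.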

Hallucination: The Most Important Risk to Understand

Hallucination is the term for when an AI generates false information with apparent confidence. It's not a bug in the traditional sense — it's an inherent property of how these systems work. They generate probable-seeming text, and sometimes the most probable-seeming text is wrong.

Never Use AI Output Without Verification for High-Stakes Facts

Citation accuracy, financial figures, legal precedents, medical information — these are all areas where AI can and does hallucinate convincingly. Always verify against primary sources before using AI-generated facts in client-facing work, investment memos, or any high-stakes context.

For most knowledge work, this means building a verify-then-use workflow: let AI draft, summarize, or structure — then verify the facts and figures independently.


A Practical Prompt to Try Right Now

Here's a prompt that demonstrates how to use AI to build your own mental model — and shows you something about how the system works.

Build Your AI Capability Map
YOU
I'm a [your role] at a [your firm type]. I want to understand where AI can realistically help me in my day-to-day work and where it can't.

Please give me a structured assessment:
1. The top 5 tasks in my role where AI is likely to save me significant time
2. The top 3 areas where AI would be risky or unreliable without heavy oversight
3. One thing most people in my role misunderstand about how to use AI effectively

Be honest about limitations. I'd rather have accurate expectations than inflated ones.
AI
Here's a realistic breakdown for your role:

**Where AI can genuinely accelerate your work:**
1. First-draft research summaries and company backgrounders
2. Structuring and drafting client communications and memos
3. Generating frameworks and checklists for analysis processes
4. Synthesizing call transcripts or meeting notes into action items
5. Drafting boilerplate sections of pitchbooks or reports

**Where caution is warranted:**
1. Specific financial figures and data points — verify everything
2. Legal or regulatory interpretations — always involve counsel
3. Predictions and forward-looking statements — AI extrapolates from patterns, not insight

**The most common misunderstanding:** People treat AI like a search engine — asking it a single question and taking the answer at face value. The real value comes from *dialogue*: sharing context, iterating on outputs, and treating AI like a collaborator you need to direct carefully, not an oracle you consult once.

Mapping the Capability Landscape

Not all AI is the same. Here's how to think about the major categories:

| Type | What it does | Examples |
|------|-------------|----------|
| Large Language Models | Generate and understand text | Claude, GPT-4, Gemini |
| Image generation | Create images from text descriptions | DALL-E, Midjourney, Stable Diffusion |
| Code assistants | Write, complete, and review code | GitHub Copilot, Cursor |
| Search-augmented AI | LLMs with access to real-time web search | Perplexity, Claude with web search |
| Multimodal models | Process text, images, audio, video | GPT-4o, Claude 3 (vision) |
| Specialized vertical tools | AI fine-tuned for specific domains | Harvey (legal), Abridge (medical) |


Key Takeaway

AI is not magic, and it's not a replacement for expertise. It's a powerful tool that compresses and accelerates knowledge work — when used well, by someone who understands both its capabilities and its limits.

The professionals who will get the most out of AI are those who understand the technology clearly enough to know when to trust it, when to verify it, and how to direct it precisely.

Your deliverable for this module: a personal AI capability map — a simple document or table that maps your five most time-consuming tasks to whether/how AI can help. This becomes your reference point throughout the rest of the curriculum.