AI Quickstart

These are the basic definitions that helped quickstart my journey.

What is an LLM?

A Large Language Model (LLM) is a massive neural network (typically a transformer architecture) trained on vast amounts of data to predict the next token in a sequence. Think of it as a massive, non-deterministic, stateless function:

f(prompt_context) -> next_token

It is excellent at pattern recognition, transformation, and generation, but it has no memory of past interactions unless you explicitly pass that history back into the prompt.
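The statelessness above can be sketched in a few lines. This is a toy illustration, assuming a hypothetical `complete()` function standing in for a real LLM API call: the only way the "chat" remembers anything is that we replay the entire transcript on every call.

```python
def complete(prompt: str) -> str:
    # Stand-in for a real LLM API call; it sees only what is in `prompt`.
    return f"[reply based on {len(prompt)} chars of context]"

history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The model has no memory: the full transcript is passed in each time.
    prompt = "\n".join(history)
    reply = complete(prompt)
    history.append(f"Assistant: {reply}")
    return reply
```

Each call to `chat` sends a strictly longer prompt, which is also why long conversations grow more expensive over time.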

What is a token?

Before an LLM can read your prompt, the text must be broken down into smaller chunks called “tokens” and mapped to integer IDs. Tokenization is the parsing step.

A token is a chunk of text, roughly four English characters or three-quarters of a word. That is, a token isn't always a full word; it is often a sub-word or even a single character.
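The four-characters-per-token rule of thumb can be turned into a quick back-of-the-envelope estimator. This is only a heuristic sketch: real tokenizers use byte-pair encoding with learned sub-word vocabularies, so true counts vary by model and by content.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4-characters-per-token rule of thumb.

    Real tokenizers split text into learned sub-word units, so the true
    count depends on the model's vocabulary; treat this as an approximation.
    """
    if not text:
        return 0
    return max(1, round(len(text) / 4))
```

For production use, you would count with the model's actual tokenizer rather than this approximation.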

LLM APIs charge based on tokens. You are billed separately for two things:

  • Input Tokens (Prompt): The text you send to the API, including system instructions, your actual prompt, and any context (like a large document or chat history).
  • Output Tokens (Completion): The text the model generates. Generating tokens is more computationally expensive, so output tokens usually cost significantly more than input tokens.
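The two-sided billing above makes cost estimation a simple weighted sum. The prices below are made-up placeholders (real prices vary by provider and model); the point is only that output tokens carry the higher rate.

```python
# Hypothetical per-million-token prices; real prices vary by model/provider.
INPUT_PRICE_PER_M = 3.00    # USD per 1M input (prompt) tokens, assumed
OUTPUT_PRICE_PER_M = 15.00  # USD per 1M output (completion) tokens, assumed

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one API call, billed per token on each side."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000
```

Note that with these example rates, one output token costs as much as five input tokens, which is why trimming verbose completions often saves more than trimming prompts.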

What is the Context Window?

The Context Window is the model’s working memory for a single API call.

It is a strict, hard limit on the maximum number of tokens the model can process at once. This includes both the tokens you send in the prompt and the tokens the model generates in its response.
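Because prompt and response share one budget, a useful pre-flight check is whether the prompt plus the maximum requested output still fits. A minimal sketch, assuming an example 128k-token window (actual limits differ per model):

```python
CONTEXT_WINDOW = 128_000  # example limit; the real value depends on the model

def fits_in_context(prompt_tokens: int, max_output_tokens: int) -> bool:
    """Input and output tokens share the same window in a single call."""
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

def max_output_budget(prompt_tokens: int) -> int:
    """Tokens left over for the response after the prompt is accounted for."""
    return max(0, CONTEXT_WINDOW - prompt_tokens)
```

A long document pasted into the prompt directly shrinks the room left for the model's answer.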

What is an AI Agent?

An AI Agent is a system that uses an LLM as its “brain” or reasoning engine, but wraps it in logic that allows it to act autonomously.

Instead of just returning text, an agent operates in a loop. It evaluates a prompt, formulates a step-by-step plan, uses external tools to gather data or perform actions, observes the results of those actions, and loops until the core goal is achieved.
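The loop above can be sketched in miniature. Everything here is a hypothetical stand-in: `llm_decide` fakes the LLM "brain" and the tool registry holds one fake tool, but the control flow (decide, act, observe, repeat until done) is the shape of a real agent.

```python
def llm_decide(goal: str, observations: list) -> dict:
    # Stand-in for the LLM reasoning step: pick the next action, or finish.
    if observations:
        return {"action": "finish", "result": observations[-1]}
    return {"action": "search", "args": {"query": goal}}

TOOLS = {
    # Hypothetical external tool; a real agent might call a search API here.
    "search": lambda query: f"results for {query!r}",
}

def run_agent(goal: str, max_steps: int = 5):
    observations = []
    for _ in range(max_steps):  # loop until the goal is met or budget runs out
        decision = llm_decide(goal, observations)
        if decision["action"] == "finish":
            return decision["result"]
        tool = TOOLS[decision["action"]]
        observations.append(tool(**decision["args"]))  # observe the result
    return None  # goal not reached within the step budget
```

The `max_steps` cap is a common safeguard: without it, a confused agent can loop forever.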

What is an Agent Skill?

Agent Skills are folders of instructions, scripts, and resources that agents can discover and use to do things more accurately and efficiently.

Skills use progressive disclosure to manage context efficiently:

  • Discovery: At startup, agents load only the name and description of each available skill, just enough to know when it might be relevant.
  • Activation: When a task matches a skill description, the agent reads the full SKILL.md instructions into context.
  • Execution: The agent follows the instructions, optionally loading referenced files or executing bundled code as needed.
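The discovery/activation split can be sketched as two functions. The on-disk layout assumed here (a `skills/<name>/SKILL.md` file containing a `description:` line) is an illustration of the pattern, not a spec: the key idea is that discovery reads only the cheap metadata, and the full instructions enter context only on activation.

```python
from pathlib import Path

def discover_skills(skills_dir: Path) -> dict[str, str]:
    """Return {skill_name: description} without loading full instructions."""
    catalog = {}
    for skill_md in skills_dir.glob("*/SKILL.md"):
        for line in skill_md.read_text().splitlines():
            # Assumed metadata format: a "description:" line near the top.
            if line.startswith("description:"):
                catalog[skill_md.parent.name] = line.split(":", 1)[1].strip()
                break
    return catalog

def activate_skill(skills_dir: Path, name: str) -> str:
    """Load the full SKILL.md into context only when the skill is needed."""
    return (skills_dir / name / "SKILL.md").read_text()
```

Keeping the catalog to one line per skill is what lets an agent know about many skills while spending almost no context on the ones it never uses.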
