deep learning · llms · 25 min

Prompt Engineering

Getting the best output from LLMs through structured, intentional input


Why This Matters

A language model is a powerful engine, but prompt engineering is the steering wheel. The same model can give you a brilliant answer or a terrible one depending on how you ask. This is not just about being polite — it is about understanding how the model processes your input and structuring it for the best possible output.

Prompt engineering is the most accessible AI skill: you need no GPU, no training data, and no machine learning degree. Yet it is the difference between a junior developer who types "fix my code" and a senior engineer who gets production-ready solutions on the first try.


Visual Model

  • System Prompt: role + rules
  • User Prompt: context + instruction
  • Few-Shot Examples: 2-3 input/output pairs
  • LLM: processes all tokens
  • Format Spec: JSON / code / steps
  • Response: structured output

The full process at a glance.

A well-structured prompt moves from role definition through context and instructions to a format specification, producing reliable output.
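To make that flow concrete, here is a minimal sketch that assembles the pieces (role, rules, few-shot examples, task, and a completion cue) into one prompt string; the review-classification task and examples are illustrative placeholders:

```javascript
// Assembling the components from the diagram into one prompt.
// The task and examples here are illustrative placeholders.
const role = "You are a sentiment classifier for product reviews.";
const rules = "Answer with exactly one word: positive or negative.";
const examples = [
  { input: "Great product, works perfectly!", output: "positive" },
  { input: "Broke after one day.", output: "negative" },
];
const task = "The food was amazing and the service was excellent!";

const prompt = [
  `${role} ${rules}`, // role + rules
  ...examples.map(e => `Review: "${e.input}"\nClassification: ${e.output}`), // few-shot
  `Review: "${task}"\nClassification:`, // context + instruction, ending on a cue
].join("\n\n");

console.log(prompt);
```

Ending the prompt with `Classification:` nudges the model to complete with just the label rather than a full sentence.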

Code Example

// Prompt engineering patterns as code

// 1. Zero-shot: just ask directly
const zeroShot = `Classify this review as positive or negative:
"The food was amazing and the service was excellent!"
Classification:`;

// 2. Few-shot: provide examples first
const fewShot = `Classify reviews as positive or negative.

Review: "Great product, works perfectly!"
Classification: positive

Review: "Terrible quality, broke after one day."
Classification: negative

Review: "The food was amazing and the service was excellent!"
Classification:`;

// 3. Chain-of-thought: ask the model to reason
const chainOfThought = `Solve this step by step:

A store has 45 apples. They sell 2/3 of them, then receive
a shipment of 30 more. How many apples do they have?

Let me think through this step by step:
1.`;

// 4. Structured output: specify format
const structuredOutput = `Extract entities from this text and return as JSON.

Text: "John Smith works at Google in Mountain View."

Return format:
{"people": [...], "companies": [...], "locations": [...]}

JSON:`;

// 5. System prompt pattern
const systemPrompt = `You are a code reviewer. For each code snippet:
1. Identify bugs
2. Suggest improvements
3. Rate quality 1-10
Always be specific and reference line numbers.`;

console.log("=== Prompt Patterns ===");
console.log("Zero-shot:", zeroShot.split("\n").length, "lines");
console.log("Few-shot:", fewShot.split("\n").length, "lines");
console.log("Chain-of-thought:", chainOfThought.split("\n").length, "lines");
console.log("Structured:", structuredOutput.split("\n").length, "lines");
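In chat-based APIs, the system prompt and user prompt are usually sent as separate messages rather than concatenated. A minimal sketch of that shape, using the widely adopted `{role, content}` convention (the exact request call depends on your provider and SDK, so none is shown here):

```javascript
// Sketch: mapping the patterns above onto a chat-style messages array.
// The {role, content} shape follows the common chat-API convention;
// how you send it depends on which client library you use.
const system = "You are a code reviewer. Be specific and reference line numbers.";
const user = "Review this snippet:\nfunction add(a, b) { return a - b; }";

const messages = [
  { role: "system", content: system }, // role + rules
  { role: "user", content: user },     // context + instruction
];

console.log(JSON.stringify({ messages }, null, 2));
```

Keeping rules in the system message makes them harder for later user input to override than if everything were in one flat string.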

Interactive Experiment

Try these exercises:

  • Take a prompt that gave you a mediocre response from an LLM. Add a system role, 2 examples, and a format specification. Compare the outputs.
  • Ask an LLM to solve a math problem with and without "think step by step." Compare accuracy.
  • Try different temperature settings (if available) for the same creative writing prompt. How does output variety change?
  • Write a prompt that asks for JSON output. Then deliberately include conflicting instructions. How does the model handle the contradiction?


Coding Challenge

Build a Prompt Template

Write a function called `buildPrompt` that takes a role (string), an array of examples (each with 'input' and 'output' properties), a task (string), and a format (string), and returns a formatted prompt string. The prompt should have sections: the role on line 1, then each example formatted as 'Input: {input}\nOutput: {output}', then the task as 'Input: {task}\nOutput:', and finally a format instruction.


Real-World Usage

Prompt engineering is a core skill for building AI-powered products:

  • Customer support bots: System prompts define tone, knowledge boundaries, and escalation rules for automated support agents.
  • Code generation: GitHub Copilot and Cursor use carefully engineered prompts that include file context, language conventions, and coding patterns.
  • Content moderation: Classifiers use few-shot prompts to categorize content as safe, warning, or violation with examples of each category.
  • Data extraction: Structured output prompts turn unstructured documents (emails, PDFs, contracts) into clean JSON for databases.
  • Prompt injection defense: Understanding prompt engineering is essential for defending against adversarial inputs that try to override system instructions.
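For the data-extraction case, most of the engineering work is specifying the schema in the prompt and validating whatever comes back. A hedged sketch, assuming you parse the model's raw text yourself (the simulated response stands in for a real model call):

```javascript
// Sketch: structured-extraction prompt plus defensive response parsing.
function buildExtractionPrompt(text) {
  return `Extract entities from this text and return ONLY valid JSON.

Text: "${text}"

Return format:
{"people": [], "companies": [], "locations": []}

JSON:`;
}

// Models sometimes wrap JSON in prose or code fences, so parse defensively.
function parseEntities(raw) {
  const match = raw.match(/\{[\s\S]*\}/); // grab the first {...} span
  if (!match) throw new Error("no JSON object in model output");
  const data = JSON.parse(match[0]);
  for (const key of ["people", "companies", "locations"]) {
    if (!Array.isArray(data[key])) throw new Error(`missing array: ${key}`);
  }
  return data;
}

// Example with a simulated model response:
const simulated =
  'Here you go:\n{"people": ["John Smith"], "companies": ["Google"], "locations": ["Mountain View"]}';
console.log(parseEntities(simulated).people[0]); // "John Smith"
```

Validating the shape before writing to a database is what makes this pattern production-safe: a malformed response fails loudly instead of corrupting your data.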

Connections