AI Engineer
From programming fundamentals through mathematics to machine learning, deep learning, and LLM engineering.
Programming Fundamentals
Core programming skills needed for AI/ML work.
How programs store and manage data with variables and types
How programs make decisions and repeat actions
Organizing code into reusable, composable units
Where variables live and how they're accessed
Ordered collections of data and how to work with them
Key-value data structures for modeling real-world entities
Advanced patterns for processing collections efficiently
Blueprints for creating objects with shared behavior
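The fundamentals above can be sketched in one short Python example; all names here are illustrative, not part of any required curriculum:

```python
# Variables and ordered collections: programs store state in named values
temperatures = [18.5, 21.0, 19.2, 23.8]          # a list
sensor = {"id": "s-01", "location": "roof"}      # a key-value dict

# Functions: reusable, composable units
def to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Control flow: decisions and repetition
warm_days = []
for t in temperatures:
    if t > 20:                                   # decision inside a loop
        warm_days.append(t)

# Comprehensions: a concise pattern for processing collections
fahrenheit = [to_fahrenheit(t) for t in temperatures]

# Classes: blueprints for objects with shared behavior
class Sensor:
    def __init__(self, sensor_id, location):
        self.sensor_id = sensor_id
        self.location = location

    def describe(self):
        return f"{self.sensor_id} @ {self.location}"

s = Sensor(sensor["id"], sensor["location"])
print(s.describe())   # s-01 @ roof
print(warm_days)      # [21.0, 23.8]
```

Every later topic in this roadmap builds on these few constructs, so they are worth practicing until they feel automatic.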
Data Structures
Key data structures used in ML pipelines and algorithms.
Array operations, two-pointer techniques, and sliding window patterns
Fast key-value lookups using hash functions
Hierarchical data structures with parent-child relationships
Networks of connected nodes for modeling relationships
How to measure and compare algorithm efficiency using Big O notation
Understanding memory usage of algorithms and data structures
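A minimal sketch of three of these ideas, with Big O noted in the comments (the data and function names are made up for illustration):

```python
from collections import deque

def max_window_sum(nums, k):
    """Sliding window: max sum of any k consecutive elements in O(n)."""
    window = sum(nums[:k])
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]   # slide: add the new element, drop the old
        best = max(best, window)
    return best

# Hash map: O(1) average-case lookups, vs. O(n) scans through a list
counts = {}
for word in ["a", "b", "a", "c", "a"]:
    counts[word] = counts.get(word, 0) + 1

# Graph as an adjacency dict; breadth-first search visits nodes level by level
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

def bfs(start):
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9
print(counts["a"])                            # 3
print(bfs("a"))                               # ['a', 'b', 'c', 'd']
```

The sliding window avoids recomputing each sum from scratch, which is the recurring theme of this section: choosing a structure or pattern that trades a little bookkeeping for a better complexity class.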
Mathematics for ML
The minimum math needed to understand ML — vectors, matrices, probability, and optimization.
Vectors as arrays of numbers, matrices as 2D arrays, and the operations that power graphics and ML
Measuring similarity and magnitude with dot products, norms, and cosine similarity
Probability basics, conditional probability, and Bayes' theorem for reasoning under uncertainty
Expected value, variance, and standard deviation for understanding data distributions
Logarithms as the inverse of exponentiation, and why log base 2 is everywhere in CS
Rates of change, slopes, and finding minima — the math behind ML training
The algorithm that trains neural networks — stepping downhill to minimize loss
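Most of the math in this section fits in a few lines of plain Python; this is a toy sketch (no NumPy, illustrative numbers) covering dot products, cosine similarity, Bayes' theorem, and gradient descent:

```python
import math

# Dot product, norm, and cosine similarity on plain lists of numbers
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def cosine_similarity(u, v):
    return dot(u, v) / (norm(u) * norm(v))

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
def bayes(p_b_given_a, p_a, p_b):
    return p_b_given_a * p_a / p_b

# Gradient descent: step downhill on f(x) = (x - 3)^2, whose derivative is 2(x - 3)
x = 0.0
learning_rate = 0.1
for _ in range(100):
    x -= learning_rate * 2 * (x - 3)   # subtract the gradient, scaled by the learning rate

print(round(cosine_similarity([1, 0], [1, 1]), 3))  # 0.707
print(bayes(0.9, 0.01, 0.05))                       # 0.18
print(round(x, 3))                                  # 3.0 — the minimum of f
```

The last loop is the whole idea behind training: the loss plays the role of f, and the model's parameters play the role of x, just in many more dimensions.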
Machine Learning
Supervised/unsupervised learning, loss functions, and model evaluation.
Learning from labeled examples to make predictions on new data
Finding hidden patterns and structure in data without labeled examples
Splitting data to honestly evaluate how well a model generalizes
Quantifying how wrong a model's predictions are so it can learn to be right
How optimization algorithms train machine learning models by iteratively reducing loss
Understanding why models memorize training data and how to prevent it
Measuring model performance with the right metrics for the right task
Transforming raw data into meaningful inputs that help models learn effectively
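These pieces come together in even the smallest supervised-learning script. A toy sketch, assuming synthetic data from y = 2x + 1 plus noise: a train/test split, mean squared error as the loss, and gradient descent to fit a line.

```python
import random

# Synthetic labeled data: y = 2x + 1 plus a little noise
random.seed(0)
data = [(i / 10, 2 * (i / 10) + 1 + random.uniform(-0.1, 0.1)) for i in range(20)]

# Train/test split: hold out data the model never sees during training
random.shuffle(data)
train_set, test_set = data[:15], data[15:]

def mse(w, b, points):
    """Loss function: mean squared error between predictions and labels."""
    return sum((w * x + b - y) ** 2 for x, y in points) / len(points)

# Gradient descent: iteratively reduce the training loss
w, b = 0.0, 0.0
learning_rate = 0.1
for _ in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in train_set) / len(train_set)
    grad_b = sum(2 * (w * x + b - y) for x, y in train_set) / len(train_set)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(round(w, 2), round(b, 2))        # close to the true 2 and 1
print(round(mse(w, b, test_set), 4))   # low held-out loss -> the model generalizes
```

Evaluating the loss on `test_set` rather than `train_set` is the honest-evaluation point above: a model that merely memorized its training data would score well on the first and badly on the second.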
Deep Learning & LLMs
Neural networks, transformers, embeddings, and modern LLM engineering.
From a single neuron to universal function approximation
Representing words, images, and ideas as dense vectors in continuous space
The mechanism that lets models focus on what matters in a sequence
The repeating building block that powers every modern language model
How LLMs break text into pieces and why they have a memory limit
Getting the best output from LLMs through structured, intentional input
Giving LLMs access to external knowledge through retrieval-augmented generation
Choosing between adapting model behavior through prompts or through training
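The attention mechanism mentioned above can be sketched for a single query in pure Python; this is a simplified illustration with toy numbers, not a production implementation (real models use batched tensors, learned projections, and many attention heads):

```python
import math

def softmax(scores):
    """Turn raw scores into weights that are positive and sum to 1."""
    m = max(scores)                          # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query over a short sequence."""
    d = len(query)
    # Score each position: how relevant is this key to the query?
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)                # the "focus": higher weight = more attention
    # Output: a weighted mix of the values at every position
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]

# Toy sequence of 3 positions with 2-d keys and values
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
out = attention([1.0, 0.0], keys, values)
print([round(v, 3) for v in out])   # weighted toward positions whose keys match the query
```

Stacking this mechanism with feed-forward layers gives the repeating transformer block, and running it over every token in a context window is why sequence length is the memory limit described above.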