This course is about the foundations of the theory and practice of
Artificial Intelligence. But what is AI?
A number of different definitions of AI have been given so far. In general, all
include the underlying idea of equipping machines (e.g., computer programs,
robots) with "tools" to display behaviors that we, as humans, would be inclined
to categorize as "intelligent". This is still quite vague, since it in turn
requires clarifying what an intelligent behavior is.
In this course, this issue is narrowed down to the concept of rationality: the notion of Artificial Intelligence is treated as equivalent to that of Computational Rationality. Given well-defined preferences, a rational agent always selects the actions whose outcomes maximize the expected utility.
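For concreteness, this decision rule can be written in a few lines of Python. This is a minimal sketch for illustration only (it is not part of the course materials); the actions, outcome probabilities, and utilities are invented placeholders:

```python
# Minimal sketch of expected-utility maximization (illustrative only).
# The outcome probabilities P(s | a) and utilities U(s) below are made up.

def expected_utility(action, outcome_probs, utility):
    """Compute the sum over outcomes s of P(s | action) * U(s)."""
    return sum(p * utility[s] for s, p in outcome_probs[action].items())

def rational_action(actions, outcome_probs, utility):
    """A rational agent selects the action that maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

# Hypothetical two-action example:
outcome_probs = {
    "stay": {"safe": 1.0},
    "move": {"goal": 0.7, "pit": 0.3},
}
utility = {"safe": 0.0, "goal": 10.0, "pit": -5.0}
print(rational_action(["stay", "move"], outcome_probs, utility))  # -> "move"
```

Here the agent prefers "move" because its expected utility, 0.7 * 10 + 0.3 * (-5) = 5.5, exceeds the expected utility of "stay", which is 0.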
The course will provide an overview of modern techniques for rational decision-making under different conditions: the information available, its certainty or uncertainty, and the presence of one or more agents in both cooperative and adversarial scenarios.
More precisely, the course will address fundamental questions about the design of AI systems, such as:
how to represent knowledge;
how to effectively generate provably optimal sequences of actions (i.e., how to make optimal decision plans or define decision policies);
how to search among (large sets of) alternatives to find optimal or near-optimal solutions;
how to deal with uncertainty;
how to take decisions in adversarial environments;
how to learn to make optimal decisions.
Both deterministic and stochastic environments will be considered and treated separately.
There are three primary objectives for the course:
To provide a broad survey of AI and of the most popular techniques that are employed for: knowledge representation, problem solving, mathematical optimization, automated planning, probabilistic prediction and inference, sequential decision-making, supervised and reinforcement learning, deep learning, decision-making in multi-agent adversarial scenarios.
To develop a thorough understanding of the algorithmic foundations of AI and acquire a strong appreciation of the big-picture aspects of designing fully autonomous intelligent agents.
To develop operational know-how about how to build/program AI agents, and how to analyze and improve their performance.
Students are expected to attend all classes and to actively participate with questions.
Each theory lecture will feature short quizzes at the beginning and at the end.
For each of the different topics, the course will present relevant techniques, discuss formal results, and show their application to problems of practical and theoretical interest.
Weekly recitations will be used to practice the algorithms together and to apply them to different problems.
Homework will be assigned weekly. Each homework will include both questions to be answered and programming assignments.
Written questions will involve working through different algorithms, deriving and proving mathematical results, and critically analyzing the material presented in class.
Programming assignments will involve writing code in Python to implement and test algorithms.
Since the course will cover a number of different topics, students should have some background in programming (in Python), algorithms, calculus, linear algebra, and probability theory. Please talk to the instructor if you are unsure whether your background is suitable for the course.
The formal prerequisites for this course are:
The corequisite for this course is:
For this corequisite, you should either have completed it prior to starting 15-281 or have it on your schedule for Spring 2023.
Grading will be assigned based on the following weighting:
The final exam will include questions about all topics considered in the course, with an emphasis on the topics introduced after the first midterm exam.
There is no required textbook for this class. Students should be able to learn everything from lecture handouts and homework. However, the Russell and Norvig textbook (available on Canvas) covers a good part of the course.
Another very valuable resource, accessible online, is the book Artificial Intelligence: Foundations of Computational Agents, 3rd Edition, by David L. Poole and Alan K. Mackworth.
During the course, the instructor will point out other books that cover specific topics.
Dates | Topic | Slides | References
---|---|---|---
8/27 | AI Problems and agents, Search problems: Rational agents; PEAS; task environments; taxonomy of agent types; characterization of search and planning problems; states and state graphs. | | R&N Ch. 3
9/1 | Uninformed Search: Search meta-algorithms: Tree and Graph search; performance metrics; Uninformed vs. Informed search; BFS, DFS, DL-DFS, IDS, UCS: algorithms and properties | | R&N Ch. 3
9/3 | Recitation: Search algorithms | PDF Solutions |
9/10 | Adversarial search 2: More running examples of alpha-beta pruning; search under uncertainty; recap on probabilities; expectimax search; properties; probabilistic models, role and impact; other types of games. | | R&N Ch. 5
9/10 | Recitation: Adversarial search | PDF Solutions |
9/17 | Classical planning 1: Factored states; STRIPS/PDDL description language; states as conjunction of propositions; structure of actions; goals and sub-goals; planning problem: domain, instance, planner; complexity of planning; forward search for planning; quest for heuristics; ignore preconditions and domain-independent heuristics. | | R&N Ch. 10; Propositional Logic for Knowledge Representation; Formal intro to Propositional and First-Order Logic
9/17 | Classical planning 2: Domain-independent heuristics, issues using ignore preconditions; backward search: regression on actions, action selection, state sets; planning graphs for domain-independent heuristics and solution finding: construction, mutex links, properties, use for domain-independent heuristics; GRAPHPLAN algorithm: extract solution and backward search, properties, solutions. | | R&N Ch. 10
9/24 | Constraint Satisfaction Problems (CSPs) 2, Local search: Arc consistency; constraint propagation (inference, message passing); pruning by inference-propagation; AC-3; Forward checking; Maintaining Arc Consistency (MAC); complexity and properties of pruning methods; Local search and local search methods for CSP; hill-climbing; WalkSAT | | R&N Ch. 4, 6; Paper on WalkSAT (Local search for max-SAT) in DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 26, 1996
9/24 | Recitation: CSPs | PDF Solutions |
10/1 | Optimization 2, Solution methods for LP and Convex problems: Simplex method for linear programming: concepts, base solutions, polytope vertices, steps of operations; use of derivatives and gradients for finding stationary points; properties of gradient vectors; iterative numeric approach for unconstrained optimization; gradient descent / ascent; effect of step size; batch and stochastic/incremental gradient descent; projection operator for gradient descent in constrained optimization. | | Survey on gradient descent algorithms
10/1 | Recitation: Optimization models and methods, LP, Convex problems | PDF Solutions | Notebook on Gradient Methods
10/8 | Recitation: Review of concepts so far | Practice problems (covering up to CSP); Solutions |
10/10 | Midterm Exam 1 | |
10/13 | Fall break | |
10/15 | Fall break | |
10/22 | Bayesian Networks 2: Exact inference by enumeration; complexity of exact inference; expression trees for optimized computations; approximate inference methods based on sampling / Monte Carlo approaches; direct sampling from Bayesian network; sampling in presence of evidence; rejection sampling; likelihood weighting; limitations of incremental sample construction methods and state-based approaches; Gibbs sampling and Markov Chain Monte Carlo; sampling from a Markov blanket; notion of stationary distribution for state probabilities; Gibbs sampling at work (with exact inference). | | R&N Ch. 13, 14
10/22 | Recitation: Bayesian networks | PDF Solutions |
10/29 | Markov Chains 2: Stationary distribution and eigenvectors, fixed-point equation; different cases for limiting and stationary distributions: limiting and invariant; limiting but not invariant; no limiting but possibly a stationary distribution; state classes: absorbing, periodic, aperiodic, transient, persistent, recurrent, ergodic; communication classes; irreducible chains; regular chains; ergodic chains and their properties. | | R&N 15.2 - 15.6
10/29 | Hidden Markov Models (HMM): Partial state observability; sensor model; reasoning over sequences and HMM formalism; probabilistic model as a Bayesian network; HMM for inference tasks: filtering and prediction; equations for belief updates under system's dynamics and following an observation; algorithms for individual and joint belief updates. | Lecture |
10/29 | Recitation: Markov chains, HMMs | PDF Solutions |
11/5 | Markov Decision Processes 2: Bellman optimality equations; Value Iteration (a minimal illustrative sketch appears after the schedule table); measures of convergence of Value Iteration; Policy Iteration | | Bellman-Ford algorithm, deterministic dynamic programming, asynchronous, distributed
11/5 | Recitation: MDPs | PDF Solutions |
11/5 | Markov Decision Processes 3: LP formulation, convergence properties, different forms of implementing Value Iteration: asynchronous, in-place (Gauss-Seidel), prioritized, prioritized sweeping, partitioned | MDPs 3 | Book extract on variants of Value Iteration
11/12 | Reinforcement Learning 2: Active RL; exploration vs. exploitation; GLIE; Monte Carlo action learning: exploring starts, generalized policy iteration, soft policies; on-policy and off-policy learning; SARSA and Q-Learning, algorithms, properties, working examples; how to explore, exploration functions. | |
11/12 | Recitation: Reinforcement Learning | PDF Solutions |
11/19 | Midterm Exam 2 | |
11/19 | Reinforcement Learning 3: Dealing with large state-action spaces; approximation methods; value and Q function approximation; parametric function approximation and regression tasks; linear approximation, neural network approximators; TD and Monte Carlo approximation of V and Q functions; incremental, online stochastic gradient minimization of approximation error; sample target as surrogate of labeled (supervised) target; formulas for incremental updates in approximate MC, TD (Sarsa, Q-Learning). | |
11/26 | Game Theory 2: Mixed strategies; mixed-strategy Nash equilibria: definition, existence, computation; game formalization and analysis using algebra and calculus; Nash equilibria as optima and saddle points of payoff function. | |
11/26 | Recitation: Game Theory | PDF Solutions |
12/3 | Recitation: Game Theory, Review of course concepts | |
12/3 | Recitation: Review of course concepts | |
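As a taste of one of the techniques listed in the schedule above, here is a minimal Value Iteration sketch in Python. It is an illustration only, not course code: the two-state MDP, its transition probabilities, and its rewards are invented placeholders. The in-place update shown corresponds to the Gauss-Seidel variant mentioned in the MDPs 3 lecture.

```python
# Minimal Value Iteration sketch on a made-up two-state MDP (illustrative only).
# Update rule: V(s) <- max_a sum_{s'} P(s'|s,a) * (R(s,a,s') + gamma * V(s'))

# Hypothetical MDP: transitions[s][a] = list of (next_state, probability, reward).
transitions = {
    "s0": {"stay": [("s0", 1.0, 0.0)],
           "go":   [("s1", 0.8, 5.0), ("s0", 0.2, -1.0)]},
    "s1": {"stay": [("s1", 1.0, 1.0)],
           "go":   [("s0", 1.0, 0.0)]},
}
gamma, theta = 0.9, 1e-6  # discount factor and convergence threshold

V = {s: 0.0 for s in transitions}
while True:
    delta = 0.0
    for s, actions in transitions.items():
        # Q-value of each action; updating V in place is the Gauss-Seidel variant.
        q_values = [sum(p * (r + gamma * V[s2]) for s2, p, r in outcomes)
                    for outcomes in actions.values()]
        best = max(q_values)
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < theta:  # stop when the largest update is negligible
        break

print(V)  # converged state values of the toy MDP
```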
Topic | Due Dates
---|---
HW 2: Informed Search | 9/8 (Theory), 9/11 (Programming)
HW 3: Adversarial Search | 9/15 (Theory), 9/18 (Programming)
HW 4: Classical planning | 9/22 (Theory), 9/25 (Programming)
HW 5: Constraint Satisfaction Problems | 9/29 (Theory), 10/2 (Programming)
HW 6: Modeling and solving optimization problems | 10/6 (Theory), 10/10 (Programming)
HW 7: Bayesian networks | 10/27 (Theory), 10/30 (Programming)
HW 8: Markov chains, Hidden Markov Models | 11/3 (Theory), 11/6 (Programming)
HW 9: Markov Decision Processes for sequential decision-making | 11/10 (Theory), 11/13 (Programming)
HW 10: Reinforcement learning with tabular methods | 11/24 (Theory), 11/28 (Programming)
HW 11: Reinforcement learning with deep learning for function approximation | 12/1 (Programming)
HW 12: Game Theory for multi-agent adversarial scenarios | 12/5 (Theory + Programming)
Homework is due by the posted deadline. Assignments submitted past the deadline will incur the use of late days.
You have 3 late days in total, but you cannot use more than 2 late days per homework. No credit will be given for homework submitted more than 2 days after the due date. After your 3 late days have been used, you will receive a 40% deduction for each additional late day.
You can discuss the exercises with your classmates, but you should write up your own solutions. If you find a solution in any source other than the material provided on the course website or the textbook, you must mention the source. You can work on the programming questions in pairs, but theoretical questions are always submitted individually. Make sure that you include a README file with your Andrew ID and your collaborator's Andrew ID.
In general, for both assignments and exams, CMU's directives for academic integrity apply and must be duly followed. Students are required to refer to CMU's general policies on cheating and plagiarism: https://www.cmu.edu/policies/student-and-student-life/academic-integrity.html
The class includes two midterm exams and one final exam. Midterms are set for October 8th and November 17th; the final is TBD.
During exams students are allowed to consult a 4-page cheatsheet (written in any desired format). No other material is allowed, including textbooks, computers/smartphones, or copies of lecture handouts.
Students with special needs or disabilities are kindly asked to discuss them with the instructor at the beginning of the semester and to make use of the assistance procedures provided by CMU: https://scotty.qatar.cmu.edu/qword/student-affairs/office-of-health-and-wellness/assistance-for-individuals-with-disabilities
Name | Email | Hours | Location
---|---|---|---
Gianni Di Caro | gdicaro@cmu.edu | Mondays 10:00am-11:00am + send me an email to set up an appointment | M 1007
Nour Ali | ntali@andrew.cmu.edu | Sundays & Thursdays: 2:30pm - 3:30pm | ARC