15-381 - Fall 2018

Artificial Intelligence:
Representation and Problem Solving

Key Information

Sundays + Tuesdays, 10:30am - 11:50am, Room 3204

Dr. Eduardo Feo-Flushing


35% Final, 15% Midterm, 50% Homework

(optional) S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach (3rd edition, 2010)


This course is about the foundations of the theory and practice of Artificial Intelligence. But what does AI mean?
A number of different definitions of AI have been given so far. In general, all include the underlying idea of equipping machines (e.g., computer programs, robots) with "tools" to display behaviors that we, as humans, would be inclined to categorize as "intelligent". This is still quite vague, since it in turn requires clarifying what an intelligent behavior is.

In this course, this issue is narrowed down to the concept of rationality: the notion of Artificial Intelligence is treated as equivalent to that of Computational Rationality. Given well-defined preferences, a rational agent always selects the actions whose outcomes maximize the expected utility.
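As an illustrative sketch of this definition (not course material), expected-utility maximization fits in a few lines of Python; the action names, probabilities, and utilities below are hypothetical:

```python
# Rational action selection by maximum expected utility.
# All actions, probabilities, and utilities here are hypothetical examples.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def rational_choice(actions):
    """actions: dict mapping action name -> list of (probability, utility).
    Returns the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "safe":  [(1.0, 5.0)],                 # certain payoff of 5
    "risky": [(0.5, 12.0), (0.5, -1.0)],   # expected utility 5.5
}
print(rational_choice(actions))  # -> risky
```

A rational agent in this sense is not "cautious" or "bold" by temperament; it simply follows the expectation implied by its preferences.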

The course will provide an overview of modern techniques for rational decision-making under different conditions regarding available information, certainty or uncertainty of the information, presence of one or more agents in both cooperative and adversarial scenarios.

More precisely, during the course, fundamental questions about the design of AI systems will be addressed, such as:

  • how to represent knowledge,

  • how to effectively generate provably optimal sequences of actions (i.e., how to make optimal decision plans or define decision policies);

  • how to search among (large sets of) alternatives to find optimal or near-optimal solutions;

  • how to learn to make optimal decisions.

Both deterministic and stochastic environments will be considered and treated separately.

Machine learning techniques will be introduced for dealing with supervised classification and regression tasks. Specialized deep learning techniques will also be considered, for image classification tasks. Reinforcement learning will be studied for sequential, interactive decision-making in unknown environments. Decision-making in multi-agent systems will also be investigated, considering multiple agents with conflicting goals (known as games) under different conditions and available information.


There are three primary objectives for the course:

  • To provide a broad survey of AI and of the most popular techniques that are employed for: knowledge representation, problem solving, mathematical optimization, automated planning, probabilistic prediction and inference, sequential decision-making, supervised and reinforcement learning, deep learning, decision-making in multi-agent adversarial scenarios.

  • To develop a thorough understanding of the algorithmic foundations of AI and acquire a strong appreciation of the big-picture aspects of designing fully autonomous intelligent agents.

  • To develop operational know-how about how to build/program AI agents, and to analyze and improve their performance.

Course Layout

The course will be based on lectures. Students are expected to attend all classes and to actively participate with questions. The instructor will always be available at the end of each lecture for further clarifications and explanations.

For each one of the different topics, the course will present relevant techniques, discuss formal results, and show the application to problems of practical and theoretical interest.

Homework will be assigned (approximately) bi-weekly, for a total of 10 assignments, which will include both written questions and programming assignments. Written questions will involve working through different algorithms, deriving and proving mathematical results, and critically analyzing the material presented in class. Programming assignments will mainly involve writing code in Python to implement and test algorithms. In some cases, the Anki Cozmo robot (shown in the figure below) will be used for testing the algorithms in physical real-world scenarios.


The only strict prerequisite is 15-122 (Principles of Imperative Computation). However, since the course will cover a number of different topics, students should have previous programming experience (programming assignments will be given in Python), as well as a solid general CS background, calculus, and the basics of probability theory. Please talk to the instructor if you are unsure whether your background is suitable for the course.


Grading will be assigned based on the following weighting: 50% Homework, 35% Final exam, 15% Midterm exam. The final exam will include questions about all topics considered in the course, with an emphasis on the topics introduced after the midterm exam.
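As a quick illustration of the weighting above, a final grade is just the weighted average of the three components; the component scores here are hypothetical:

```python
# Weighted final grade from the course's grading scheme.
# The component scores below are hypothetical examples.
weights = {"homework": 0.50, "final": 0.35, "midterm": 0.15}
scores  = {"homework": 90.0, "final": 80.0, "midterm": 70.0}

grade = sum(weights[k] * scores[k] for k in weights)
print(round(grade, 1))  # -> 83.5
```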


There is no required textbook for this class. Students should be able to learn everything from the lecture handouts and homework. However, the Russell and Norvig textbook covers a good part of the course. During the course, the instructor will point out other books that cover specific parts of the material.


Dates Topic Slides References
9/2 Introduction: Definitions, Problems, Road map pdf R&N Ch. 1,2,26,27, Brief AI history, 2016 Stanford AI report
(Model-based) Automated Planning: Full information, Deterministic, Goal-based
9/4 Planning and Search problems; Uninformed search: BFS, UCS, DFS, DL-DFS, IDS pdf R&N Ch. 3
9/9 Informed Search: A*, Properties and applications pdf R&N Ch. 3
9/11 Application of informed search: Path planning pdf Theta* paper at AAAI-07
9/16 Classical planning 1: Factored states, Description language, Forward and Backward search pdf R&N Ch. 10
9/18 Classical planning 2: Planning graphs, GRAPHPLAN, Relaxed plans, Domain-independent heuristics pdf R&N Ch. 10
9/23 Constraint Satisfaction Problems (CSPs): Definitions and Taxonomy, Backtracking, Arc consistency, Constraint propagation, Pruning pdf R&N Ch. 6
Optimization models and methods
9/25 CSP: Inference-propagation methods (included in the slides of the previous lecture)
Optimization 1: Optimization approaches for CSP, Local search, Hill climbing, WalkSAT
pdf R&N Ch. 4, 6
Paper on WalkSAT (Local search for max-SAT) in DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 26, 1996
9/30 Optimization 2: General classes of optimization problems, Convex problems, Properties of convex optimization pdf
10/2 Optimization 3: Solving optimization problems, Lagrangian multipliers, Gradient-based methods pdf Survey on gradient descent algorithms
10/7 Optimization 4: Integer programming, modeling and solution approaches pdf Branch-and-Bound, book chapter
Probabilistic models for knowledge representation, inference, predictions
10/9 Bayesian Networks 1: Probabilistic models, Joint probability distributions, BN Construction, Global and local semantics, Information propagation pdf R&N Ch. 13, 14
10/14 Bayesian Networks 2: Exact inference, Sampling-based methods for inference pdf R&N Ch. 13, 14
10/16 Markov Processes 1: Random processes, Markov property, Markov Chains pdf R&N Ch. 15
S. Ross, Introduction to probability models, Ch. 4
10/21 Flooding! Class is canceled
10/23 Markov Processes 2: Limiting and Stationary distributions in Markov Chains, State classification, Taxonomy of Markov Processes, Rewards and Policies pdf
10/28 Fall break
10/30 Fall break
11/22 Recitation: Course summary, Bayesian networks, optimization, planning, modeling
11/4 Midterm Exam
(Model-Based) Automated Planning: Uncertain actions, Policies, Rewards
11/6 Markov Decision Processes 1: Utilities, Bellman equations, Policy evaluation pdf R&N Ch. 16, 17
11/11 Markov Decision Processes 2: Value Iteration, Measures of convergence of Value Iteration, Variants of Value Iteration, Policy Iteration pdf R&N Ch. 17
Book extract on variants of Value Iteration
(Model-free) From direct experience to decision policy learning
11/13 Reinforcement Learning 1: Models, challenges, Monte Carlo, Temporal Differences pdf R&N Ch. 21
Sutton and Barto RL book, May'18 Draft
11/18 Reinforcement Learning 2: Active RL, Exploration vs. exploitation, Monte Carlo action learning, SARSA, Q-Learning pdf Diagram summarizing the topics investigated in the RL lectures
Decision-making in multi-agent adversarial scenarios, imperfect information
11/20 Game Theory 1: General Concepts, examples, dominance pdf R&N Sec. 17.5-17.6
Book on Multi-agents systems, Ch. 3-4

Note: this was a shortened lecture, used to review and complete the RL material (slides and summary diagram are included in the 11/18 lecture)
11/22 Recitation: Course summary + MDPs Bellman-Ford algorithm, deterministic dynamic programming, asynchronous, distributed Diagram summarizing course topics + MDPs
11/25 Game Theory 2: Nash equilibrium, mixed strategies, computing Nash equilibria, game formalization and analysis using algebra and calculus pdf
11/27 Game Theory 3: Correlated equilibrium, Stackelberg games, security games pdf
Supervised learning for Classification and Regression problems
12/2 Supervised Learning 1: ML paradigms, canonical SL problem, classification and regression, hypothesis space, loss function, optimization, gradient-based approaches pdf (v2) R&N 18.1, 18.2, 18.4
12/4 Supervised Learning 2: Discriminative vs. generative models, statistical model for linear regression, modeling non-linearities, feature functions, maximum likelihood, overfitting, model selection and validation pdf
12/6 Supervised Learning 3: Regularization in regression and classification, MAP vs. ML solutions, discriminant functions, perceptron rule, loss functions in linear classification, SVMs, logistic regression, model selection techniques pdf Supplementary material on multivariate Gaussian distributions
Summary, Exercises, Q&A
12/9 Recitation
12/11 Recitation
12/15 Final Exam (08:30 - 11:30, Room 2049)

Homework Assignments

Topic Files Due Dates
Homework 1: Rationality, Representation, Uninformed and Informed Search hw1.pdf, Viz.zip, GridWorlds.zip Theta* paper at AAAI-07 Sep 18, 2018
Homework 2: Classical planning hw2.pdf, Intro to PDDL, Wumpus world in PDDL, Code libraries, Testbed files Software documentation Oct 1, 2018
Homework 3: Constraint Satisfaction and Optimization models and methods hw3.pdf, Test instances and example code in PuLP Oct 14, 2018
Homework 4: Probabilistic modeling and Bayesian networks hw4.pdf, C program for instance parsing and direct sampling Oct 25, 2018
Homework 5: Markov chains, modeling and analysis hw5.pdf, Data file for PageRank Nov 11, 2018
Homework 6: Markov decision processes for sequential-decision making hw6.pdf Nov 26, 2018
Homework 7: Reinforcement learning, Game theory hw7.pdf, Paper on security games for border control Dec 8, 2018
(Optional) Mini Homework 8: Supervised Learning for Classification and Regression hw8.pdf, Data file for regression Dec 11, 2018

Homework Policies

  • Homework is due on Autolab by the posted deadline. Assignments submitted past the deadline will incur the use of late days.

  • You have 6 late days in total, but you cannot use more than 2 late days on any single homework; no credit will be given for homework submitted more than 2 days after the due date. Once your 6 late days have been used, each additional day late incurs a 20% penalty.
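One reading of the late-day rules above can be sketched in a few lines of Python; this is an illustration of the policy, not an official grading script:

```python
# Illustration of the late-day policy: up to 2 late days per homework,
# a pool of 6 free late days, and a 20% per-day penalty once the pool
# is exhausted. This is one reading of the rules, not an official script.

def late_score(raw_score, days_late, free_days_left):
    """Return (adjusted_score, free_days_remaining)."""
    if days_late > 2:                         # more than 2 days late: no credit
        return 0.0, free_days_left
    covered = min(days_late, free_days_left)  # days paid from the free pool
    penalized = days_late - covered           # days beyond the pool
    return raw_score * (1 - 0.20 * penalized), free_days_left - covered

print(late_score(100, 2, 6))  # -> (100.0, 4): both days covered by the pool
print(late_score(100, 2, 0))  # pool exhausted: 2 days x 20% off the score
print(late_score(100, 3, 6))  # -> (0.0, 6): no credit past 2 days late
```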

  • You can discuss the exercises with your classmates, but you should write up your own solutions. If you find a solution in any source other than the material provided on the course website or the textbook, you must mention the source. You can work on the programming questions in pairs, but theoretical questions are always submitted individually. Make sure that you include a README file with your Andrew ID and your collaborator's Andrew ID.

  • In general, for both assignments and exams, CMU's directives for academic integrity apply and must be duly followed. Students are required to refer to CMU's general policies on cheating and plagiarism: https://www.cmu.edu/policies/student-and-student-life/academic-integrity.html

Exam dates and policies

The class includes both a midterm and a final exam. The midterm is set for November 4 and the final for December 15.

During exams, students are allowed to consult a one-page cheat sheet (written in any desired format). No other material is allowed, including textbooks, computers/smartphones, and copies of lecture handouts.

Students with special needs or disabilities are encouraged to discuss this with the instructor at the beginning of the semester and to make use of the assistance procedures provided by CMU: https://scotty.qatar.cmu.edu/qword/student-affairs/office-of-health-and-wellness/assistance-for-individuals-with-disabilities

Office Hours

Name Email Hours Location
Gianni Di Caro gdicaro@cmu.edu Thursdays 1:30-2:30pm + Drop in at my office any time ... M 1007
Eduardo Feo-Flushing efeoflus@andrew.cmu.edu Sunday and Tuesday, 2pm-3pm M 1009