15-381 - Fall 2017

Artificial Intelligence:
Representation and Problem Solving

Key Information

Lectures: Mondays + Wednesdays, 3:00pm - 4:20pm, Room 3048

Grading: 35% Final, 15% Midterm, 40% Homework, 10% Participation

Textbook (optional): S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach (3rd edition, 2010)


This course is about the foundations of the theory and practice of Artificial Intelligence. But what does AI mean, what is AI?
A number of different definitions of AI have been given so far. In general, all include the underlying idea of equipping machines (e.g., computer programs, robots) with "tools" to display behaviors that we, as humans, would be inclined to categorize as "intelligent". This is still quite vague, since it requires clarifying, in turn, what an intelligent behavior is.

In this course, this issue is narrowed down to the concept of rationality: the notion of Artificial Intelligence is treated as equivalent to that of Computational Rationality. Given well-defined preferences, a rational agent always selects the actions whose outcomes maximize the expected utility.
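
The action-selection rule above can be sketched in a few lines of Python. This is an illustrative toy, not course code: the two actions and their outcome distributions are hypothetical.

```python
# Illustrative sketch: a rational agent picks the action that maximizes
# expected utility over the possible (stochastic) outcomes of each action.
def expected_utility(action, outcomes):
    """outcomes maps each action to a list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes[action])

def rational_choice(outcomes):
    """Return the action with the highest expected utility."""
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# Hypothetical example: a certain payoff vs. a risky gamble.
outcomes = {
    "stay": [(1.0, 5.0)],                 # EU = 5.0
    "move": [(0.8, 10.0), (0.2, -20.0)],  # EU = 0.8*10 + 0.2*(-20) = 4.0
}
print(rational_choice(outcomes))  # -> stay
```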

The course will provide an overview of modern techniques for rational decision-making under different conditions: the amount and certainty of the available information, and the presence of one or more agents in both cooperative and adversarial scenarios.

More precisely, during the course, fundamental questions about the design of AI systems will be addressed, such as:

  • how to represent knowledge;

  • how to effectively generate provably optimal sequences of actions (i.e., how to make optimal decision plans or define decision policies);

  • how to search among (large sets of) alternatives to find optimal or near-optimal solutions;

  • how to learn to make optimal decisions.
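
As a small taste of the algorithmic content behind these questions, here is a minimal sketch of breadth-first search, one of the uninformed search strategies covered early in the course. The graph, start, and goal below are hypothetical illustrations.

```python
from collections import deque

def bfs(graph, start, goal):
    """Return a path with the fewest edges from start to goal, or None."""
    frontier = deque([[start]])  # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None  # goal unreachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A", "D"))  # -> ['A', 'B', 'D']
```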

Both deterministic and stochastic environments will be considered and treated separately.

Learning techniques will be introduced for dealing with supervised classification and regression tasks, as well as with online reinforcement-based action learning.
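
To make the reinforcement-based action learning concrete, here is a sketch of the tabular Q-learning update rule, one of the control methods the course covers. The two-state toy setup is a hypothetical example, not an assignment.

```python
# Illustrative sketch of one tabular Q-learning (temporal-difference) update:
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
def q_learning_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """Move Q(s,a) toward the bootstrapped target r + gamma * max Q(s', .)."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])

# Toy example: two states, two actions, all Q-values start at zero.
Q = {s: {"left": 0.0, "right": 0.0} for s in (0, 1)}
# Suppose taking "right" in state 0 yields reward 1.0 and lands in state 1:
q_learning_update(Q, 0, "right", 1.0, 1)
print(Q[0]["right"])  # -> 0.5, i.e. 0.5 * (1.0 + 0.9*0.0 - 0.0)
```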

Decision-making in multi-agent systems will also be investigated, considering both two (or more) agents with conflicting goals (known as games) and large numbers of relatively simple, self-organizing agents (swarm intelligence).


There are three primary objectives for the course:

  • To provide a broad survey of AI and of the most popular techniques that are employed for: knowledge representation, problem solving, mathematical optimization, automated planning, multi-step decision-making, supervised and reinforcement learning, adversarial search (games), swarm systems.

  • To develop a thorough understanding of the algorithmic foundations of AI and acquire a strong appreciation of the big-picture aspects of designing fully autonomous intelligent agents.

  • To develop operational know-how about how to build/program AI agents, and how to analyze and improve their performance.

Course Layout

The course will be based on lectures. Students are expected to attend all classes and to actively participate with questions. The instructor will always be available at the end of each lecture for further clarifications and explanations.

For each one of the different topics, the course will present relevant techniques, discuss formal results, and show the application to problems of practical and theoretical interest.

Homework will be assigned (approximately) biweekly, for a total of 9 assignments that will include both written questions and programming tasks. Written questions will involve working through different algorithms, deriving and proving mathematical results, and critically analyzing the material presented in class. Programming assignments will mainly involve writing code in Python to implement and test algorithms.


The only strict prerequisite is 15-122 (Principles of Imperative Programming). However, since the course will cover a number of different topics, students should have prior programming experience (programming assignments will be given in Python), as well as a solid general CS background, calculus, and basic probability theory. Please talk to the instructor if you are unsure whether your background is suitable for the course.


Grading will be assigned based on the following weighting: 40% Homework, 35% Final exam, 15% Midterm exam, 10% Participation. The final exam will include questions about all topics considered in the course, with an emphasis on the topics introduced after the midterm exam.

10% of the final grade will be based on attendance and participation. Moreover, a student who attends only between 65% and 75% of the classes will receive a final grade no higher than B; for any attendance below 65%, the final grade will be no higher than C.

Of course, these restrictions will not apply in the case of major, certified impediments.


There is no required textbook for this class. Students should be able to learn everything from the lecture handouts and homework. However, the Russell and Norvig textbook covers good portions of the course, and during the semester the instructor will point out other books that cover specific parts of the material.


Dates Topic Slides References
8/21 Introduction, Definitions, Road map pdf R&N Ch. 1,2,26,27, Brief AI history, 2016 Stanford AI report
(Model-based) Automated Planning: Full information, Deterministic, Goal-based
8/23 Search problems, Uniformed search: BFS, DFS, UCS, IDS pdf R&N Ch. 3
8/28 Uninformed Search (continued) + Informed Search pdf R&N Ch. 3
8/30 A* properties, Theta*, path planning pdf R&N Ch. 3, Theta* paper at AAAI-07
9/4 Eid al-Adha break
9/6 Eid al-Adha break
9/11 STRIPS planning 1: Factored states, description language, heuristics pdf R&N Ch. 10
9/13 STRIPS planning 2: Planning graph, Relaxed plans, heuristics pdf R&N Ch. 10
Constraint Satisfaction and Optimization models and methods
9/18 STRIPS planning 2a: GRAPHPLAN, backward search; Search in solution sets, Definition of Constraint Satisfaction Problems (CSPs) pdf (planning), pdf (CSP) R&N Ch. 10; R&N Ch. 6
9/20 Solution of Constraint Satisfaction Problems: Backtracking, Arc consistency pdf R&N Ch. 6
9/25 Optimization 1: Convex optimization problems pdf
9/27 Optimization 2: Integer Programming models, Branch-and-bound methods pdf
10/2 Optimization 3: IP solving, Local search algorithms pdf (IP solving), pdf (Local search)
10/4 Optimization 4: Local search, Iterated local search, Modeling with Markov chains pdf
(Model-Based) Representation, Prediction, and Decision-Making under Uncertainty
10/9 Markov processes, Simulated Annealing pdf
10/11 Markov Chains 1, prediction and analysis of random processes pdf
10/16 Midterm Exam
10/18 Markov Chains 2, MC Monte Carlo for sampling according to a probability distribution pdf
10/23 Markov Chains 3, Classification of states and chains pdf S. Ross, Introduction to probability models, Chapter 4
10/25 Ergodic chains, From MCs to MDPs pdf
10/30 Markov Decision Processes 1: Models, Utilities, Bellman equations pdf Mausam and Kolobov, Planning with MDPs, Chapter 2
11/1 Markov Decision Processes 2: Value Iteration and variants, Policy Iteration pdf (slides 39-50 will be explained next lecture) Mausam and Kolobov, Planning with MDPs, Chapter 3
Learning how to make decisions
11/6 Reinforcement Learning 1: Generalities, Taxonomy of Machine Learning approaches RL I pdf, MDP II pdf (slides 39-50)
11/8 Reinforcement Learning 2: Prediction / Policy evaluation with Monte Carlo and Temporal Differences (TD) pdf Sutton and Barto RL book, Nov'17 Draft
11/13 Reinforcement Learning 3: Control with SARSA and Q-Learning pdf
11/15 Supervised Learning 1: Classification and Regression pdf
11/20 Supervised Learning 2: Classification and Regression pdf
Decision-making in multi-agent adversarial scenarios, imperfect information
11/22 Game Theory 1: Concepts, Nash equilibria pdf
11/27 Game Theory 2: Mixed strategies, Correlated equilibria pdf
11/29 Summary + Q&A
12/4 Final Exam (8:30 - 11:30, Room 2049)

Homework Assignments

Topic Files Due Dates
Homework 1: Rationality, Uninformed and Informed Search hw1.pdf, Viz.zip, GridWorlds.zip Sep 13, 2017
Homework 2: STRIPS planning, CSPs hw2.pdf, Graphplan.zip Oct 5, 2017
Homework 3: Modeling and solving optimization problems hw3.pdf, instances.zip Oct 14, 2017
Homework 4: Application and analysis of Markov chains hw4.pdf, hollins.dat Nov 6, 2017
Homework 5: Markov decision processes, Reinforcement learning hw5.pdf Nov 26, 2017
Homework 6: Supervised Learning, Game Theory hw6.pdf, poverty.dat Nov 30, 2017

Homework Policies

  • Homework is due on Autolab by the posted deadline. Assignments submitted past the deadline will consume late days.

  • You have 6 late days in total, but you cannot use more than 2 late days per homework: no credit will be given for homework submitted more than 2 days after the due date. Once your 6 late days have been used, you will receive 20% off for each additional day late.

  • You can discuss the exercises with your classmates, but you should write up your own solutions. If you find a solution in any source other than the material provided on the course website or the textbook, you must cite the source. You can work on the programming questions in pairs, but theoretical questions are always submitted individually. Make sure that you include a README file with your Andrew ID and your collaborator's Andrew ID.

  • In general, for both assignments and exams, CMU's directives for academic integrity apply and must be duly followed.

Exam dates and policies

The class includes both a midterm and a final exam. The midterm is set for October 16, and the final is on December 4.

During exams, students are allowed to consult a one-page cheat sheet (written in any desired format). No other material is allowed, including textbooks, computers/smartphones, and copies of lecture handouts.

Office Hours

Name Email Hours Location
Gianni Di Caro gdicaro@cmu.edu Thursdays 2:30-3:30pm + Drop in at my office any time ... M 1007