10-315 - Fall 2021

Introduction to
Machine Learning




Key Information

Lectures: Sundays, Tuesdays, Thursdays, 11:30am - 12:45pm, Room 1190

Units: 12.0

Grading: 40% Homework, 30% Final exam, 10% Midterm exam, 20% Multiple-choice Quizzes

Prerequisites: (15122) and (21127 or 21128 or 15151) and (21325 or 36217 or 36218 or 36225 or 15359). In general, a solid background in CS, calculus, and probability theory is needed to deal successfully with the course challenges.


Overview

Machine learning is a subfield of computer science with the goal of exploring, studying, and developing learning systems, methods, and algorithms that can improve their performance by learning from data. The course is designed to give undergraduate students a one-semester-long introduction to the main principles, algorithms, and applications of machine learning.


After completing the course, students will be able to:

  • select and apply an appropriate supervised learning algorithm for classification problems (e.g., naive Bayes, perceptron, support vector machine, logistic regression);

  • select and apply an appropriate supervised learning algorithm for regression problems (e.g., linear regression, ridge regression);
  • recognize different types of unsupervised learning problems, and select and apply appropriate algorithms (e.g., density estimation, clustering, linear and nonlinear dimensionality reduction);

  • work with probabilities (Bayes rule, conditioning, expectations, independence), linear algebra (vector and matrix operations, eigenvectors, SVD), and calculus (gradients, Jacobians) to derive machine learning methods such as linear regression, naive Bayes, and principal components analysis;

  • understand machine learning principles such as model selection, overfitting, and underfitting, and techniques such as cross-validation and regularization;

  • implement machine learning algorithms such as logistic regression via stochastic gradient descent, linear regression, the perceptron, SVMs, boosting, and k-means clustering;

  • run appropriate supervised and unsupervised learning algorithms on real and synthetic data sets and interpret the results.


The course is organized as follows:

  • The course will be based on lectures given on Sundays, Tuesdays, and Thursdays. Whenever possible, Thursdays will be used as recitation classes to revise and/or expand concepts introduced in the lectures, work out example cases, and cover aspects that may not be in the students' background.
    Students are expected to attend all classes and to actively participate with questions.

  • For each one of the different topics, the course will present relevant techniques, discuss formal results, and show the application to problems of practical and theoretical interest.

  • Homework will include both questions to be answered and programming assignments. Written questions will involve working through different algorithms, deriving and proving mathematical results, and critically analyzing the material presented in class. Programming assignments will mainly involve writing code to implement and test algorithms in relevant scenarios.

  • Quizzes, in the form of multiple-choice questions, will be used to check student progress and to encourage continuous review of the lecture material.

Prerequisites

Having successfully passed the following courses is necessary: (15122) and (21127 or 21128 or 15151) and (21325 or 36217 or 36218 or 36225 or 15359).

In general, familiarity with Python programming and a solid background in general CS, calculus, and probability theory are needed to deal successfully with the course challenges. Some basic concepts in CS, calculus, and probability will be briefly revised (but not re-explained from scratch).

Talk to the teacher if you are unsure whether your background is suitable or not for the course.

Grading

Course grades will be assigned based on the following weighting: 40% Homework, 30% Final exam, 10% Midterm exam, 20% Multiple-choice Quizzes. There will be about five homework assignments. The final exam will include questions about all the topics considered in the course, with an emphasis on the topics introduced after the midterm exam. Quizzes will consist of multiple-choice questions aimed at keeping up with the topics of the course in between homework assignments.

Note that grades will NOT be curved. The mapping between scores and letter grades will roughly follow the scheme below. However, final course scores will be converted to letter grades based on grade boundaries that will be precisely determined at the end of the semester, accounting for a number of aspects such as participation in lecture and recitation, exam performance, and overall grade trends. Note that precise grade cutoffs will not be discussed at any point during or after the semester.

  • A: score ≥ 90%
  • B: 80% ≤ score < 90%
  • C: 70% ≤ score < 80%
  • D: 60% ≤ score < 70%
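As a quick illustration of how the weighting combines the component scores into a final course score, here is a minimal sketch (not official grading code; the component values are made up):

```python
# Hypothetical example: combining component scores with the course weights.
weights = {"homework": 0.40, "final": 0.30, "midterm": 0.10, "quizzes": 0.20}
scores  = {"homework": 0.92, "final": 0.85, "midterm": 0.78, "quizzes": 0.88}  # made-up values

final_score = sum(weights[k] * scores[k] for k in weights)
print(f"Weighted course score: {final_score:.1%}")  # 0.4*0.92 + 0.3*0.85 + 0.1*0.78 + 0.2*0.88 = 87.7%
```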


Textbooks

In addition to the lecture handouts (which will be made available after each lecture), the instructor will provide supplementary material during the course to cover specific parts of it.

A number of (optional) textbooks can be consulted to support understanding of the different topics (the relevant chapters will be pointed out by the teacher):

  • Machine Learning, Tom Mitchell (in the library)
  • Pattern Recognition and Machine Learning, Christopher Bishop, available online
  • Machine Learning: A Probabilistic Perspective, Kevin P. Murphy, available online
  • A Course in Machine Learning, Hal Daume', available online
  • Pattern Classification, Richard Duda, Peter Hart, David Stork, 2nd ed., partially online (in the library)
  • Deep Learning, Ian Goodfellow, Yoshua Bengio, Aaron Courville, available online
  • Kernel Methods for Pattern Analysis, John Shawe-Taylor, Nello Cristianini, available online

Schedule



Dates, Topics, Slides, Useful References, HW
8/22 Introduction, General overview of ML: Basic concepts; taxonomy of learning problems; workflow of ML approaches; interpretation views of ML problems; course road map; logistics; practical recommendations. pdf
8/24 ML design, SL workflow, Loss functions: Workflow of a supervised learning problem scenario; structure and challenges of a typical SL problem: features, hypothesis class, loss function, optimization; design choices and inductive biases; loss function to score the effectiveness of learning; loss functions for classification and regression; generalization. pdf
  • Murphy Chapter 1.4
  • Bishop Chapter 1
8/26 Empirical risk minimization, Overfitting, Generalization error: Review of concepts; Model selection and overfitting; empirical vs. generalization errors; estimation of generalization error and model selection; different ways of estimating generalization error.

Recitation: Review of basic concepts: calculus, linear algebra, probability theory.
pdf QNA1 out

8/29 Estimating generalization error, Canonical SL problem, Instance-based methods, k-Nearest Neighbors classifier: Recap on generalization error; estimating generalization error; minimization of empirical risk; use of training and validation sets; concepts of model selection; instance-based and non-parametric methods; basic concepts of k-NN; k-NN classifier vs. Optimal classifier; decision regions and decision boundaries; 1-NN decision boundary and Voronoi tessellation of feature space; small vs. large K; k-NN regression; non-parametric vs. parametric methods. pdf
  • Murphy Chapter 1.4.2
  • Daume' Chapter 3
  • Mitchell Chapter 7
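A minimal k-NN classifier sketch in NumPy (illustrative only, not course-provided code; the toy data and the value of k are made up):

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Predict the label of x_query by majority vote among the k nearest training points."""
    dists = np.linalg.norm(X_train - x_query, axis=1)   # Euclidean distances to all training points
    nearest = np.argsort(dists)[:k]                      # indices of the k closest points
    return np.bincount(y_train[nearest]).argmax()        # majority class label

# Toy 2-D data: two classes centered at (0,0) and (3,3)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(knn_predict(X, y, np.array([2.5, 2.5]), k=5))      # expected: 1
```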
8/31 Decision Trees I, SL based on the Divide-and-Conquer Model: Learning by asking questions; structure of decision trees; expressiveness of DTs; hypothesis space and NP-hardness for finding simplest consistent hypothesis; recursive dataset decomposition, divide-and-conquer; axes-parallel decision boundaries; greedy top-down heuristics. pdf
  • Mitchell Chapters 1, 2, 6.1 - 6.3
  • Bishop Chapters 1, 2
9/2 Decision trees II, Model selection and Cross-Validation: Greedy top-down heuristics for decision trees: ID3, C4.5; entropy and information gain; purity of a labeled set; maximal gain for choosing the next attribute; discrete vs. continuous features; overfitting issues and countermeasures; decision tree regression. Estimating generalization error and model selection; hold-out method; cross-validation methods: k-fold CV, leave-one-out CV, random subsampling; design issues in CV; model selection; model selection using CV. PDF of lecture given in class QNA1 due (Sat 9/4), HW1 out (Fri 9/3)
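A small sketch of the entropy and information-gain computations used by ID3-style heuristics (illustrative only; the toy labels and candidate split are made up):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (in bits) of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(parent, children):
    """Entropy reduction obtained by splitting `parent` into the `children` subsets."""
    n = len(parent)
    weighted = sum(len(c) / n * entropy(c) for c in children)
    return entropy(parent) - weighted

y = np.array([1, 1, 1, 0, 0, 0, 1, 0])
split = [y[:4], y[4:]]                 # a candidate attribute splits the set into two subsets
print(information_gain(y, split))      # higher gain = purer children
```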

9/5 Bayes Optimal Classifier, Decision Boundaries: probabilistic view, elements of decision theory, first review of probability concepts, Bayes rule, decision boundaries and classification errors, generative and discriminative modeling, introduction to probability distribution estimation pdf
9/7 Estimating Probabilities 1: Probability estimation and Bayes classifiers; importance and challenges of estimating probabilities from data; frequentist vs. Bayesian approach to modeling probabilities; definition and properties of MLE, MAP, and Full Bayes approaches for parameter estimation; priors and conjugate probability distributions; examples with Bernoulli data and Beta priors. pdf

Check your knowledge
9/9 Estimating Probabilities 2: Review of concepts, overview of conjugate priors for continuous and discrete distributions; practical examples; Bernoulli-Beta, Binomial-Beta, Multinomial-Dirichlet, Gaussian-Gaussian; MLE vs. MAP vs. Full Bayes, pros and cons. pdf
  • Bishop Chapters 2, 4.2, 4.3
  • Murphy Chapters 3, 5
  • Duda, Hart, Stork, Chapter 3.3, 3.4, 3.5
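A tiny worked example contrasting MLE, MAP, and the Full Bayes point prediction for a Bernoulli parameter with a Beta prior (a sketch with made-up counts and prior hyperparameters):

```python
# Bernoulli data: n1 heads out of n tosses; Beta(alpha, beta) prior on the head probability.
n1, n = 3, 10                 # made-up observed counts
alpha, beta = 5, 5            # made-up prior hyperparameters (prior pseudo-counts)

theta_mle = n1 / n                                        # maximum likelihood estimate
theta_map = (n1 + alpha - 1) / (n + alpha + beta - 2)     # mode of the Beta posterior
theta_bayes = (n1 + alpha) / (n + alpha + beta)           # posterior mean (full Bayes prediction)

print(theta_mle, theta_map, theta_bayes)   # 0.3, 0.389, 0.4 -- the prior pulls estimates toward 0.5
```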

9/12 Break, no classes
9/14 Linear algebra and Multivariate Gaussians: Review of linear algebra; matrix-vector notation; quadratic forms; multivariate Gaussians; isocontours; notions related to covariance matrices; making predictions using estimated probabilities and MLE/MAP/Bayes. pdf
9/16 Prediction and Classification using Estimated Probabilities, Naive Bayes Classifier: Classification using estimated probabilities and MLE/MAP/Bayes; quadratic and linear decision boundaries using Gaussians; complexity challenges and feature dependencies; Naive Bayes models; discrete and continuous features, MLE and MAP approaches; simplification rules for MAP estimation using smoothing parameters; case study for discrete features: text classification; case study for continuous features: image classification. pdf HW1 due (Sat 9/18), QNA2 out
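A minimal Naive Bayes sketch for binary features with Laplace/MAP-style smoothing (illustrative only; the smoothing constant and toy data are assumptions):

```python
import numpy as np

def train_bernoulli_nb(X, y, smoothing=1.0):
    """Estimate class priors and per-class feature probabilities with add-one smoothing."""
    classes = np.unique(y)
    priors = np.array([(y == c).mean() for c in classes])
    # P(x_j = 1 | class c), smoothed so that no probability is exactly 0 or 1
    cond = np.array([(X[y == c].sum(axis=0) + smoothing) /
                     ((y == c).sum() + 2 * smoothing) for c in classes])
    return classes, priors, cond

def predict_nb(x, classes, priors, cond):
    """Pick the class maximizing log P(c) + sum_j log P(x_j | c)."""
    log_post = np.log(priors) + (x * np.log(cond) + (1 - x) * np.log(1 - cond)).sum(axis=1)
    return classes[np.argmax(log_post)]

X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]])   # toy binary features
y = np.array([0, 0, 1, 1])
classes, priors, cond = train_bernoulli_nb(X, y)
print(predict_nb(np.array([1, 1, 0]), classes, priors, cond))  # expected: 0
```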

9/19 Linear Models: From Generative to Discriminative Classifiers; Linear models for classification (and regression); properties of linear models; geometry of linear models; use for classification; score; functional margin; finding the best linear classifier; loss functions; log-logistic loss. pdf
  • Bishop Chapter 4.1
9/21 Logistic Regression (LR) 1: Probabilistic discriminative models; logistic regression as linear probabilistic classifier; decision boundaries; M(C)LE and M(C)AP models for probabilistic parameter estimation for LR; optimization problem; concave and convex functions; local and global optima; introduction to partial derivatives and gradient vectors. pdf
9/23 Gradient-based optimization, Logistic Regression 2: Recap on concave and convex functions, local and global optima; partial derivatives and their calculation; gradient vectors; geometric properties; general framework for iterative optimization; gradient descent / ascent; design choices: step size, convergence check, starting point; zig-zag behavior of gradients and function conditioning properties; sum functions and stochastic approximations of gradients; batch, incremental, and mini-batch GD; properties of stochastic GD; gradient ascent for LR-MLE; gradient ascent for LR-MAP; MCAP case for gradient ascent with Gaussian priors; logistic regression with more than two classes, softmax function; decision boundaries for different classifiers; linear vs. non-linear boundaries; number of parameters in LR vs. Naive Bayes; asymptotic results for LR vs. NB; overall comparison between LR and NB. pdf Notebook QNA2 due, HW2 out
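A compact sketch of binary logistic regression trained with stochastic gradient ascent on the conditional log-likelihood (illustrative only; the learning rate, number of epochs, and toy data are made up):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_regression_sgd(X, y, lr=0.1, epochs=100, seed=0):
    """Stochastic gradient ascent on the conditional log-likelihood; y in {0, 1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):                 # visit examples in random order
            grad_i = (y[i] - sigmoid(w @ X[i])) * X[i]    # per-example gradient of the log-likelihood
            w += lr * grad_i
    return w

# Toy linearly separable data with a bias feature appended
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
X = np.hstack([X, np.ones((60, 1))])
y = np.array([0] * 30 + [1] * 30)
w = logistic_regression_sgd(X, y)
print(np.mean((sigmoid(X @ w) > 0.5) == y))   # training accuracy (should be high on this easy toy set)
```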

9/26 Support Vector Machines (SVM) 1: Linear classifiers (deterministic); review of score and functional margin; geometric margin; max-margin classifiers; linearly separable case and hard-margin SVM optimization problem; constrained optimization problem for max-margin separators; support vectors and relationship with the weight vector; non-linearly separable case and use of slack variables for the elastic (soft-margin) problem formulation; margin and non-margin support vectors; penalty / tradeoff parameter; SVMs and hinge loss. pdf
9/28 Support Vector Machines 2: Solution of the SVM optimization problem; relaxations; Lagrangian function and dual problem; Lagrange multipliers and their interpretation; solution of the dual for the hard-margin case; functional relations between multipliers and SVM parameters; solving the non-linearly separable case (soft-margin). pdf Solving SVM optimization problems
9/30 Support Vector Machines 3: Hinge loss and soft-margin SVM; regularized hinge loss; properties of linear classifiers.
Review for midterm
pdf
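A minimal sketch of training a linear soft-margin SVM by subgradient descent on the regularized hinge loss (one of several ways to solve the problem; the step size, regularization constant, and toy data are assumptions):

```python
import numpy as np

def linear_svm_subgradient(X, y, lam=0.01, lr=0.01, epochs=200):
    """Minimize (lam/2)*||w||^2 + (1/n)*sum_i max(0, 1 - y_i * (w @ x_i)), with y in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        active = y * (X @ w) < 1                                 # examples with margin < 1 (hinge active)
        subgrad = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        w -= lr * subgrad
    return w

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
X = np.hstack([X, np.ones((60, 1))])                             # bias feature
y = np.array([-1] * 30 + [1] * 30)
w = linear_svm_subgradient(X, y)
print(np.mean(np.sign(X @ w) == y))                              # training accuracy on the toy set
```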

10/3 Midterm Exam
10/5 Linear models and Gradient methods, Feature transformation: Practice of gradient methods; stochastic gradient descent for loss functions; effect of step size and batch size; training linear classifiers, hinge loss; SVMs at work; support vectors; decision boundaries; comparison with other classifiers; linear models and feature transformations; meaning of high order features; polynomial features; feature basis functions; good and bad properties of feature transformations; computational issues. pdf Notebook
10/7 Kernel Methods, SVM Kernelization: Dual SVM problem formulation for the non-linearly separable case (soft-margin) and dot products; dot products and inner products; generalities on Hilbert spaces; kernel functions and implicit feature map definition; kernels and similarity measures; Hilbert spaces and inner products; Mercer's conditions for kernels; kernel matrix; kernelization and modularity; kernel trick, kernelizing algorithms; examples of kernel functions; RBF kernel and infinite dimensionality; SVM kernelization; kernels in logistic regression. pdf HW2 due, QN3 out
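A small sketch of building an RBF (Gaussian) kernel matrix and numerically checking the Mercer-style properties, symmetry and positive semi-definiteness (illustrative only; the bandwidth and toy data are made up):

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=0.5):
    """K[i, j] = exp(-gamma * ||x_i - x_j||^2), the Gaussian/RBF kernel."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
K = rbf_kernel_matrix(X)

print(np.allclose(K, K.T))                        # kernel matrix is symmetric
print(np.all(np.linalg.eigvalsh(K) >= -1e-10))    # and positive semi-definite (up to numerical error)
```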

10/10 Fall break
10/12 Fall break
10/14 Fall break

10/17 Regression Models, Linear Regression 1: Regression problems, examples and taxonomy; empirical predictor; hypothesis class and loss functions; linear regression with squared losses: Ordinary Least Squares (OLS); problem formulation and solution approaches; predictions as a weighted linear combination of labels; linear regression for non-linear data; feature spaces; basis functions as features; solution using feature functions; examples with polynomial regression. pdf
10/19 Linear Regression 2: Issues related to solving OLS: matrix inversion, computations, numerical instabilities; normal equations vs. SGD vs. algebraic methods; controlling model complexity (and avoiding singularities) using regularization. pdf
10/21 Linear Regression 3: Effects of different regularization approaches; Ridge regression as constrained optimization, shrinking of weights; Lasso regression as constrained optimization, shrinking and zeroing of weights; comparison among Lp-norm regularizations; kernelization of linear regression. pdf QN3 due, HW3 out
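A minimal sketch of ordinary least squares vs. ridge regression via the closed-form (normal-equation) solutions (illustrative only; the regularization strength and toy data are assumptions):

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares: solve (X^T X) w = X^T y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge_fit(X, y, lam=1.0):
    """Ridge regression: solve (X^T X + lam*I) w = X^T y; lam > 0 also removes singularity issues."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
print(ols_fit(X, y))          # close to the true weights
print(ridge_fit(X, y, 5.0))   # weights shrunk toward zero by the penalty
```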

10/24 Kernelized Regression: Recap on kernels, inner products; dual form of ridge regression using matrix inversion lemma; complexity of primal and dual forms; derivation and solution of the dual problem; dual variables as Lagrange multipliers; learned weights as weighted average of training examples; dual solution and feature maps; kernelization of the dual; kernels for learning and predictions; complexity of solutions using explicit and kernelized feature maps; concepts about Support Vector Regression (optional). pdf
  • Bishop Chapter 3.3, 3.2, 2.5, 6.2, 6.3.1
10/26 (extra) Probabilistic regression models: Statistical models of linear regression; discriminative modeling of the conditional distribution of the outputs; white Gaussian noise to explain variations; maximization of log-likelihood vs. solution of OLS; M(C)LE as unregularized least squares; use of priors on parameters; M(C)AP estimate as regularized LS; Gaussian prior and Ridge regression; Laplace prior and Lasso regression. pdf
  • Bishop Chapter 3
10/26 Non-parametric / Kernel Regression: Closed-form solutions for prediction from linear models, weighted linear combination of labels; smoothing kernels; examples with Gaussian and other basis functions; localization and weighted averages of observations; bias-variance tradeoff; kernel regression as non-parametric regression; Nadaraya-Watson estimator; examples of kernels; role and impact of bandwidth; k-NN as another non-parametric estimator; kernel regression and least squares formulations. pdf
  • Bishop Chapter 3.3, 3.2, 2.5, 6.2, 6.3.1
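A short sketch of the Nadaraya-Watson kernel-regression estimator with a Gaussian smoothing kernel (illustrative only; the bandwidth and toy data are made up):

```python
import numpy as np

def nadaraya_watson(x_query, X, y, bandwidth=0.3):
    """Prediction = locally weighted average of labels, with weights from a Gaussian kernel."""
    weights = np.exp(-0.5 * ((X - x_query) / bandwidth) ** 2)
    return (weights * y).sum() / weights.sum()

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 2 * np.pi, 100))          # 1-D inputs
y = np.sin(X) + 0.2 * rng.normal(size=100)           # noisy sine labels
print(nadaraya_watson(np.pi / 2, X, y))              # should be close to sin(pi/2) = 1
```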
10/27 Ensemble methods, Bagging, Boosting, Random Forests: General ideas behind combining models; voting/averaging vs. stacking models; bagging and boosting as forms of combining different experts; bagging: construction of the datasets by bootstrapping, properties of the base model, variance reduction goals, aggregation by averaging; random forests as bagging with randomization of the features of each model; boosting: sequential generation of the weighted datasets, base model as a weak learner, goals of combining multiple weak learners, how to compute voting weights in AdaBoost; decision stumps as weak classifiers; analysis and properties of AdaBoost; robustness to overfitting. pdf
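A compact sketch of the bookkeeping in one AdaBoost round: the weighted error, the voting weight alpha, and the updated example weights, given a weak learner's predictions (illustrative only; the predictions shown are made up):

```python
import numpy as np

def adaboost_round(sample_weights, y_true, y_pred):
    """One AdaBoost round with labels/predictions in {-1, +1}: returns (alpha, new_weights)."""
    eps = np.sum(sample_weights * (y_pred != y_true))           # weighted error of the weak learner
    alpha = 0.5 * np.log((1 - eps) / eps)                       # voting weight of this weak learner
    new_w = sample_weights * np.exp(-alpha * y_true * y_pred)   # up-weight mistakes, down-weight hits
    return alpha, new_w / new_w.sum()                           # renormalize to a distribution

y_true = np.array([1, 1, -1, -1, 1, -1])
y_pred = np.array([1, -1, -1, -1, 1, -1])                       # made-up weak-learner output (one mistake)
w0 = np.full(6, 1 / 6)
alpha, w1 = adaboost_round(w0, y_true, y_pred)
print(alpha, w1)                                                # the misclassified example gets more weight
```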
10/28 Neural Networks 1: Linear units and perceptron; perceptron algorithm and properties; from perceptrons to artificial neural networks, biological analogy; structure of a unit; multi-layered feed-forward architectures (MLP); recurrent network models; sigmoid units; other activation functions; hidden layers and hierarchical feature learning and propagation; matrices and network parameters; basic overview of properties, design choices, concepts about overfitting and complexity. pdf

10/31 Neural networks 2: NN as composite functions; functional form of a NN; visualization of the output surface; loss minimization problem in the weight space; non-convex optimization landscape for the error surface; stochastic and batch gradient descent; backpropagation and chain rule; backpropagation for a logistic unit; backpropagation for a network of logistic units; forward and backward passes in the general case; properties and issues of backpropagation; design choices (momentum, learning rate, epochs); weight initialization. pdf
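A small sketch of the forward and backward passes for a one-hidden-layer network of logistic (sigmoid) units trained with squared loss by batch gradient descent (illustrative only; the architecture, learning rate, and toy data are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # XOR-like toy target

W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)   # hidden -> output weights
lr = 1.0

def forward(X):
    h = sigmoid(X @ W1 + b1)           # hidden activations
    return h, sigmoid(h @ W2 + b2)     # network output

loss0 = np.mean((forward(X)[1] - y) ** 2)
for _ in range(5000):
    h, out = forward(X)
    # Backward pass (chain rule) for the mean squared loss
    delta_out = (out - y) * out * (1 - out)            # error signal at the output unit
    delta_h = (delta_out @ W2.T) * h * (1 - h)         # error signal propagated to the hidden layer
    W2 -= lr * h.T @ delta_out / len(X);  b2 -= lr * delta_out.mean(axis=0)
    W1 -= lr * X.T @ delta_h / len(X);    b1 -= lr * delta_h.mean(axis=0)

print(loss0, np.mean((forward(X)[1] - y) ** 2))        # the training loss should decrease
```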
11/2 Neural Networks 3: Overfitting and generalization issues; approaches to regularization; overview of model selection and validation approaches using NN; design choices; cross-entropy loss; softmax activation layer; number of trainable parameters; SGD and epochs; issues with the choice of the activation function, sigmoid/tanh units and vanishing gradients; limitations of fully connected MLPs; general ideas about exploiting structure and locality in input data in convolutional neural networks; receptive fields; convolutional filters and feature maps; weight sharing; invariant properties of features (to be continued). pdf
11/4 Neural Networks 4: Convolutional filters and feature maps; weight sharing; invariant properties of features; pooling layers for subsampling; incremental and hierarchical feature extraction by convolutional and pooling layers; output layer and softmax function; examples of CNN architectures; concepts and implementation of autoencoders for dimensionality reduction, compression, denoising; general ideas about transfer learning and generative networks. pdf

11/7 Unsupervised Learning - Dimensionality Reduction (Principal Component Analysis, Autoencoders): Overview of unsupervised learning tasks; large feature spaces and curse of dimensionality; dimensionality reduction by feature selection and latent features; simple general model for dimensionality reduction / compression; subspace spanned by a vector basis; key ideas behind principal component analysis (PCA); representation of data; variance captured by projections; definition of minimum variance directions; PCA algorithm; examples; limitations of PCA, Kernel PCA; dimensionality reduction using Autoencoders (recap from previous lecture). pdf HW3 due, HW4 out
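A minimal sketch of PCA via the SVD of the centered data matrix, keeping the top-k principal components (illustrative only; k and the toy data are assumptions):

```python
import numpy as np

def pca(X, k=2):
    """Return the top-k principal directions, the projected data, and the captured variance fraction."""
    X_centered = X - X.mean(axis=0)                  # PCA works on mean-centered data
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:k]                              # top-k directions of maximal captured variance
    Z = X_centered @ components.T                    # low-dimensional representation
    explained = (S[:k] ** 2).sum() / (S ** 2).sum()  # fraction of total variance captured
    return components, Z, explained

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))                   # data that truly lives in 2 dimensions
X = latent @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(100, 5))
components, Z, explained = pca(X, k=2)
print(Z.shape, round(explained, 3))                  # (100, 2) and close to 1.0
```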
11/9 Unsupervised learning - Data Clustering: Characterization of clustering tasks; types of clustering; (flat) K-means clustering problem; role of centroids, cluster assignments, Voronoi diagrams; (naive) K-means algorithm, examples; computational complexity; convergence and local minima; K-means loss function; alternating optimization (expectation-maximization); assumptions and limitations of K-means; illustration of failing cases; kernel K-means; soft clustering; relationship to vector quantization and use of clustering for lossy compression; hierarchical clustering, linkage methods, assumptions, computational complexity. pdf
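A small sketch of the (naive) K-means alternating optimization: assign points to the nearest centroid, then recompute centroids (illustrative only; K, the iteration count, and toy data are made up):

```python
import numpy as np

def kmeans(X, k=3, iters=20, seed=0):
    """Naive K-means: alternate cluster assignment and centroid update steps."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]     # initialize with random data points
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                                           # assignment step
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])   # centroid update step
    return labels, centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in [(0, 0), (3, 0), (0, 3)]])
labels, centroids = kmeans(X, k=3)
print(np.round(centroids, 2))          # centroids should land near (0,0), (3,0), (0,3)
```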
11/11 Recitation - Dimensionality reduction methods, Clustering:

11/14 Unsupervised learning - Probabilistic clustering, Latent variable models, Mixture models, Expectation-Maximization 1: Probabilistic clustering and limitations of hard partitioning methods; mixture models and density estimation; modeling with latent variables; Gaussian Mixture Models (GMMs); MLE for parameter estimation in GMMs; GMMs solutions with complete data, form of decision boundaries; relationships between K-Means solutions and GMMs solutions; from complete data to latent data; MLE for latent data and parameter estimation in GMMs, problem formulation. pdf
  • Bishop Chapters 9.2-9.4
11/16 Unsupervised learning - Probabilistic clustering, Latent variable models, Mixture models, Expectation-Maximization 2: MLE for latent data and parameter estimation in GMMs; concepts and properties of Expectation-Maximization (EM) as iterative alternating optimization; EM for GMMs and probabilistic clustering; general form of EM for likelihood function optimization in latent variable models; Q function as lower bound of likelihood; formalism and concepts behind the EM approach; properties and limitations. pdf
  • Bishop Chapters 9.2-9.4
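A compact sketch of EM for a one-dimensional two-component Gaussian mixture: the E-step computes responsibilities, the M-step re-estimates means, variances, and mixing weights (illustrative only; the initialization and toy data are assumptions):

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def em_gmm_1d(x, iters=50):
    """EM for a 2-component 1-D Gaussian mixture model."""
    pi, mu, var = np.array([0.5, 0.5]), np.array([x.min(), x.max()]), np.array([1.0, 1.0])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = np.stack([pi[k] * gaussian_pdf(x, mu[k], var[k]) for k in range(2)], axis=1)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted MLE updates of the parameters
        Nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / Nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
        pi = Nk / len(x)
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(2, 1.0, 150)])
print(em_gmm_1d(x))        # mixing weights near 0.5, means near -2 and 2
```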
11/18 Nonparametric Density Estimation: Density estimation problem; parametric vs. non-parametric approaches; histogram density estimation; role of bin width; bias-variance tradeoff; general form of the local approximator; fixing the width: kernel methods; Parzen windows; smooth kernels; finite vs. infinite support; fixing the number of points: k-NN methods; comparison between the approaches; role of the bandwidth and bias-variance tradeoff. pdf
  • Bishop Chapter 2.5
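A tiny sketch of a Parzen-window (kernel) density estimate at a query point, using a Gaussian smoothing kernel (illustrative only; the bandwidth and sample are made up):

```python
import numpy as np

def kde(x_query, data, bandwidth=0.4):
    """Parzen-window / kernel density estimate with a Gaussian smoothing kernel."""
    u = (x_query - data) / bandwidth
    return np.mean(np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)) / bandwidth

rng = np.random.default_rng(0)
data = rng.normal(0, 1, 500)                 # samples from a standard normal
print(kde(0.0, data))                        # should be close to the true density 1/sqrt(2*pi) ~ 0.399
```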

11/21 Learning Theory I: Need for bounds on generalization errors; PAC model bounds; sample complexity; consistent but bad hypotheses; derivation of the PAC Haussler bound; use of a PAC bound; limitation of Haussler's bound; Hoeffding's bound for a hypothesis which is not consistent; PAC bound and Bias-Variance tradeoff; computing the sample complexity; sample complexity for the case of decision trees; DT of fixed width vs. number of leaves; sample complexity and number of points that allow consistent classification. pdf HW4 due, QNA4 out
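A worked use of the Haussler PAC bound for a finite hypothesis class and a consistent learner, m >= (1/epsilon) * (ln|H| + ln(1/delta)) (a sketch; the numbers plugged in are made up):

```python
import math

def pac_sample_complexity(H_size, epsilon, delta):
    """Number of examples sufficient so that, with probability >= 1 - delta, any consistent
    hypothesis from a finite class H has true error <= epsilon (Haussler bound)."""
    return math.ceil((math.log(H_size) + math.log(1 / delta)) / epsilon)

# Example: |H| = 3**10 for conjunctions over 10 boolean variables (included, negated, or absent)
print(pac_sample_complexity(3 ** 10, epsilon=0.1, delta=0.05))   # ~140 examples
```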
11/23 Learning Theory II: PAC bounds on continuous hypothesis spaces; set shattering; VC dimension; VC dimension for linear models, decision stumps, axis-aligned rectangles, circles, ellipses; generalization error bound and VC dimension; tightness of the bound; bias-variance and VC-dimension; limitations of the VC dimension. pdf
11/25 Learning Theory III, General review

11/28 Overview of ML scenarios, Q&A: ALL PDFs QN4 due (Nov 29)
12/4 Final Exam

Homework Assignments

Topic Files Due Dates
Homework 1: k-NN, Model selection, Decision trees, Bayes optimal classifier, MLE/MAP/Bayes, Naive Bayes - Sep 18
Homework 2: Logistic regression, Decision boundaries, Gradient methods, Support Vector Machines, Kernelization - Oct 7
Homework 3: Linear and nonlinear regression models, Ensemble models, Neural networks - Nov 7
Homework 4: Deep networks, Unsupervised Learning (Dimensionality reduction, Clustering, Mixture models, Non-parametric density estimation) - Nov 21


Homework Policies

  • Homework is due by the posted deadline. Assignments submitted past the deadline will incur the use of late days.

  • You have 6 late days in total, but cannot use more than 2 late days per homework or quiz. No credit will be given for an assignment submitted more than 2 days after the due date. After your 6 late days have been used you will receive 20% off for each additional day late.

  • You can discuss the exercises with your classmates, but you should write up your own solutions, both for the theory and programming questions.

  • Using any external sources of code, algorithms, or complete solutions in any way requires approval from the instructor before submitting the work. For example, you must get instructor approval before using an algorithm you found online for implementing a function in a programming assignment.

  • Violations of the above policies will be reported as an academic integrity violation. In general, for both assignments and exams, CMU's directives for academic integrity apply and must be duly followed. Information about academic integrity at CMU may be found at https://www.cmu.edu/academic-integrity. Please contact the instructor if you ever have any questions regarding academic integrity or these collaboration policies.

Exam dates and policies

The class includes both a midterm and a final exam. Both exams will include theory and pseudo-programming questions. During exams students are only allowed to consult a 1-page cheat sheet (written in any desired format). No other material is allowed, including textbooks, computers/smartphones, or copies of lecture handouts.

The midterm exam is set for October 3.

The final exam is set for December 4.

Office Hours

Name Email Hours Location
Gianni Di Caro gdicaro@cmu.edu Thursdays 4:15pm-5:30pm + pass by my office at any time ... M 1007
Eduardo Feo-Flushing efeoflus@andrew.cmu.edu TBD M 1004