E0 370: Statistical Learning Theory

August - December 2013

Department of Computer Science & Automation
Indian Institute of Science


Course Information

Instructor: Prof. Shivani Agarwal (shivani@csa)
Meeting times/venue: Tu-Th 11:30am-1:00pm, CSA 252 (Multimedia Classroom)
First class meeting: Tue Aug 6

Course Description

This is an advanced course on learning theory, suitable for PhD students working in learning theory or related areas (e.g., information theory, game theory, computational complexity theory) or second-year Masters students doing a machine-learning-related project that involves learning-theoretic concepts. The course will consist broadly of three parts. Compared to the 2011 offering, it will contain approximately 30% new material, including greater emphasis on recent results on the statistical consistency of surrogate risk minimization algorithms (for both binary and various types of multiclass learning problems), the role of various types of loss functions and calibration, and multi-armed bandits.

References: The course will not follow any single textbook; lecture notes will be made available online, and pointers to relevant literature will be provided.

Prerequisites: A strong foundation in probability and statistics, and a previous course in machine learning (E0 270 or similar). Some background in linear algebra and optimization will be helpful.

Assessment (approximate allocations):
50% - project
24% - 3 reading assignments
16% - scribing of lecture notes
10% - participation



Assignments

Template for assignment submissions (change dates, etc.): [hw1-template.tex] [hw1-template.pdf]

Guidelines: Your writeups on the papers below should focus on the following: problem formulation, main contribution(s) compared to previous work, main result(s), main proof ideas/techniques, and possible shortcomings/extensions/open questions. You are strongly encouraged to use the notation used in class; this will also help convey your understanding of the material. You need not understand the full details of all the proofs, but you should be able to understand and communicate the main ideas/techniques. You may also wish to point out any typos/errors you find in the papers. You are welcome to discuss the papers with each other, but the writeups should be your own. Please remember to acknowledge discussions with any colleagues in your writeup.

Assignment 1

Read the following two papers and write a brief 2-page writeup on one paper of your choice (due before class on Tue Sep 10):
  1. Sharp Generalization Error Bounds for Randomly-projected Classifiers.
    Robert Durrant, Ata Kabán.
    ICML 2013.
  2. Collective Stability in Structured Prediction: Generalization from One Example.
    Ben London, Bert Huang, Ben Taskar, Lise Getoor.
    ICML 2013.

Assignment 2

Read the following two papers and write a brief 2-page writeup on one paper of your choice (due before class on Tue Oct 15):
  1. On the Consistency of Ranking Algorithms.
    John Duchi, Lester Mackey, Michael Jordan.
    ICML 2010 (Best paper award).
  2. Convex Calibrated Surrogates for Low-Rank Loss Matrices with Applications to Subset Ranking Losses.
    Harish G. Ramaswamy, Shivani Agarwal, Ambuj Tewari.
    Preprint, to appear in NIPS 2013.

Assignment 3

Read the following two papers and write a brief 2-page writeup on one paper of your choice (due before class on Tue Nov 12):
  1. Regret Minimization for Branching Experts.
    Eyal Gofer, Nicolò Cesa-Bianchi, Claudio Gentile, Yishay Mansour.
    COLT 2013.
  2. Online Learning under Delayed Feedback.
    Pooria Joulani, András György, Csaba Szepesvári.
    ICML 2013.

Lectures (Tentative Schedule)

Lecture 1 (Tue Aug 6): Introduction; probabilistic and game-theoretic settings of machine learning. [2011 Lecture 1 notes]

Generalization error bounds

Lecture 2 (Thu Aug 8): Uniform convergence and growth function/VC-entropy. [2011 Lecture 3 notes]
Lecture 3 (Tue Aug 13): VC-dimension and Sauer's lemma. [2011 Lecture 4 notes]
Thu Aug 15: Institute holiday (Independence Day).
Lecture 4 (Tue Aug 20): Covering numbers, pseudo-dimension, and fat-shattering dimension. [2011 Lecture 5 notes] Project proposal due.
Lecture 5 (Thu Aug 22): Margin analysis. [2011 Lecture 6 notes]
Lecture 6 (Tue Aug 27): Rademacher averages. [2011 Lecture 7 notes]
Lecture 7 (Thu Aug 29): Algorithmic stability. [2011 Lecture 8 notes]
Lecture 8 (Tue Sep 3): Bounds for model selection; uniform margin bounds. [2011 Lecture 9 notes]

Statistical consistency and learnability

Lecture 9 (Thu Sep 5): Consistency of ERM and SRM methods. [2011 Lecture 10 notes]
Lecture 10 (Tue Sep 10): Learnability: basic definitions, examples, results. [2011 Lecture 11 notes] Assignment 1 due.
Lecture 11 (Thu Sep 12): Learnability: more examples and results. [2011 Lecture 12 notes; Hellerstein and Servedio, Theoretical Computer Science 384(1), 2007]
Lecture 12 (Tue Sep 17): Consistency of nearest neighbor methods. [2011 Lecture 16 notes]
Thu Sep 19: No class (work on project). First project progress report due on Fri Sep 20.
Lecture 13 (Tue Sep 24): Consistency of surrogate risk minimization methods for binary classification using proper losses. [13.pdf]
Lecture 14 (Thu Sep 26): Consistency of surrogate risk minimization methods for binary classification using classification-calibrated losses. [14.pdf]
Lecture 15 (Tue Oct 1): Consistency of surrogate risk minimization methods for multiclass 0-1 classification. [15.pdf]
Lecture 16 (Thu Oct 3): Consistency of surrogate risk minimization methods for general multiclass learning problems. [Ramaswamy & Agarwal, NIPS 2012]
Tue Oct 8: Project progress presentations.

Online learning and multi-armed bandits

Thu Oct 10: No class (work on assignment/project; review perceptron and winnow algorithms for online binary classification). [2011 Lecture 18 notes]
Lecture 17 (Tue Oct 15): Online regression (and classification). [17.pdf] Assignment 2 due.
Lecture 18 (Thu Oct 17): Online prediction from/allocation to experts. [18.pdf] Second project progress report due on Fri Oct 18.
Lecture 19 (Tue Oct 22): Online convex optimization. [19.pdf; Bubeck, Lecture Notes, 2011]
Lecture 20 (Thu Oct 24): Online-to-batch conversions. [20.pdf]
Lecture 21 (Tue Oct 29): Online learning and online-to-batch conversions for pairwise losses. Guest lecture by Dr. Purushottam Kar, MSR India. [Kar et al., ICML 2013]
Thu Oct 31: No class (work on project).
Tue Nov 5: No class (work on project).
Lecture 22 (Thu Nov 7): Stochastic multi-armed bandits. [22.pdf; Bubeck & Cesa-Bianchi, Foundations and Trends in Machine Learning 5(1):1-122, 2012]
Lecture 23 (Tue Nov 12): Adversarial multi-armed bandits. [23.pdf (Rakesh, Shiv Ganesh); Bubeck & Cesa-Bianchi, Foundations and Trends in Machine Learning 5(1):1-122, 2012] Assignment 3 due.

Home stretch

Thu Nov 14: Final project presentations. Final project report due on Fri Nov 15.