With the explosion in the amount of data being generated across the world -- in fields as diverse as astronomy, biology, geology, and even defense -- there is an urgent need for methods that can analyze such data and transform it into meaningful scientific conclusions. Machine learning, one of the fastest growing fields in computer science, holds the promise of providing such methods. For example, machine learning techniques are being used in computer vision to develop face recognition systems, in computational biology to discover new genes, and in drug discovery to prioritize chemical structures for screening.
Machine learning is a highly interdisciplinary field that brings together techniques from a variety of mathematical and engineering disciplines, including in particular computer science, statistics, and mathematical optimization, in order to analyze complex data.
It is anticipated that future developments in machine learning will depend on a stronger understanding of the mathematical optimization tools that form the core of many modern machine learning algorithms, and on expanding the range of problems for which machine learning algorithms can be developed, for example allowing for situations involving multiple players (as is often the case with the internet or the healthcare industry) by incorporating techniques from game theory. In turn, machine learning techniques can be used for portfolio selection in optimization, and for mechanism design in game theory.
To this end, this symposium aims to bring together researchers from India and the US to share their perspectives on both recent advances in the fields of machine learning, game theory, and optimization, and challenges that lie ahead. Our hope is that the interactions initiated at this symposium will act as a catalyst for further Indo-US collaborations, particularly at the interface of these exciting disciplines.
Friday November 12
Venue: Indian Institute of Science, Faculty Hall
Registration: Open to all
Saturday November 13
Venue: Lalit Ashok Hotel, Convention Hall
Registration: By invitation
Indian Institute of Science
Learning to Rank: Optimization of Ranking Performance Measures
Ranking problems have received increasing attention in machine learning and data mining in recent years. Such problems arise in a variety of domains, ranging from recommendation systems to information retrieval and from computational biology to drug discovery. In this talk, I will briefly survey some of the popular approaches to ranking in machine learning, including both basic algorithms and more recent algorithms that focus on optimizing ranking accuracy at the top of the list, and will describe a new algorithm called the Infinite Push that is designed to optimize accuracy at the absolute top of a ranked list. I will conclude with some examples of applications of ranking methods in drug discovery, in information retrieval, and in computational biology.
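As an editorial illustration (not part of the talk), the pairwise view of ranking accuracy underlying many of these algorithms can be sketched as follows; `pairwise_auc` is a hypothetical helper computing the fraction of correctly ordered positive-negative score pairs, i.e. the AUC:

```python
def pairwise_auc(scores_pos, scores_neg):
    """Fraction of (positive, negative) pairs ranked in the correct order;
    ties count as half correct. This equals the area under the ROC curve."""
    correct = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                correct += 1.0
            elif sp == sn:
                correct += 0.5
    return correct / (len(scores_pos) * len(scores_neg))

# One misordered pair out of four: AUC = 3/4.
print(pairwise_auc([0.9, 0.7], [0.8, 0.2]))  # 0.75
```

Algorithms that target accuracy at the top of the list, such as the Infinite Push, weight errors near the top of the ranking more heavily than this uniform pairwise average.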
Shivani Agarwal is currently an Assistant Professor in the Department of Computer Science & Automation at the Indian Institute of Science. Prior to this she was a postdoctoral lecturer and associate in the Computer Science and Artificial Intelligence Laboratory at MIT. She obtained her PhD in Computer Science at the University of Illinois, Urbana-Champaign, where she received the Liu Award for her research; an MA in Computer Science at Trinity College, University of Cambridge, where she was a Nehru Scholar; and a BSc with Honors in Mathematics at St Stephen's College, University of Delhi. Her research interests include machine learning and learning theory, in particular the study of ranking and other new learning problems, as well as applications of machine learning methods, particularly in the life sciences. More broadly, she is excited by research at the intersection of computer science, mathematics, and statistics, and its applications in scientific discovery.
Indian Institute of Science
Collective Relational Clustering
Over the last decade or so, abundant data that is relational in nature has been
generated by many different applications. Hyperlink data from the internet, citation
data from scientific literature, data from social and telecommunication networks,
biological interaction data, retail data on customer shopping patterns are some
examples. Recent years have seen a great deal of research in the emerging area of
unsupervised learning, or clustering, over such relational data. Traditional
clustering approaches based on features of individual data items are insufficient
for this problem. In contrast, in collective relational clustering, clustering
decisions are made jointly over relational neighborhoods in the network. In this
talk, I will first introduce the collective relational clustering problem, and touch
upon the different approaches that have been proposed. Then I will talk about the
probabilistic approach in more detail. Specifically, I will present probabilistic
graphical models that capture cluster relationships using latent groups over the clusters. Inference
in such models is a challenge, and I will describe efficient approximate algorithms
based on sampling methods.
Indrajit Bhattacharya is an Assistant Professor in the Department of Computer
Science and Automation at the Indian Institute of Science, Bangalore. While his
broad area of interest is machine learning and data mining, his research focus is on
models for clustering in multi-relational data involving different entities and
their relationships. He received his PhD in Computer Science from the University of
Maryland, College Park in December 2006. Prior to joining IISc in June 2010, he
worked as a Research Scientist at IBM Research - India from April 2007.
Indian Institute of Science
On the Design of Kernels
Methods based on kernel functions have emerged as powerful tools for several learning tasks.
The choice of kernel function is an important area of research,
drawing on convex optimization and statistics.
In the Multiple Kernel Learning (MKL) framework one chooses a kernel
function from a conic combination of multiple pre-specified kernels.
Existing results apply to either the sparse or the non-sparse
setting of MKL. We will discuss a novel formulation and
efficient algorithm, based on the Mirror Descent procedure,
which works for both settings. In practice it is often easier
to specify similarity functions than kernels.
We report results which address the open question of
kernel choice given multiple similarity functions.
Time permitting, we will present our progress on the problem of
learning robust classifiers when the kernel is corrupted by noise.
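The conic-combination idea can be made concrete with a small editorial sketch (the kernel choices here are hypothetical): any nonnegative weighting of valid kernels is again a valid kernel, and MKL learns the weights.

```python
import math

def linear_kernel(x, y):
    return sum(a * b for a, b in zip(x, y))

def rbf_kernel(x, y, gamma=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def conic_combination(kernels, weights):
    """k(x, y) = sum_i w_i * k_i(x, y) with all w_i >= 0 is itself a valid
    kernel; MKL frameworks learn the weights w_i from data."""
    assert all(w >= 0 for w in weights)
    def k(x, y):
        return sum(w * ki(x, y) for w, ki in zip(weights, kernels))
    return k

k = conic_combination([linear_kernel, rbf_kernel], [0.3, 0.7])
print(k([1.0, 0.0], [1.0, 0.0]))  # 0.3*1 + 0.7*1 = 1.0
```

Sparse and non-sparse variants of MKL differ in how the weight vector is regularized when it is learned.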
Chiranjib Bhattacharyya is an Associate Professor in the
Dept of Computer Science and Automation, Indian Institute of
Science. His research interests are in Machine Learning.
For more details of his work, please see his webpage.
Carnegie Mellon University
Approximate Clustering Without the Approximation Algorithm, and a New Approach to Optimization
Many important optimization problems unfortunately turn out to be
NP-hard, even to approximate well. In this talk, I will discuss an
approach---especially for problems of identifying hidden patterns in
data or predicting behavior in a multiagent system---that in some
cases can bypass these hardness results by using implicit assumptions
in the problem formulations. In particular, often in these settings
the optimization objective is a proxy for some other underlying goal.
For example, a distance-based clustering objective (such as k-means)
for clustering protein sequences is generally a proxy for a true goal
of correctly identifying the functions of the proteins; computing an
(approximate) equilibrium may be primarily a proxy for a true goal of
predicting behavior. Making explicit the assumption (or hope) that a
good solution to the formal objective implies a good solution to the
final goal yields structure that an algorithm can potentially use. As
one example, for clustering we show that if a c-approximation to the
k-means or k-median objective would be sufficient to yield solutions
that are epsilon-close to a desired target clustering, then we can
produce clusterings that are O(epsilon)-close to the target, even for
values c for which obtaining a c-approximation is NP-hard. In
particular, we achieve this guarantee for *any* constant c>1. We also
discuss initial steps in this direction for the problem of estimating
Nash equilibria, and connections to a related stability condition.
This talk includes joint work with Nina Balcan, Pranjal Awasthi,
Anupam Gupta, Or Sheffet, and Santosh Vempala.
Avrim Blum is Professor of Computer Science at Carnegie Mellon
University. He received his PhD from MIT in 1991 under Prof. Ronald
Rivest. His main research interests are in Machine Learning Theory,
Approximation Algorithms, and Algorithmic Game Theory, and he is also
known for his work in AI Planning. He has served as Program Chair for
the IEEE Symposium on Foundations of Computer Science (FOCS) and the
Conference on Learning Theory (COLT), and on the organizing committee
for the U.S. National Academy of Sciences Frontiers of Science
Symposium. He was a recipient of the Sloan Fellowship, the NSF National
Young Investigator Award, and the ICML/COLT 10-year best paper award,
and is a Fellow of the ACM.
Yahoo! Labs Bangalore
Market Design for Data Labeling
In many practical situations, such as web page annotation, crowdsourcing, and
medical image diagnostics, a learning agent needs to train a classifier
using data from multiple human annotators, where each annotator
possibly makes mistakes in labeling the data and also charges a price for
providing the data labeling service. In this talk, I will present a simple and
scalable PAC learning scheme for the two-class classification problem where
training examples are subject to noisy labels. The noisy labeled examples
are obtained from two annotators where each annotator is characterized by
its classification noise rate and annotation price per example. The learning
algorithm is given a fixed budget and the algorithm needs to decide the
purchase plan for data labeling services from each of the two annotators.
The goal is to keep the data labeling cost within a given budget while
maintaining the quality of the learnt concept at a minimum specified PAC
level. For this scenario, given a concept class and the noise rates, we derive
theoretical bounds on the number of examples (labeled by each annotator),
the learning budget, and the annotation prices from a PAC learning perspective. We
also analyze a Data Labeling Marketplace which naturally arises among
annotators (service providers) and learning agent (service consumer). We
formulate a non-cooperative game to model the joint decision-making problems
faced by the annotators and the learning agent, and derive an insightful
result for market equilibrium in this setting.
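For flavor, here is an editorial sketch of an Angluin-Laird-style sample-size bound for a finite hypothesis class under classification noise. This is a textbook-style bound, not the talk's specific result, but it shows why a noisier (and hence presumably cheaper) annotator requires more labeled examples for the same PAC guarantee:

```python
import math

def sample_bound(h_size, eps, delta, eta):
    """Textbook-style upper bound on labeled examples sufficient to PAC-learn
    a finite class of size h_size to accuracy eps with confidence 1 - delta,
    when labels are flipped independently with noise rate eta < 1/2."""
    assert 0 <= eta < 0.5
    return math.ceil(2.0 / (eps ** 2 * (1.0 - 2.0 * eta) ** 2)
                     * math.log(2.0 * h_size / delta))

m_clean = sample_bound(1000, 0.1, 0.05, 0.0)   # noiseless annotator
m_noisy = sample_bound(1000, 0.1, 0.05, 0.2)   # noise rate 0.2
print(m_clean, m_noisy)  # the noisy annotator needs more examples
```

A budget-constrained purchase plan would then trade each annotator's per-example price against the sample size implied by its noise rate.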
Microsoft Research India
The k-means algorithm is widely used, yet no proof of its convergence is known. We will supply one under the assumption that "most" points are "in proximity" to the centres of their clusters, a condition which will be formalized. We show that this assumption holds in most probabilistic models studied so far,
and so this proof subsumes earlier results.
Joint work with Amit Kumar, IIT Delhi.
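As background for the result, a minimal editorial sketch of the Lloyd iteration underlying k-means, alternating assignment and re-centring until a fixed point:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's iteration: assign each point to its nearest centre, then move
    each centre to the mean of its cluster; stop at a fixed point."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centres[i])))
            clusters[j].append(p)
        new_centres = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centres[i]
                       for i, cl in enumerate(clusters)]
        if new_centres == centres:
            break
        centres = new_centres
    return centres

# Two well-separated groups: the centres converge to the group means.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(sorted(kmeans(pts, 2)))
```

Data satisfying the "in proximity" condition of the talk has exactly this kind of well-separated structure, which is what drives the iteration toward the true clusters.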
University of California, San Diego
Learning Multi-Modal Similarity: A Novel Multiple Kernel Learning Technique
In many applications involving multimedia data, the definition of similarity
between items is integral to several key tasks, e.g., nearest-neighbor
retrieval, classification, or visualization. Data in such regimes typically
exhibits multiple modalities, such as acoustic and visual content of a
video, or audio clips, web documents and art work describing musical
artists. Integrating such heterogeneous data to form a holistic similarity
space is therefore a key challenge to be overcome in many real-world applications.
We present a novel multiple kernel learning (MKL) technique for integrating
heterogeneous data into a single, unified similarity space. Instead of finding a weighted linear
combination of base kernels, as in the original MKL formulation, we learn a
concatenation of linear projections, where each projection extracts the
relevant information from a base kernel's feature space. This new
formulation results in a more flexible model than previous methods, one that can
adapt to the case where the discriminative power of a kernel varies over the
data set or feature space.
First, we apply this framework to derive a multiple kernel nearest neighbor
algorithm, and we show that it outperforms state-of-the-art methods for
object localization on a number of standard evaluation databases. Second, in
an application to music recommendation, we leverage this MKL framework to
learn an optimal ensemble of kernel transformations, such that the
resulting embedding conforms to human perception of music similarity.
Gert Lanckriet received a Master's degree in Electrical Engineering from the Katholieke
Universiteit Leuven, Leuven, Belgium, in 2000 and the M.S. and Ph.D. degrees in
Electrical Engineering and Computer Science from the University of California, Berkeley
in 2001 and 2005, respectively. In 2005, he joined the Department of Electrical and Computer
Engineering at the University of California, San Diego, where he heads the Computer
Audition Laboratory. He was awarded the SIAM Optimization Prize in 2008 and is the
recipient of a Hellman Fellowship and an IBM Faculty Award. His research focuses on the
interplay of convex optimization, machine learning and applied statistics, with
applications in computer audition and music information retrieval.
Indian Institute of Science
Design of Mechanisms for Dynamic Environments
Success in e-business is based on the fact that offerings can be
made more relevant and reliable by using historical data to rank, filter,
recommend, and in general learn characteristics of users and products.
When these data are held by strategic agents, the classical learning
algorithms fail to elicit the data truthfully from the agents.
Motivated by this, in this talk, we first provide a glimpse of some
current approaches for developing truthful mechanisms with dynamic
agents and learning. We next describe some of our current work in
this area (which is quite preliminary at this stage). We also provide
an outlook for future work and opportunities for Indo-US collaboration
in this area.
Y. Narahari is currently Professor and Chair at the Department of Computer
Science and Automation, Indian Institute of Science, Bangalore.
The focus of his current research is to apply Game Theory and Mechanism
Design to Internet and Network Economics, Social Networks, and Electronic
Commerce Problems. He is the lead author of a research monograph entitled
"Game Theoretic Problems in Network Economics and Mechanism Design Solutions",
published by Springer, London, in 2009.
David C. Parkes
The Interplay of Machine Learning and Mechanism Design
In the economic theory of mechanism design, the goal is to elicit private information
from each of multiple agents in order to select a desirable system-wide outcome,
despite agent self-interest in promoting individually beneficial outcomes. Auctions
provide a canonical example, with information elicited in the form of bids, and an
allocation of resources and payments defining an outcome. Indeed, one aspect of the
emerging interplay between machine learning (ML) and mechanism design (MD) arises by
interpreting auctions as a method for learning agent valuation functions. In addition to
seeking sufficient accuracy to support optimal resource allocation, we require for
incentive compatibility that prices are insensitive to the inputs of any individual agent,
and we find an interesting connection with regularization in statistical ML. More broadly,
ML can be used for de novo design, in learning payment rules with suitable incentive
properties. Ideas from MD are also flowing into ML. One example considers the use of
mechanisms to elicit private state, reward and transition models, in enabling coordinated
exploration and exploitation in multi-agent systems despite self-interest. Another
application is to supervised learning, where labeled training data is elicited from
self-interested agents, each with its own objective criterion on the hypothesis learned
by the mechanism. Looking ahead, a tantalizing challenge problem is to adopt incentive
mechanisms for the design of robust agent architectures, for example in assigning
internal rewards that promote modular intelligent systems.
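As a concrete instance of the price-insensitivity property mentioned above, consider this editorial sketch of a second-price (Vickrey) auction: the winner's payment depends only on the other agents' bids, which is what makes truthful bidding a dominant strategy.

```python
def vickrey_auction(bids):
    """bids: dict mapping agent -> bid. Returns (winner, payment), where the
    winner is the highest bidder and the payment is the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    payment = ranked[1][1]  # price is set by the competition, not the winner
    return winner, payment

print(vickrey_auction({"a": 10, "b": 7, "c": 3}))  # ('a', 7)
```

Note that raising the winner's own bid leaves the payment unchanged, mirroring the insensitivity-to-individual-inputs requirement discussed in the abstract.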
David C. Parkes is Gordon McKay Professor of Computer Science in the School of
Engineering and Applied Sciences at Harvard University. He was the recipient of the NSF
Career Award, the Alfred P. Sloan Fellowship, the Thouron Scholarship and the Harvard
University Roslyn Abramson Award for Teaching. Parkes received his Ph.D. degree in
Computer and Information Science from the University of Pennsylvania in 2001, and an
M.Eng. (First class) in Engineering and Computing Science from Oxford University in 1995.
At Harvard, Parkes leads the EconCS group and teaches classes in artificial intelligence,
optimization, and topics at the intersection between computer science and economics.
Parkes has served as Program Chair of ACM EC'07 and AAMAS'08 and General Chair of ACM
EC'10, served on the editorial board of Journal of Artificial Intelligence Research, and
currently serves as Editor of Games and Economic Behavior and on the boards of Journal of
Autonomous Agents and Multi-agent Systems and INFORMS Journal of Computing. His research
interests include computational mechanism design, electronic commerce, stochastic
optimization, preference elicitation, market design, bounded rationality, computational
social choice, networks and incentives, multi-agent systems, crowd-sourcing and social
Indian Institute of Technology, Madras
Finding Approximate Symmetries of Markov Decision Processes
IBM Research India
Two-sided Matching Mechanisms for Project Fulfillment
The workforce of an IT services organization consists of varied and highly
skilled practitioners, who are assigned to specific projects with matching
requirements. The fulfillment process of assigning practitioners to projects is
a daily activity that should take into account individual and organization-level
objectives and constraints. A practitioner would prefer to be
assigned to a project that matches her skill sets and future aspirations.
Project managers prefer practitioners with matching skills and attributes
for higher productivity. The organization has to maintain an
optimal/allowable bench mix (of unassigned practitioners), a high fulfillment
rate, and high overall productivity. In this talk, we review various
two-sided matching mechanisms for practitioner-project fulfillment.
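The canonical such mechanism is Gale-Shapley deferred acceptance; the sketch below is an editorial illustration with hypothetical practitioner/project names, not the talk's specific mechanism:

```python
def gale_shapley(prop_prefs, recv_prefs):
    """Deferred acceptance: proposers propose in preference order; each
    receiver tentatively holds its best offer so far.
    Returns a stable matching as a dict: proposer -> receiver."""
    free = list(prop_prefs)
    next_choice = {p: 0 for p in prop_prefs}
    match = {}  # receiver -> proposer currently held
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in recv_prefs.items()}
    while free:
        p = free.pop()
        r = prop_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])  # displaced proposer becomes free again
            match[r] = p
        else:
            free.append(p)
    return {p: r for r, p in match.items()}

# Practitioners p1, p2 both prefer project A; A prefers p2, so p1 gets B.
print(gale_shapley({"p1": ["A", "B"], "p2": ["A", "B"]},
                   {"A": ["p2", "p1"], "B": ["p1", "p2"]}))
```

The resulting matching is stable (no practitioner-project pair would jointly prefer to deviate) and optimal for the proposing side, which is one reason organizational fulfillment settings must choose which side proposes.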
Kameshwaran is a Researcher with Analytics and Optimization group in IBM
India Research Laboratory, Bangalore. Prior to joining IBM Research he held
similar positions at The Logistics Institute - Asia Pacific (NUS,
Singapore), INRIA Grand Est (France) and the Indian School of Business
(Hyderabad). Kamesh obtained his Ph.D. in Computer Science from IISc,
Bangalore in 2004 and his research interests are in Mathematical
Programming, Game Theory, and Random Graphs.
Massachusetts Institute of Technology
Message Passing Networks: Inference, Optimization and Games
Simple, distributed and iterative algorithms, popularly known as
message-passing, have emerged as an architecture of choice for
a variety of engineered networks, and serve as a canonical behavioral
model for natural networks. Their effectiveness lies in their simplicity.
In this talk, I will try to argue in favor of such algorithms by discussing
examples from statistical inference, optimization and games.
In the first part, I will discuss the strengths and limitations of the
popular message-passing heuristic called belief propagation (BP) for
inference and optimization. BP arose as a distributed,
dynamic-programming-based approximation for tree-like graphs. However, it
seems to work quite well empirically even when graphical models
are far from tree-like. I will attempt to explain this unexpected
behavior by drawing a connection between BP, Linear Programming,
and continuous optimization.
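To ground the first part, here is an editorial sketch of sum-product message passing on a chain of binary variables, the setting in which BP is exact; `psi[i][a][b]` is an assumed pairwise potential between variables i and i+1:

```python
def chain_marginal(psi, n, target):
    """Sum-product BP on a chain of n binary variables: forward and backward
    messages meet at `target`, whose normalized belief is its exact marginal."""
    fwd = [1.0, 1.0]                        # message arriving at variable 0
    for i in range(target):                 # pass messages left-to-right
        fwd = [sum(fwd[a] * psi[i][a][b] for a in range(2)) for b in range(2)]
    bwd = [1.0, 1.0]                        # message arriving at variable n-1
    for i in range(n - 2, target - 1, -1):  # pass messages right-to-left
        bwd = [sum(psi[i][a][b] * bwd[b] for b in range(2)) for a in range(2)]
    belief = [fwd[x] * bwd[x] for x in range(2)]
    z = sum(belief)
    return [b / z for b in belief]

# Three variables, two (hypothetical) pairwise potentials.
psi = [[[1.0, 2.0], [3.0, 1.0]], [[2.0, 1.0], [1.0, 2.0]]]
print(chain_marginal(psi, 3, 1))  # ≈ [0.571, 0.429]
```

On loopy graphs the same local updates are iterated without such an exactness guarantee, which is the empirical puzzle the talk addresses.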
In the second part, I will discuss a class of message-passing
algorithms that have been thought of as behavioral models of
agents/humans in "networks". I will use the "mixing time" of an appropriate
Markov chain as a metric to refute this as a reasonable model,
and propose an alternative model by introducing the notion of 'herding'.
Devavrat Shah is currently a Jamieson Career Development Associate
Professor in the Department of Electrical Engineering and Computer
Science at MIT. He is a member of the Laboratory for Information and
Decision Systems (LIDS). His research focus is on theory of large
complex networks which includes network algorithms, stochastic
networks, network information theory and large scale statistical
inference. He was co-awarded best paper awards at IEEE
INFOCOM '04 and ACM SIGMETRICS/Performance '06, and supervised
best student paper awards at Neural Information Processing Systems
'08 and ACM SIGMETRICS/Performance '09. He received the 2005 George
B. Dantzig best dissertation award from INFORMS. He received the
first ACM SIGMETRICS Rising Star Award 2008 for his work on network
scheduling algorithms. He was recently awarded the 2010 Erlang Prize from
INFORMS which is given to a young researcher for outstanding contributions
to applied probability. He is currently an associate editor of Operations
There will be an opportunity for a select number of students, postdocs, and young faculty (from India) to present posters on their research to symposium participants. To apply, please submit a one-page abstract on your research and a current CV (as PDF documents) to firstname.lastname@example.org by October 18, 2010. Decision notifications will be sent by October 22, 2010.
- Rama B., Indian Institute of Science
- Sahely Bhadra, Indian Institute of Science
- Anunay Biswas, Indian Institute of Technology, Madras
- Dileep A. D., Indian Institute of Technology, Madras
- Vikas Garg, IBM Research India
- Sujit Gujar, Indian Institute of Science
- Sandeep Kumar, Indian Institute of Technology, Madras
- Achintya Kundu, Indian Institute of Science
- Naresh Manwani, Indian Institute of Science
- Ruta Mehta, Indian Institute of Technology, Bombay
- Adway Mitra, Indian Institute of Science
- Vinod Nair, Yahoo! Labs Bangalore
- Ramasuri Naranayam, Indian Institute of Science
- Yamuna Prasad, Indian Institute of Technology, Delhi
- Arun Rajkumar, Indian Institute of Science
- Raman Sankaran, Indian Institute of Science
- Veena T., Indian Institute of Technology, Madras
- Neeraja Yadwadkar, NetApp
We recommend a poster size of up to 36 x 48". If your poster deviates significantly from these guidelines, please contact us.
There are a limited number of travel awards available for students, postdocs, and young faculty from outside Bangalore (within India) who are interested in presenting posters on their research at this symposium. To apply, please submit a one-page abstract on your research and a current CV (as PDF documents) to email@example.com. Applications must be received by October 18, 2010. Awardees will be notified by October 22, 2010.
Map of Bangalore with symposium-related landmarks marked:
Map of IISc with symposium-related landmarks marked:
For tourism-related information, see here.
There is also a wealth of visitor information available here.
We gratefully acknowledge support from the Indian Institute of Science and from the Indo-US Science and Technology Forum in organizing this event.