Multi-Task Learning: Theory, Algorithms, and Applications

Abstract

This tutorial gives a comprehensive overview of the theory, algorithms, and applications of multi-task learning. Many real-world applications involve multiple related classification or regression tasks. For example, in the prediction of therapy outcome, the tasks of predicting the effectiveness of several drug combinations are related. In the prediction of disease progression, the prediction of the outcome at each time point can be considered a separate task, and these tasks are temporally related. Traditionally, such tasks are solved independently, ignoring the task relatedness. In multi-task learning, we learn the related tasks simultaneously by extracting appropriate shared information across tasks. Multi-task learning is especially useful when the training sample size for each task is small.
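The idea of learning related tasks jointly can be illustrated with a minimal sketch: fit one linear model per task, but penalize each task's weights for deviating from the average model, so tasks with few samples borrow strength from the others. The data, function names, and hyperparameters below are hypothetical illustrations, not taken from the tutorial itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: T related regression tasks, each with only
# a handful of samples but sharing most of their underlying structure.
T, n, d = 5, 10, 20                       # tasks, samples per task, features
w_shared = rng.normal(size=d)             # common structure across tasks
Xs = [rng.normal(size=(n, d)) for _ in range(T)]
ys = [X @ (w_shared + 0.1 * rng.normal(size=d)) for X in Xs]

def mean_regularized_mtl(Xs, ys, lam=1.0, mu=1.0, iters=200, lr=0.01):
    """Gradient descent on per-task squared loss plus a penalty coupling
    each task's weights to the across-task mean (mean-regularized MTL)."""
    T, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((T, d))                  # row t = weights for task t
    for _ in range(iters):
        w_bar = W.mean(axis=0)            # current shared "average" model
        for t in range(T):
            grad = Xs[t].T @ (Xs[t] @ W[t] - ys[t]) / len(ys[t])
            grad += lam * (W[t] - w_bar)  # pull task t toward the mean
            grad += mu * W[t]             # small ridge term for stability
            W[t] -= lr * grad
    return W

W = mean_regularized_mtl(Xs, ys)
```

The coupling term is what distinguishes this from fitting each task alone: setting `lam=0` recovers independent ridge regressions, while larger `lam` forces the task models closer together.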

This tutorial focuses on introducing the necessary background for multi-task learning, presenting popular multi-task learning techniques based on structured regularization as well as other existing methods for modeling task relationships, demonstrating successful applications of these techniques in various domains, introducing efficient algorithms for solving the related optimization problems, and discussing recent advances and future trends in the area. The tutorial also introduces the multi-task learning package developed at Arizona State University.
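A representative structured-regularization formulation is joint feature selection with an ℓ2,1-norm penalty, which encourages all tasks to use a common subset of features and is typically solved by proximal gradient methods. The sketch below is a generic illustration of that family of formulations under assumed data and parameters, not the tutorial's or the ASU package's exact implementation.

```python
import numpy as np

def prox_l21(W, tau):
    """Proximal operator of tau * ||W||_{2,1}: row-wise group
    soft-thresholding. Each row of W holds one feature's weights across
    all tasks, so shrinking entire rows to zero discards that feature
    for every task simultaneously (joint feature selection)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return W * scale

def joint_feature_selection(Xs, ys, lam=0.1, iters=500, lr=0.1):
    """Proximal gradient descent for multi-task least squares with an
    l2,1 penalty on the stacked weight matrix."""
    T, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((d, T))                       # column t = weights for task t
    for _ in range(iters):
        G = np.zeros_like(W)
        for t in range(T):                     # smooth-part gradient per task
            G[:, t] = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / len(ys[t])
        W = prox_l21(W - lr * G, lr * lam)     # gradient step, then shrink
    return W

# Hypothetical data: only the first 3 features matter, in every task.
rng = np.random.default_rng(1)
T, n, d = 4, 30, 15
Xs = [rng.normal(size=(n, d)) for _ in range(T)]
ys = [X[:, :3] @ np.ones(3) for X in Xs]
W = joint_feature_selection(Xs, ys)
# Rows for the irrelevant features are driven toward zero across all tasks.
```

The group structure of the penalty is the key design choice: an ordinary ℓ1 penalty would zero out individual (feature, task) entries independently, whereas the ℓ2,1 norm ties each feature's fate together across tasks.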

Biographies of Authors

Jiayu Zhou is a computer science Ph.D. student at Arizona State University. His research interests include multi-task learning, data mining, and healthcare analysis, with a particular focus on Alzheimer's disease and cancer research.
Jianhui Chen is a Research Scientist at GE Global Research. He received his Ph.D. in Computer Science from Arizona State University in 2011. His research interests lie in multi-task learning, kernel learning, dimension reduction, and bioinformatics. His paper on learning incoherent sparse and low-rank patterns from multiple tasks won the best paper honorable mention award at the Sixteenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining in 2010.

Jieping Ye is an Associate Professor in the Department of Computer Science and Engineering at Arizona State University. He received his Ph.D. in Computer Science from the University of Minnesota, Twin Cities in 2005. His research interests include machine learning, data mining, and biomedical informatics. He won the outstanding student paper award at ICML in 2004, the SCI Researcher of the Year Award at ASU in 2009, the NSF CAREER Award in 2010, the KDD best research paper award honorable mention in 2010, and the KDD best research paper nomination in 2011. He gave a tutorial on dimensionality reduction at SDM 2007 and a tutorial on sparse learning at SDM 2010.

