- Video / Presentation
- Abstract
Despite stunning performances, state-of-the-art machine learning approaches are often computationally intensive, and efficiency remains a challenge. Dimensionality reduction, if performed efficiently, provides a way to reduce the computational requirements of downstream tasks, but possibly at the expense of accuracy. In this talk, we discuss the interplay between accuracy and efficiency when dimensionality reduction is performed by means of, possibly data-dependent, random projections. The latter are related to discretization methods for integral operators, to sampling methods in randomized numerical linear algebra, and to sketching methods. Our results show that there are a number of tasks and regimes where, using random projections and regularization, efficiency can be improved with no loss of accuracy. These theoretical results are used to derive scalable and fast kernel methods for datasets with millions of points.
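To give a flavor of the kind of approach discussed, the snippet below is a minimal sketch (not the speaker's implementation; data, dimensions and the regularization parameter are illustrative assumptions) of a data-independent random projection followed by regularized least squares in the reduced space.

```python
# Minimal sketch: Gaussian random projection + ridge regression on the
# projected features. Illustrative only; all sizes and parameters are
# arbitrary choices, not those used in the talk.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n points in d dimensions with a noisy linear target.
n, d, k = 2000, 500, 50                  # k = projected dimension << d
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Data-independent random projection (Johnson-Lindenstrauss style).
P = rng.standard_normal((d, k)) / np.sqrt(k)
Z = X @ P                                # reduced representation, n x k

# Regularized least squares solved in the k-dimensional sketch,
# which costs O(n k^2 + k^3) instead of O(n d^2 + d^3).
lam = 1e-2
w_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ y)

train_mse = np.mean((Z @ w_hat - y) ** 2)
print(f"training MSE in the {k}-dimensional sketch: {train_mse:.4f}")
```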
- About the speaker
Lorenzo Rosasco is a professor at the University of Genova. He is also a visiting professor at the Massachusetts Institute of Technology (MIT) and an external collaborator at the Italian Institute of Technology (IIT). He coordinates the Machine Learning Genova center (MaLGa) and leads the Laboratory for Computational and Statistical Learning, focused on theory, algorithms and applications of machine learning. He received his PhD in 2006 from the University of Genova, after being a visiting student at the Center for Biological and Computational Learning at MIT, the Toyota Technological Institute at Chicago (TTI-Chicago) and the Johann Radon Institute for Computational and Applied Mathematics. Between 2006 and 2013 he was a postdoc and research scientist in the Brain and Cognitive Sciences Department at MIT. He is the recipient of a number of grants, including a FIRB and an ERC Consolidator Grant.
http://web.mit.edu/lrosasco/www/