Katya Scheinberg 10:30am-11:30am on January 23, 2022
Abstract: Continuous optimization is a mature field that has recently undergone major expansion and change. One of the key new directions is the development of methods that do not require exact information about the objective function. Nevertheless, the majority of these methods, from stochastic gradient descent to "zeroth-order" methods, use some kind of approximate first-order information. We will overview different ways of obtaining this information in various settings, including simple stochastic gradients via sampling, traditional and randomized finite difference methods, and more. We will discuss the key properties of these inexact, stochastic first-order oracles and compare the costs and benefits of having access to gradient estimates versus function value estimates.
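As a quick illustration of two of the oracles mentioned above (the function names, step sizes, and sample counts are illustrative choices, not from the talk), the sketch below contrasts traditional coordinate-wise finite differences, which cost n + 1 function evaluations per gradient, with a randomized finite-difference estimator that averages a few random directional derivatives:

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Traditional forward finite differences: n + 1 evaluations of f."""
    n = len(x)
    fx = f(x)
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def randomized_fd_gradient(f, x, h=1e-4, num_samples=20, seed=None):
    """Randomized finite differences (Gaussian smoothing): estimates the
    gradient from num_samples random directional derivatives."""
    rng = np.random.default_rng(seed)
    n = len(x)
    fx = f(x)
    g = np.zeros(n)
    for _ in range(num_samples):
        u = rng.standard_normal(n)
        g += (f(x + h * u) - fx) / h * u
    return g / num_samples

# Both estimators on a simple quadratic; the true gradient is 2x.
f = lambda x: np.dot(x, x)
x = np.array([1.0, -2.0, 0.5])
print(fd_gradient(f, x))             # close to [2, -4, 1]
print(randomized_fd_gradient(f, x))  # noisy estimate of the same vector
```

The randomized variant returns a noisier estimate from fewer function evaluations, the kind of cost/accuracy trade-off between oracles that the abstract alludes to.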
Bio: Katya Scheinberg is a Professor and Director of Graduate Studies at the School of Operations Research and Information Engineering at Cornell University. Prior to joining Cornell, she was the Harvey E. Wagner Endowed Chair Professor in the Industrial and Systems Engineering Department at Lehigh University. She attended Moscow University for her undergraduate studies and received her PhD from Columbia University. She worked at the IBM T.J. Watson Research Center as a research staff member for over a decade before joining Lehigh in 2010. Katya’s main research focuses on developing practical algorithms (and their theoretical analysis) for various problems in continuous optimization, such as convex optimization, derivative-free optimization, machine learning, and quadratic programming. In 2015, jointly with Andy Conn and Luis Vicente, she received the Lagrange Prize, awarded jointly by SIAM and MOS. In 2019 she was awarded the Farkas Prize by the INFORMS Optimization Society. Katya is currently the editor-in-chief of Mathematics of Operations Research and a co-editor of Mathematical Programming.
Andrea Lodi 2:30pm-3:30pm on January 23, 2022
Abstract: In this talk, we discuss how a careful use of Machine Learning (ML) concepts can have an impact on primal heuristics for Mixed-Integer Programming (MIP). More precisely, we review several forms of integration of ML in the heuristic context and then consider two applications in detail. First, we design a data-driven scheduler for running both diving and large-neighborhood search heuristics in SCIP, one of the most effective open-source MIP solvers. Second, we incorporate a major learning component into Local Branching, one of the most well-known primal heuristic paradigms. In both cases, computational results show solid improvements over the state of the art.
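For context, Local Branching works by soft-fixing an incumbent solution: it adds a constraint limiting how many binary variables may flip, then lets the MIP solver explore that neighborhood. Below is a minimal sketch of that classical constraint, written against PySCIPOpt (SCIP's Python interface) since SCIP is mentioned above; the function name and toy model are illustrative, and the learning component from the talk (e.g., how the neighborhood radius k is chosen, or when the heuristic is invoked) is not shown.

```python
from pyscipopt import Model, quicksum  # SCIP's Python interface

def add_local_branching_constraint(model, x_vars, incumbent, k):
    """Soft-fix an incumbent of a 0-1 MIP: allow at most k binary variables
    to flip, i.e., add the local branching constraint
    sum_{j: xbar_j = 1} (1 - x_j) + sum_{j: xbar_j = 0} x_j <= k."""
    ones = [x for x, v in zip(x_vars, incumbent) if v > 0.5]
    zeros = [x for x, v in zip(x_vars, incumbent) if v <= 0.5]
    model.addCons(quicksum(1 - x for x in ones) + quicksum(zeros) <= k)

# Toy usage: a 4-variable model, incumbent (1, 0, 1, 0), neighborhood radius 1.
m = Model("local-branching-demo")
xs = [m.addVar(vtype="B", name=f"x{i}") for i in range(4)]
add_local_branching_constraint(m, xs, [1, 0, 1, 0], k=1)
```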
Bio: Andrea Lodi is the Andrew H. and Ann R. Tisch Professor at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion. He is a member of the Operations Research and Information Engineering field at Cornell University. He received his PhD in System Engineering from the University of Bologna in 2000 and was a Herman Goldstine Fellow at the IBM Mathematical Sciences Department, NY, in 2005–2006. He was a full professor of Operations Research at DEI, University of Bologna, between 2007 and 2015. From 2015 until joining Cornell Tech, he held the Canada Excellence Research Chair in “Data Science for Real-time Decision Making” at Polytechnique Montréal. His main research interests are in Mixed-Integer Linear and Nonlinear Programming and Data Science, and his work has received several recognitions, including the IBM and Google faculty awards. Andrea is the recipient of the INFORMS Optimization Society 2021 Farkas Prize. He is the author of more than 100 publications in the top journals of the fields of Mathematical Optimization and Data Science, and he serves as an editor for several prestigious journals in the area. He has been the network coordinator and principal investigator of two large EU projects/networks and, since 2006, a consultant to the IBM CPLEX research and development team. Andrea Lodi is the co-principal investigator of the project “Data Serving Canadians: Deep Learning and Optimization for the Knowledge Revolution,” funded by the Canadian Federal Government under the Apogée Programme, and scientific co-director of IVADO, the Montréal Institute for Data Valorization.
Illya V. Hicks 10:30am-11:30am on January 24, 2022
Abstract: Combinatorial optimization techniques have been among the most powerful tools for solving a wide range of optimization problems. Given the recent developments in quantum computers and the emerging claims of “quantum advantage”, we explore the boundaries of classical and quantum machines in “solving” a particular hard combinatorial optimization problem. We also explain the importance of quantum-inspired modeling and ways to handle large-scale instances of the problem. Finally, we explore mixed-integer programming models for designing optimal binary decision trees and how a problem related to matrix completion can be viewed from a matroidal perspective.
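As a toy illustration of what quantum-inspired modeling can look like (the abstract does not name the talk's specific problem, so maximum independent set stands in here), the sketch below encodes a hard combinatorial problem as a QUBO, an unconstrained quadratic objective over binary variables, which is the input format shared by quantum annealers and many quantum-inspired classical heuristics; the penalty weight P is an illustrative choice.

```python
import itertools

# QUBO encoding of maximum independent set on a small graph:
# maximize sum_i x_i - P * sum_{(i,j) in E} x_i * x_j, with penalty P > 1
# so that including both endpoints of any edge is never worthwhile.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, P = 4, 2.0

def qubo_value(x):
    return sum(x) - P * sum(x[i] * x[j] for i, j in edges)

# Classical brute force over all 2^n assignments; quantum annealers and
# quantum-inspired heuristics attack the same objective at scales where
# enumeration is hopeless.
best = max(itertools.product([0, 1], repeat=n), key=qubo_value)
print(best, qubo_value(best))  # (0, 1, 0, 1) with value 2.0
```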
Bio: Illya V. Hicks is a professor and the chair of the Computational and Applied Mathematics Department at Rice University. He received a BS in mathematics (1995) from Texas State University and a PhD in Computational and Applied Mathematics (2000) from Rice University. Prior to returning to Rice, Illya was a faculty member in the Industrial and Systems Engineering Department at Texas A&M University (2000–2006). His research interests are in combinatorial optimization, graph theory, and integer programming, with applications in big data, quantum computing, imaging, social networks, logistics, and social good. Illya is the recipient of the 2005 Optimization Prize for Young Researchers from the Optimization Society of INFORMS and the 2010 Forum Moving Spirit Award from INFORMS for his work with the Minority Issues Forum. Illya was also named an INFORMS Fellow in 2020.
Bistra Dilkina 2:30pm-3:30pm on January 24, 2022
Abstract: Machine learning components commonly appear in larger decision-making pipelines; however, the model training process typically focuses only on a loss that measures the accuracy of predicted values against ground truth. Instead, “predict-and-optimize” or “decision-focused learning” approaches explicitly integrate the downstream decision problem when training the predictive model, in order to optimize the quality of the decisions induced by the predictions. This tutorial will present recent advances in synthesizing the machine learning and combinatorial solving components in end-to-end learning architectures, which have been shown to bring significant improvements in decision quality over the siloed approach across a variety of problem types.
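The motivating gap can be seen in a few lines: a predictor with large accuracy loss can still induce optimal decisions, while a more accurate one can induce poor decisions. The hypothetical sketch below measures this with decision regret, using top-k selection as a stand-in combinatorial problem; actual decision-focused training, which backpropagates through the solver, is beyond this snippet.

```python
import numpy as np

def decide(costs, k=2):
    """Downstream combinatorial decision: pick the k cheapest items."""
    return np.argsort(costs)[:k]

def decision_regret(pred_costs, true_costs, k=2):
    """How much worse (under the true costs) is the decision induced by
    the predictions than the decision made with perfect information?"""
    chosen = decide(pred_costs, k)
    optimal = decide(true_costs, k)
    return true_costs[chosen].sum() - true_costs[optimal].sum()

true_costs = np.array([1.0, 2.0, 3.0, 4.0])
pred_a = true_costs + 10.0                 # huge MSE, but ordering preserved
pred_b = np.array([2.5, 2.6, 2.4, 4.0])    # small MSE, but items reordered
for pred in (pred_a, pred_b):
    mse = np.mean((pred - true_costs) ** 2)
    print(f"MSE = {mse:.2f}, regret = {decision_regret(pred, true_costs):.1f}")
# pred_a: MSE = 100.00, regret = 0.0   (accurate-enough ordering)
# pred_b: MSE = 0.74,   regret = 1.0   (accuracy loss misaligned with decisions)
```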
Bio: Bistra Dilkina is an Associate Professor of Computer Science at the University of Southern California and co-director of the USC Center for AI in Society (CAIS). Her work spans discrete optimization, network design, and machine learning, with a strong focus on the close integration of combinatorial decision making and machine learning. Dilkina is one of the junior faculty leaders in the young field of Computational Sustainability; she has led research on AI for wildlife conservation and disaster resilience and has co-organized workshops, tutorials, and special tracks at major conferences on Computational Sustainability, AI for Social Impact, and related subareas. Prior to USC, Dilkina was an Assistant Professor in the College of Computing at the Georgia Institute of Technology and a co-director of the Data Science for Social Good Atlanta summer program. She received her PhD from Cornell University in 2012 and was a Post-Doctoral Associate at the Institute for Computational Sustainability until 2013.
Ojas D. Parekh 10:30am-11:30am on January 25, 2022
Abstract: Quantum computing offers the tantalizing potential of exponential resource advantages over classical computation, yet this potential remains largely untapped for optimization problems. We will explore possible interpretations of the phrase “Quantum Discrete Optimization,” with an eye to how classical and quantum perspectives and techniques might inspire one another. We will discuss recent prospects for quantum advantages in solving classical discrete optimization problems, as well as connections between fundamental problems in discrete optimization, such as partitioning graphs, and fundamental problems in quantum physics, such as approximating ground states of local Hamiltonians.
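One concrete instance of the graph partitioning / ground state connection mentioned above (a standard textbook correspondence, not necessarily the talk's formulation) is MaxCut viewed as a 2-local Ising ground-state problem:

```latex
% Spins s_i \in \{-1,+1\} label the two sides of a cut of G = (V, E):
\[
  \mathrm{MaxCut}(G) \;=\; \max_{s \in \{-1,+1\}^{|V|}} \sum_{(i,j) \in E} \frac{1 - s_i s_j}{2}.
\]
% The diagonal 2-local Hamiltonian H = \sum_{(i,j) \in E} Z_i Z_j has minimum
% eigenvalue \lambda_{\min}(H) = \min_s \sum_{(i,j) \in E} s_i s_j, so
\[
  \mathrm{MaxCut}(G) \;=\; \frac{|E| - \lambda_{\min}(H)}{2},
\]
% i.e., MaxCut is the special case of approximating ground-state energies in
% which the local Hamiltonian happens to be diagonal in the computational basis.
```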
Bio: Ojas Parekh is a Principal Member of Technical Staff in the Center for Computing Research at Sandia National Laboratories. Parekh holds a PhD in Algorithms, Combinatorics, and Optimization from Carnegie Mellon University, thrives in interdisciplinary work, and has published in a variety of fields, including discrete optimization, theoretical computer science, combinatorics, combinatorial scientific computing, and, most recently, quantum and neuromorphic computing. He has designed approximation algorithms with the best-known performance guarantees for a variety of discrete optimization problems and is helping shape the emerging field of quantum approximation algorithms. Parekh is the director of the Department of Energy’s Fundamental Algorithmic Research for Quantum Computing (FAR-QC) project, a multi-institutional effort tasked with designing novel quantum algorithms to realize resource advantages over classical computation, especially for optimization, simulation, and machine learning.