Multi-Task Predict-then-Optimize
The predict-then-optimize framework arises in a wide variety of applications
where the unknown cost coefficients of an optimization problem are first
predicted based on contextual features and then used to solve the problem. In
this work, we extend the predict-then-optimize framework to a multi-task
setting: contextual features must be used to predict cost coefficients of
multiple optimization problems, possibly with different feasible regions,
simultaneously. For instance, in a vehicle dispatch/routing application,
features such as time-of-day, traffic, and weather must be used to predict
travel times on the edges of a road network for multiple traveling salesperson
problems that span different target locations and multiple s-t shortest path
problems with different source-target pairs. We propose a set of methods for
this setting, with the most sophisticated one drawing on advances in multi-task
deep learning that enable information sharing between tasks for improved
learning, particularly in the small-data regime. Our experiments demonstrate
that multi-task predict-then-optimize methods provide good tradeoffs in
performance among different tasks, particularly with less training data and
more tasks.
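The predict-then-optimize pipeline the abstract describes can be illustrated with a minimal sketch: features are mapped to predicted edge costs by a shared (here, linear) model, each task solves its own shortest-path problem under those predictions, and prediction quality is measured by decision regret against the true costs. The toy graph, the weight vector, and all function names below are illustrative assumptions, not the paper's method.

```python
# Minimal multi-task predict-then-optimize sketch (illustrative only).
# One shared prediction of edge costs feeds two shortest-path tasks
# with different source-target pairs; quality = decision regret.

# Toy graph: nodes 0..3, edges listed by index.
EDGES = [(0, 1), (1, 3), (0, 2), (2, 3), (0, 3)]

def paths(s, t):
    """Enumerate simple s-t paths as lists of edge indices (fine for a toy graph)."""
    result = []
    def dfs(node, visited, used):
        if node == t:
            result.append(list(used))
            return
        for i, (u, v) in enumerate(EDGES):
            if u == node and v not in visited:
                dfs(v, visited | {v}, used + [i])
    dfs(s, {s}, [])
    return result

def solve(costs, s, t):
    """Optimization step: pick the min-cost s-t path under the given edge costs."""
    return min(paths(s, t), key=lambda p: sum(costs[i] for i in p))

def regret(pred_costs, true_costs, s, t):
    """Extra true cost incurred by optimizing predicted rather than true costs."""
    chosen = solve(pred_costs, s, t)
    best = solve(true_costs, s, t)
    return sum(true_costs[i] for i in chosen) - sum(true_costs[i] for i in best)

# Prediction step: a shared linear "model" (a stand-in for the shared trunk
# of a multi-task network) maps a contextual feature to per-edge costs.
feature = 2.0
weights = [1.0, 1.0, 0.5, 0.5, 0.5]          # badly predicts the direct edge
predicted = [w * feature for w in weights]   # [2, 2, 1, 1, 1]
actual = [2.0, 2.0, 1.0, 1.0, 7.0]

# Two tasks share one prediction: shortest path 0->3 and shortest path 1->3.
print(regret(predicted, actual, 0, 3))  # mispredicted direct edge causes regret
print(regret(predicted, actual, 1, 3))  # only one 1->3 path, so regret is 0
```

End-to-end methods such as the one the abstract proposes would train the shared predictor to reduce this regret directly, rather than the cost-prediction error alone.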
Regularization of persistent homology gradient computation
Persistent homology is a method for computing the topological features present in given data. Recently, there has been much interest in integrating persistent homology as a computational step in neural networks or deep learning. For a computation to be integrated in such a way, it must be differentiable. Computing the gradients of persistent homology is an ill-posed inverse problem with infinitely many solutions. Consequently, it is important to perform regularization so that the solution obtained agrees with known priors. In this work, we propose a novel method for regularizing persistent homology gradient computation through the addition of a grouping term. This helps ensure gradients are defined with respect to larger entities rather than individual points.
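The grouping idea can be sketched in a few lines: instead of letting a gradient signal act on isolated points, each point's gradient is replaced by the mean gradient over its group, so the update moves larger entities coherently. The group assignment and the averaging scheme below are illustrative assumptions, not the paper's exact formulation of the grouping term.

```python
# Hedged sketch of a grouping regularizer for point-cloud gradients.
# Each point's gradient is replaced by its group's mean gradient, so
# updates act on larger entities rather than individual points.

def grouped_gradients(point_grads, groups):
    """point_grads: list of gradient vectors (one per point).
    groups: partition of point indices into groups.
    Returns gradients with each point's entry averaged over its group."""
    out = list(point_grads)
    for group in groups:
        dim = len(point_grads[group[0]])
        mean = [sum(point_grads[i][d] for i in group) / len(group)
                for d in range(dim)]
        for i in group:
            out[i] = mean
    return out

# Raw per-point gradients from some persistent-homology loss (made up here).
grads = [[1.0, 0.0], [0.0, 1.0], [4.0, 4.0]]
groups = [[0, 1], [2]]  # points 0 and 1 belong to one entity
print(grouped_gradients(grads, groups))
```

In a real pipeline the groups might come from clustering or from the persistence pairing itself; here they are fixed by hand to keep the example self-contained.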