Submodular Rank Aggregation on Score-based Permutations for Distributed Automatic Speech Recognition
Distributed automatic speech recognition (ASR) requires aggregating the outputs
of distributed deep neural network (DNN)-based models. This work studies the
use of submodular functions to design a rank aggregation on score-based
permutations, which can be used for distributed ASR systems in both supervised
and unsupervised modes. Specifically, we compose an aggregation rank function
based on the Lov\'asz-Bregman divergence, which we use to set up linearly
structured convex and nested structured concave functions. The resulting
algorithm is based on stochastic gradient descent (SGD) and obtains
well-trained aggregation models. Our experiments on a distributed ASR system
show that submodular rank aggregation achieves higher speech recognition
accuracy than traditional aggregation methods such as AdaBoost. Code is
available online~\footnote{https://github.com/uwjunqi/Subrank}.
Accepted to ICASSP 2020.
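The construction above rests on the Lov\'asz-Bregman divergence, which measures how far a score vector is from agreeing with a given permutation. Below is a minimal sketch, assuming the standard Iyer-Bilmes formulation with a normalized submodular set function f (f of the empty set is 0); the helper names are illustrative and are not taken from the Subrank repository:

    import numpy as np

    def greedy_vector(f, perm):
        # Subgradient h_{f,sigma} of the Lovasz extension of f induced by
        # permutation sigma: marginal gains of f along the chain sigma builds.
        h = np.zeros(len(perm))
        mask = np.zeros(len(perm), dtype=bool)
        prev = 0.0
        for j in perm:
            mask[j] = True
            cur = f(mask)
            h[j] = cur - prev
            prev = cur
        return h

    def lovasz_bregman(f, scores, perm):
        # d(scores || sigma) = fhat(scores) - <scores, h_{f,sigma}>, where
        # fhat is the Lovasz extension and fhat(scores) equals
        # <scores, h_{f,sigma_scores}> for the permutation sorting `scores`
        # in decreasing order.
        sigma_scores = np.argsort(-scores)
        return scores @ (greedy_vector(f, sigma_scores) - greedy_vector(f, perm))

The divergence is nonnegative and vanishes exactly when perm sorts the scores in decreasing order, so minimizing it with SGD pushes an aggregated score vector toward consensus with the base models' rankings.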
The Lov\'asz Hinge: A Novel Convex Surrogate for Submodular Losses
Learning with non-modular losses is an important problem when sets of
predictions are made simultaneously. The main tools for constructing convex
surrogate loss functions for set prediction are margin rescaling and slack
rescaling. In this work, we show that these strategies lead to tight convex
surrogates if and only if the underlying loss function is increasing in the
number of incorrect predictions. However, gradient or cutting-plane computation
for these functions is NP-hard for non-supermodular loss functions. We propose instead a
novel surrogate loss function for submodular losses, the Lov\'asz hinge, which
leads to O(p log p) complexity with O(p) oracle accesses to the loss function
to compute a gradient or cutting-plane. We prove that the Lov\'asz hinge is
convex and yields an extension. As a result, we have developed the first
tractable convex surrogates in the literature for submodular losses. We
demonstrate the utility of this novel convex surrogate through several set
prediction tasks, including experiments on the PASCAL VOC and Microsoft COCO datasets.
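The claimed O(p log p) complexity with O(p) oracle accesses comes from sorting the hinge-thresholded margin violations once and then accumulating the loss function's marginal gains along that order. The sketch below is one natural reading of the construction, not the authors' reference code; loss_fn is assumed to be a normalized, increasing submodular set function over the mispredicted items, taking a boolean mask:

    import numpy as np

    def lovasz_hinge(loss_fn, margins):
        # margins[i] = 1 - y_i * s_i: positive entries are margin violations.
        m = np.maximum(margins, 0.0)
        order = np.argsort(-m)               # O(p log p): sort decreasing
        mask = np.zeros(len(m), dtype=bool)
        surrogate, prev = 0.0, 0.0
        for j in order:                      # O(p) accesses to loss_fn
            mask[j] = True
            cur = loss_fn(mask)              # loss of current violation set
            surrogate += m[j] * (cur - prev) # Lovasz-extension weights
            prev = cur
        return max(surrogate, 0.0)

As a sanity check, with the modular Hamming loss (loss_fn = lambda mask: mask.sum()) every marginal gain is 1, so the surrogate reduces to the usual sum of per-item hinge losses.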
LIPIcs, Volume 244, ESA 2022, Complete Volume