Balancing Religion with Animal Rights Laws: How the Legal World Has Addressed the Topic of Animal Sacrifice
Visual Chunking: A List Prediction Framework for Region-Based Object Detection
We consider detecting objects in an image by iteratively selecting from a set
of arbitrarily shaped candidate regions. Our generic approach, which we term
visual chunking, reasons about the locations of multiple object instances in an
image while expressively describing object boundaries. We design an
optimization criterion for measuring the performance of a list of such
detections as a natural extension to a common per-instance metric. We present
an efficient algorithm with provable performance for building a high-quality
list of detections from any candidate set of region-based proposals. We also
develop a simple class-specific algorithm to generate a candidate region
instance in near-linear time in the number of low-level superpixels that
outperforms other region generating methods. In order to make predictions on
novel images at testing time without access to ground truth, we develop
learning approaches to emulate these algorithms' behaviors. We demonstrate that
our new approach outperforms sophisticated baselines on benchmark datasets. Comment: to appear at ICRA 201
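The iterative selection described above can be sketched as a greedy loop over scored candidate regions that skips candidates overlapping too heavily with detections already on the list. This is a minimal illustrative sketch, not the paper's exact optimization criterion; the scoring function, overlap measure, and threshold are all assumptions.

```python
# Hedged sketch of building a detection list by iteratively selecting
# from a set of candidate regions. The score and overlap functions are
# supplied by the caller; names and the overlap threshold are illustrative.

def greedy_detection_list(candidates, score, overlap, k, max_overlap=0.5):
    """Pick up to k regions, highest score first, skipping candidates
    that overlap an already-selected region too much."""
    selected = []
    for region in sorted(candidates, key=score, reverse=True):
        if all(overlap(region, s) <= max_overlap for s in selected):
            selected.append(region)
            if len(selected) == k:
                break
    return selected
```

With axis-aligned boxes, `overlap` would typically be intersection-over-union and `score` a detector confidence; arbitrarily shaped regions (as in the abstract) would use a region-mask overlap instead.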
Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing
This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to the average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition datasets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a constant fraction of extra computation
- …
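The adaptive weighting idea above can be sketched as maintaining a running average of each auxiliary loss and weighting each loss inversely to that average, so every term in the sum contributes at roughly the same scale. This is a minimal sketch under assumptions: the class name, the use of an exponential moving average with bias correction, and the momentum value are all illustrative, not the paper's exact formulation.

```python
# Hedged sketch of adaptive loss balancing: each auxiliary loss is
# weighted inversely to its running average, so all losses are pulled
# to the same scale before being summed. Names are illustrative.

class AdaptiveLossBalancer:
    def __init__(self, n_losses, momentum=0.9, eps=1e-8):
        self.avg = [0.0] * n_losses   # running average of each loss
        self.momentum = momentum
        self.eps = eps
        self.steps = 0

    def weighted_sum(self, losses):
        """Update the running averages and return the adaptively
        weighted sum of the current losses."""
        self.steps += 1
        total = 0.0
        for i, loss in enumerate(losses):
            # exponential moving average with bias correction
            self.avg[i] = self.momentum * self.avg[i] + (1 - self.momentum) * loss
            corrected = self.avg[i] / (1 - self.momentum ** self.steps)
            weight = 1.0 / (corrected + self.eps)   # inverse-average weight
            total += weight * loss
        return total
```

Because each term is loss divided by its own average, the gradient direction matches that of a sum of scale-free ratios, which is consistent with the geometric-mean objective mentioned in the abstract: the geometric mean is invariant to rescaling any individual loss.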