36 research outputs found
RecycleNet: Latent Feature Recycling Leads to Iterative Decision Refinement
Despite the remarkable success of deep learning systems over the last decade,
a key difference remains between neural network and human decision-making: as
humans, we can not only form a decision on the spot, but also ponder,
revisiting an initial guess from different angles, distilling relevant
information, and arriving at a better decision. Here, we propose RecycleNet, a
latent feature recycling method that instills a pondering capability in neural
networks, letting them refine initial decisions over a number of
recycling steps, where outputs are fed back into earlier network layers in an
iterative fashion. This approach makes minimal assumptions about the neural
network architecture and thus can be implemented in a wide variety of contexts.
Using medical image segmentation as the evaluation environment, we show that
latent feature recycling enables the network to iteratively refine initial
predictions even beyond the iterations seen during training, converging towards
an improved decision. We evaluate this across a variety of segmentation
benchmarks and show consistent improvements even compared with top-performing
segmentation methods. This allows trading increased computation time for
improved performance, which can be beneficial, especially for safety-critical
applications.
Comment: Accepted at the 2024 Winter Conference on Applications of Computer Vision (WACV).
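The recycling loop described in the abstract can be sketched in a few lines. This is a minimal NumPy toy under assumed dimensions, not the paper's architecture: the output is projected back through a hypothetical feedback matrix `W_rec` and injected into an earlier (hidden) layer for a fixed number of recycling steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights; all names and sizes are illustrative, not from the paper.
W_in = rng.normal(scale=0.1, size=(8, 16))    # input -> hidden
W_out = rng.normal(scale=0.1, size=(16, 4))   # hidden -> output
W_rec = rng.normal(scale=0.1, size=(4, 16))   # output fed back to hidden

def forward(x, recycling_steps=3):
    """Run the network, feeding its output back into an earlier layer."""
    h = np.tanh(x @ W_in)          # initial hidden features
    y = h @ W_out                  # initial decision
    for _ in range(recycling_steps):
        # Recycle: inject the previous output into the earlier layer,
        # then recompute the decision from the refined features.
        h = np.tanh(x @ W_in + y @ W_rec)
        y = h @ W_out
    return y

x = rng.normal(size=(1, 8))
y0 = forward(x, recycling_steps=0)   # on-the-spot decision
y3 = forward(x, recycling_steps=3)   # decision after pondering
```

Because the loop reuses the same weights, the number of recycling steps can be increased at inference time, which is what lets computation be traded for accuracy.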
Quick Search for Rare Events
Rare events can occur in many applications. When manifested as opportunities
to be exploited, risks to be mitigated, or particular features to be
extracted, such events become of paramount significance. Due to their
sporadic nature, the information-bearing signals associated with rare events
often lie in a large set of irrelevant signals and are not easily accessible.
This paper provides a statistical framework for detecting such events so that
an optimal balance between detection reliability and agility, as two opposing
performance measures, is established. The core component of this framework is a
sampling procedure that adaptively and quickly focuses the
information-gathering resources on the segments of the dataset that bear the
information pertinent to the rare events. Particular focus is placed on
Gaussian signals with the aim of detecting signals with rare mean and variance
values.
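The adaptive, agile sampling the abstract describes is in the spirit of a sequential test scanned across candidate sequences: spend few samples on sequences that quickly look like noise, and stop when evidence for a rare event crosses a reliability threshold. The sketch below is a simplified illustration, not the paper's procedure; the known mean shift, unit variance, and threshold values are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: most sequences are N(0,1) noise; a rare sequence has
# mean MU. We scan sequences one at a time with a sequential likelihood-ratio
# test, abandoning a sequence quickly when evidence favors "noise" (agility)
# and declaring when evidence strongly favors "signal" (reliability).
MU = 2.0           # rare-event mean shift (assumed known for illustration)
A = np.log(100.0)  # upper threshold: declare a rare event
B = np.log(0.01)   # lower threshold: abandon the sequence as noise

def llr(x):
    # Log-likelihood ratio of N(MU, 1) vs N(0, 1) for one sample.
    return MU * x - MU**2 / 2

def quick_search(sequences):
    """Return the index of the first sequence declared a rare event, else None."""
    for i, seq in enumerate(sequences):
        s = 0.0
        for x in seq:
            s += llr(x)
            if s >= A:
                return i       # reliable detection
            if s <= B:
                break          # abandon quickly, move to the next sequence
    return None

noise = [rng.normal(0.0, 1.0, size=50) for _ in range(9)]
rare = rng.normal(MU, 1.0, size=50)
found = quick_search(noise[:5] + [rare] + noise[5:])
```

Raising the upper threshold improves detection reliability at the cost of agility; the two thresholds make the reliability/agility trade-off explicit.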
Grounding deep models of visual data
Deep models are state-of-the-art for many computer vision tasks, including object classification, action recognition, and captioning. As Artificial Intelligence systems that utilize deep models become ubiquitous, it is also becoming crucial to explain why they make certain decisions: grounding model decisions. In this thesis, we study:
1) Improving Model Classification. We show that utilizing web action images alongside videos when training for action recognition yields significant performance boosts for convolutional models. Without explicit grounding, labeled web action images tend to contain discriminative action poses, which highlight discriminative portions of a video's temporal progression.
2) Spatial Grounding. We visualize spatial evidence of deep model predictions using a discriminative top-down attention mechanism called Excitation Backprop. We show that such visualizations are equally informative for correct and incorrect model predictions, and highlight the shift of focus when different training strategies are adopted.
3) Spatial Grounding for Improving Model Classification at Training Time. We propose a guided dropout regularizer for deep networks based on the evidence of a network prediction. This approach penalizes the neurons that are most relevant for the model's prediction. By dropping such high-saliency neurons, the network is forced to learn alternative paths in order to maintain loss minimization. We demonstrate better generalization ability, increased utilization of network neurons, and higher resilience to network compression.
4) Spatial Grounding for Improving Model Classification at Test Time. We propose Guided Zoom, an approach that utilizes spatial grounding to make more informed predictions at test time. Guided Zoom compares the evidence used to make a preliminary decision with the evidence of correctly classified training examples to ensure evidence-prediction consistency, and otherwise refines the prediction. We demonstrate accuracy gains for fine-grained classification.
5) Spatiotemporal Grounding. We devise a formulation that simultaneously grounds evidence in space and time, in a single pass, using top-down saliency. We visualize the spatiotemporal cues that contribute to a deep recurrent neural network's classification/captioning output. Based on these spatiotemporal cues, we are able to localize segments within a video that correspond to a specific action, or to a phrase from a caption, without explicitly optimizing/training for these tasks.
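The guided dropout idea in item 3, dropping the neurons most relevant to the current prediction, can be sketched generically. The per-neuron saliency scores here are random stand-ins for the evidence-based scores (e.g. from Excitation Backprop) the thesis uses; the function name and drop fraction are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def guided_dropout(activations, saliency, drop_frac=0.25):
    """Zero out the highest-saliency activations, forcing alternative paths.

    `saliency` is any per-neuron relevance score for the current prediction;
    standard dropout would instead zero neurons uniformly at random.
    """
    k = int(drop_frac * saliency.size)
    drop_idx = np.argsort(saliency)[-k:]   # indices of the top-k salient neurons
    mask = np.ones_like(activations)
    mask[drop_idx] = 0.0
    return activations * mask

acts = rng.random(16)   # toy layer activations
sal = rng.random(16)    # stand-in saliency scores
out = guided_dropout(acts, sal, drop_frac=0.25)
```

Because the most informative paths are suppressed during training, the loss can only be minimized by spreading evidence across additional neurons, which is the mechanism behind the reported generalization and compression-resilience gains.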
Integrating IoT Analytics into Marketing Decision Making: A Smart Data-Driven Approach
With the advent of the Internet of Things (IoT), businesses have gained access to vast amounts of data generated by interconnected devices. By leveraging IoT analytics and marketing intelligence, organizations can extract valuable insights from this data to enhance decision-making. This paper presents a comprehensive methodology for data-driven decision-making at the intersection of IoT analytics and marketing intelligence, illustrated with a real-time example and followed by an inference and discussion of the results. We explore how such data-driven decision-making can empower organizations to optimize their marketing strategies, customer experiences, and overall business performance.