Robust Deep Multi-Modal Sensor Fusion using Fusion Weight Regularization and Target Learning
Sensor fusion has wide applications in many domains including health care and
autonomous systems. While the advent of deep learning has enabled promising
multi-modal fusion of high-level features and end-to-end sensor fusion
solutions, existing deep learning-based sensor fusion techniques, including deep
gating architectures, are not always resilient, leading to the issue of fusion
weight inconsistency. We propose deep multi-modal sensor fusion architectures
with enhanced robustness, particularly in the presence of sensor failures. At
the core of our gating architectures are fusion weight regularization and
fusion target learning operating on auxiliary unimodal sensing networks
appended to the main fusion model. The proposed regularized gating
architectures outperform the existing deep learning architectures with and
without gating under both clean and corrupted sensory inputs resulting from
sensor failures. The demonstrated improvements are particularly pronounced when
one or more sensory modalities are corrupted.
Comment: 8 page
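To make the gating idea above concrete, a minimal PyTorch sketch follows: per-modality encoders are combined with softmax fusion weights, and a simple regularizer pulls those weights toward target values (here a uniform placeholder standing in for targets learned from auxiliary unimodal networks). The module names, dimensions, and the exact form of the regularizer and targets are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of gated multi-modal fusion with a fusion-weight regularizer.
# All names, dimensions, and the regularizer/target choices are assumptions.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, feat_dims, hidden=64, num_classes=5):
        super().__init__()
        # One small encoder per sensory modality.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in feat_dims]
        )
        # Gating network produces one fusion weight per modality.
        self.gate = nn.Linear(hidden * len(feat_dims), len(feat_dims))
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, inputs):
        feats = [enc(x) for enc, x in zip(self.encoders, inputs)]
        concat = torch.cat(feats, dim=-1)
        weights = torch.softmax(self.gate(concat), dim=-1)   # fusion weights
        fused = sum(w.unsqueeze(-1) * f
                    for w, f in zip(weights.unbind(-1), feats))
        return self.head(fused), weights

def fusion_weight_regularizer(weights, targets):
    # Hypothetical regularizer: pull the predicted fusion weights toward
    # target weights (in the paper these come from auxiliary unimodal nets).
    return torch.mean((weights - targets) ** 2)

# Usage: total loss = task loss + lambda * fusion-weight regularizer.
model = GatedFusion(feat_dims=[16, 32, 8])
x = [torch.randn(4, d) for d in (16, 32, 8)]
logits, w = model(x)
targets = torch.full_like(w, 1.0 / 3)          # placeholder fusion targets
loss = nn.functional.cross_entropy(logits, torch.randint(0, 5, (4,))) \
       + 0.1 * fusion_weight_regularizer(w, targets)
loss.backward()
```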
DL Multi-sensor information fusion service selective information scheme for improving the Internet of Things based user responses
Multi-sensor information fusion helps different services meet application requirements through independent and joint data assimilation. The role of multiple sensors in smart connected applications helps to improve their efficiency regardless of the users. However, the assimilation of different information is subject to resource and time constraints at the time of application response. This results in partial fulfillment of the application services, and hence, this article introduces a service selective information fusion processing (SSIFP) scheme. The proposed scheme identifies service-specific sensor information for satisfying the application service demands. The identification process is eased with deep recurrent learning in determining the level of sensor information fusion. This level identification reduces the unavailability of services (resource constraint) and delays in application services (time constraint). Through this identification, the applications' precise demands are detected, and selective fusion is performed to mitigate the issues above. The proposed system's performance is verified using the metrics of delay, fusion rate, service loss, and backlogs.
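A hedged sketch of the selection step described here, assuming a recurrent network (GRU) scores each sensor's relevance to a requested service and only the top-scoring sensors are fused; all names, dimensions, and the top-k rule are illustrative, not the published SSIFP scheme.

```python
# Illustrative service-selective fusion: score sensors with a GRU, fuse top-k.
# The architecture and selection rule are assumptions for illustration only.
import torch
import torch.nn as nn

class SelectiveFusion(nn.Module):
    def __init__(self, sensor_dim, num_services, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(sensor_dim, hidden, batch_first=True)
        self.service_emb = nn.Embedding(num_services, hidden)
        self.score = nn.Linear(hidden, 1)

    def forward(self, readings, service_id, k=2):
        # readings: (batch, num_sensors, sensor_dim), treated as a sequence.
        h, _ = self.rnn(readings)                        # per-sensor states
        h = h + self.service_emb(service_id).unsqueeze(1)
        scores = self.score(h).squeeze(-1)               # (batch, num_sensors)
        topk = scores.topk(k, dim=-1).indices            # select k sensors
        idx = topk.unsqueeze(-1).expand(-1, -1, h.size(-1))
        selected = h.gather(1, idx)                      # (batch, k, hidden)
        return selected.mean(dim=1)                      # fused representation

fusion = SelectiveFusion(sensor_dim=8, num_services=3)
out = fusion(torch.randn(4, 5, 8), torch.tensor([0, 1, 2, 0]))
print(out.shape)  # torch.Size([4, 32])
```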
Context-aware Collaborative Neuro-Symbolic Inference in Internet of Battlefield Things
IoBTs must feature collaborative, context-aware, multi-modal fusion for real-time, robust decision-making in adversarial environments. The integration of machine learning (ML) models into IoBTs has been successful at solving these problems at a small scale (e.g., AiTR), but state-of-the-art ML models grow exponentially with increasing temporal and spatial scale of modeled phenomena, and can thus become brittle, untrustworthy, and vulnerable when interpreting large-scale tactical edge data. To address this challenge, we need to develop principles and methodologies for uncertainty-quantified neuro-symbolic ML, where learning and inference exploit symbolic knowledge and reasoning, in addition to multi-modal and multi-vantage sensor data. The approach features integrated neuro-symbolic inference, where symbolic context is used by deep learning, and deep learning models provide atomic concepts for symbolic reasoning. The incorporation of high-level symbolic reasoning improves data efficiency during training and makes inference more robust, interpretable, and resource-efficient. In this paper, we identify the key challenges in developing context-aware collaborative neuro-symbolic inference in IoBTs and review some recent progress in addressing these gaps.
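A toy sketch of the neuro-symbolic pattern described above, assuming a neural model that emits probabilities for atomic concepts and a hand-written soft rule that combines them into a higher-level decision; the concepts and rule are invented purely for illustration.

```python
# Toy neuro-symbolic pipeline: neural concept scores feed a symbolic rule.
# Concepts, rule, and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

concepts = ["vehicle", "moving", "friendly"]

perception = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                           nn.Linear(64, len(concepts)), nn.Sigmoid())

def symbolic_rule(p):
    # threat(x) :- vehicle(x), moving(x), not friendly(x)
    # Soft (product/complement) semantics over the neural concept scores.
    return p["vehicle"] * p["moving"] * (1.0 - p["friendly"])

features = torch.randn(1, 128)                 # e.g. a fused sensor embedding
scores = perception(features).squeeze(0)
p = dict(zip(concepts, scores.tolist()))
print("threat score:", symbolic_rule(p))
```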
Optimized Gated Deep Learning Architectures for Sensor Fusion
Sensor fusion is a key technology that integrates various sensory inputs to
allow for robust decision making in many applications such as autonomous
driving and robot control. Deep neural networks have been adopted for sensor
fusion in a body of recent studies. Among these, the so-called netgated
architecture was proposed, which has demonstrated improved performance over
conventional convolutional neural networks (CNNs). In this paper, we address
several limitations of the baseline netgated architecture by proposing two
further optimized architectures: a coarser-grained gated architecture employing
(feature) group-level fusion weights and a two-stage gated architecture
leveraging both the group-level and feature-level fusion weights. Using driving
mode prediction and human activity recognition datasets, we demonstrate the
significant performance improvements brought by the proposed gated
architectures and also their robustness in the presence of sensor noise and
failures.
Comment: 10 pages, 5 figures. Submitted to ICLR 201
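A hedged sketch of the group-level and two-stage gating ideas, assuming one encoder per feature group, a sigmoid feature-level gate, and a softmax group-level gate; layer sizes and the weight-combination rule are assumptions, not the authors' architecture.

```python
# Illustrative two-stage gated fusion: feature-level and group-level weights.
# Encoders, gate forms, and the combination rule are assumptions.
import torch
import torch.nn as nn

class TwoStageGatedFusion(nn.Module):
    def __init__(self, group_dims, hidden=64, num_classes=5):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in group_dims]
        )
        total = hidden * len(group_dims)
        self.feature_gate = nn.Linear(total, total)           # per-feature weights
        self.group_gate = nn.Linear(total, len(group_dims))   # per-group weights
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, inputs):
        feats = [enc(x) for enc, x in zip(self.encoders, inputs)]
        concat = torch.cat(feats, dim=-1)
        fw = torch.sigmoid(self.feature_gate(concat))         # stage 1: feature level
        gw = torch.softmax(self.group_gate(concat), dim=-1)   # stage 2: group level
        gated = (fw * concat).chunk(len(feats), dim=-1)
        fused = sum(gw[:, i:i + 1] * g for i, g in enumerate(gated))
        return self.head(fused)

model = TwoStageGatedFusion(group_dims=[12, 20, 8])
logits = model([torch.randn(4, d) for d in (12, 20, 8)])
print(logits.shape)  # torch.Size([4, 5])
```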