
    Non Parametric Distributed Inference in Sensor Networks Using Box Particles Messages

    This paper deals with the problem of inference in distributed systems where the probability model is stored in a distributed fashion. Graphical models provide powerful tools for modelling this kind of problem. Inspired by the box particle filter, which combines interval analysis with particle filtering to solve temporal inference problems, this paper introduces a belief propagation-like message-passing algorithm that uses bounded-error methods to solve the inference problem defined on an arbitrary graphical model. We present the theoretical derivation of the novel algorithm and test its performance on the problem of calibration in wireless sensor networks: the positioning of a number of randomly deployed sensors with respect to a reference frame defined by a set of anchor nodes whose positions are known a priori. The new algorithm, while achieving better or similar performance, substantially reduces both the information circulating in the network and the required computation time.
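    To give a flavour of the bounded-error idea behind box-particle methods, the sketch below shows the simplest case for the calibration problem: each anchor with a known position and a range measurement with bounded error constrains the unknown sensor to an annulus, and intersecting axis-aligned boxes that enclose those annuli contracts the set of positions consistent with all measurements. This is a toy illustration (outer boxes only, no contractors or message passing); all numeric values are assumptions.

```python
import numpy as np

def range_box(anchor, r, err):
    # Outer bounding box of the annulus {p : r - err <= ||p - anchor|| <= r + err}.
    return anchor - (r + err), anchor + (r + err)

def intersect(box_a, box_b):
    # Intersection of two axis-aligned boxes, each given as (lower, upper) corners.
    return np.maximum(box_a[0], box_b[0]), np.minimum(box_a[1], box_b[1])

# Toy scenario: three anchors at known positions, one sensor at an unknown position.
anchors = [np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([0.0, 10.0])]
true_pos = np.array([3.0, 4.0])
err = 0.5  # assumed bound on the range-measurement error

box = (np.full(2, -np.inf), np.full(2, np.inf))
for a in anchors:
    r = np.linalg.norm(true_pos - a)        # noise-free range, bounded by +/- err
    box = intersect(box, range_box(a, r, err))
# `box` now encloses every position consistent with all three range bounds.
```

Each additional anchor can only shrink the box, and the true position is guaranteed to remain inside it as long as the error bounds hold, which is the key consistency property bounded-error methods trade against the tighter (but not guaranteed) estimates of probabilistic filters.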

    Automated Inference of Cognitive Stress in-the-Wild

    We aim to build technology that combines mobile sensing systems to automatically infer a person’s cognitive stress and provide better, continuous stress-management support. Our main innovation is the use of a low-cost mobile thermal camera, integrated into a smartphone or other device, to produce new stress measures. We have developed a robust mobile tracking system that tracks a person’s breathing pattern by measuring temperature changes around the nostril region while the person faces the smartphone. Stress levels are automatically assessed by capturing breathing-pattern dynamics through a novel signature based on time and frequency values, using convolutional neural networks to reduce the need to hand-craft higher-level features. We are now extending the system to integrate multiple sensors (e.g., PPG and GSR) and behavioural information (context). The system is also being adapted for use on an industrial workfloor within the EU H2020 HUMAN research project, to support workers during stress-inducing tasks. Evaluations are being conducted both in the laboratory and in-the-wild (e.g., on an industrial workfloor).

    Task-Consistent Path Planning for Mobile 3D Printing

    In this paper, we explore the problem of task-consistent path planning for printing-in-motion via Mobile Manipulators (MM). MMs offer a potentially unlimited planar workspace and flexibility for print operations. However, most existing methods use mobility only to relocate an arm, which then prints while stationary. In this paper we present a new, fully autonomous path-planning approach for mobile material deposition. We use a modified version of the Rapidly-exploring Random Tree Star (RRT*) algorithm, informed by a constrained Inverse Reachability Map (IRM), to ensure task consistency. Our approach respects collision avoidance and end-effector reachability. It also detects when a print path cannot be completed in a single execution; in this case, it decomposes the path into several segments and repositions the base accordingly.
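    For readers unfamiliar with the sampling-based planner family the paper builds on, the sketch below is a plain RRT (not the authors' IRM-informed RRT*) in a toy 2D world with one circular obstacle: repeatedly sample a point, extend the nearest tree node a fixed step towards it, reject colliding extensions, and stop when the tree reaches the goal. Everything here (world size, obstacle, step length) is an illustrative assumption.

```python
import math
import random

random.seed(0)
OBSTACLE = ((5.0, 5.0), 1.5)      # assumed toy obstacle: centre, radius
START, GOAL = (1.0, 1.0), (9.0, 9.0)
STEP, GOAL_TOL = 0.5, 0.5

def collides(p):
    (cx, cy), r = OBSTACLE
    return math.hypot(p[0] - cx, p[1] - cy) <= r

def steer(frm, to):
    # Move from `frm` towards `to` by at most STEP.
    d = math.hypot(to[0] - frm[0], to[1] - frm[1])
    if d <= STEP:
        return to
    return (frm[0] + STEP * (to[0] - frm[0]) / d,
            frm[1] + STEP * (to[1] - frm[1]) / d)

nodes = {START: None}             # tree stored as node -> parent
path_found = False
for _ in range(5000):
    sample = (random.uniform(0, 10), random.uniform(0, 10))
    nearest = min(nodes, key=lambda n: math.hypot(n[0] - sample[0], n[1] - sample[1]))
    new = steer(nearest, sample)
    if collides(new):
        continue
    nodes[new] = nearest
    if math.hypot(new[0] - GOAL[0], new[1] - GOAL[1]) <= GOAL_TOL:
        path_found = True
        break

# Recover the path by walking parent pointers back to the start.
path = []
if path_found:
    n = new
    while n is not None:
        path.append(n)
        n = nodes[n]
    path.reverse()
```

RRT* additionally rewires the tree to shorten paths as more samples arrive; the paper's variant further biases sampling with the constrained IRM so that every base pose along the path keeps the print point reachable by the arm.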

    Kalman Filter Tuning with Bayesian Optimization

    Many state estimation algorithms must be tuned: given the state-space process and observation models, the process and observation noise parameters must be chosen. Conventional tuning approaches rely on heuristic hand-tuning or gradient-based optimization techniques to minimize a performance cost function. However, the relationship between tuned noise values and estimator performance is highly nonlinear and stochastic. Therefore, the tuning solutions can easily get trapped in local minima, which can lead to poor choices of noise parameters and suboptimal estimator performance. This paper describes how Bayesian Optimization (BO) can overcome these issues. BO poses optimization as a Bayesian search problem for a stochastic "black box" cost function, where the goal is to search the solution space to maximize the probability of improving the current best solution. As such, BO offers a principled approach to optimization-based estimator tuning in the presence of local minima and performance stochasticity. While extended Kalman filters (EKFs) are the main focus of this work, BO can be similarly used to tune other related state-space filters. The method presented here uses performance metrics derived from normalized innovation squared (NIS) filter residuals obtained via sensor data, which renders knowledge of ground-truth states unnecessary. The robustness, accuracy, and reliability of BO-based tuning are illustrated on practical nonlinear state estimation problems, including closed-loop aero-robotic control.
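    As a toy illustration of why NIS is a useful ground-truth-free tuning signal, the sketch below runs a scalar random-walk Kalman filter on simulated data and scores candidate process-noise values by how close the average NIS is to its ideal value of 1 (the innovation dimension). A coarse candidate search stands in for the paper's Bayesian Optimization, and all models and numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
Q_TRUE, R = 0.2, 1.0      # true process noise (unknown to the tuner), known obs noise
T = 2000

# Simulate a random-walk state and noisy observations of it.
x = np.cumsum(rng.normal(0.0, np.sqrt(Q_TRUE), T))
z = x + rng.normal(0.0, np.sqrt(R), T)

def avg_nis(q):
    """Run a scalar Kalman filter with process-noise guess q; return mean NIS."""
    m, P, nis = 0.0, 1.0, []
    for zk in z:
        P = P + q             # predict (F = 1, random walk)
        S = P + R             # innovation covariance
        v = zk - m            # innovation
        nis.append(v * v / S) # normalized innovation squared
        K = P / S
        m = m + K * v
        P = (1.0 - K) * P
    return float(np.mean(nis))

# A consistent filter has E[NIS] = dim(z) = 1, so score candidates by |mean NIS - 1|.
candidates = [0.01, 0.05, 0.1, 0.2, 0.5, 1.0]
best_q = min(candidates, key=lambda q: abs(avg_nis(q) - 1.0))
```

An underestimated q makes the filter overconfident (mean NIS well above 1), while an overestimated q makes it pessimistic (mean NIS below 1); BO replaces the grid with a probabilistic search over this noisy cost surface.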

    Time Dependence in Kalman Filter Tuning

    In this paper, we propose an approach to address the problems of ambiguity in tuning the process and observation noises for a discrete-time linear Kalman filter. Conventional approaches to tuning (e.g. using normalized estimation error squared and covariance minimization) compute empirical measures of filter performance. The parameters are selected, either manually or by some kind of optimization algorithm, to maximize these measures of performance. However, there are two challenges with this approach. First, in theory, many of these measures do not guarantee a unique solution due to observability issues. Second, in practice, empirically computed statistical quantities can be very noisy due to a finite number of samples. We propose a method to overcome these limitations. Our method has two main parts. The first is to ensure that the tuning problem has a single unique solution; we achieve this by simultaneously tuning the filter over multiple different prediction intervals. Although this yields a unique solution, practical issues (such as sampling noise) mean that it cannot be applied directly. Therefore, we use Bayesian Optimization, a technique that handles noisy data and the local minima it introduces. We demonstrate our approach on a reference example and show that it obtains good results. We share the source code for the benefit of the community.
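    The multiple-prediction-interval idea can be illustrated on a toy scalar filter: evaluate NIS consistency both at the native sampling rate and on every second observation (over which the process noise accumulates), and score a candidate noise value by its inconsistency at both intervals simultaneously. This is a simplified sketch under assumed toy models, with a coarse grid standing in for the paper's Bayesian Optimization.

```python
import numpy as np

rng = np.random.default_rng(7)
Q_TRUE, R, T = 0.3, 1.0, 4000   # assumed toy values; Q_TRUE is unknown to the tuner
x = np.cumsum(rng.normal(0.0, np.sqrt(Q_TRUE), T))
z = x + rng.normal(0.0, np.sqrt(R), T)

def mean_nis(q, stride):
    """Scalar random-walk Kalman filter run on every `stride`-th observation.
    Over a stride of k steps the process noise accumulates to k * q."""
    m, P, nis = 0.0, 1.0, []
    for zk in z[::stride]:
        P = P + stride * q
        S = P + R
        v = zk - m
        nis.append(v * v / S)
        K = P / S
        m = m + K * v
        P = (1.0 - K) * P
    return float(np.mean(nis))

def cost(q):
    # Penalize NIS inconsistency at two prediction intervals simultaneously,
    # which removes candidate values that look consistent at one interval only.
    return abs(mean_nis(q, 1) - 1.0) + abs(mean_nis(q, 2) - 1.0)

candidates = np.linspace(0.05, 1.0, 20)
best_q = min(candidates, key=cost)
```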

    Structured prediction of unobserved voxels from a single depth image

    Building a complete 3D model of a scene, given only a single depth image, is underconstrained. To obtain a full volumetric model, one needs either multiple views or a single view together with a library of unambiguous 3D models that fit the shape of each individual object in the scene. We hypothesize that objects of dissimilar semantic classes often share similar 3D shape components, enabling a limited dataset to model the shape of a wide range of objects, and hence to estimate their hidden geometry. Exploring this hypothesis, we propose an algorithm that can complete the unobserved geometry of tabletop-sized objects, based on a supervised model trained on already available volumetric elements. Our model maps from a local observation in a single depth image to an estimate of the surface shape in the surrounding neighborhood. We validate our approach both qualitatively and quantitatively on a range of indoor object collections and challenging real scenes.

    NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning

    Deep Neural Networks (DNNs) deliver state-of-the-art performance in many image recognition and understanding applications. However, despite their outstanding performance, these models are black boxes and it is hard to understand how they make their decisions. Over the past few years, researchers have studied the problem of explaining why DNNs predicted their results. However, existing techniques are either obtrusive, requiring changes to model training, or suffer from low output quality. In this paper, we present a novel method, NeuroMask, for generating an interpretable explanation of classification model results. When applied to image classification models, NeuroMask identifies the image parts that are most important to the classifier's result by applying a mask that hides or reveals different parts of the image before feeding it back into the model. The mask values are tuned by minimizing a properly designed cost function that preserves the classification result while encouraging an interpretable mask. Experiments using state-of-the-art Convolutional Neural Networks for image recognition on different datasets (CIFAR-10 and ImageNet) show that NeuroMask successfully localizes the parts of the input image that are most relevant to the DNN decision. A visual quality comparison between NeuroMask explanations and those of other methods shows NeuroMask to be both accurate and interpretable.
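    The core mechanism, optimizing a mask so that the target prediction is preserved while the mask stays sparse, can be shown on a deliberately tiny stand-in model. The sketch below uses a frozen linear "classifier" instead of a CNN (so the gradient is analytic and no deep-learning library is needed); the cost is the negative target logit under the mask plus an L1 sparsity penalty. This is an assumed simplification of the idea, not NeuroMask itself.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C, TARGET = 8, 3, 0          # feature dim, classes, class to explain (toy assumptions)
W = rng.normal(size=(C, D))     # frozen toy linear "classifier": logits = W @ input
x = rng.normal(size=D)          # input to explain

LAM, LR = 0.05, 0.1             # sparsity weight and step size (assumed)
mask = np.full(D, 0.5)
for _ in range(300):
    # cost(mask) = -logit_TARGET(mask * x) + LAM * sum(mask)
    # For a linear model, d cost / d mask_i = -W[TARGET, i] * x[i] + LAM.
    grad = -W[TARGET] * x + LAM
    mask = np.clip(mask - LR * grad, 0.0, 1.0)

# Features that raise the target logit more than the sparsity penalty keep
# mask ~1; everything else is masked out.
relevant = np.flatnonzero(mask > 0.5)
```

With a real CNN the same loop backpropagates the gradient through the frozen network into the mask; the mask that survives is the explanation heat-map shown to the user.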

    Approaches to address the Data Skew Problem in Federated Learning

    A Federated Learning approach consists of creating an AI model from multiple data sources without moving large amounts of data to a central environment. Federated learning can be very useful in a tactical coalition environment, where data can be collected individually by each of the coalition partners but network connectivity is inadequate to move the data to a central environment. However, the collected data is often dirty and imperfect. The data can be imbalanced, and in some cases, some classes can be completely missing from some coalition partners. Under these conditions, traditional approaches to federated learning can result in models that are highly inaccurate. In this paper, we propose approaches that can produce good machine learning models even in environments where the data may be highly skewed, and we study their performance in different environments.
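    For context, the standard federated baseline that skew breaks is sample-count-weighted aggregation (FedAvg-style): each partner computes a local result on its own data and the server combines them weighted by how much data each partner holds. The sketch below illustrates this with class-frequency vectors standing in for model updates; the partner data is an assumed toy example of the skew the paper describes (one partner entirely missing a class).

```python
import numpy as np

NUM_CLASSES = 4
# Toy coalition: each partner holds a skewed slice of the label space.
client_labels = [
    np.array([0, 0, 0, 1]),          # partner A: mostly class 0
    np.array([1, 1, 2, 2, 2, 2]),    # partner B: classes 1-2 only, class 3 missing
    np.array([3, 3, 3]),             # partner C: only class 3
]

def local_update(labels):
    """Each partner computes its local class distribution
    (a stand-in for a locally trained model update)."""
    counts = np.bincount(labels, minlength=NUM_CLASSES)
    return counts / counts.sum(), len(labels)

updates = [local_update(lbl) for lbl in client_labels]
total = sum(n for _, n in updates)

# FedAvg-style aggregation: weight each local result by its sample count, so
# no raw data moves, yet classes missing at one partner are covered by others.
global_model = sum(w * (n / total) for w, n in updates)
```

With real models the weighted sum is taken over parameter vectors rather than class frequencies, and the paper's contribution is what to do when this plain weighting is no longer enough because the skew itself biases the local updates.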

    Guidance and surroundings awareness in outdoor handheld augmented reality

    Handheld and wearable devices are becoming ubiquitous in our lives, and augmented reality technology is stepping out of the laboratory environment, becoming ready to be used by anyone with a portable device. The success of augmented reality applications for pedestrians depends on different factors, including a reliable guidance system and the prevention of risks. We show that different guidance systems can be complementary, providing directions to a point of interest and offering clues that help users find the augmented data when they get close to the location they have to visit. We tested the helpfulness of a map with the points of interest, an image preview of the next point of interest to visit, and an arrow showing the direction to it. The results show that the effectiveness of these guidance systems depends on the distance to the point of interest and the accuracy of the data obtained from the Global Positioning System. We also measured the total time that participants spent looking at the screen, as well as the perceived elapsed time, as a measurement of real-world dissociation. Finally, we discuss preliminary results on minimizing the risk of accidents when using augmented reality applications in an outdoor urban environment.

    Misclassification Risk and Uncertainty Quantification in Deep Classifiers

    In this paper, we propose risk-calibrated evidential deep classifiers to reduce the costs associated with classification errors. We use two main approaches. The first is to develop methods to quantify the uncertainty of a classifier’s predictions and reduce the likelihood of acting on erroneous predictions. The second is a novel way to train the classifier such that erroneous classifications are biased towards less risky categories. We combine these two approaches in a principled way. While doing this, we extend evidential deep learning with pignistic probabilities, which are used to quantify the uncertainty of classification predictions and to model rational decision making under uncertainty. We evaluate the performance of our approach on several image classification tasks. We demonstrate that our approach allows us to (i) incorporate misclassification cost while training deep classifiers, (ii) accurately quantify the uncertainty of classification predictions, and (iii) simultaneously learn how to make classification decisions that minimize the expected cost of classification errors.
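    The decision-making side of this can be sketched with standard evidential-deep-learning quantities: per-class evidence defines a Dirichlet distribution, belief masses and a leftover uncertainty mass are derived from it, the pignistic transform splits the uncertainty equally across classes, and the prediction minimizes expected cost under those probabilities. The evidence values and cost matrix below are illustrative assumptions, and the exact transform and costs used in the paper may differ.

```python
import numpy as np

K = 3
evidence = np.array([9.0, 2.0, 1.0])   # assumed per-class evidence from an evidential head
alpha = evidence + 1.0                  # Dirichlet parameters
S = alpha.sum()                         # Dirichlet strength

belief = evidence / S                   # per-class belief mass
u = K / S                               # leftover uncertainty mass (belief + u sums to 1)
pignistic = belief + u / K              # pignistic transform: split u equally over classes

# Risk-aware decision: pick the class minimizing expected misclassification cost.
cost = np.array([[0.0, 1.0, 5.0],      # assumed cost matrix, rows = true class,
                 [1.0, 0.0, 1.0],      # columns = predicted class; confusing
                 [5.0, 1.0, 0.0]])     # class 0 with class 2 is very costly
expected_cost = cost.T @ pignistic      # expected cost of each possible prediction
decision = int(np.argmin(expected_cost))
```

Note that with this cost matrix the risk-aware decision (class 1) differs from the most probable class (class 0): the residual uncertainty makes the cheap "middle" class the safer bet, which is exactly the behaviour the paper trains for.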