4,001 research outputs found

    Robust Environmental Mapping by Mobile Sensor Networks

    Constructing a spatial map of environmental parameters is a crucial step in preventing hazardous chemical leakages and forest fires, and in estimating spatially distributed physical quantities such as terrain elevation. Although prior methods can perform such mapping tasks efficiently by dispatching a group of autonomous agents, they cannot ensure satisfactory convergence to the underlying ground-truth distribution in a decentralized manner when any of the agents fail. Since the agents used for such mapping are typically inexpensive and prone to failure, this leads to poor overall mapping performance in real-world applications and can, in certain cases, endanger human safety. This paper presents a Bayesian approach for robust spatial mapping of environmental parameters that deploys a group of mobile robots equipped with short-range sensors and capable of ad-hoc communication, in the presence of hardware failures. Our approach first uses a variant of the Voronoi diagram to partition the region to be mapped into disjoint regions, each associated with at least one robot. These robots are then deployed in a decentralized manner to maximize the likelihood that at least one robot detects every target in its associated region despite a non-zero probability of failure. A suite of simulation results demonstrates the effectiveness and robustness of the proposed method compared to existing techniques. Comment: accepted to ICRA 201
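
    The abstract does not include an implementation; a minimal sketch of the two ingredients it describes (a Voronoi-style partition of the region, and coverage that accounts for a per-robot failure probability) might look like the following, where the robot positions, failure probability, detection rate, and grid resolution are illustrative assumptions rather than values from the paper.

```python
import numpy as np

# Hypothetical setup (not from the paper): three robot positions in a unit
# square, a per-robot failure probability, and a per-robot detection rate.
robots = np.array([[0.2, 0.3], [0.7, 0.8], [0.5, 0.1]])
p_fail = 0.1                # probability that any single robot has failed
p_detect_if_ok = 0.9        # detection probability of a functioning robot

# Voronoi-style partition: assign every cell of a coarse grid to its
# nearest robot, giving the disjoint regions described in the abstract.
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
cells = np.stack([xs.ravel(), ys.ravel()], axis=1)
dists = np.linalg.norm(cells[:, None, :] - robots[None, :, :], axis=2)
owner = dists.argmin(axis=1)        # index of the robot owning each cell
print(np.bincount(owner))           # number of grid cells per robot's region

def p_region_covered(n_robots: int) -> float:
    """P(at least one robot detects a target) with independent failures."""
    p_single = (1.0 - p_fail) * p_detect_if_ok
    return 1.0 - (1.0 - p_single) ** n_robots

# Associating a second robot with a region raises its coverage probability.
print(p_region_covered(1), p_region_covered(2))
```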

    Identifying unreliable sensors without a knowledge of the ground truth in deceptive environments

    This paper deals with the extremely fascinating area of “fusing” the outputs of sensors without any knowledge of the ground truth. In an earlier paper, the present authors pioneered a solution by mapping the problem onto the fascinating paradox of trying to identify stochastic liars without any additional information about the truth. Although that work was significant, it was constrained by a model in which “the truth prevails over lying”. Couched in the terminology of Learning Automata (LA), this corresponds to the Environment (since the Environment is treated as an entity in its own right, we choose to capitalize it rather than refer to it as an abstract “environment”) being “Stochastically Informative”. However, as explained in the paper, solving the problem under the condition that the Environment is “Stochastically Deceptive”, as opposed to informative, is far from trivial. In this paper, we provide a solution to the problem where the Environment is deceptive (we are not aware of any other solution to this problem within this setting, and so we believe that our solution is both pioneering and novel), i.e., when we are living in a world where “lying prevails over the truth”.
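
    As a purely illustrative aid (the probabilities and sensor model below are assumptions, not taken from the paper), the distinction between a stochastically informative and a stochastically deceptive Environment can be simulated as a sensor that reports the correct value with probability above or below 0.5, respectively.

```python
import random

def read_sensor(truth: bool, p_correct: float) -> bool:
    """Return a reading that is correct with probability p_correct.

    p_correct > 0.5 models a stochastically informative Environment
    ("the truth prevails over lying"); p_correct < 0.5 models a
    stochastically deceptive one ("lying prevails over the truth").
    """
    return truth if random.random() < p_correct else not truth

truth = True
informative = [read_sensor(truth, 0.8) for _ in range(1000)]
deceptive = [read_sensor(truth, 0.3) for _ in range(1000)]
print(sum(informative) / 1000, sum(deceptive) / 1000)  # roughly 0.8 vs 0.3
```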

    Probabilistic ToF and Stereo Data Fusion Based on Mixed Pixel Measurement Models

    This paper proposes a method for fusing data acquired by a ToF camera and a stereo pair, based on a model for depth measurement by ToF cameras that also accounts for depth-discontinuity artifacts due to the mixed pixel effect. This model is exploited within both ML and MAP-MRF frameworks for ToF and stereo data fusion. The proposed MAP-MRF framework is characterized by site-dependent range values, an important feature since it can be used both to improve accuracy and to decrease the computational complexity of standard MAP-MRF approaches. In order to optimize the site-dependent global cost function characteristic of the proposed MAP-MRF approach, the paper also introduces an extension to Loopy Belief Propagation that can be used in other contexts. Experimental data validate the proposed ToF measurement model and the effectiveness of the proposed fusion techniques.
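
    The paper's mixed-pixel measurement model and MAP-MRF optimization go well beyond a short snippet, but the basic ML side of the fusion (combining a ToF depth and a stereo depth per pixel, weighted by assumed noise variances) can be sketched roughly as follows; the depth maps and variances here are placeholder assumptions, not the paper's model.

```python
import numpy as np

# Placeholder depth maps (metres) and per-sensor noise variances; in the
# paper these come from the ToF camera, the stereo pair, and their models.
z_tof = np.random.uniform(1.0, 3.0, size=(240, 320))
z_stereo = z_tof + np.random.normal(0.0, 0.05, size=z_tof.shape)
var_tof, var_stereo = 0.02 ** 2, 0.05 ** 2

# Per-pixel ML estimate under independent Gaussian noise: the
# inverse-variance-weighted average of the two measurements.
w_tof, w_stereo = 1.0 / var_tof, 1.0 / var_stereo
z_fused = (w_tof * z_tof + w_stereo * z_stereo) / (w_tof + w_stereo)
print(z_fused.shape, float(np.abs(z_fused - z_tof).mean()))
```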

    On Solving the Problem of Identifying Unreliable Sensors Without a Knowledge of the Ground Truth: The Case of Stochastic Environments

    The purpose of this paper is to propose a solution to an extremely pertinent problem, namely, that of identifying unreliable sensors (in a domain of reliable and unreliable ones) without any knowledge of the ground truth. This fascinating paradox can be formulated in simple terms as trying to identify stochastic liars without any additional information about the truth. Though apparently impossible, we will show that it is feasible to solve the problem, a claim that is counterintuitive in and of itself. One aspect of our contribution is to show how redundancy can be introduced, and how it can be effectively utilized in resolving this paradox. Legacy work and the reported literature (for example, the so-called weighted majority algorithm) have merely addressed assessing the reliability of a sensor by comparing its readings to the ground truth, either in an online or an offline manner. Unfortunately, the fundamental assumption of revealing the ground truth cannot always be guaranteed (or even expected) in many real-life scenarios. While some extensions of the Condorcet jury theorem [9] can lead to a probabilistic guarantee on the quality of the fusion process, they do not provide a solution to the unreliable-sensor identification problem. The essence of our approach involves studying the agreement of each sensor with the rest of the sensors, rather than comparing the readings of individual sensors with the ground truth, as advocated in the literature. Under some mild conditions on the reliability of the sensors, we prove that we can, indeed, filter out the unreliable ones. Our approach leverages the theory of learning automata (LA) to gradually learn the identities of the reliable and unreliable sensors. To achieve this, we resort to a team of LA, where a distinct automaton is associated with each sensor. The solution provided here has been subjected to rigorous experimental tests, and the results presented are, in our opinion, both novel and conclusive.
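
    The core idea the abstract describes (scoring each sensor by its agreement with the rest of the team rather than with the ground truth, and updating a per-sensor reliability estimate over time) can be sketched roughly as follows; the simple exponential update, the constants, and the sensor probabilities are simplified assumptions, not the authors' exact learning-automata scheme.

```python
import random

N_SENSORS = 10
# Hypothetical per-sensor probabilities of reporting the truth (unknown to
# the algorithm): seven mostly truthful sensors and three mostly lying ones.
p_true = [0.9] * 7 + [0.2] * 3
reliability = [0.5] * N_SENSORS     # running agreement-based estimate
LR = 0.01                           # assumed learning rate

for _ in range(5000):
    truth = random.random() < 0.5
    readings = [truth if random.random() < p else not truth for p in p_true]
    for i, r in enumerate(readings):
        others = readings[:i] + readings[i + 1:]
        majority = sum(others) > len(others) / 2
        agrees = (r == majority)
        # Reward agreement with the rest of the team, penalise disagreement;
        # no ground truth is ever consulted.
        reliability[i] += LR * ((1.0 if agrees else 0.0) - reliability[i])

print([round(x, 2) for x in reliability])   # reliable sensors drift high
```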

    Using neurophysiological signals that reflect cognitive or affective state: Six recommendations to avoid common pitfalls

    Estimating cognitive or affective state from neurophysiological signals, and designing applications that make use of this information, requires expertise in many disciplines such as neurophysiology, machine learning, experimental psychology, and human factors. This makes it difficult to perform research that is strong in all its aspects, as well as to judge a study or application on its merits. On the occasion of the special topic “Using neurophysiological signals that reflect cognitive or affective state”, we summarize frequently occurring pitfalls and recommendations on how to avoid them, both for authors (researchers) and readers. They relate to defining the state of interest, the neurophysiological processes expected to be involved in that state, confounding factors, inadvertently “cheating” with classification analyses, insight into what underlies successful state estimation, and finally, the added value of neurophysiological measures in the context of an application. We hope that this paper will support the community in producing high-quality studies and well-validated, useful applications.
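
    As one concrete illustration of the “inadvertent cheating” pitfall mentioned above (the dataset and pipeline below are hypothetical, not from the paper), fitting preprocessing such as feature scaling inside each cross-validation fold, rather than on the full dataset, prevents test-fold statistics from leaking into the classifier.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical EEG-like features: 100 trials x 32 features, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))
y = rng.integers(0, 2, size=100)

# The scaler is refit inside every fold of the cross-validation, so no
# statistics from the held-out trials leak into training.
clf = make_pipeline(StandardScaler(), SVC())
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```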

    ReLoc: Hybrid RSSI- and phase-based relative UHF-RFID tag localization with COTS devices

    Radio frequency identification (RFID) technology brings tremendous advancements to the Industrial Internet of Things (IIoT), especially for smart inventory management, as it provides a fast and low-cost way of counting or positioning items in a warehouse. In the last decade, many novel solutions, including absolute and relative positioning methods, have been proposed for this application. However, the available methods are quite sensitive to minor changes in the deployment scenario, including the orientation of the tag and antenna, the materials contained inside the carton, tag distortion, and multipath propagation. To this end, we propose a hybrid relative passive RFID localization method (ReLoc), based on both the received signal strength indicator (RSSI) and measured phases, which orders RFID tags horizontally and vertically. In this article, a phase-based variant of maximum likelihood estimation is proposed for lateral positioning, and the RSSI profiles of two tilted antennas are compared with each other to distinguish levels. We implement the proposed positioning system ReLoc with commercial off-the-shelf RFID devices. An experiment in a warehouse shows that ReLoc is a powerful solution for practical item-level inventory management. The experimental results show that ReLoc achieves average lateral and level ordering accuracies of 94.6% and 94.3%, respectively. Notably, when the carton contains liquid or metal materials or the tag is distorted, ReLoc still performs well, with more than 93% ordering accuracy both horizontally and vertically, indicating the robustness of the proposed method.
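
    The paper's estimators are more involved, but the two steps the abstract names (comparing the RSSI profiles of two tilted antennas to decide a tag's vertical level, and ordering tags laterally from phase-derived position estimates) can be roughly sketched as follows; the EPCs, readings, and the simple strongest-antenna rule are invented placeholders.

```python
# Hypothetical per-tag measurements: mean RSSI (dBm) seen by an upper- and a
# lower-tilted antenna, plus a phase-derived lateral position estimate (m).
tags = {
    "EPC-001": {"rssi_up": -52.0, "rssi_down": -61.0, "lateral_pos": 0.35},
    "EPC-002": {"rssi_up": -63.0, "rssi_down": -50.0, "lateral_pos": 0.10},
    "EPC-003": {"rssi_up": -55.0, "rssi_down": -60.0, "lateral_pos": 0.80},
}

def level_of(tag: dict) -> str:
    """Assign the level whose tilted antenna sees the stronger RSSI."""
    return "upper" if tag["rssi_up"] > tag["rssi_down"] else "lower"

# Vertical ordering: compare the two antennas' RSSI profiles per tag.
levels = {epc: level_of(t) for epc, t in tags.items()}

# Lateral ordering: sort tags by their phase-based position estimates.
lateral_order = sorted(tags, key=lambda epc: tags[epc]["lateral_pos"])

print(levels, lateral_order)
```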

    Improving Trust in Deep Neural Networks with Nearest Neighbors

    Deep neural networks are increasingly used for perception and decision-making in UAVs. For example, they can be used to recognize objects from images and to decide what actions the vehicle should take. While deep neural networks can perform very well at complex tasks, their decisions may be unintuitive to a human operator. When a human disagrees with a neural network prediction, the black-box nature of deep neural networks can make it unclear whether the system knows something the human does not or whether the system is malfunctioning. This uncertainty is problematic when it comes to ensuring safety. As a result, it is important to develop technologies that explain neural network decisions, for both trust and safety. This paper explores a modification to the deep neural network classification layer that produces both a predicted label and an explanation to support that prediction. Specifically, at test time, we replace the final output layer of the neural network classifier with a k-nearest-neighbor classifier. The nearest-neighbor classifier produces 1) a predicted label through voting and 2) the nearest neighbors involved in the prediction, which represent the most similar examples from the training dataset. Because the prediction and the explanation are derived from the same underlying process, this approach guarantees that the explanations are always relevant to the predictions. We demonstrate the approach on a convolutional neural network for a UAV image classification task. We perform experiments on a forest-trail image dataset and show empirically that the hybrid classifier can produce intuitive explanations without loss of predictive performance compared to the original neural network. We also show how the approach can be used to help identify potential issues in the network and training process.
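
    A minimal sketch of the test-time modification described above (swapping the network's output layer for a k-nearest-neighbor classifier over training-set features, so that the voted label and the supporting neighbors come from the same process) could look like the following; the feature dimensionality, random features, and k are stand-ins, not the authors' network or settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Stand-ins for penultimate-layer features of the training set and their
# labels; in the paper these come from the trained convolutional network.
train_feats = np.random.normal(size=(1000, 128))
train_labels = np.random.randint(0, 3, size=1000)

knn = KNeighborsClassifier(n_neighbors=5).fit(train_feats, train_labels)

def predict_with_explanation(test_feat: np.ndarray):
    """Return the voted label plus the indices of the supporting neighbors."""
    label = knn.predict(test_feat[None, :])[0]
    _, neighbor_idx = knn.kneighbors(test_feat[None, :], n_neighbors=5)
    return label, neighbor_idx[0]       # neighbors serve as the explanation

label, neighbors = predict_with_explanation(np.random.normal(size=128))
print(label, neighbors)   # neighbor indices point back to training images
```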

    SwellFit: Developing A Wearable Sensor for Monitoring Peripheral Edema

    Peripheral edema is a swelling of the legs, feet, or hands due to the accumulation of excess fluid in the tissues. For patients with some chronic diseases, peripheral edema is a crucial indicator of the onset or exacerbation of the condition. Thus, early detection of peripheral edema is important for timely diagnosis of the associated diseases. However, existing techniques for edema assessment rely on subjective measurement, in which a human operator estimates the amount of swelling using a tape measure or by pressing the swollen area with the tip of an index finger. As a systematic approach to assessing peripheral edema, we develop SwellFit, an experimental prototype of a novel wearable technology that monitors peripheral edema by tracking changes in ankle curvature. Through a series of proof-of-concept experiments, we demonstrate that SwellFit detects ankle swelling even in the presence of substantial noise in the raw sensor readings.
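
    The abstract does not give the detection algorithm, but as a rough illustration of the kind of processing it implies, noisy curvature readings can be smoothed and compared against a per-user baseline to flag swelling; the simulated readings, the assumption that swelling lowers the curvature value, the window size, and the threshold below are all hypothetical.

```python
import numpy as np

# Simulated ankle-curvature readings (arbitrary sensor units) with noise;
# the drop after sample 200 stands in for a swelling episode.
rng = np.random.default_rng(1)
baseline = 100 + rng.normal(0, 3, size=200)
swollen = 88 + rng.normal(0, 3, size=200)
readings = np.concatenate([baseline, swollen])

def moving_average(x: np.ndarray, window: int = 25) -> np.ndarray:
    """Smooth the raw readings to suppress sensor noise."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

smoothed = moving_average(readings)
baseline_level = smoothed[:100].mean()
THRESHOLD = 5.0                 # assumed sensitivity, in sensor units

swelling_detected = smoothed < (baseline_level - THRESHOLD)
print(bool(swelling_detected.any()), int(swelling_detected.argmax()))
```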
    • 

    corecore