
    Online Distributed Sensor Selection

    A key problem in sensor networks is to decide which sensors to query, and when, in order to obtain the most useful information (e.g., for performing accurate prediction), subject to constraints (e.g., on power and bandwidth). In many applications the utility function is not known a priori, must be learned from data, and can even change over time. Furthermore, for large sensor networks, solving a centralized optimization problem to select sensors is not feasible, so we seek a fully distributed solution. In this paper, we present Distributed Online Greedy (DOG), an efficient, distributed algorithm for repeatedly selecting sensors online, receiving feedback only about the utility of the selected sensors. We prove very strong theoretical no-regret guarantees that apply whenever the (unknown) utility function satisfies a natural diminishing-returns property called submodularity. Our algorithm has extremely low communication requirements and scales well to large sensor deployments. We extend DOG to allow observation-dependent sensor selection. We empirically demonstrate the effectiveness of our algorithm on several real-world sensing tasks.
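
    To make the diminishing-returns intuition concrete, the sketch below shows the classical greedy rule for maximizing a monotone submodular utility under a budget of k sensors. This is only an illustrative subroutine under the assumption that the utility is fully known; DOG itself runs online and distributed with feedback limited to the utility of the selected set, and the toy coverage utility here is hypothetical.

```python
# Illustrative sketch (not the DOG algorithm itself): classical greedy
# maximization of a monotone submodular utility under a k-sensor budget.
# `utility` stands in for the set function that DOG must learn online.

def greedy_select(sensors, utility, k):
    """Pick k sensors by repeatedly adding the one with the largest marginal gain."""
    selected = set()
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for s in sensors:
            if s in selected:
                continue
            gain = utility(selected | {s}) - utility(selected)
            if gain > best_gain:
                best, best_gain = s, gain
        selected.add(best)
    return selected

# Toy coverage-style (submodular) utility: each sensor covers a set of regions.
coverage = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
utility = lambda S: len(set().union(*(coverage[s] for s in S))) if S else 0
print(greedy_select(coverage.keys(), utility, k=2))  # selects sensors 3 and 1
```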

    Closed-loop Bayesian Semantic Data Fusion for Collaborative Human-Autonomy Target Search

    In search applications, autonomous unmanned vehicles must be able to efficiently reacquire and localize mobile targets that can remain out of view for long periods of time in large spaces. As such, all available information sources must be actively leveraged -- including imprecise but readily available semantic observations provided by humans. To this end, this work develops and validates a novel collaborative human-machine sensing solution for dynamic target search. Our approach uses continuous partially observable Markov decision process (CPOMDP) planning to generate vehicle trajectories that optimally exploit imperfect detection data from onboard sensors, as well as semantic natural language observations that can be specifically requested from human sensors. The key innovation is a scalable hierarchical Gaussian mixture model formulation for efficiently solving CPOMDPs with semantic observations in continuous dynamic state spaces. The approach is demonstrated and validated with a real human-robot team engaged in dynamic indoor target search and capture scenarios on a custom testbed. Comment: Final version accepted and submitted to the 2018 FUSION Conference (Cambridge, UK, July 2018).
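
    As a rough, assumed simplification of how a semantic report can be fused into a Gaussian-mixture belief (not the paper's hierarchical CPOMDP machinery), the sketch below reweights mixture components by a logistic likelihood of a positive or negative human report evaluated at each component mean. The likelihood model and its parameters are invented for illustration.

```python
# Simplified sketch: fuse a binary semantic observation (e.g. "target is near
# the kitchen") into a Gaussian-mixture belief by reweighting components with
# the report likelihood evaluated at each component mean.

import numpy as np

def semantic_likelihood(x, region_center, region_scale):
    """Soft 'detector': probability the human reports 'near' given state x."""
    d = np.linalg.norm(x - region_center)
    return 1.0 / (1.0 + np.exp(d - region_scale))  # logistic in distance

def update_belief(weights, means, report_positive, region_center, region_scale):
    """Reweight mixture components given a positive/negative semantic report."""
    lik = np.array([semantic_likelihood(m, region_center, region_scale) for m in means])
    if not report_positive:
        lik = 1.0 - lik
    new_w = np.asarray(weights) * lik
    return new_w / new_w.sum()

# Toy example: two hypotheses about the target location.
weights = [0.5, 0.5]
means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
print(update_belief(weights, means, True,
                    region_center=np.array([5.0, 5.0]), region_scale=1.0))
```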

    Distributed Detection in Sensor Networks with Limited Range Sensors

    We consider a multi-object detection problem over a sensor network (SNET) with limited range sensors. This problem complements the widely considered decentralized detection problem, where all sensors observe the same object. While the necessity for global collaboration is clear in the decentralized detection problem, the benefits of collaboration with limited range sensors are unclear and have not been widely explored. In this paper we develop a distributed detection approach based on recent developments in the false discovery rate (FDR). We first extend the FDR procedure and develop a transformation that exploits complete or partial knowledge of either the observed distributions at each sensor or the ensemble (mixture) distribution across all sensors. We then show that this transformation applies to multi-dimensional observations, thus extending FDR to multi-dimensional settings. We also extend FDR theory to cases where the distributions under both the null and positive hypotheses are uncertain. We then propose a robust distributed algorithm to perform detection. We further demonstrate scalability to large SNETs by showing that the upper bound on the communication complexity scales linearly with the number of sensors in the vicinity of objects and is independent of the total number of sensors. Finally, we deal with situations where the sensing model may be uncertain and establish the robustness of our techniques to such uncertainties. Comment: Submitted to IEEE Transactions on Signal Processing.
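
    For context, the standard Benjamini-Hochberg procedure underlying FDR control is sketched below; in the distributed setting described above, the p-values would come from each sensor's (possibly transformed) local observations. The example p-values are arbitrary.

```python
# Minimal sketch of the standard Benjamini-Hochberg FDR procedure; the paper's
# contribution lies in its distributed, transformation-based extensions.

def benjamini_hochberg(p_values, alpha=0.05):
    """Return indices of hypotheses rejected at false discovery rate alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank * alpha / m:
            k_max = rank
    return sorted(order[:k_max])  # reject the k_max smallest p-values

# Example: p-values from five sensors, only some near a true object.
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.2, 0.7], alpha=0.05))  # [0, 1]
```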

    Decentralized Data Fusion and Active Sensing with Mobile Sensors for Modeling and Predicting Spatiotemporal Traffic Phenomena

    The problem of modeling and predicting spatiotemporal traffic phenomena over an urban road network is important to many traffic applications, such as detecting and forecasting congestion hotspots. This paper presents a decentralized data fusion and active sensing (D2FAS) algorithm for mobile sensors to actively explore the road network and to gather and assimilate the most informative data for predicting the traffic phenomenon. We analyze the time and communication complexity of D2FAS and demonstrate that it can scale well with a large number of observations and sensors. We provide a theoretical guarantee that its predictive performance is equivalent to that of a sophisticated centralized sparse approximation for the Gaussian process (GP) model: the computation of such a sparse approximate GP model can thus be parallelized and distributed among the mobile sensors (in a Google-like MapReduce paradigm), thereby achieving efficient and scalable prediction. We also theoretically guarantee its active sensing performance, which improves under various practical environmental conditions. Empirical evaluation on real-world urban road network data shows that our D2FAS algorithm is significantly more time-efficient and scalable than state-of-the-art centralized algorithms while achieving comparable predictive performance. Comment: 28th Conference on Uncertainty in Artificial Intelligence (UAI 2012), extended version with proofs, 13 pages.
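
    As background for the guarantee above, the sketch below implements plain Gaussian process regression with an RBF kernel, i.e. the kind of centralized GP prediction that the sparse approximation matches. The decentralized summary passing of D2FAS itself is not reproduced, and the kernel hyperparameters and toy data are placeholders.

```python
# Small sketch of exact GP regression with an RBF kernel -- the centralized
# model whose sparse approximation D2FAS is shown to match.

import numpy as np

def rbf(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential covariance between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise_var=0.1):
    """Posterior mean and variance, e.g. traffic speed at unobserved segments."""
    K = rbf(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_star = rbf(X_test, X_train)
    mean = K_star @ np.linalg.solve(K, y_train)
    var = rbf(X_test, X_test).diagonal() - np.einsum(
        "ij,ji->i", K_star, np.linalg.solve(K, K_star.T))
    return mean, var

# Toy 1-D example: observations at three locations, predictions at two more.
X = np.array([[0.0], [1.0], [2.0]]); y = np.array([30.0, 28.0, 35.0])
print(gp_predict(X, y, np.array([[0.5], [1.5]])))
```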

    Supporting Device Discovery and Spontaneous Interaction with Spatial References

    The RELATE interaction model is designed to support spontaneous interaction of mobile users with devices and services in their environment. The model is based on spatial references that capture the spatial relationship of a user’s device with other co-located devices. Spatial references are obtained by relative position sensing and are integrated in the mobile user interface to spatially visualize the arrangement of discovered devices and to provide direct access for interaction across devices. In this paper we discuss two prototype systems demonstrating the utility of the model in collaborative and mobile settings, and present a study on the usability of spatial list and map representations for device selection.
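
    Purely as a hypothetical illustration of the spatial-list representation, the snippet below orders discovered devices by their relative bearing and distance to the user's device; the device records and field names are invented for the example and do not come from the RELATE prototypes.

```python
# Hypothetical spatial-list ordering: present discovered devices in the UI
# sorted by the bearing/distance measured relative to the user's own device.

def spatial_list(devices):
    """Sort devices left-to-right by relative bearing, then by distance."""
    return sorted(devices, key=lambda d: (d["bearing_deg"], d["distance_m"]))

discovered = [
    {"name": "projector",   "bearing_deg": 80.0,  "distance_m": 3.2},
    {"name": "laptop-anna", "bearing_deg": -35.0, "distance_m": 1.1},
    {"name": "printer",     "bearing_deg": 10.0,  "distance_m": 4.5},
]
for d in spatial_list(discovered):
    print(f'{d["name"]}: {d["bearing_deg"]:+.0f} deg, {d["distance_m"]} m')
```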