
    Resilient Active Target Tracking with Multiple Robots

    The problem of target tracking with multiple robots consists of actively planning the motion of the robots to track the targets. A major challenge for practical deployments is to make the robots resilient to failures. In particular, robots may be attacked in adversarial scenarios, or their sensors may fail or become occluded. In this paper, we introduce planning algorithms for multi-target tracking that are resilient to such failures. In general, resilient target tracking is computationally hard. Contrary to the failure-free case, no scalable approximation algorithms are known for resilient target tracking when the targets are indistinguishable, unknown in number, or have an unknown motion model. In this paper we provide the first such algorithm, which also has the following properties. First, it achieves maximal resiliency, since the algorithm is valid for any number of failures. Second, it is scalable, as our algorithm terminates with the same running time as state-of-the-art algorithms for (non-resilient) target tracking. Third, it provides provable approximation bounds on the tracking performance, guaranteeing a solution that is close to the optimal. We quantify our algorithm's approximation performance using a novel notion of curvature for monotone set functions subject to matroid constraints. Finally, we demonstrate the efficacy of our algorithm through MATLAB and Gazebo simulations and a sensitivity analysis, focusing on scenarios that involve a known number of distinguishable targets.
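    The approximation analysis is rooted in greedy maximization of monotone set functions under matroid constraints. Below is a minimal sketch of the standard (non-resilient) greedy baseline under a simple cardinality constraint; the coverage-style objective and candidate names are hypothetical placeholders, not the paper's resilient algorithm.

```python
# Standard greedy maximization of a monotone set function under a
# cardinality constraint (a special case of a matroid constraint).
# This is the non-resilient baseline; the objective below is a
# hypothetical stand-in for tracking quality.

def greedy_max(candidates, k, value):
    """Pick up to k elements, each time adding the one with the
    largest marginal gain value(S | {x}) - value(S)."""
    selected = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for x in candidates - selected:
            gain = value(selected | {x}) - value(selected)
            if gain > best_gain:
                best, best_gain = x, gain
        if best is None:          # no positive marginal gain left
            break
        selected.add(best)
    return selected

# Toy coverage-style objective over hypothetical "robot trajectories":
# each candidate covers a set of target indices.
coverage = {"r1": {0, 1}, "r2": {1, 2}, "r3": {3}}
value = lambda S: len(set().union(*(coverage[x] for x in S))) if S else 0
print(greedy_max(set(coverage), k=2, value=value))   # e.g. {'r1', 'r3'}
```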

    Path Planning in Dynamic Environments with Adaptive Dimensionality

    Path planning in the presence of dynamic obstacles is a challenging problem due to the added time dimension in the search space. In approaches that ignore the time dimension and treat dynamic obstacles as static, frequent re-planning is unavoidable as the obstacles move, and their solutions are generally sub-optimal and can be incomplete. To achieve both optimality and completeness, it is necessary to consider the time dimension during planning. The notion of adaptive dimensionality has been successfully used in high-dimensional motion planning, such as manipulation with robot arms, but has not been used in the context of path planning in dynamic environments. In this paper, we apply the idea of adaptive dimensionality to speed up path planning in dynamic environments for a robot with no assumptions on its dynamic model. Specifically, our approach considers the time dimension only in those regions of the environment where a potential collision may occur, and plans in a low-dimensional state space elsewhere. We show that our approach is complete and is guaranteed to find a solution, if one exists, within a cost sub-optimality bound. We experimentally validate our method on the problem of 3D vehicle navigation (x, y, heading) in dynamic environments. Our results show that the presented approach achieves substantial speedups in planning time over 4D heuristic-based A*, especially when the resulting plan deviates significantly from the one suggested by the heuristic. Comment: Accepted in SoCS 201
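    A minimal sketch of the adaptive-dimensionality idea follows: successors are generated in a low-dimensional (x, y) space, and the time dimension is carried only inside regions flagged for potential collisions with dynamic obstacles. The region and occupancy checks are hypothetical placeholders, not the paper's planner.

```python
# Sketch of adaptive-dimensionality successor generation: plan in (x, y)
# by default, and keep track of time only inside high-dimensional
# regions so dynamic obstacles can be checked there.

from typing import NamedTuple, Optional

class State(NamedTuple):
    x: int
    y: int
    t: Optional[int]   # None outside high-dimensional regions

def successors(s, in_hd_region, occupied_at):
    """Expand 4-connected neighbours; carry time only where needed."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = s.x + dx, s.y + dy
        if in_hd_region(nx, ny):
            nt = (s.t or 0) + 1                 # enter/stay in (x, y, t)
            if not occupied_at(nx, ny, nt):     # check dynamic obstacles
                yield State(nx, ny, nt)
        else:
            yield State(nx, ny, None)           # back to low-dim (x, y)

# Hypothetical usage with toy predicates.
start = State(0, 0, None)
hd = lambda x, y: x >= 2                 # time matters only where x >= 2
occ = lambda x, y, t: False              # no dynamic obstacle in this toy
print(list(successors(start, hd, occ)))
```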

    Application of Compressive Sensing Techniques in Distributed Sensor Networks: A Survey

    In this survey paper, our goal is to discuss recent advances in compressive sensing (CS)-based solutions for wireless sensor networks (WSNs), including the main ongoing/recent research efforts, challenges, and research trends in this area. In WSNs, CS-based techniques are well motivated not only by the sparsity prior observed in different forms but also by the requirement of efficient in-network processing in terms of transmit power and communication bandwidth, even with nonsparse signals. In order to apply CS efficiently in a variety of WSN applications, there are several factors to be considered beyond the standard CS framework. We start the discussion with a brief introduction to the theory of CS and then describe the motivational factors behind the potential use of CS in WSN applications. Then, we identify three main areas along which the standard CS framework is extended so that CS can be efficiently applied to solve a variety of problems specific to WSNs. In particular, we emphasize the significance of extending the CS framework to (i) take communication constraints into account while designing projection matrices and reconstruction algorithms for signal reconstruction in centralized as well as decentralized settings, (ii) solve a variety of inference problems, such as detection, classification, and parameter estimation, with compressed data without signal reconstruction, and (iii) take practical communication aspects, such as measurement quantization, physical-layer secrecy constraints, and imperfect channel conditions, into account. Finally, open research issues and challenges are discussed in order to provide perspectives on future research directions.
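    As background for the reconstruction step discussed above, the sketch below recovers a sparse signal from compressed measurements y = Φx using the iterative soft-thresholding algorithm (ISTA). The dimensions, sparsity level, and step size are illustrative assumptions, not tied to any particular WSN deployment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a length-200 signal with 5 nonzeros, observed
# through 60 random projections y = Phi @ x (the standard CS model).
n, m, k = 200, 60, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true

# ISTA: gradient step on ||y - Phi x||^2 followed by soft-thresholding,
# which promotes sparsity (an l1-regularised least-squares solver).
lam, step = 0.01, 1.0 / np.linalg.norm(Phi, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    x = x + step * Phi.T @ (y - Phi @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```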

    Optimisation of Mobile Communication Networks - OMCO NET

    The mini-conference “Optimisation of Mobile Communication Networks” focuses on advanced methods for search and optimisation applied to wireless communication networks. It is sponsored by the Research & Enterprise Fund of Southampton Solent University. The conference strives to widen knowledge of advanced search methods capable of optimising wireless communication networks. The aim is to provide a forum for the exchange of recent knowledge, new ideas, and trends in this progressive and challenging area. The conference will popularise new, successful approaches to resolving hard tasks such as minimisation of transmit power and cooperative and optimal routing.

    Generic Multiview Visual Tracking

    Recent progress in visual tracking has greatly improved tracking performance. However, challenges such as occlusion and view change remain obstacles to real-world deployment. A natural solution to these challenges is to use multiple cameras with multiview inputs, though existing systems are mostly limited to specific targets (e.g. humans), static cameras, and/or camera calibration. To break through these limitations, we propose a generic multiview tracking (GMT) framework that allows camera movement, while requiring neither a specific object model nor camera calibration. A key innovation in our framework is a cross-camera trajectory prediction network (TPN), which implicitly and dynamically encodes camera geometric relations, and hence addresses missing-target issues such as occlusion. Moreover, during tracking, we assemble information across different cameras to dynamically update a novel collaborative correlation filter (CCF), which is shared among cameras to achieve robustness against view change. The two components are integrated into a correlation filter tracking framework, where the features are trained offline using existing single-view tracking datasets. For evaluation, we first contribute a new generic multiview tracking dataset (GMTD) with careful annotations, and then run experiments on GMTD and the PETS2009 datasets. On both datasets, the proposed GMT algorithm shows clear advantages over state-of-the-art ones.
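    The framework builds on correlation filter tracking. Below is a minimal MOSSE-style sketch of a single-view correlation filter, showing the Fourier-domain update and response computation; it illustrates the generic technique only, not the paper's collaborative correlation filter (CCF), and the patch size and regularizer are assumptions.

```python
import numpy as np

def gaussian_peak(h, w, sigma=2.0):
    """Desired response: a Gaussian centred on the target."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

class CorrelationFilter:
    def __init__(self, lam=1e-3, lr=0.125):
        self.lam, self.lr = lam, lr   # regulariser, online update rate
        self.A = self.B = None        # running filter numerator / denominator

    def update(self, patch):
        F = np.fft.fft2(patch)
        G = np.fft.fft2(gaussian_peak(*patch.shape))
        A_new, B_new = G * np.conj(F), F * np.conj(F) + self.lam
        if self.A is None:
            self.A, self.B = A_new, B_new
        else:                         # exponential moving average over frames
            self.A = (1 - self.lr) * self.A + self.lr * A_new
            self.B = (1 - self.lr) * self.B + self.lr * B_new

    def respond(self, patch):
        """Correlation response map; its argmax gives the target shift."""
        H = self.A / self.B
        return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

# Hypothetical usage on random grayscale 64x64 patches.
cf = CorrelationFilter()
cf.update(np.random.rand(64, 64))
resp = cf.respond(np.random.rand(64, 64))
print(np.unravel_index(resp.argmax(), resp.shape))
```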

    Online Tracking by Learning Discriminative Saliency Map with Convolutional Neural Network

    We propose an online visual tracking algorithm that learns a discriminative saliency map using a Convolutional Neural Network (CNN). Given a CNN pre-trained offline on a large-scale image repository, our algorithm takes outputs from hidden layers of the network as feature descriptors, since they show excellent representation performance in various general visual recognition problems. The features are used to learn discriminative target appearance models using an online Support Vector Machine (SVM). In addition, we construct a target-specific saliency map by backpropagating CNN features with the guidance of the SVM, and obtain the final tracking result in each frame based on the appearance model generatively constructed with the saliency map. Since the saliency map effectively visualizes the spatial configuration of the target, it improves target localization accuracy and enables us to achieve pixel-level target segmentation. We verify the effectiveness of our tracking algorithm through extensive experiments on a challenging benchmark, where our method demonstrates outstanding performance compared to state-of-the-art tracking algorithms.
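    The target-specific saliency map relies on backpropagation through the CNN. The sketch below shows the generic saliency-by-backpropagation idea with an off-the-shelf torchvision network, taking the input gradient of a scalar score as the map; the choice of ResNet-18 and of a class score (in place of the paper's SVM-guided features) are illustrative assumptions.

```python
import torch
from torchvision.models import resnet18

# Minimal saliency-by-backprop sketch: the gradient of a scalar score
# with respect to the input image highlights pixels driving the score.
# ResNet-18 and the max class score stand in for the paper's CNN + SVM.
model = resnet18(weights=None).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # dummy frame crop
score = model(image).max()                                # scalar target score
score.backward()

# Per-pixel saliency: maximum absolute gradient over colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)   # torch.Size([224, 224])
```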

    Real-Time Area Coverage and Target Localization using Receding-Horizon Ergodic Exploration

    Although a number of solutions exist for the problems of coverage, search, and target localization---commonly addressed separately---it remains a largely open research question whether a unified strategy can address these objectives coherently without being application-specific. In this paper, we develop a receding-horizon ergodic control approach, based on hybrid systems theory, that has the potential to fill this gap. The nonlinear model predictive control algorithm plans real-time motions that optimally improve ergodicity with respect to a distribution defined by the expected information density across the sensing domain. We establish a theoretical framework for global stability guarantees with respect to a distribution. Moreover, the approach is distributable across multiple agents, so that each agent can independently compute its own control while sharing statistics of its coverage across a communication network. We demonstrate the method in both simulation and experiment in the context of target localization, illustrating that the algorithm is independent of the number of targets being tracked and can run in real time on computationally limited hardware platforms. Comment: 18 pages
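    Ergodic control drives the time-averaged statistics of a trajectory toward a target spatial distribution, commonly quantified by a weighted Fourier-coefficient metric. The sketch below evaluates that metric for a 1D trajectory against a target density; the domain, basis size, and weights follow the standard formulation and are assumptions, not the paper's exact implementation.

```python
import numpy as np

# Ergodic metric on the unit interval: compare cosine-basis coefficients
# of the trajectory's time-averaged statistics (c_k) with those of a
# target information density (phi_k), weighting low spatial frequencies
# more heavily. Illustrative setup only.

def fourier_coeffs(samples, weights, K):
    """c_k = sum_i w_i * cos(k * pi * x_i), for k = 0..K-1."""
    ks = np.arange(K)[:, None]
    basis = np.cos(ks * np.pi * samples[None, :])
    return basis @ weights

K = 20
xs = np.linspace(0, 1, 1000)

# Target density: a bump around x = 0.7 (e.g. an expected target location).
phi = np.exp(-((xs - 0.7) ** 2) / 0.01)
phi /= phi.sum()
phi_k = fourier_coeffs(xs, phi, K)

# Trajectory: uniform time spent sweeping the whole interval.
traj = np.linspace(0, 1, 300)
c_k = fourier_coeffs(traj, np.full(traj.size, 1.0 / traj.size), K)

# Sobolev-type weights penalise low-frequency mismatch most.
Lam = 1.0 / (1.0 + np.arange(K) ** 2)
print("ergodic metric:", np.sum(Lam * (c_k - phi_k) ** 2))
```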

    FLORIS and CLORIS: Hybrid Source and Network Localization Based on Ranges and Video

    We propose hybrid methods for localization in wireless sensor networks that fuse noisy range measurements with angular information (extracted from video). Compared with conventional methods that rely on a single sensed variable, this may pave the way for improved localization accuracy and robustness. We address both the single-source and network (i.e., cooperative multiple-source) localization paradigms, solving them via optimization of a convex surrogate. The formulations for hybrid localization are unified in the sense that we propose a single nonlinear least-squares cost function, fusing both angular and range measurements. We then relax the problem to obtain an estimate of the optimal positions. This contrasts with other hybrid approaches that alternate the execution of localization algorithms for each type of measurement separately to progressively refine the position estimates. Single-source localization uses a semidefinite relaxation to obtain a one-shot matrix solution from which the source position is derived via factorization. Network localization uses a different approach, where sensor coordinates are retained as optimization variables and the relaxed cost function is efficiently minimized using fast iterations based on Nesterov's optimal method. Further, an automated calibration procedure is developed to express range and angular information, obtained by different devices possibly deployed at different locations, in a single consistent coordinate system. This drastically reduces the need for manual calibration that would otherwise negatively impact the practical usability of hybrid range/video localization systems. We develop and test, both in simulation and experimentally, the new hybrid localization algorithms, which not only overcome the limitations of previous fusion approaches but also compare favorably to state-of-the-art methods, outperforming them in some scenarios.
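    The unified formulation is a nonlinear least-squares cost that fuses range and bearing residuals. The sketch below sets up such a cost for single-source localization and minimizes it with a generic solver (scipy's least_squares) instead of the paper's convex relaxation; the anchor layout and noise levels are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Hypothetical setup: 4 anchors with known positions observe one source
# through noisy ranges and noisy bearings (angles extracted from video).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
source = np.array([3.0, 7.0])
diff = source - anchors
ranges = np.linalg.norm(diff, axis=1) + 0.1 * rng.standard_normal(4)
bearings = np.arctan2(diff[:, 1], diff[:, 0]) + 0.02 * rng.standard_normal(4)

def residuals(p):
    """Stack range residuals and angle-wrapped bearing residuals."""
    d = p - anchors                      # vectors from anchors to candidate
    r_res = np.linalg.norm(d, axis=1) - ranges
    a_err = np.arctan2(d[:, 1], d[:, 0]) - bearings
    a_res = np.arctan2(np.sin(a_err), np.cos(a_err))   # wrap to (-pi, pi]
    return np.concatenate([r_res, a_res])

est = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print("estimated source position:", est)
```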

    A multi-projector CAVE system with commodity hardware and gesture-based interaction

    Spatially immersive systems such as CAVEs surround users with virtual worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, less sensitivity to tracking errors, and greater opportunities for communication among users. Unfortunately, traditional CAVE setups require sophisticated equipment, including stereo-ready projectors and tracking systems, with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at minimal cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screens, as well as a gesture-based interaction approach based on a clever combination of skeletal data from multiple Kinect sensors.
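    As a rough illustration of combining skeletal data from several sensors, the sketch below transforms each Kinect's joints into a common CAVE frame using assumed-known extrinsics and takes a confidence-weighted average per joint. This is a generic fusion scheme for illustration only, not the paper's calibration or interaction method.

```python
import numpy as np

# Fuse skeletal joints from several Kinect sensors into one CAVE frame:
# apply each sensor's rigid-body extrinsics, then take a per-joint
# confidence-weighted average. Generic scheme, not the paper's method.

def to_cave_frame(joints, R, t):
    """Transform an (N, 3) array of joint positions by rotation R and
    translation t (the sensor's assumed-known extrinsic calibration)."""
    return joints @ R.T + t

def fuse_skeletons(skeletons, confidences):
    """skeletons: list of (N, 3) arrays already in the CAVE frame.
    confidences: list of (N,) per-joint tracking confidences."""
    W = np.stack(confidences)[..., None]          # (S, N, 1)
    J = np.stack(skeletons)                       # (S, N, 3)
    return (W * J).sum(axis=0) / W.sum(axis=0)    # weighted mean per joint

# Hypothetical usage with two sensors and 20 joints.
rng = np.random.default_rng(2)
R1, t1 = np.eye(3), np.zeros(3)
R2, t2 = np.eye(3), np.array([0.05, 0.0, 0.0])    # second Kinect offset 5 cm
s1 = to_cave_frame(rng.random((20, 3)), R1, t1)
s2 = to_cave_frame(rng.random((20, 3)), R2, t2)
fused = fuse_skeletons([s1, s2], [np.ones(20), 0.5 * np.ones(20)])
print(fused.shape)   # (20, 3)
```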