1,529 research outputs found

    Multisource and Multitemporal Data Fusion in Remote Sensing

    Full text link
    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of the processing approaches for the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as new challenges for information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have evolved along different paths in each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across different research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to conduct novel investigations on this challenging topic, supplying sufficient detail and references.

    A bank of unscented Kalman filters for multimodal human perception with mobile service robots

    Get PDF
    A new generation of mobile service robots could soon be ready to operate in human environments if they can robustly estimate the position and identity of surrounding people. Researchers in this field face a number of challenging problems, among which are sensor uncertainties and real-time constraints. In this paper, we propose a novel and efficient solution for the simultaneous tracking and recognition of people within the observation range of a mobile robot. Multisensor techniques for leg and face detection are fused in a robust probabilistic framework with height, clothing, and face recognition algorithms. The system is based on an efficient bank of Unscented Kalman Filters that keeps a multi-hypothesis estimate of the person being tracked, including the case where the latter is unknown to the robot. Several experiments with real mobile robots are presented to validate the proposed approach. They show that our solution can improve the robot's perception and recognition of humans, providing a useful contribution to the future application of service robotics.
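
    As an illustration of the core idea, the sketch below keeps one unscented Kalman filter per identity hypothesis (including "unknown") and re-weights the hypotheses with each filter's innovation likelihood. It is a minimal reconstruction, not the paper's implementation: it assumes a 2D constant-velocity motion model with a position-only measurement, and it folds the paper's appearance cues (height, clothes, face) into that single likelihood.

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    """Generate the 2n+1 unscented sigma points and their weights."""
    n = len(x)
    L = np.linalg.cholesky((n + kappa) * P)
    pts = np.vstack([x, x + L.T, x - L.T])          # each row is a sigma point
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w

class UKF:
    def __init__(self, x0, P0, fx, hx, Q, R):
        self.x, self.P, self.fx, self.hx, self.Q, self.R = x0, P0, fx, hx, Q, R

    def predict(self, dt):
        pts, w = sigma_points(self.x, self.P)
        prop = np.array([self.fx(p, dt) for p in pts])
        self.x = w @ prop
        d = prop - self.x
        self.P = (w[:, None] * d).T @ d + self.Q

    def update(self, z):
        pts, w = sigma_points(self.x, self.P)
        Z = np.array([self.hx(p) for p in pts])
        z_hat = w @ Z
        dz = Z - z_hat
        S = (w[:, None] * dz).T @ dz + self.R        # innovation covariance
        dx = pts - self.x
        K = (w[:, None] * dx).T @ dz @ np.linalg.inv(S)
        y = z - z_hat
        self.x = self.x + K @ y
        self.P = self.P - K @ S @ K.T
        # Gaussian likelihood of the innovation, used to weight hypotheses.
        return np.exp(-0.5 * y @ np.linalg.inv(S) @ y) / np.sqrt(np.linalg.det(2 * np.pi * S))

class UKFBank:
    """One UKF per identity hypothesis; weights form a posterior over identities."""
    def __init__(self, identities, make_ukf):
        self.filters = {i: make_ukf() for i in identities}
        self.weights = {i: 1.0 / len(identities) for i in identities}

    def step(self, z, dt):
        for i, f in self.filters.items():
            f.predict(dt)
            self.weights[i] *= f.update(z) + 1e-300  # guard against exact zeros
        total = sum(self.weights.values())
        self.weights = {i: w / total for i, w in self.weights.items()}
        return max(self.weights, key=self.weights.get)  # MAP identity

# Hypothetical setup: state [px, py, vx, vy], position-only measurement.
fx = lambda x, dt: np.array([x[0] + dt * x[2], x[1] + dt * x[3], x[2], x[3]])
hx = lambda x: x[:2]
```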

    Vision technology/algorithms for space robotics applications

    Get PDF
    Automation and robotics have been proposed for space applications to increase productivity, improve reliability, increase flexibility, and enhance safety, as well as to automate time-consuming tasks, improve the performance of crew-accomplished tasks, and perform tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multisensors ranging from microwave to optical, with multimode capability covering position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.

    Registration of Multisensor Images through a Conditional Generative Adversarial Network and a Correlation-Type Similarity Measure

    Get PDF
    The automatic registration of multisensor remote sensing images is a highly challenging task due to the inherently different physical, statistical, and textural characteristics of the input data. Information-theoretic measures are often favored because they compare local intensity distributions in the images. In this paper, a novel method based on the combination of a deep learning architecture and a correlation-type area-based functional is proposed for the registration of a multisensor pair of images, including an optical image and a synthetic aperture radar (SAR) image. The method makes use of a conditional generative adversarial network (cGAN) in order to address image-to-image translation across the optical and SAR data sources. Then, once the optical and SAR data are brought to a common domain, an area-based ℓ2 similarity measure is used together with the COBYLA constrained maximization algorithm for registration purposes. While correlation-type functionals are usually ineffective in the application to multisensor registration, exploiting the image-to-image translation capabilities of cGAN architectures allows moving the complexity of the comparison to the domain adaptation step, thus enabling the use of a simple ℓ2 similarity measure, favoring high computational efficiency, and opening the possibility to process a large amount of data at runtime. Experiments with multispectral and panchromatic optical data combined with SAR images suggest the effectiveness of this strategy and the capability of the proposed method to achieve more accurate registration than state-of-the-art approaches.
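
    The sketch below illustrates only the second stage of such a pipeline: the area-based ℓ2 matching driven by COBYLA (here via SciPy), assuming the cGAN has already produced `translated_sar` in the optical domain and restricting the geometric transformation to a pure translation for brevity. The function name and the bound `max_shift` are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def register_translation(optical, translated_sar, max_shift=20.0):
    """Estimate a 2D shift aligning the cGAN-translated SAR image to the
    optical image by minimizing an l2 (sum-of-squared-differences) measure
    with COBYLA, subject to bound constraints on the shift."""
    def cost(t):
        moved = nd_shift(translated_sar, shift=(t[0], t[1]), order=1, mode='nearest')
        return np.mean((optical - moved) ** 2)

    # COBYLA handles inequality constraints only: express |t_i| <= max_shift.
    cons = [{'type': 'ineq', 'fun': lambda t, i=i: max_shift - abs(t[i])}
            for i in range(2)]
    res = minimize(cost, x0=np.zeros(2), method='COBYLA', constraints=cons,
                   options={'rhobeg': 2.0, 'maxiter': 200})
    return res.x  # estimated (row, column) shift
```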

    Tracking Multiple Vehicles Using a Variational Radar Model

    Full text link
    High-resolution radar sensors are able to resolve multiple detections per object and therefore provide valuable information for vehicle environment perception. For instance, multiple detections make it possible to infer the size of an object or to measure the object's motion more precisely. Yet, the increased amount of data raises the demands on tracking modules: measurement models that are able to process multiple detections per object are necessary, and measurement-to-object associations become more complex. This paper presents a new variational radar model for tracking vehicles using radar detections and demonstrates how this model can be incorporated into a Random-Finite-Set-based multi-object filter. The measurement model is learned from actual data using variational Gaussian mixtures and avoids excessive manual engineering. In combination with the multi-object tracker, the entire process chain from the raw measurements to the resulting tracks is formulated probabilistically. The presented approach is evaluated on experimental data, and it is demonstrated that the data-driven measurement model outperforms a manually designed model. Comment: This is a preprint (i.e., the accepted version) of: A. Scheel and K. Dietmayer, "Tracking Multiple Vehicles Using a Variational Radar Model," in IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 10, pp. 3721-3736, 2019. Digital Object Identifier 10.1109/TITS.2018.287904
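
    The data-driven ingredient here is the variational Gaussian mixture fit. Below is a minimal sketch of that step using scikit-learn's BayesianGaussianMixture as a stand-in for the paper's own learning machinery: detections are expressed in an object-local frame and pooled, and the fitted mixture then acts as a spatial measurement likelihood. The placeholder data and component count are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical training set: radar detections expressed in an object-local
# frame (x along the vehicle's heading), pooled over many annotated vehicles.
local_detections = np.random.default_rng(0).normal(size=(5000, 2))  # placeholder

# A variational Bayesian GMM with a Dirichlet-process prior prunes unused
# components automatically, avoiding manual tuning of the component count.
model = BayesianGaussianMixture(
    n_components=15,
    covariance_type='full',
    weight_concentration_prior_type='dirichlet_process',
    max_iter=500,
    random_state=0,
).fit(local_detections)

# The fitted mixture serves as a spatial measurement likelihood: transform
# incoming detections into the predicted object-local frame and score them
# under the learned density.
log_lik = model.score_samples(local_detections[:10])
```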

    A survey of advances in vision-based vehicle re-identification

    Full text link
    Vehicle re-identification (V-reID) has become increasingly popular in the community due to its applications and research significance. In particular, V-reID is an important problem that still faces numerous open challenges. This paper reviews different V-reID methods, including sensor based methods, hybrid methods, and vision based methods, the last of which are further categorized into hand-crafted feature based methods and deep feature based methods. The vision based methods make the V-reID problem particularly interesting, and our review systematically addresses and evaluates these methods for the first time. We conduct experiments on four comprehensive benchmark datasets and compare the performances of recent hand-crafted feature based methods and deep feature based methods. We present a detailed analysis of these methods in terms of mean average precision (mAP) and cumulative matching curve (CMC). These analyses provide objective insight into the strengths and weaknesses of these methods. We also provide the details of different V-reID datasets and critically discuss the challenges and future trends of V-reID methods. Comment: 17 pages; 21 figures; journal paper
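
    For reference, the two metrics the survey reports can be computed from a query-by-gallery distance matrix as in the sketch below. This is a generic, minimal formulation: it assumes every query has at least one correct gallery match and ignores re-ID-specific conventions such as excluding same-camera matches.

```python
import numpy as np

def cmc_and_map(dist, query_ids, gallery_ids, topk=20):
    """Compute the cumulative matching curve (CMC) and mean average
    precision (mAP) from a (num_query, num_gallery) distance matrix."""
    gallery_ids = np.asarray(gallery_ids)
    cmc = np.zeros(topk)
    aps = []
    for q in range(dist.shape[0]):
        order = np.argsort(dist[q])                   # gallery sorted by distance
        matches = gallery_ids[order] == query_ids[q]  # boolean relevance per rank
        hits = np.flatnonzero(matches)                # ranks of correct matches
        if hits[0] < topk:                            # rank of first correct match
            cmc[hits[0]:] += 1
        # Average precision: precision evaluated at each correct match.
        precisions = (np.arange(len(hits)) + 1) / (hits + 1)
        aps.append(precisions.mean())
    return cmc / dist.shape[0], float(np.mean(aps))
```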

    Mini-Unmanned Aerial Vehicle-Based Remote Sensing: Techniques, Applications, and Prospects

    Full text link
    The past few decades have witnessed the great progress of unmanned aerial vehicles (UAVs) in civilian fields, especially in photogrammetry and remote sensing. In contrast with manned aircraft and satellite platforms, the UAV platform holds many promising characteristics: flexibility, efficiency, high spatial/temporal resolution, low cost, easy operation, etc., which make it an effective complement to other remote-sensing platforms and a cost-effective means for remote sensing. Considering the popularity and expansion of UAV-based remote sensing in recent years, this paper provides a systematic survey of the recent advances and future prospects of UAVs in the remote-sensing community. Specifically, the main challenges and key technologies of UAV-based remote-sensing data processing are first discussed and summarized. Then, we provide an overview of the widespread applications of UAVs in remote sensing. Finally, some prospects for future work are discussed. We hope this paper will provide remote-sensing researchers with an overall picture of recent UAV-based remote sensing developments and help guide further research on this topic.

    Point clouds by SLAM-based mobile mapping systems: accuracy and geometric content validation in multisensor survey and stand-alone acquisition

    Get PDF
    The paper provides some practical answers for evaluating the effectiveness and the critical issues of the simultaneous localisation and mapping (SLAM)-based mobile mapping system (MMS) called ZEB by GeoSLAM™ (https://geoslam.com/technology/). In recent years, this type of handheld 3D mapping technology has developed rapidly within the framework of portable close-range mapping solutions, which have mainly been devoted to mapping indoor building spaces and enclosed or underground environments such as tunnels and mines, as well as forestry applications. The research introduces a set of test datasets related to the documentation of landscape contexts and the 3D modelling of architectural complexes. These datasets are used to validate the accuracy and the richness of the informative content of ZEB point clouds, both in stand-alone use and in combination with multisensor survey approaches. In detail, the proposed validation method relies on root mean square error (RMSE) evaluation and deviation analysis between the SLAM-based point clouds and 3D point-cloud surfaces computed by more precise measurement methods. Furthermore, this study specifies the mapping scale at which these point clouds can suitably be handled, and complements profile extraction with feature analyses such as corner and plane deviation analysis of architectural elements. Finally, based on the experiences reported in the literature and on those performed in this work, a possible reversal of perspective is suggested: whereas most studies in the 2000s focused on intelligently reducing light detection and ranging (LiDAR) point clouds where they carried redundant and unhelpful information, it is proposed here to start from MMS acquisitions and then to enrich the information only where needed with more accurate, larger-scale methods.
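
    The core numerical comparison in such a validation can be sketched as a nearest-neighbour deviation analysis between the SLAM-based cloud and a higher-accuracy reference, assuming both are already co-registered in the same reference frame. Real workflows typically compare against meshed or gridded reference surfaces; raw point-to-point distances are used here for brevity.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_rmse(slam_cloud, reference_cloud):
    """Deviation analysis between a SLAM-based point cloud and a
    higher-accuracy reference cloud (both Nx3 arrays, co-registered)."""
    tree = cKDTree(reference_cloud)
    d, _ = tree.query(slam_cloud, k=1)    # distance to closest reference point
    rmse = float(np.sqrt(np.mean(d ** 2)))
    return rmse, d                        # overall RMSE plus per-point deviations
```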

    Visual end-effector tracking using a 3D model-aided particle filter for humanoid robot platforms

    Full text link
    This paper addresses the recursive markerless estimation of a robot's end-effector pose using visual observations from its cameras. The problem is formulated in the Bayesian framework and addressed using Sequential Monte Carlo (SMC) filtering. We use a 3D rendering engine and Computer Aided Design (CAD) schematics of the robot to virtually create images from the robot's camera viewpoints. These images are then used to extract information and estimate the pose of the end-effector. To this end, we developed a particle filter for estimating the position and orientation of the robot's end-effector, using Histogram of Oriented Gradients (HOG) descriptors to capture robust characteristic features of shapes in both camera and rendered images. We implemented the algorithm on the iCub humanoid robot and employed it in a closed-loop reaching scenario. We demonstrate that the tracking is robust to clutter, allows compensation for errors in the robot's kinematics, and enables servoing of the arm in closed loop using vision.
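
    The measurement step of such a filter can be sketched as follows: each pose particle is scored by comparing HOG descriptors of the camera image and of a CAD rendering at that pose. The `render(pose)` hook standing in for the 3D rendering engine is hypothetical, as are the descriptor parameters and the likelihood bandwidth `sigma`; camera and rendered images are assumed to share the same size.

```python
import numpy as np
from skimage.feature import hog

def hog_features(image):
    """HOG descriptor of a grayscale image (2D array in [0, 1])."""
    return hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def weight_particles(camera_image, particles, render, sigma=0.3):
    """Re-weight pose particles by comparing HOG descriptors of the camera
    image and of CAD renderings at each hypothesized end-effector pose.
    `render(pose)` is a hypothetical hook into the 3D rendering engine."""
    obs = hog_features(camera_image)
    w = np.empty(len(particles))
    for i, pose in enumerate(particles):
        diff = obs - hog_features(render(pose))
        w[i] = np.exp(-0.5 * (diff @ diff) / sigma**2)
    w += 1e-300                           # guard against all-zero weights
    return w / w.sum()

def resample(particles, weights, rng=np.random.default_rng()):
    """Multinomial resampling step of the SMC filter."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return [particles[i] for i in idx]
```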

    Extended Object Tracking: Introduction, Overview and Applications

    Full text link
    This article provides an elaborate overview of current research in extended object tracking. We provide a clear definition of the extended object tracking problem and discuss its delimitation from other types of object tracking. Next, different aspects of extended object modelling are extensively discussed. Subsequently, we give a tutorial introduction to two basic and widely used extended object tracking approaches - the random matrix approach and the Kalman filter-based approach for star-convex shapes. The next part treats the tracking of multiple extended objects and elaborates how the large number of feasible association hypotheses can be tackled using both Random Finite Set (RFS) and non-RFS multi-object trackers. The article concludes with a summary of current applications, where four example applications involving camera, X-band radar, light detection and ranging (lidar), and red-green-blue-depth (RGB-D) sensors are highlighted. Comment: 30 pages, 19 figures
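
    To make the first of the two tutorial approaches concrete, the sketch below performs a single random-matrix measurement update: the kinematic state is Gaussian and the elliptic extent is inverse-Wishart. It follows a Feldmann-style parametrisation with scaling factors omitted for brevity, so it is an illustrative simplification rather than the article's exact formulation.

```python
import numpy as np
from scipy.linalg import sqrtm

def random_matrix_update(x, P, nu, V, detections, H, R):
    """One measurement update of a random-matrix extended object tracker.
    x, P: Gaussian kinematic state; nu, V: inverse-Wishart extent parameters;
    detections: (m, d) array of detections assigned to this object."""
    m, d = detections.shape
    z_bar = detections.mean(axis=0)
    spread = detections - z_bar
    Z_bar = spread.T @ spread                     # measurement scatter matrix

    X_hat = V / (nu - 2 * d - 2)                  # expected extent
    Y = X_hat + R                                 # per-detection covariance
    S = H @ P @ H.T + Y / m                       # innovation covariance of the mean
    K = P @ H.T @ np.linalg.inv(S)
    innov = z_bar - H @ x
    x_new = x + K @ innov
    P_new = P - K @ S @ K.T

    # Map the innovation and the scatter into extent-space pseudo-measurements.
    Xs = np.real(sqrtm(X_hat))
    Ss = np.real(sqrtm(np.linalg.inv(S)))
    Ys = np.real(sqrtm(np.linalg.inv(Y)))
    N_hat = Xs @ Ss @ np.outer(innov, innov) @ Ss.T @ Xs.T
    Z_hat = Xs @ Ys @ Z_bar @ Ys.T @ Xs.T
    return x_new, P_new, nu + m, V + N_hat + Z_hat
```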