4,720 research outputs found

    High-Quality Seamless Panoramic Images


    Layered Interpretation of Street View Images

    We propose a layered street view model that encodes both depth and semantic information in street view images for autonomous driving. Recently, stixels, stix-mantics, and tiered scene labeling methods have been proposed to model street view images. We propose a 4-layer street view model, a compact representation compared with the recently proposed stix-mantics model. Our layers encode semantic classes such as ground, pedestrians, vehicles, buildings, and sky in addition to depth. The only input to our algorithm is a pair of stereo images. We use a deep neural network to extract appearance features for the semantic classes, and a simple, efficient inference algorithm to jointly estimate both semantic classes and layered depth values. Our method outperforms competing approaches on the Daimler urban scene segmentation dataset. Our algorithm is massively parallelizable, allowing a GPU implementation with a processing speed of about 9 fps. Comment: the paper will be presented at the 2015 Robotics: Science and Systems Conference (RSS).
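The layered, per-column labeling idea above can be illustrated with a toy sketch: given per-row class scores for a single image column, brute-force the three boundary rows that split the column into ordered sky/building/object/ground layers. The function name, the four-class layout and the exhaustive search are illustrative assumptions; the paper's actual inference jointly estimates semantics and depth far more efficiently.

```python
import numpy as np

def segment_column(scores):
    """Assign each row of one image column to one of four ordered layers
    (sky, building, object, ground, from top to bottom) by brute-forcing
    the three boundary rows that maximise the summed per-class scores.

    scores: (H, 4) array, scores[r, c] = score of class c at row r.
    Returns (b1, b2, b3): the rows at which the layer changes.
    """
    H = scores.shape[0]
    # Prefix sums per class give O(1) score for any row segment.
    pref = np.vstack([np.zeros((1, 4)), np.cumsum(scores, axis=0)])
    best, best_b = -np.inf, None
    for b1 in range(H + 1):
        for b2 in range(b1, H + 1):
            for b3 in range(b2, H + 1):
                s = (pref[b1, 0] - pref[0, 0] +    # sky: rows [0, b1)
                     pref[b2, 1] - pref[b1, 1] +   # building: [b1, b2)
                     pref[b3, 2] - pref[b2, 2] +   # object: [b2, b3)
                     pref[H, 3] - pref[b3, 3])     # ground: [b3, H)
                if s > best:
                    best, best_b = s, (b1, b2, b3)
    return best_b
```

The ordering constraint (sky above buildings above objects above ground) is what makes the representation compact: only three boundary rows per column need to be estimated rather than a free label per pixel.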

    Robust auto tool change for industrial robots using visual servoing

    This is an Author's Accepted Manuscript of an article published as: Muñoz-Benavent, P.; Solanes Galbis, J. E.; Gracia Calandin, L. I.; Tornero Montserrat, J. (2019). Robust auto tool change for industrial robots using visual servoing. International Journal of Systems Science, 50(2), 432-449. © Taylor & Francis, available online at: http://doi.org/10.1080/00207721.2018.1562129 [EN] This work presents an automated solution for tool changing in industrial robots using visual servoing and sliding mode control. The robustness of the proposed method is due to the control law of the visual servoing, which uses the information acquired by a vision system to close a feedback control loop. Furthermore, sliding mode control is simultaneously used at a prioritised level to satisfy the constraints typically present in a robot system: joint range limits, maximum joint speeds and allowed workspace. Thus, the global control accurately places the tool in the warehouse while satisfying the robot constraints. The feasibility and effectiveness of the proposed approach are substantiated by simulation results for a complex 3D case study. Moreover, real experimentation with a 6R industrial manipulator is also presented to demonstrate the applicability of the method for tool changing. This work was supported in part by the Ministerio de Economía, Industria y Competitividad, Gobierno de España under Grant BES-2010-038486 and Project DPI2017-87656-C2-1-R.
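The feedback loop described above follows the classical image-based visual servoing pattern: drive image-feature error to zero through the interaction matrix. A minimal sketch, with simple component-wise velocity clamping standing in for the paper's prioritised sliding-mode constraint handling (the function name and gains are illustrative, not the authors' controller):

```python
import numpy as np

def ibvs_step(L, s, s_star, lam=0.5, v_max=0.2):
    """One iteration of a classical image-based visual servoing law,
    v = -lambda * pinv(L) @ (s - s_star), where L is the interaction
    (image Jacobian) matrix, s the measured image features and s_star
    the desired features. The clamp is a crude stand-in for the
    constraint layer (joint limits, max speeds, workspace) in the paper.
    """
    e = s - s_star                       # feature error in image space
    v = -lam * np.linalg.pinv(L) @ e     # camera velocity command
    return np.clip(v, -v_max, v_max)     # respect a speed bound
```

Closing the loop on image measurements rather than on a calibrated tool pose is what gives the approach its robustness: errors in the camera-to-warehouse calibration are absorbed by the feedback.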

    Improving Real-World Performance of Vision Aided Navigation in a Flight Environment

    The motivation of this research is to fuse information from an airborne imaging sensor with information extracted from satellite imagery in order to provide an accurate position estimate when GPS is unavailable for an extended duration. A corpus of existing geo-referenced satellite imagery is used to create a key point database. A novel algorithm is developed for recovering coarse pose by comparing key points extracted from the airborne imagery against the reference database. This coarse position is used to bootstrap a local-area geo-registration algorithm, which provides GPS-level position estimates. This research derives optimizations for existing local-area methods for operation in flight environments.
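The coarse-pose recovery step can be sketched as descriptor matching against the geo-referenced database followed by a vote over map regions. The grid-cell database layout, the ratio-test threshold and all names below are illustrative assumptions, not the thesis algorithm:

```python
import numpy as np
from collections import Counter

def coarse_position(query_desc, db_desc, db_geocell, ratio=0.8):
    """Vote for the aircraft's map cell: match each airborne key point
    descriptor to its nearest database descriptor, keep the match only
    if it passes a Lowe-style ratio test against the second-nearest
    neighbour, and return the geographic cell with the most votes.

    query_desc: (M, D) airborne descriptors
    db_desc:    (N, D) reference descriptors (N >= 2)
    db_geocell: length-N list of map-cell labels per reference feature
    """
    votes = Counter()
    for d in query_desc:
        dist = np.linalg.norm(db_desc - d, axis=1)
        i, j = np.argsort(dist)[:2]          # two nearest neighbours
        if dist[i] < ratio * dist[j]:        # unambiguous match only
            votes[db_geocell[i]] += 1
    return votes.most_common(1)[0][0] if votes else None
```

Voting over regions rather than trusting any single match is what makes the bootstrap tolerant of the repetitive, self-similar textures common in aerial imagery.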

    Vanishing point detection for visual surveillance systems in railway platform environments

    © 2018 Elsevier B.V. Visual surveillance is of paramount importance in public spaces, especially in train and metro platforms, which are particularly susceptible to many types of crime from petty theft to terrorist activity. The image resolution of visual surveillance systems is limited by a trade-off between several requirements, such as sensor and lens cost, transmission bandwidth and storage space. When image quality cannot be improved using high-resolution sensors, high-end lenses or IR illumination, the visual surveillance system may need to increase the resolving power of the images in software to provide accurate outputs such as, in our case, vanishing points (VPs). Despite having numerous applications in camera calibration, 3D reconstruction and threat detection, a general method for VP detection has remained elusive. Rather than attempting the infeasible task of VP detection in general scenes, this paper presents a novel method that is fine-tuned to railway station environments and is shown to outperform the state of the art for that particular case. We propose a three-stage approach to accurately detect the main lines and vanishing points in low-resolution images acquired by visual surveillance systems in indoor and outdoor railway platform environments. First, several frames are used to increase the resolving power through a multi-frame image enhancer. Second, adaptive edge detection is performed and a novel line clustering algorithm is applied to determine the parameters of the lines that converge at VPs, based on statistics of the detected lines and heuristics about the type of scene. Finally, vanishing points are computed via a voting system that discards spurious lines. The proposed approach is very robust since it is not affected by the ever-changing illumination and weather conditions of the scene, and it is immune to vibrations.
Accurate and reliable vanishing point detection provides very valuable information, which can be used to aid camera calibration, automatic scene understanding, scene segmentation, semantic classification or augmented reality in platform environments.
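The final VP-computation step rests on a standard geometric fact: a vanishing point is the point that best intersects a cluster of converging lines. A minimal least-squares sketch (the homogeneous-line input format is an assumption; the paper's voting system additionally rejects spurious lines before this fit):

```python
import numpy as np

def vanishing_point(lines):
    """Estimate a vanishing point as the least-squares intersection of
    near-converging image lines. Each line is (a, b, c) in homogeneous
    form a*x + b*y + c = 0 with a^2 + b^2 = 1, so |a*x + b*y + c| is
    the perpendicular distance from (x, y) to the line. Stacking one
    row per line and solving the overdetermined system minimises the
    sum of squared distances from the VP to all lines.
    """
    A = np.array([[a, b] for a, b, c in lines])
    y = np.array([-c for a, b, c in lines])
    vp, *_ = np.linalg.lstsq(A, y, rcond=None)
    return vp
```

In practice a robust loop (e.g. RANSAC-style sampling or the paper's voting) wraps a fit like this, since a single outlier line can drag the least-squares solution far from the true VP.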

    Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping

    The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic or human-interpretable objects (roads, forest and water boundaries) from onboard aerial imagery and associates them with a database of reference features created a priori by applying the same feature detection algorithms to satellite imagery. Correlating the detected features with the reference features via a series of robust data association steps yields a localisation solution whose absolute error is bounded by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system into the mapping domain, in turn, is based on real (not simulated) flight data and imagery. The mapping study demonstrates the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery. Not only have visual features such as road networks, shorelines and water bodies been used to obtain a position 'fix'; they have also been used in reverse for accurate mapping of vehicles detected on the roads into inertial space with improved precision. Combined correction of geo-coding errors and improved aircraft localisation forms a robust solution for the defence mapping application. A system of the proposed design will provide a complete independent navigation solution to an autonomous UAV and additionally give it object tracking capability.
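One building block of robust data association between detected and reference features is statistical gating: a detection is paired with a reference only if the pair is plausible under the measurement uncertainty. A generic gated nearest-neighbour sketch (a textbook technique, not the thesis pipeline; the gate value and names are assumptions):

```python
import numpy as np

def gated_association(detections, references, cov, gate=9.21):
    """Nearest-neighbour data association with a chi-square gate
    (9.21 is roughly the 99% threshold for 2 degrees of freedom).
    Each detected feature is paired with the reference feature of
    minimum Mahalanobis distance, but only when that distance falls
    inside the gate; otherwise the detection is rejected as an outlier.

    detections, references: (M, 2) and (N, 2) planar positions
    cov: (2, 2) measurement covariance shared by all features
    Returns a list of (detection_index, reference_index) pairs.
    """
    inv_cov = np.linalg.inv(cov)
    pairs = []
    for i, d in enumerate(detections):
        diffs = references - d
        # Squared Mahalanobis distance to every reference feature.
        m2 = np.einsum('ij,jk,ik->i', diffs, inv_cov, diffs)
        j = int(np.argmin(m2))
        if m2[j] < gate:
            pairs.append((i, j))
    return pairs
```

Gating is what keeps a spurious detection (e.g. a shadow edge mistaken for a road boundary) from corrupting the position fix: unmatched detections simply contribute nothing.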

    Joint kinetic determinants of starting block performance in athletic sprinting

    The aim of this study was to explore the relationships between lower limb joint kinetics, external force production and starting block performance (normalised average horizontal power, NAHP). Seventeen male sprinters (100 m PB, 10.67 ± 0.32 s) performed maximal block starts from instrumented starting blocks (1000 Hz) whilst 3D kinematics (250 Hz) were also recorded during the block phase. Ankle, knee and hip resultant joint moments and powers were calculated for the rear and front legs using inverse dynamics. The average horizontal forces applied to the front (r = 0.46) and rear (r = 0.44) blocks together explained 86% of the variance in NAHP. At the joint level, many "very likely" to "almost certain" relationships (r = 0.57 to 0.83) were found between the joint kinetic data and the magnitude of horizontal force applied to each block, although stepwise multiple regression revealed that 55% of the variance in NAHP was accounted for by rear ankle moment, front hip moment and front knee power. The current study provides novel insight into starting block performance and the relationships between lower limb joint kinetic and external kinetic data that can help inform physical and technical training practices for this skill.
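The performance measure above, normalised average horizontal power, can be sketched numerically. The computation below assumes one common scheme (average horizontal external power as kinetic energy gained over push duration, made dimensionless by body mass, gravity and leg length); the study's exact definition should be checked against the paper, and all inputs here are illustrative:

```python
def nahp(mass, v_exit, push_time, leg_length, g=9.81):
    """Normalised average horizontal power of a block start.

    Average horizontal external power: the horizontal kinetic energy
    at block exit divided by the push duration, 0.5 * m * v^2 / t.
    Normalisation: divide by m * g**1.5 * L**0.5, which has units of
    power, giving a dimensionless, body-size-independent score.
    This normalisation is an assumed convention, not taken verbatim
    from the paper.
    """
    p_avg = 0.5 * mass * v_exit ** 2 / push_time
    return p_avg / (mass * g ** 1.5 * leg_length ** 0.5)
```

Because mass cancels, the score rewards sprinters who reach a higher exit velocity in a shorter push, independent of body size; that is what makes it a fair between-athlete performance criterion for the regression analyses described above.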