
    Preliminary Evaluation of HDR Tone Mapping Operators for Cultural Heritage

    The ability of High Dynamic Range (HDR) imaging to capture the full range of lighting in a scene has led to increasing interest in its use for Cultural Heritage (CH) applications. Photogrammetric techniques allow the semi-automatic production of 3D models from a sequence of images. Current photogrammetric methods are not always effective in reconstructing objects under harsh lighting conditions, as significant geometric details may not be captured accurately in under- and over-exposed regions of the images. HDR imaging offers the possibility to overcome this limitation. In this paper we evaluate four different HDR tone-mapping operators (TMOs) used to convert raw HDR images into a format suitable for state-of-the-art photogrammetric algorithms, and in particular for keypoint detection techniques. The evaluation criteria are the number of keypoints detected and the number of valid matches achieved. The comparison considers two local and two global TMOs.

    Suma, R.; Stavropoulou, G.; Stathopoulou, E.; Van Gool, L.; Georgopoulos, A.; Chalmers, A. (2016). Preliminary evaluation of HDR tone mapping operators for cultural heritage. In: 8th International Congress on Archaeology, Computer Graphics, Cultural Heritage and Innovation. Editorial Universitat Politècnica de València, 343-347. https://doi.org/10.4995/arqueologica8.2015.3582
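The abstract does not name the four operators evaluated. As one concrete example of a *global* TMO of the kind compared, a minimal NumPy sketch of the classic Reinhard global operator might look like the following (function and variable names are ours; `key=0.18` is the commonly used default exposure value):

```python
import numpy as np

def reinhard_global(hdr, key=0.18, eps=1e-6):
    # Global Reinhard operator: compress luminance with L/(1+L),
    # then rescale the RGB channels and clip to the displayable range.
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # log-average scene luminance
    scaled = key * lum / log_avg                   # "key value" exposure scaling
    mapped = scaled / (1.0 + scaled)               # global compression to [0, 1)
    return np.clip(hdr * (mapped / (lum + eps))[..., None], 0.0, 1.0)

# Synthetic HDR image with a ~1e4:1 dynamic range
rng = np.random.default_rng(0)
hdr = rng.uniform(1e-2, 1e2, size=(64, 64, 3))
ldr = reinhard_global(hdr)   # low-dynamic-range image a keypoint detector can consume
```

A keypoint detector such as SIFT would then be run on `ldr` (after quantisation to 8 bits), and the keypoint and valid-match counts compared across operators, which is the evaluation protocol the paper describes.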

    Multiple layers of contrasted images for robust feature-based visual tracking

    Feature-based SLAM (Simultaneous Localization and Mapping) techniques rely on low-level contrast information extracted from images to detect and track keypoints. This process is known to be sensitive to changes in the illumination of the environment, which can lead to tracking failures. This paper proposes a multi-layered image representation (MLI) that computes and stores different contrast-enhanced versions of an original image. Keypoint detection is performed on each layer, yielding better robustness to light changes. An optimization technique is also proposed to compute the best contrast enhancements to apply in each layer. Results demonstrate the benefits of MLI when using the main keypoint detectors from ORB, SIFT or SURF, and show significant improvement in SLAM robustness.
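The core idea, building several contrast-enhanced layers and detecting keypoints on each, can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the gamma values are fixed rather than optimised, and a simple gradient-magnitude threshold stands in for ORB/SIFT/SURF detectors.

```python
import numpy as np

def gamma_layers(img, gammas=(0.5, 1.0, 2.0)):
    # Build a multi-layered image (MLI): one contrast-enhanced
    # version of the input per gamma value.
    img = img.astype(np.float64) / 255.0
    return [np.power(img, g) for g in gammas]

def toy_keypoints(layer, thresh=0.2):
    # Toy detector: pixels whose gradient magnitude exceeds a
    # threshold stand in for real feature detections.
    gy, gx = np.gradient(layer)
    return np.argwhere(np.hypot(gx, gy) > thresh)

rng = np.random.default_rng(1)
img = (rng.uniform(0, 1, (32, 32)) * 255).astype(np.uint8)
layers = gamma_layers(img)
counts = [len(toy_keypoints(l)) for l in layers]  # detections per layer
```

The union of per-layer detections is what gives MLI its robustness: keypoints lost in one contrast setting can still be found in another.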

    Human Operator Tracking System for Safe Industrial Collaborative Robotics

    With the advent of the Industry 4.0 paradigm, manufacturing is shifting from mass production towards customisable production lines. While robots excel at reliably executing repetitive tasks in a fast and precise manner, they lack the versatility of humans that is now desired. Human-robot collaboration (HRC) seeks to address this issue by allowing human operators to work together with robots in close proximity, leveraging the strengths of both agents to increase adaptability and productivity. Safety is critical to user acceptance and the success of collaborative robots (cobots) and is thus a focus of research. Typical approaches provide the cobot with information such as operator pose estimates or higher-level motion predictions to facilitate adaptive planning of trajectories or actions. Locating the operator in the shared workspace is therefore a key capability. This dissertation seeks to kickstart the development of a human operator tracking system that provides a three-dimensional pose estimate and, in turn, ensures safety. State-of-the-art methods for human pose estimation in two-dimensional RGB images are tested on a custom dataset and evaluated. The results are then analysed with respect to real-time capability in the use case of a single operator performing industrial assembly tasks in a collaborative robotic cell equipped with a robotic arm. The resulting observations enable future work such as the fusion of depth information.
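To make the link between operator pose estimates and safety concrete, here is a hypothetical safety monitor built on top of a tracked 3D pose: the cobot's speed is scaled by the distance between its tool-centre-point and the closest tracked operator joint. The function name and the distance thresholds are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def speed_scale(joints_xyz, tcp_xyz, stop_dist=0.3, slow_dist=1.0):
    # Distance (metres) from the robot tool-centre-point to the
    # closest tracked operator joint.
    d = np.min(np.linalg.norm(joints_xyz - tcp_xyz, axis=1))
    if d <= stop_dist:
        return 0.0                                    # protective stop
    if d >= slow_dist:
        return 1.0                                    # full speed
    return (d - stop_dist) / (slow_dist - stop_dist)  # linear slowdown

joints = np.array([[1.2, 0.0, 1.0],    # e.g. tracked wrists of the operator
                   [0.9, 0.1, 1.1]])
tcp = np.array([0.0, 0.0, 1.0])        # robot tool-centre-point
scale = speed_scale(joints, tcp)       # between 0 and 1 for this geometry
```

A real system would feed such a scale factor into the trajectory planner, which is the kind of adaptive planning the abstract refers to.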

    Real Time Stereo Cameras System Calibration Tool and Attitude and Pose Computation with Low Cost Cameras

    The engineering of autonomous systems has many strands. The area in which this work falls, artificial vision, has become one of great interest in multiple contexts, with a focus on robotics. This work seeks to address and overcome some real difficulties encountered when developing technologies based on artificial vision systems, namely the calibration process and the real-time pose computation of robots. Initially, it aims to perform the real-time intrinsic (3.2.1) and extrinsic (3.3) calibration of stereo camera systems needed for the main goal of this work: the real-time computation of the pose (position and orientation) of an active coloured target with stereo vision systems. Designed to be intuitive, easy to use and able to run in real-time applications, this work was developed for use either with low-cost, easy-to-acquire stereo vision systems or with more complex, high-resolution ones, in order to compute all the parameters inherent to the system, such as the intrinsic values of each camera and the extrinsic matrices relating both cameras. The work is oriented towards underwater environments, which are highly dynamic and computationally more complex due to particularities such as light reflections and poor visibility. The available calibration information, whether generated by this tool or loaded from other tools, allows, in a simple way, the calibration of an environment colourspace and of the detection parameters of a specific target with active visual markers (4.1.1), which are useful in unstructured environments. With a calibrated system and environment, it is possible to detect and compute, in real time, the pose of a target of interest; the combination of position and orientation (or attitude) is referred to as the pose of an object. For analysis of the performance and the quality of the information obtained, these tools are compared with existing ones.
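Once both cameras of the stereo rig are calibrated, the target's 3D position follows from triangulating its pixel observations in the two views. A minimal NumPy sketch of standard linear (DLT) triangulation, on a toy rig with illustrative intrinsics and baseline (not the thesis's actual parameters), might look like:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one point seen by two calibrated
    # cameras with 3x4 projection matrices P1, P2 and pixel
    # observations x1, x2.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null space of A = homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]               # dehomogenise

# Toy stereo rig: identical intrinsics, second camera shifted 0.1 m along x
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 2.0])             # ground-truth 3D point
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]   # project into camera 1
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]   # project into camera 2
X_est = triangulate(P1, P2, x1, x2)             # recovers X_true (noiseless case)
```

The intrinsic matrices `K` and the extrinsic transform between the cameras are exactly the quantities the calibration tool described above produces.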

    GANerated Hands for Real-time 3D Hand Tracking from Monocular RGB

    We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. To be more specific, we use a neural network that translates synthetic images to "real" images, such that the so-generated images follow the same statistical distribution as real-world hand images. For training this translation network we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.
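The structure of the combined training objective can be sketched numerically. In this toy, fixed affine maps stand in for the two translation CNNs (`G`: synthetic to real, `F`: real to synthetic), a simple statistic stands in for the pose extractor, and the loss weights are illustrative, not the paper's values; the adversarial term is omitted since it needs a discriminator.

```python
import numpy as np

# Toy "translators": G maps synthetic -> real, F maps real -> synthetic.
G = lambda x: 0.9 * x + 0.1
F = lambda y: (y - 0.1) / 0.9          # exact inverse, so the cycle term vanishes

def combined_loss(x_syn, pose_of, lam_cyc=10.0, lam_geo=1.0):
    y = G(x_syn)
    cyc = np.mean(np.abs(F(y) - x_syn))               # cycle-consistency (L1)
    geo = np.mean((pose_of(y) - pose_of(x_syn))**2)   # geometric consistency
    return lam_cyc * cyc + lam_geo * geo              # (+ adversarial term in the paper)

pose_of = lambda img: np.array([img.mean(), img.std()])  # stand-in pose extractor
x = np.linspace(0.0, 1.0, 16)
loss = combined_loss(x, pose_of)   # small but non-zero: G shifts the "pose" statistics
```

The geometric term is what distinguishes this objective from plain cycle-consistent translation: it penalises translations that change the hand pose even when the image cycle is consistent.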