87 research outputs found

    Evaluation of the effectiveness of HDR tone-mapping operators for photogrammetric applications

    Get PDF
    [EN] The ability of High Dynamic Range (HDR) imaging to capture the full range of lighting in a scene means it is increasingly used for Cultural Heritage (CH) applications. Photogrammetric techniques allow the semi-automatic production of 3D models from a sequence of images. Current photogrammetric methods are not always effective in reconstructing images under harsh lighting conditions, as significant geometric details may not have been captured accurately within under- and over-exposed regions of the image. HDR imaging offers the possibility to overcome this limitation; however, the HDR images need to be tone mapped before they can be used within existing photogrammetric algorithms. In this paper we evaluate four different HDR tone-mapping operators (TMOs) that have been used to convert raw HDR images into a format suitable for state-of-the-art algorithms, and in particular keypoint detection techniques. The evaluation criteria used are the number of keypoints, the number of valid matches achieved and the repeatability rate. The comparison considers two local and two global TMOs. HDR data from four CH sites were used: Kaisariani Monastery (Greece), Asinou Church (Cyprus), Château des Baux (France) and Buonconsiglio Castle (Italy).

    We would like to thank Kurt Debattista, Timothy Bradley, Ratnajit Mukherjee, Diego Bellido Castañeda and Tom Bashford Rogers for their suggestions, help and encouragement. We would like to thank the hosting institutions: 3D Optical Metrology Group, FBK (Trento, Italy) and UMR 3495 MAP CNRS/MCC (Marseille, France), for their support during the data acquisition campaigns. This project has received funding from the European Union's 7th Framework Programme for research, technological development and demonstration under grant agreement No. 608013, titled "ITN-DCH: Initial Training Network for Digital Cultural Heritage: Projecting our Past to the Future".

    Suma, R.; Stavropoulou, G.; Stathopoulou, E. K.; Van Gool, L.; Georgopoulos, A.; Chalmers, A. (2016). Evaluation of the effectiveness of HDR tone-mapping operators for photogrammetric applications. Virtual Archaeology Review, 7(15):54-66. https://doi.org/10.4995/var.2016.6319
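    As a rough illustration of this evaluation pipeline, the sketch below (Python with OpenCV) tone-maps two HDR exposures with a single global operator and counts SIFT keypoints and ratio-test matches. OpenCV's built-in Reinhard operator and the file names are stand-ins, not the paper's actual four TMOs.

    # Sketch: tone-map an HDR capture, then count SIFT keypoints/matches.
    # Assumes OpenCV >= 4.4; "view1.hdr"/"view2.hdr" are placeholder files,
    # and Reinhard's global operator stands in for the paper's four TMOs.
    import cv2
    import numpy as np

    def tonemap_and_detect(path, tmo):
        hdr = cv2.imread(path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)  # float32 radiance
        ldr = tmo.process(hdr)                            # mapped into [0, 1]
        ldr8 = np.clip(ldr * 255, 0, 255).astype(np.uint8)
        gray = cv2.cvtColor(ldr8, cv2.COLOR_BGR2GRAY)
        return cv2.SIFT_create().detectAndCompute(gray, None)

    tmo = cv2.createTonemapReinhard(gamma=2.2)            # one global TMO
    kp1, des1 = tonemap_and_detect("view1.hdr", tmo)
    kp2, des2 = tonemap_and_detect("view2.hdr", tmo)

    # Lowe ratio test yields the "valid matches" count used as a criterion.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    print(len(kp1), len(kp2), "keypoints;", len(good), "valid matches")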

    Multiple layers of contrasted images for robust feature-based visual tracking

    Get PDF
    Feature-based SLAM (Simultaneous Localization and Mapping) techniques rely on low-level contrast information extracted from images to detect and track keypoints. This process is known to be sensitive to changes in illumination of the environment, which can lead to tracking failures. This paper proposes a multi-layered image representation (MLI) that computes and stores different contrast-enhanced versions of an original image. Keypoint detection is performed on each layer, yielding better robustness to light changes. An optimization technique is also proposed to compute the best contrast enhancements to apply in each layer. Results demonstrate the benefits of MLI when using the main keypoint detectors from ORB, SIFT or SURF, and show significant improvement in SLAM robustness.
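    A minimal sketch of the MLI idea, assuming simple gamma curves as the per-layer contrast enhancements (the paper optimizes these per layer) and ORB as the detector:

    # Detect keypoints on several contrast-enhanced copies of one frame
    # and pool the results. The gamma values here are illustrative.
    import cv2
    import numpy as np

    def mli_keypoints(gray, gammas=(0.5, 1.0, 2.0)):
        detector = cv2.ORB_create(nfeatures=500)
        keypoints = []
        for g in gammas:
            # Gamma curve as one simple contrast enhancement per layer.
            lut = np.clip(((np.arange(256) / 255.0) ** g) * 255, 0, 255).astype(np.uint8)
            layer = cv2.LUT(gray, lut)
            keypoints.extend(detector.detect(layer, None))
        return keypoints  # pooled detections, more robust to illumination

    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
    print(len(mli_keypoints(gray)), "keypoints across layers")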

    Feature point detection in HDR images based on coefficient of variation

    Full text link
    Feature point (FP) detection is a fundamental step of many computer vision tasks. However, FP detectors are usually designed for low dynamic range (LDR) images. In scenes with extreme light conditions, LDR images present saturated pixels, which degrade FP detection. On the other hand, high dynamic range (HDR) images usually present no saturated pixels, but FP detection algorithms do not take advantage of all the information present in such images. FP detection frequently relies on differential methods, which work well in LDR images; in HDR images, however, the differential operation response in bright areas overshadows the response in dark areas. As an alternative to standard FP detection methods, this study proposes an FP detector based on a coefficient of variation (CV) designed for HDR images. The CV operation adapts its response based on the standard deviation of pixels inside a window, working well in both dark and bright areas of HDR images. The proposed and standard detectors are evaluated by measuring their repeatability rate (RR) and uniformity. The proposed detector shows better performance than standard state-of-the-art detectors: on the uniformity metric it surpasses all the other algorithms; on the repeatability rate metric, however, it falls behind the Harris for HDR and SURF detectors.
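    A minimal sketch of the CV response, assuming a uniform sliding window over an HDR luminance map; the window size and the epsilon guard are illustrative choices:

    # Coefficient-of-variation response map: local std / local mean.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def cv_response(luminance, window=7, eps=1e-8):
        mean = uniform_filter(luminance, size=window)
        mean_sq = uniform_filter(luminance ** 2, size=window)
        std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
        # Dividing by the local mean normalizes the response, so dark and
        # bright HDR regions are treated on an equal footing.
        return std / (mean + eps)

    # Feature points could then be taken as local maxima of this map
    # above a threshold, e.g. via skimage.feature.peak_local_max.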

    Image Stitching

    Get PDF
    Final-year project carried out in collaboration with the University of Limerick, Department of Electronic and Computer Engineering.

    Image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output may be either an image or a set of characteristics or parameters related to the image. Most image processing techniques involve treating the image as a two-dimensional signal and applying standard signal processing techniques to it. Specifically, image stitching comprises several stages that render two or more overlapping images into a seamless stitched image, from the detection of features to blending into a final image. In this process, the Scale Invariant Feature Transform (SIFT) algorithm can be applied to perform the detection and matching of control points, owing to its good properties. Creating a fully automatic and effective stitching process requires analyzing different methods for each of the stitching stages. Several commercial and online software tools are available to perform stitching, offering diverse options in different situations. This analysis involves the creation of a script that handles images and project data files. Once the script is generated, the stitching process achieves an automatic execution that yields good-quality results in the final composite image.
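    The core of the pipeline described above can be sketched with OpenCV's SIFT; the file names, ratio-test threshold and naive overwrite blending are illustrative choices, not the project's exact script:

    # SIFT keypoints, ratio-test matching, homography via RANSAC, then
    # warping one image onto the other. Real pipelines feather or blend.
    import cv2
    import numpy as np

    img1 = cv2.imread("left.jpg")     # placeholder file names
    img2 = cv2.imread("right.jpg")

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Homography mapping img2's control points onto img1's frame.
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp img2 into img1's frame and paste img1 over the overlap.
    pano = cv2.warpPerspective(img2, H, (img1.shape[1] + img2.shape[1], img1.shape[0]))
    pano[:img1.shape[0], :img1.shape[1]] = img1
    cv2.imwrite("stitched.jpg", pano)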

    MEYE: Web-app for translational and real-time pupillometry

    Get PDF
    Pupil dynamics alterations have been found in patients affected by a variety of neuropsychiatric conditions, including autism. Studies in mouse models have used pupillometry for phenotypic assessment and as a proxy for arousal. Both in mice and humans, pupillometry is noninvasive and allows for longitudinal experiments supporting temporal specificity; however, its measure requires dedicated setups. Here, we introduce a convolutional neural network that performs online pupillometry in both mice and humans in a web app format. This solution dramatically simplifies the usage of the tool for nonspecialist and nontechnical operators. Because a modern web browser is the only software requirement, this choice is of great interest given its easy deployment and setup time reduction. The tested model performances indicate that the tool is sensitive enough to detect both locomotor-induced and stimulus-evoked pupillary changes, and its output is comparable to state-of-the-art commercial devices.
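    As a hypothetical illustration of the measurement step (not MEYE's actual code), a predicted pupil-probability map can be reduced to a diameter estimate by thresholding it and treating the pupil as a disk:

    # Post-processing a CNN's pupil-probability map into a diameter.
    # The function name and threshold are assumptions for illustration.
    import numpy as np

    def pupil_diameter_px(prob_map, threshold=0.5):
        mask = prob_map > threshold            # binary pupil segmentation
        area = mask.sum()                      # pupil area in pixels
        # Treat the pupil as a disk: area = pi * (d/2)^2  =>  d = 2*sqrt(area/pi)
        return 2.0 * np.sqrt(area / np.pi)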

    Neural Radiance Fields: Past, Present, and Future

    Full text link
    The various aspects of modeling and interpreting 3D environments and surroundings have enticed humans to progress their research in 3D Computer Vision, Computer Graphics, and Machine Learning. The paper by Mildenhall et al. on NeRFs (Neural Radiance Fields) led to a boom in Computer Graphics, Robotics, and Computer Vision, and the prospect of high-resolution, low-storage Augmented Reality and Virtual Reality 3D models has gained traction from researchers, with more than 1000 NeRF-related preprints published. This paper serves as a bridge for people starting to study these fields by building from the basics of Mathematics, Geometry, Computer Vision, and Computer Graphics up to the difficulties encountered in Implicit Representations at the intersection of all these disciplines. This survey provides the history of rendering, Implicit Learning, and NeRFs, the progression of research on NeRFs, and the potential applications and implications of NeRFs in today's world. In doing so, this survey categorizes all the NeRF-related research in terms of the datasets used, objective functions, applications solved, and evaluation criteria for these applications.

    Comment: 413 pages, 9 figures, 277 citations
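    One concrete piece of the NeRF machinery such surveys cover is positional encoding, which lifts each input coordinate into sine/cosine features so the MLP can represent high-frequency detail; a minimal sketch:

    # NeRF's positional encoding: each coordinate x is lifted to
    # (sin(2^k * pi * x), cos(2^k * pi * x)) for k = 0 .. L-1.
    import numpy as np

    def positional_encoding(x, num_freqs=10):
        # x: array of shape (N, 3) with 3D sample positions.
        freqs = 2.0 ** np.arange(num_freqs) * np.pi        # 2^k * pi
        scaled = x[..., None] * freqs                      # (N, 3, L)
        enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
        return enc.reshape(x.shape[0], -1)                 # (N, 3 * 2L)

    pts = np.random.rand(4, 3)               # toy sample positions in [0, 1]^3
    print(positional_encoding(pts).shape)    # (4, 60) for L = 10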

    Using CNNs to Understand Lighting Without Real Labeled Training Data

    Get PDF
    The task of computer vision is to make computers understand the physical world through images. Lighting is the medium through which we capture images of the physical world. Without lighting there is no image, and different lighting leads to different images of the same physical world. In this dissertation, we study how to understand lighting from images. With the emergence of large datasets and deep learning in recent years, learning-based methods play an increasingly important role in computer vision, and deep Convolutional Neural Networks (CNNs) now dominate most problems in computer vision. Despite their success, deep CNNs are notorious for their data-hungry nature compared with traditional learning-based methods. While collecting images from the internet is easy and fast, labeling those images is both time consuming and expensive, and sometimes even impossible. In this work, we focus on understanding lighting from faces and natural scenes, for which ground truth lighting labels are impossible to obtain.

    As a preliminary topic, we first study the capacity of deep CNNs. Designing deep CNNs with less capacity and good generalization is one way to reduce the amount of labeled data needed to train them, and understanding their capacity is the first step towards that goal. We empirically study the capacity of deep CNNs by studying the redundancy of their parameters. More specifically, we aim at optimizing the number of neurons in a network, and thus the number of parameters. To achieve that goal, we incorporate sparse constraints into the objective function and apply a forward-backward splitting method to solve this sparse constrained optimization problem efficiently. The proposed method can significantly reduce the number of parameters, showing that networks with small capacity can work well.

    We then study an important problem in computer vision: inverse lighting from a single face image. Lacking massive ground truth lighting labels, we generate a large amount of synthetic data with ground truth lighting to train a deep network. However, due to the large domain gap between real and synthetic data, a network trained on synthetic data alone cannot generalize well to real data. We thus propose to train the deep CNN on real data together with synthetic data. We apply an existing method to estimate the lighting conditions of real face images; however, these lighting labels are noisy. We therefore propose a Label Denoising Adversarial Network (LDAN) that uses the synthetic data to help train a deep CNN to regress lighting from real face images while denoising the labels of the real images. We show that the proposed method generates more consistent lighting for faces taken under the same lighting condition.

    Third, we study how to relight a face image using deep CNNs. We formulate this problem as a supervised image-to-image translation problem. Due to the lack of an "in the wild" face dataset suitable for this task, we apply a physically based face relighting method to generate a large-scale, high-resolution, "in the wild" portrait relighting dataset (DPR). A deep CNN is then trained on this dataset to generate a relighted portrait image from a source image and a target lighting. We show that our training procedure regularizes the generated results, removing the artifacts caused by physically based relighting methods.

    Fourth, we study how to understand lighting in a natural scene from an RGB image. We propose a Global-Local Spherical Harmonics (GLoSH) lighting model to improve the lighting representation, and jointly predict reflectance and surface normals. The global SH models the holistic lighting while local SHs account for the spatial variation of lighting. A novel non-negative lighting constraint is proposed to encourage the estimated SHs to be physically meaningful. To seamlessly make use of the GLoSH model, we design a coarse-to-fine network structure. Lacking labels for reflectance and lighting, we use synthetic data for model pre-training and fine-tune the model on real data in a self-supervised way. We show that the proposed method outperforms state-of-the-art methods in understanding the lighting, reflectance and shading of a natural scene.
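    A small worked example of the second-order SH lighting at the heart of GLoSH, using the standard real SH basis (the coefficients below are toy values): the shading at a pixel is the dot product of 9 lighting coefficients with the 9 SH basis functions evaluated at the surface normal, and the global and local SH coefficients simply add.

    # Second-order spherical-harmonics shading at one surface normal.
    import numpy as np

    def sh_basis(n):
        # n: unit normal (x, y, z); returns the 9 second-order SH basis values.
        x, y, z = n
        return np.array([
            0.282095,                     # Y_00
            0.488603 * y,                 # Y_1-1
            0.488603 * z,                 # Y_10
            0.488603 * x,                 # Y_11
            1.092548 * x * y,             # Y_2-2
            1.092548 * y * z,             # Y_2-1
            0.315392 * (3 * z * z - 1),   # Y_20
            1.092548 * x * z,             # Y_21
            0.546274 * (x * x - y * y),   # Y_22
        ])

    sh_global = np.random.randn(9)        # toy global lighting coefficients
    sh_local = 0.1 * np.random.randn(9)   # toy local deviation at this pixel
    normal = np.array([0.0, 0.0, 1.0])
    shading = np.dot(sh_global + sh_local, sh_basis(normal))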

    Bridging Domain Gaps for Cross-Spectrum and Long-Range Face Recognition Using Domain Adaptive Machine Learning

    Get PDF
    Face recognition technology has witnessed significant advancements in recent decades, enabling its widespread adoption in applications such as security, surveillance, and biometrics. However, one of the primary challenges faced by existing face recognition systems is their limited performance when presented with images from different modalities or domains (such as infrared to visible, long range to close range, nighttime to daytime, or profile to frontal). Additionally, advancements in camera sensors, analytics beyond the visible spectrum, and the increasing size of cross-modal datasets have led to particular interest in cross-modal learning for face recognition in the biometrics and computer vision community. Despite the relatively large gap between source and target domains, existing approaches reduce or bridge such domain gaps either by synthesizing face imagery in the target domain from face imagery in the source domain, or by learning cross-modal image representations that are robust to both domains. This dissertation therefore presents the design and implementation of a novel domain adaptation framework leveraging robust image representations to achieve state-of-the-art performance in cross-spectrum and long-range face recognition. The proposed methods use machine learning and deep learning techniques to (1) efficiently extract and learn domain-invariant embeddings from face imagery, (2) learn a mapping from the source to the target domain, and (3) evaluate the proposed framework on several cross-modal face datasets.
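    A hedged sketch of what step (1) might look like, pulling visible and infrared embeddings of the same identity together via a cross-entropy over cosine similarities; the shared encoder, temperature and batch pairing are assumptions for illustration, not the dissertation's actual architecture:

    # Align visible and infrared embeddings so the representation
    # becomes domain-invariant across the spectrum gap.
    import torch
    import torch.nn.functional as F

    def domain_alignment_loss(emb_vis, emb_ir, labels):
        # L2-normalize so the dot product is cosine similarity.
        v = F.normalize(emb_vis, dim=1)
        r = F.normalize(emb_ir, dim=1)
        sim = v @ r.t()                              # (B, B) cross-domain similarities
        # Same-identity pairs across domains should score highest per row;
        # the temperature 0.07 is an assumed hyperparameter.
        return F.cross_entropy(sim / 0.07, labels)

    # Usage: emb_vis, emb_ir come from a shared encoder over paired batches,
    # with labels = torch.arange(batch_size) when row i matches column i.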