
    OutCast: Outdoor Single-image Relighting with Cast Shadows

    We propose a relighting method for outdoor images. Our method focuses on predicting cast shadows in arbitrary novel lighting directions from a single image, while also accounting for shading and global effects such as the sunlight color and clouds. Previous solutions to this problem rely on reconstructing occluder geometry, e.g. using multi-view stereo, which requires many images of the scene. Instead, in this work we use a noisy off-the-shelf single-image depth map estimate as a source of geometry. Whilst this can be a good guide for some lighting effects, the quality of the resulting depth map is insufficient for directly ray-tracing the shadows. To address this, we propose a learned image-space ray-marching layer that converts the approximate depth map into a deep 3D representation that is fused into occlusion queries using a learned traversal. Our proposed method achieves, for the first time, state-of-the-art relighting results with only a single image as input. For supplementary material visit our project page at: https://dgriffiths.uk/outcast
    Comment: Eurographics 2022 - Accepted
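    The paper's learned ray-marching layer cannot be reproduced from the abstract alone, but the classical baseline it improves on can be sketched: treat the depth map as a heightfield and march each pixel's ray towards the light, testing for taller occluders. All names and parameters below are illustrative; this is a hard-shadow sketch in NumPy, not the authors' method.

```python
import numpy as np

def cast_shadow_mask(height, light_dir, n_steps=64, step=1.0, bias=1e-3):
    """Classical image-space ray march over a heightfield.

    height:    (H, W) array, larger values = closer to the light ("up").
    light_dir: (dx, dy, dz) direction towards the light, dz > 0.
    Returns a boolean mask, True where a pixel lies in cast shadow.
    """
    H, W = height.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    dx, dy, dz = light_dir
    shadow = np.zeros((H, W), dtype=bool)
    for i in range(1, n_steps + 1):
        # Position along the ray in image space, and the ray's height there.
        px = xs + dx * step * i
        py = ys + dy * step * i
        ray_h = height + dz * step * i
        # Nearest-neighbour lookup; border pixels simply repeat at the edges.
        ix = np.clip(np.round(px).astype(int), 0, W - 1)
        iy = np.clip(np.round(py).astype(int), 0, H - 1)
        occluder = height[iy, ix]
        # A pixel is shadowed if any sample along the ray sits above it.
        shadow |= occluder > ray_h + bias
    return shadow
```

With a flat ground plane and a tall block, a light from the upper left at 45 degrees casts a shadow strip of length roughly equal to the block height on the block's far side; this is the brittle behaviour (noisy depth produces noisy shadows) that the paper's learned traversal is designed to overcome.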

    Shadow Detection and Removal using Artificial Neural Network

    Shadow detection and removal is an important task when dealing with colour images. Shadows are generated by a local and relative absence of light: a shadow appears on an area when the light from a source cannot reach it because of obstruction by an object. Shadows are, first of all, a local decrease in the amount of light that reaches a surface. Secondly, they are a local change in the amount of light reflected by a surface toward the observer. They cause problems in computer vision applications such as segmentation, object detection and object counting, so shadow detection and removal is a common preprocessing step in computer vision. This thesis proposes a simple method to detect and remove shadows from a single RGB image using an artificial neural network. A back-propagation artificial neural network classifier is trained and tested on the extracted features to detect shadows. Shadow removal is done by multiplying the shadow region by a constant.
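    The removal step described above (multiplying the shadow region by a constant) can be sketched in a few lines. Estimating the gain from the lit/shadow mean ratio is our assumption; the thesis does not specify how the constant is chosen.

```python
import numpy as np

def remove_shadow(img, shadow_mask, k=None):
    """Brighten the detected shadow region by a single gain factor.

    img:         (H, W, 3) float RGB image in [0, 1].
    shadow_mask: (H, W) boolean, True inside the shadow.
    k:           per-channel gain; if None, estimated as the ratio of
                 mean lit intensity to mean shadow intensity (assumed).
    """
    out = img.astype(float).copy()
    if k is None:
        lit_mean = img[~shadow_mask].reshape(-1, 3).mean(axis=0)
        sh_mean = img[shadow_mask].reshape(-1, 3).mean(axis=0)
        k = lit_mean / np.maximum(sh_mean, 1e-6)
    # Scale only the shadow pixels and clamp back into range.
    out[shadow_mask] = np.clip(out[shadow_mask] * k, 0.0, 1.0)
    return out
```

A single global constant works only when the shadow is roughly uniform; penumbra pixels at the shadow boundary are typically over- or under-corrected, which is one reason learned detectors are used upstream.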

    Automatic Recognition of Medieval Coins Using Computer Vision

    Master's dissertation in Informatics Engineering. The use of computer vision for identification and recognition of coins is well studied and of renowned interest. However, the focus of research has consistently been on modern coins, and the algorithms used give quite disappointing results when applied to ancient coins. This discrepancy is explained by the nature of ancient coins, which were minted by hand and show many variations, defects, ripples and centuries of degradation that further deform the characteristic patterns, making their identification a hard task even for humans. Another noteworthy factor in almost all similar studies is the controlled environment and uniform illumination across all images in the datasets. Though it makes sense to focus on the more problematic variables, this premise is impossible to meet outside the researchers' laboratory, and is therefore a problem that must be addressed. This dissertation focuses on medieval and ancient coin recognition in uncontrolled "real world" images, trying to pave the way to the use of the vast repositories of coin images all over the internet, which could be used to make our algorithms more robust. The first part of the dissertation proposes a fast and automatic method to segment ancient coins over complex backgrounds, using a Histogram Backprojection approach combined with edge detection methods. Results are compared against an automated version of the GrabCut algorithm. The proposed method achieves a Good or Acceptable rating on 76% of the images, taking an average of 0.29 s per image, against 49% in 19.58 s for GrabCut. Although this work is oriented to ancient coin segmentation, the method can also be used in other contexts involving flat objects with uniform colors. In the second part, several state-of-the-art machine learning algorithms are compared in search of the most promising approach to classify these challenging coins.
The best results are achieved using dense SIFT descriptors organized into Bags of Visual Words, and using Support Vector Machine or Naïve Bayes as machine learning strategies.
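    As a rough illustration of the segmentation idea, histogram backprojection scores each pixel by its probability under a histogram built from a sample region; pixels unlike the background sample are kept as foreground. The single-channel intensity version below is a simplification (the dissertation does not specify the exact channels or bin counts), and all names and thresholds are illustrative.

```python
import numpy as np

def backproject(image, sample, bins=32):
    """Score every pixel of `image` by its probability under a
    histogram of `sample` (e.g. a background patch).

    Both inputs are single-channel arrays with values in [0, 256).
    Low scores mark pixels unlike the sample -- here, the coin.
    """
    edges = np.linspace(0, 256, bins + 1)
    hist, _ = np.histogram(sample.ravel(), bins=edges, density=True)
    # Map each pixel to its histogram bin and look up the density.
    idx = np.clip(np.digitize(image.ravel(), edges) - 1, 0, bins - 1)
    return hist[idx].reshape(image.shape)

def segment_coin(image, bg_patch, thresh=1e-4):
    """Pixels with near-zero background probability are foreground."""
    return backproject(image, bg_patch) < thresh
```

The real method would work on colour histograms and combine the result with edge detection to sharpen the coin boundary; this sketch only conveys the backprojection scoring step.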

    The Hyper-log-chromaticity space for illuminant invariance

    Variation in illumination conditions through a scene is a common issue for classification, segmentation and recognition applications. Traffic monitoring and driver assistance systems struggle with changing illumination conditions at night and throughout the day, with multiple light sources (especially at night) and in the presence of shadows. The majority of existing algorithms for color constancy or shadow detection rely on multiple frames for comparison or to build a background model. The proposed approach uses a novel color space inspired by the Log-Chromaticity space and modifies the bilateral filter to equalize illumination across objects using a single frame. Neighboring pixels of the same color, but of different brightness, are assumed to belong to the same object/material. The utility of the algorithm is studied over simulated day and night scenes of varying complexity. The objective is not to produce an image for visual inspection but rather an alternate image with fewer illumination-related issues for other algorithms to process. The usefulness of the filter is demonstrated by applying two simple classifiers and comparing the class statistics. Both the hyper-log-chromaticity image and the filtered image improve the quality of the classification relative to the unprocessed image.
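    The underlying log-chromaticity idea can be shown compactly: dividing out one channel and taking logs removes uniform intensity changes, so lit and shadowed pixels of the same surface map to nearby coordinates. This is the standard 2-D log-chromaticity mapping, not the thesis's novel hyper-log-chromaticity space, whose construction is not given in the abstract.

```python
import numpy as np

def log_chromaticity(img, eps=1e-6):
    """Map an RGB image to 2-D log-chromaticity coordinates.

    u = log(R/G), v = log(B/G).  Uniformly scaling all channels
    (a pure brightness change, e.g. a soft shadow) cancels in the
    ratios, so (u, v) is invariant to intensity; only changes in
    illuminant colour move a pixel in this space.
    """
    r, g, b = (img[..., i].astype(float) + eps for i in range(3))
    return np.stack([np.log(r / g), np.log(b / g)], axis=-1)
```

A filter such as the modified bilateral filter in the abstract can then compare pixels in (u, v) to decide whether a brightness difference is a material edge or merely an illumination change.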

    Novel Approach for Detection and Removal of Moving Cast Shadows Based on RGB, HSV and YUV Color Spaces

    Cast shadows affect computer vision tasks such as image segmentation, object detection and tracking, since objects and their shadows share the same visual motion characteristics. This unavoidable problem degrades video surveillance system performance. The basic idea of this paper is to exploit the fact that shadows darken the surface upon which they are cast. To this end, we propose a simple and accurate method for detecting moving cast shadows based on chromatic properties in the RGB, HSV and YUV color spaces. The method requires no a priori assumptions regarding the scene or the lighting source. Starting from a normalization step, we apply a Canny filter to detect the boundary between self-shadow and cast shadow; this step is applied only to the first sequence. We then separate the background from moving objects using an improved version of the Gaussian mixture model. To remove the unwanted shadows completely, we use three change estimators computed from the intensity ratio in the HSV color space, chromaticity properties in the RGB color space, and the brightness ratio in the YUV color space. Only pixels that satisfy the thresholds of all three estimators are labelled as shadow and removed. Experiments carried out on various video databases show that the proposed system is robust and efficient and can precisely remove shadows for a wide class of environments without any assumptions. Experimental results also show that our approach outperforms existing methods and can run in real-time systems.
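    The combination of estimators can be sketched as follows: a pixel is labelled shadow only if it is darker than the background by a plausible ratio and its chromaticity is nearly unchanged. The code below uses a luma proxy and normalised-RGB chromaticity with illustrative thresholds; the paper's actual HSV/RGB/YUV estimators and threshold values are not given in the abstract.

```python
import numpy as np

def is_shadow(bg, fg, v_lo=0.3, v_hi=0.95, c_tol=0.08):
    """Label pixels as shadow when they only darken the background.

    bg, fg: (H, W, 3) float RGB in [0, 1] -- background model and frame.
    Two cues, loosely in the spirit of combining estimators from
    several colour spaces (thresholds here are illustrative):
      1. brightness ratio (mean-RGB luma proxy) within [v_lo, v_hi]
      2. chromaticity (normalised RGB) nearly unchanged
    """
    luma_bg = bg.mean(axis=-1)
    luma_fg = fg.mean(axis=-1)
    ratio = luma_fg / np.maximum(luma_bg, 1e-6)
    dark = (ratio >= v_lo) & (ratio <= v_hi)          # darker, not black
    sb = np.maximum(bg.sum(axis=-1, keepdims=True), 1e-6)
    sf = np.maximum(fg.sum(axis=-1, keepdims=True), 1e-6)
    chroma_same = np.abs(bg / sb - fg / sf).max(axis=-1) < c_tol
    return dark & chroma_same
```

An actual moving object usually changes chromaticity as well as brightness, so it fails the second cue; a pixel identical to the background fails the first (ratio near 1), so only darkened, colour-preserving pixels survive.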

    A multisensor SLAM for dense maps of large scale environments under poor lighting conditions

    This thesis describes the development and implementation of a multisensor large-scale autonomous mapping system for surveying tasks in underground mines. The hazardous nature of the underground mining industry has resulted in a push towards autonomous solutions to the most dangerous operations, including surveying tasks. Many existing autonomous mapping techniques rely on approaches to the Simultaneous Localization and Mapping (SLAM) problem which are not suited to the extreme characteristics of active underground mining environments. Our proposed multisensor system has been designed from the outset to address the unique challenges associated with underground SLAM. The robustness, self-containment and portability of the system maximize its potential applications.
    The multisensor mapping solution proposed as a result of this work is based on a fusion of omnidirectional bearing-only vision-based localization and 3D laser point cloud registration. By combining these two SLAM techniques it is possible to achieve some of the advantages of both approaches: the real-time attributes of vision-based SLAM and the dense, high-precision maps obtained with 3D lasers. The result is a viable autonomous mapping solution suitable for application in challenging underground mining environments.
    A further improvement to the robustness of the proposed multisensor SLAM system comes from incorporating colour information into vision-based localization. Underground mining environments are often dominated by dynamic sources of illumination which can cause inconsistent feature motion during localization. Colour information is used to identify and remove features resulting from illumination artefacts and to improve the monochrome-based feature matching between frames.
    Finally, the proposed multisensor mapping system is implemented and evaluated in both above-ground and underground scenarios. The resulting large-scale maps contained a maximum offset error of ±30 mm for mapping tasks over lengths of more than 100 m.

    Cross-Spectral Face Recognition Between Near-Infrared and Visible Light Modalities.

    In this thesis, the improvement of face recognition performance using images from the visible (VIS) and near-infrared (NIR) spectra is attempted. Face recognition systems can be adversely affected by scenarios involving significant illumination variation across images of the same subject. Cross-spectral face recognition systems using images collected across the VIS and NIR spectra can counter the ill effects of illumination variation by standardising both sets of images. A novel preprocessing technique is proposed which attempts to transform faces from both modalities into a feature space with enhanced correlation. Direct matching across the modalities is not possible due to the inherent spectral differences between NIR and VIS face images. Compared to a VIS light source, NIR radiation has a greater penetrative depth when incident on human skin. This fact, together with the greater number of scattering interactions within the skin undergone by rays from the NIR spectrum, can alter the apparent morphology of the human face enough to prevent a direct match with the corresponding VIS face. Several ways to bridge the gap between NIR and VIS faces have been proposed previously. Mostly data-driven, these techniques include standardised photometric normalisation techniques and subspace projections. A generative approach driven by a true physical model has not been investigated until now. In this thesis, it is proposed that a large proportion of the scattering interactions present in the NIR spectrum can be accounted for using a model of subsurface scattering. A novel subsurface scattering inversion (SSI) algorithm is developed that implements an inversion approach based on translucent surface rendering from computer graphics, whereby the reversal of the first-order effects of subsurface scattering is attempted.
    The SSI algorithm is then evaluated against several preprocessing techniques, using various permutations of feature extraction and subspace projection algorithms. This evaluation shows an improvement in cross-spectral face recognition performance using SSI over existing Retinex-based approaches. The combination with an existing photometric normalisation technique, Sequential Chain, performs best, with a Rank-1 recognition rate of 92.5%. In addition, the improved performance of non-linear projection models shows that an element of non-linearity exists in the relationship between NIR and VIS.
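    The Retinex-based approaches used as a baseline follow a simple recipe: subtract a log-blurred copy of the image from its log, removing slowly varying illumination while keeping reflectance detail. A minimal single-scale sketch is below (a box blur stands in for the usual Gaussian; the radius is illustrative, and this is not the thesis's SSI or Sequential Chain method).

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur by averaging shifted copies (wraps at edges)."""
    acc = np.zeros_like(img, dtype=float)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            n += 1
    return acc / n

def single_scale_retinex(img, radius=8, eps=1e-6):
    """log(I) - log(blur(I)): a standard illumination-normalisation step.

    The blurred image estimates the slowly varying illumination field;
    subtracting it in the log domain leaves (approximately) the
    reflectance, which is what should match across NIR and VIS images.
    """
    img = img.astype(float) + eps
    return np.log(img) - np.log(box_blur(img, radius) + eps)
```

On a scene whose illumination varies smoothly, the output is nearly constant wherever the reflectance is constant, which is the property that makes such normalisation useful before cross-spectral matching.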