2,502 research outputs found

    Learning to Dehaze from Realistic Scene with A Fast Physics-based Dehazing Network

    Dehazing has long been a popular computer vision topic. A real-time dehazing method with reliable performance is highly desired for many applications such as autonomous driving. While recent learning-based methods require datasets containing pairs of hazy images and clean ground-truth references, it is generally impossible to capture accurate ground truth in real scenes. Many existing works sidestep this difficulty by generating hazy images from common RGB-D datasets, rendering haze from depth using the haze imaging model. However, a gap remains between these synthetic datasets and real hazy images, because large datasets with high-quality depth are mostly indoor and outdoor depth maps are imprecise. In this paper, we complement the existing datasets with a new, large, and diverse dehazing dataset containing real outdoor scenes from High-Definition (HD) 3D movies. We select a large number of high-quality frames of real outdoor scenes and render haze on them using depth from stereo. Our dataset is more realistic than existing ones, and we demonstrate that using it greatly improves dehazing performance on real scenes. In addition to the dataset, we also propose a lightweight and reliable dehazing network inspired by the physics model. Our approach outperforms other methods by a large margin and sets a new state of the art. Moreover, the lightweight design of the network enables our method to run at real-time speed, much faster than other baseline methods.
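The haze imaging model used to render synthetic haze from depth admits a compact sketch. Assuming the standard atmospheric scattering formulation I = J·t + A·(1 − t) with transmission t = exp(−β·d), haze can be rendered on a clean image roughly as follows (the scattering coefficient β and airlight A here are illustrative values, not the paper's):

```python
import numpy as np

def render_haze(clean, depth, beta=1.0, airlight=0.8):
    """Render synthetic haze with the standard atmospheric scattering
    model I = J*t + A*(1 - t), where t = exp(-beta * depth).
    clean: (H, W, 3) image in [0, 1]; depth: (H, W) depth map."""
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission
    return clean * t + airlight * (1.0 - t)

# usage: a tiny uniform image at depth 1 everywhere
clean = np.full((2, 2, 3), 0.5)
hazy = render_haze(clean, np.ones((2, 2)), beta=1.0, airlight=0.8)
```

Distant pixels (large depth) have low transmission and converge toward the airlight color, which is what produces the characteristic washed-out look of haze.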

    Mapping and Deep Analysis of Image Dehazing: Coherent Taxonomy, Datasets, Open Challenges, Motivations, and Recommendations

    Our study aims to review and analyze the most relevant studies in the image dehazing field. Many aspects have been deemed necessary to provide a broad understanding of the various studies examined in the existing literature. These aspects are as follows: datasets that have been used in the literature, challenges that other researchers have faced, motivations, and recommendations for diminishing the obstacles reported in the literature. A systematic protocol is employed to search all relevant articles on image dehazing, with variations in keywords, in addition to searching for evaluation and benchmark studies. The search process is established on three online databases, namely, IEEE Xplore, Web of Science (WOS), and ScienceDirect (SD), from 2008 to 2021. These indices are selected because they are sufficient in terms of coverage. After defining the inclusion and exclusion criteria, we include 152 articles in the final set. A total of 55 out of 152 articles focused on various studies that conducted image dehazing, and 13 out of 152 covered review papers based on scenarios and general overviews. Finally, most of the included articles centered on the development of image dehazing algorithms for real-time scenarios (84/152 articles). Image dehazing removes unwanted visual effects and is often considered an image enhancement technique, which requires a fully automated algorithm that works in real-time outdoor applications, a reliable evaluation method, and datasets based on different weather conditions. Many relevant studies have been conducted to meet these critical requirements. We conducted an experimental comparison of various image dehazing algorithms using objective image quality assessment. In conclusion, unlike other review papers, our study distinctly reflects different observations on image dehazing areas. We believe that the results of this study can serve as a useful guideline for practitioners who are looking for a comprehensive view of image dehazing.
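As one concrete instance of the objective image quality assessment the survey mentions, PSNR is a standard full-reference metric; a minimal sketch (the survey's exact metric suite is not specified here, so this is illustrative):

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB, a common full-reference
    objective image quality metric (higher is better).
    peak: maximum possible pixel value (1.0 for normalized images)."""
    mse = np.mean((reference - test) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 0.1        # uniform error of 0.1 -> MSE = 0.01
score = psnr(ref, noisy) # 20.0 dB
```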

    Learning geometric and lighting priors from natural images

    Understanding images is needed for a plethora of tasks, from compositing to image relighting, including 3D object reconstruction. These tasks allow visual artists to create masterpieces or help operators make safe decisions based on visual stimuli. For many of these tasks, the physical and geometric models that the scientific community has developed give rise to ill-posed problems with several solutions, of which generally only one is reasonable. To resolve these indeterminations, reasoning about the visual and semantic context of a scene is usually left to an artist or an expert who draws on experience to carry out the work. This is because it is generally necessary to reason globally about the scene in order to obtain plausible and appreciable results. Would it be possible to model this experience from visual data and partly or fully automate these tasks? That is the topic of this thesis: modeling priors using deep machine learning to solve typically ill-posed problems. More specifically, we cover three research axes: 1) surface reconstruction using photometric cues, 2) outdoor illumination estimation from a single image, and 3) camera calibration estimation from a single image with generic content. These three topics are addressed from a data-driven perspective. Each of these axes includes in-depth performance analyses and, despite the reputed opacity of deep machine learning algorithms, we offer studies on the visual cues captured by our methods.
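The first axis, surface reconstruction from photometric cues, can be illustrated with classic Lambertian photometric stereo: given three or more images of a pixel under known light directions, the scaled normal is recovered by least squares. This sketch shows the textbook formulation, not the thesis's actual method:

```python
import numpy as np

def photometric_stereo(intensities, lights):
    """Lambertian photometric stereo for one pixel.
    intensities: (k,) observed values; lights: (k, 3) unit light directions.
    Solves I = L @ (albedo * n) in the least-squares sense and splits the
    result into a unit normal and an albedo."""
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# synthetic check: a flat surface (normal = +z) with albedo 0.7
lights = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
obs = lights @ (0.7 * np.array([0.0, 0.0, 1.0]))  # Lambertian measurements
n, rho = photometric_stereo(obs, lights)
```

The problem is only well posed because the light directions are known; with unknown lighting it becomes ill-posed, which is exactly the kind of ambiguity learned priors aim to resolve.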

    Aetherius: Real-Time Volumetric Cloud Generation Tool for Unity

    This thesis describes the development of Aetherius, a Unity tool that can dynamically generate and visualize virtually endless, unique cloudscapes in real time. The resulting tool can be used in video games to easily and quickly create immersive, dynamic skies without spending resources on the development of a dedicated system. Developing a volumetric cloud system is complicated, and small studios in particular do not have the resources to create such systems for their skies. The objective of this project is to provide an accessible, easy-to-use alternative that lets small studios and indie developers turn static, boring, featureless skies into high-quality ones. This document describes the problems encountered during the development of the tool and the techniques used to generate, render, and optimize cloudscapes; to test the tool's usefulness, the project includes the creation of a small demo application.
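The core of most volumetric cloud renderers of this kind is a ray march that accumulates optical depth through the cloud density field and converts it to transmittance via the Beer-Lambert law. A minimal sketch, with a stand-in density function in place of the noise-based cloud lookup a real system would use:

```python
import math

def march_transmittance(density_fn, origin, step=0.1, steps=100):
    """Minimal ray march through a 1D density field: accumulate optical
    depth along the ray, then apply Beer-Lambert, T = exp(-optical_depth).
    density_fn is a placeholder for the Perlin/Worley-noise cloud density
    lookup a production system would sample in 3D."""
    optical_depth = 0.0
    for i in range(steps):
        t = origin + i * step
        optical_depth += density_fn(t) * step  # density * segment length
    return math.exp(-optical_depth)

# usage: constant density 0.5 over a ray of length 10 -> T = exp(-5)
T = march_transmittance(lambda t: 0.5, 0.0, step=0.1, steps=100)
```

Real-time systems spend most of their optimization effort here: fewer, adaptively spaced steps, early exit once transmittance is near zero, and temporal reprojection.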

    Identification of urban surface materials using high-resolution hyperspectral aerial imagery

    Knowledge of surface cover materials is crucial for urban planning and management. With advances in remote sensing, especially in high spatial and spectral resolution imagery, the identification and detailed mapping of surface materials in urban areas based on spectral signatures are now feasible. Spectral signatures describe the interactions between ground objects and solar radiation and are assumed to be unique for each type of material. In this research, we use airborne CASI images with 1 m2 spatial resolution and 96 contiguous bands in a spectral range between 367 nm and 1044 nm. These images covering the island of Montreal (Quebec, Canada), acquired in 2016, were analyzed to identify urban surface materials. The objectives of the project were, first, to find a correspondence between the physical and chemical characteristics of typical surface materials present in the Montreal scenes and the spectral signatures within the images, and second, to develop a sound methodology for identifying these surface materials in urban landscapes. To reach these objectives, our method of analysis is based on comparing pixel spectral signatures to those contained in a reference spectral library describing typical surface covering materials (inert materials and vegetation). Two metrics were used to measure the correspondence between a pixel spectral signature and a reference spectral signature. The first metric considers the shape of a spectral signature, and the second the difference in reflectance values between the observed and reference spectral signatures. 
    A fuzzy classifier using these two metrics is then applied to recognize the type of material on a per-pixel basis. Typical spectral signatures were extracted from two spectral libraries (ASTER and HYPERCUBE). Spectral signatures of typical objects in Montreal measured on the ground (ASD spectroradiometer) were also used as reference spectra. Three general types of surface materials (asphalt, concrete, and vegetation) were used to ease the comparison between classifications using these spectral libraries. The classification using ASTER as a reference library had the highest success rate at 92%, followed by the field spectra at 88%, and finally HYPERCUBE at 80%. There were no significant differences between the classification results, indicating that the methodology works independently of the source of the reference spectral signatures.
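The two metrics can be sketched with standard formulations: the spectral angle for shape (invariant to overall brightness) and an RMS reflectance difference for magnitude. This is an illustrative reading of the abstract, not the thesis's exact definitions:

```python
import numpy as np

def spectral_angle(s, r):
    """Shape metric: angle in radians between pixel spectrum s and
    reference spectrum r; unchanged if s is scaled by a constant."""
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def reflectance_rmse(s, r):
    """Magnitude metric: root-mean-square difference of reflectances."""
    return float(np.sqrt(np.mean((s - r) ** 2)))

# usage: a pixel twice as bright as the reference has identical shape
ref = np.array([0.2, 0.4, 0.6])
pixel = 2.0 * ref
angle = spectral_angle(pixel, ref)   # ~0: shapes match
rmse = reflectance_rmse(pixel, ref)  # nonzero: magnitudes differ
```

A fuzzy classifier can then combine low angle and low RMSE into a membership score per material class, assigning the class with the highest combined membership.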

    Compression, Modeling, and Real-Time Rendering of Realistic Materials and Objects

    The realism of a scene essentially depends on the quality of the geometry, the illumination, and the materials used. Whereas many sources for the creation of three-dimensional geometry exist and numerous algorithms for the approximation of global illumination have been presented, the acquisition and rendering of realistic materials remains a challenging problem. Realistic materials are very important in computer graphics because they describe the reflectance properties of surfaces, which are based on the interaction of light and matter. In the real world, an enormous diversity of materials can be found, comprising very different properties. One important objective in computer graphics is to understand these processes, to formalize them, and finally to simulate them. Various analytical models already exist for this purpose, but their parameterization remains difficult as the number of parameters is usually very high, and they fail for very complex materials that occur in the real world. Measured materials, on the other hand, suffer from long acquisition times and huge input data sizes. Although very efficient statistical compression algorithms have been presented, most of them do not allow for editability, such as altering the diffuse color or mesostructure. In this thesis, a material representation is introduced that makes it possible to edit these features, so that acquisition results can be re-used to easily and quickly create variations of the original material. These variations may be subtle but also substantial, allowing for a wide spectrum of material appearances. The approach presented in this thesis is not based on compression but on a decomposition of the surface into several materials with different reflection properties. Based on a microfacet model, the light-matter interaction is represented by a function that can be stored in an ordinary two-dimensional texture. 
    Additionally, depth information, local rotations, and the diffuse color are stored in these textures. As a result of the decomposition, some of the original information is inevitably lost; therefore, an algorithm for the efficient simulation of subsurface scattering is presented as well. Another contribution of this work is a novel perception-based simplification metric that includes the material of an object. This metric incorporates features of the human visual system, for example trichromatic color perception and reduced peripheral resolution. The proposed metric allows for more aggressive simplification in regions where geometric metrics do not simplify.
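As an illustration of the kind of term a microfacet model evaluates, here is the widely used GGX (Trowbridge-Reitz) normal distribution function; the thesis's specific model is not stated in the abstract, so this is a generic example:

```python
import math

def ggx_ndf(cos_theta_h, alpha):
    """GGX/Trowbridge-Reitz normal distribution function D(h).
    cos_theta_h: cosine between surface normal and half-vector;
    alpha: roughness (0 = mirror-like, 1 = very rough).
    Larger D means more microfacets are aligned with the half-vector."""
    a2 = alpha * alpha
    denom = cos_theta_h * cos_theta_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)
```

Terms like this are cheap to evaluate per pixel, and their inputs (roughness, local rotation, diffuse color) are exactly the sort of per-texel data the described representation stores in ordinary 2D textures.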

    How Universal is the Relationship Between Remotely Sensed Vegetation Indices and Crop Leaf Area Index? A Global Assessment

    Leaf Area Index (LAI) is a key variable that bridges remote sensing observations to the quantification of agroecosystem processes. In this study, we assessed the universality of the relationships between crop LAI and remotely sensed Vegetation Indices (VIs). We first compiled a global dataset of 1459 in situ quality-controlled crop LAI measurements and collected Landsat satellite images to derive five different VIs: Simple Ratio (SR), Normalized Difference Vegetation Index (NDVI), two versions of the Enhanced Vegetation Index (EVI and EVI2), and Green Chlorophyll Index (CIgreen). Based on this dataset, we developed global LAI-VI relationships for each crop type and VI using symbolic regression and the Theil-Sen (TS) robust estimator. Results suggest that the global LAI-VI relationships are statistically significant, crop-specific, and mostly non-linear. These relationships explain more than half of the total variance in ground LAI observations (R2 greater than 0.5) and provide LAI estimates with RMSE below 1.2 m2/m2. Among the five VIs, EVI/EVI2 are the most effective, and the crop-specific LAI-EVI and LAI-EVI2 relationships constructed by TS are robust when tested against three independent validation datasets of varied spatial scales. While the heterogeneity of agricultural landscapes leads to a diverse set of local LAI-VI relationships, the relationships provided here hold globally on an average basis, allowing the generation of large-scale, spatially explicit LAI maps. This study contributes to the operationalization of large-area crop modeling and, by extension, has relevance to both fundamental and applied agroecosystem research.
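Two of the indices are simple band combinations; the standard NDVI and two-band EVI2 formulas, computed from red and near-infrared surface reflectance, can be sketched as:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def evi2(nir, red):
    """Two-band Enhanced Vegetation Index:
    2.5 * (NIR - Red) / (NIR + 2.4 * Red + 1)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

# usage with illustrative surface reflectances for a vegetated pixel
v_ndvi = ndvi(0.45, 0.05)  # ~0.8
v_evi2 = evi2(0.45, 0.05)
```

EVI2 was designed to saturate less than NDVI over dense canopies, which is consistent with the study's finding that EVI/EVI2 relate most robustly to LAI.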

    Photorealistic physically based render engines: a comparative study

    Pérez Roig, F. (2012). Photorealistic physically based render engines: a comparative study. http://hdl.handle.net/10251/14797

    ARRAY BASED FREE SPACE OPTIC SYSTEM FOR TACTICAL COMMUNICATIONS

    Free-space optical (FSO) communications offer a resilient and flexible alternative to current radio technologies, which are increasingly threatened by our peer adversaries. FSO provides many advantages over radio, including higher bandwidth capability and increased security through its low probability of detection (LPD) and low probability of interception (LPI) characteristics. However, current FSO systems are limited in range by line-of-sight requirements and suffer loss from atmospheric attenuation. This thesis proposes the use of arrayed optical emitters for FSO communication by developing a link-layer protocol that leverages the inherent error correction of quick response (QR) encoding to increase bandwidth and overcome atmospheric loss. Through the testing of a system built with commercial off-the-shelf equipment and a survey of current optical transmitter and receiver technology, this link-layer protocol was validated and estimated to provide data rates similar to current single-emitter FSO systems. Various limitations were discovered in the current structure of the protocol. Future work should be conducted to correct inefficiencies in the QR encoding format when applied to a transmission medium. Additionally, technological advancements in hardware, including the large-scale production of VCSELs and faster high-speed cameras, must be achieved before such an FSO system would be viable for large-scale use.
    http://archive.org/details/arraybasedfreesp1094559655
    Captain, United States Marine Corps
    Approved for public release; distribution is unlimited
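The link-layer idea, splitting a payload into sequenced, integrity-checked frames that the optical layer then renders as QR codes, can be sketched as follows; the field sizes and CRC32 check here are illustrative choices, not the thesis's actual protocol:

```python
import zlib

def make_frames(payload: bytes, chunk_size: int = 128):
    """Illustrative link-layer framing: split a payload into chunks, each
    prefixed with a 2-byte sequence number, 2-byte total frame count, and
    4-byte CRC32 so the receiver can detect loss or corruption. Each frame
    would then be encoded as one QR code by the optical transmitter."""
    chunks = [payload[i:i + chunk_size]
              for i in range(0, len(payload), chunk_size)]
    total = len(chunks)
    return [seq.to_bytes(2, 'big') + total.to_bytes(2, 'big')
            + zlib.crc32(c).to_bytes(4, 'big') + c
            for seq, c in enumerate(chunks)]

def parse_frame(frame: bytes):
    """Decode one frame; raises if the CRC check fails."""
    seq = int.from_bytes(frame[0:2], 'big')
    total = int.from_bytes(frame[2:4], 'big')
    crc = int.from_bytes(frame[4:8], 'big')
    data = frame[8:]
    if zlib.crc32(data) != crc:
        raise ValueError("corrupt frame")
    return seq, total, data

frames = make_frames(b'hello optical world', chunk_size=8)
```

On top of this, QR encoding adds its own Reed-Solomon error correction per frame, which is the redundancy the thesis leverages to tolerate atmospheric loss.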