
    Quantifying bioalbedo: a new physically based model and discussions of empirical methods for characterising biological influence on ice and snow albedo

    The darkening effects of biological impurities on ice and snow have been recognised as a control on the surface energy balance of terrestrial snow, sea ice, glaciers and ice sheets. With a heightened interest in understanding the impacts of a changing climate on snow and ice processes, quantifying the impact of biological impurities on ice and snow albedo (bioalbedo) and its evolution through time is a rapidly growing field of research. However, rigorous quantification of bioalbedo has remained elusive because of difficulties in isolating the biological contribution to ice albedo from that of inorganic impurities and the variable optical properties of the ice itself. For this reason, isolation of the biological signature in reflectance data obtained from aerial/orbital platforms has not been achieved, even when ground-based biological measurements have been available. This paper provides the cell-specific optical properties that are required to model the spectral signatures and broadband darkening of ice. Applying radiative transfer theory, these properties provide the physical basis needed to link biological and glaciological ground measurements with remotely sensed reflectance data. Using these new capabilities, we confirm that biological impurities can influence ice albedo and then identify 10 challenges to the measurement of bioalbedo in the field, with the aim of improving future experimental designs to better quantify bioalbedo feedbacks. These challenges are (1) ambiguity in terminology, (2) characterising snow or ice optical properties, (3) characterising solar irradiance, (4) determining optical properties of cells, (5) measuring biomass, (6) characterising vertical distribution of cells, (7) characterising abiotic impurities, (8) surface anisotropy, (9) measuring indirect albedo feedbacks, and (10) measurement and instrument configurations. This paper aims to provide a broad audience of glaciologists and biologists with an overview of radiative transfer and albedo that could support future experimental design.
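    To make the link between single-particle optical properties and surface albedo concrete, the following is a minimal sketch, not the model developed in the paper: it mixes illustrative ice-grain and algal-cell single-scattering properties into bulk values and applies a standard asymptotic approximation for the spherical albedo of a semi-infinite, weakly absorbing layer. The function names, the simple extinction-weighted mixing rule, and every numerical value are placeholder assumptions.

```python
import numpy as np

def mixture_single_scattering(omega_ice, g_ice, omega_cell, g_cell, f_cell):
    """Bulk single-scattering albedo and asymmetry for an ice/cell mixture.

    f_cell is the fraction of total extinction contributed by cells.
    All per-component values are illustrative placeholders, not measured data.
    """
    ext = np.array([1.0 - f_cell, f_cell])      # relative extinction weights
    omega = np.array([omega_ice, omega_cell])   # single-scattering albedos
    g = np.array([g_ice, g_cell])               # asymmetry parameters
    omega_bulk = np.sum(ext * omega) / np.sum(ext)
    # The asymmetry parameter is weighted by scattering, not by extinction.
    g_bulk = np.sum(ext * omega * g) / np.sum(ext * omega)
    return omega_bulk, g_bulk

def semi_infinite_albedo(omega_bulk, g_bulk):
    """Spherical albedo of a semi-infinite, weakly absorbing layer
    (standard asymptotic radiative-transfer approximation)."""
    y = 4.0 * np.sqrt((1.0 - omega_bulk) / (3.0 * (1.0 - g_bulk)))
    return np.exp(-y)

# Illustrative numbers only: clean snow vs. snow with a small algal contribution.
clean = semi_infinite_albedo(*mixture_single_scattering(0.9999, 0.89, 0.75, 0.95, f_cell=0.0))
algal = semi_infinite_albedo(*mixture_single_scattering(0.9999, 0.89, 0.75, 0.95, f_cell=0.02))
print(f"clean snow albedo ~{clean:.3f}, with cells ~{algal:.3f}")
```

    Even a small absorbing contribution from cells lowers the computed albedo noticeably, which is the qualitative effect the paper quantifies with measured cell-specific optical properties.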

    Studies of global cloud field using measurements of GOME, SCIAMACHY and GOME-2

    Tropospheric clouds are main players in the Earth's climate system. Characterization of long-term global and regional cloud properties aims to support trace-gas retrieval, radiative budget assessment, and analysis of interactions with particles in the atmosphere. The information needed for the determination of cloud properties can be optimally obtained with satellite remote sensing systems, because the amount of reflected solar light depends on both the macro- and micro-physical characteristics of clouds. At the time of writing, the spaceborne nadir-viewing Global Ozone Monitoring Experiment (GOME), together with the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) and GOME-2, makes available a unique record of almost 17 years (June 1996 through May 2012) of global top-of-atmosphere (TOA) reflectances, which forms the observational basis of this work. These instruments probe the atmosphere in the ultraviolet, visible and infrared regions of the electromagnetic spectrum. Specifically, in order to infer cloud properties such as optical thickness (COT), spherical albedo (CA), cloud base height (CBH) and cloud top height (CTH), TOA reflectances have been selected inside and around the strong absorption band of molecular oxygen in the 758-772 nm wavelength range (the O2 A-band). The retrieval is accomplished using the Semi-Analytical CloUd Retrieval Algorithm (SACURA), whose physical framework relies on the asymptotic parameterizations of radiative transfer. The generated record has been thoroughly verified against synthetic datasets as a function of cloud and surface parameters, sensing geometries, and instrumental specifications, and validated against ground-based retrievals. The error budget analysis shows that SACURA retrieves CTH with an average accuracy of ±400 m and COT within ±20% (given that COT > 5), and places CTH closer to ground-based radar-derived CTH than independent satellite-based retrievals do. In the considered time period the global average CTH is 5.2±3.0 km, for a corresponding average COT of 20.5±16.1 and CA of 0.62±0.11. Using linear least-squares techniques, the global trend in deseasonalized CTH has been found to be -1.78±2.14 m yr⁻¹ in the latitude belt ±60°, with diverging tendencies over land (+0.27±3.2 m yr⁻¹) and water (-2.51±2.8 m yr⁻¹) masses. The El Niño-Southern Oscillation (ENSO), observed through CTH and cloud fraction (CF) values over the Pacific Ocean, pulls clouds to lower altitudes, and it is argued that ENSO must be removed for trend analysis. The global ENSO-cleaned trend in CTH amounts to -0.49±2.22 m yr⁻¹. At the global scale, no explicit patterns of statistically significant trends (at the 95% confidence level, estimated with a bootstrap resampling technique) representative of particular modes of natural climate variability have been found. One exception is the Sahara region, which exhibits the strongest upward trend in CTH, sustained by an increasing trend in water vapor. The representativeness of every trend is affected by the record length under study, and 17 years of cloud data might still not be enough to provide decisive answers to current open questions involving clouds. The algorithm used in this work can be applied to measurements from future Earth observation missions; in this way, the existing cloud record will be extended, enabling attribution of cloud property changes to natural or human causes and assessment of the sign of the cloud feedback within the climate system.
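    The trend methodology summarized above (deseasonalization of the monthly record, a linear least-squares fit, and bootstrap resampling for the 95% confidence level) can be sketched as follows. This is an illustrative reconstruction run on a synthetic monthly CTH series, not the thesis code; all names and numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly cloud-top-height series (metres), for illustration only.
n_months = 17 * 12
t = np.arange(n_months)
seasonal = 150.0 * np.sin(2 * np.pi * t / 12.0)
cth = 5200.0 + seasonal - 0.15 * t + rng.normal(0.0, 120.0, n_months)

# Deseasonalize: subtract the mean annual cycle (monthly climatology).
climatology = np.array([cth[m::12].mean() for m in range(12)])
anomaly = cth - climatology[t % 12]

# Linear least-squares trend, expressed in metres per year.
years = t / 12.0
slope, intercept = np.polyfit(years, anomaly, 1)

# Bootstrap resampling of the residuals for a 95% confidence interval on the slope.
residuals = anomaly - (intercept + slope * years)
boot_slopes = []
for _ in range(2000):
    resampled = intercept + slope * years + rng.choice(residuals, size=n_months, replace=True)
    boot_slopes.append(np.polyfit(years, resampled, 1)[0])
lo, hi = np.percentile(boot_slopes, [2.5, 97.5])
print(f"trend = {slope:.2f} m/yr, 95% CI [{lo:.2f}, {hi:.2f}]")
```

    A trend whose bootstrap interval includes zero, as for the global values quoted above, would not be flagged as statistically significant at the 95% level.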

    Using Appearance for Efficient Rendering and Editing of Captured Scenes

    Computer graphics strives to render synthetic images identical to real photographs. Multiple rendering algorithms have been developed for the better part of the last half-century. Traditional algorithms use 3D assets manually generated by artists to render a scene. While the initial scenes were quite simple, the field has developed complex representations of geometry, material and lighting: the three basic components of a 3D scene. Generating such complex assets is hard and requires significant time and skill from professional 3D artists. In addition to asset generation, the rendering algorithms themselves involve complex simulation techniques to solve for global light transport in a scene, which costs further time. As the ease of capturing photographs improved, Image-based Rendering (IBR) emerged as an alternative to traditional rendering. Using captured images as input became much faster than generating traditional scene assets. Initial IBR algorithms focused on creating a scene model from the input images in order to interpolate or warp them and enable free-viewpoint navigation of captured scenes. With time, the scene models became more complex, and using a geometric proxy computed from the input images became an integral part of IBR. Today, meshes reconstructed with Structure-from-Motion (SfM) and Multi-view Stereo (MVS) techniques are widely used in IBR, even though they introduce significant artifacts due to noisy reconstruction. In this thesis we first propose a novel image-based rendering algorithm, which focuses on rendering a captured scene with good quality at interactive frame rates. We study the artifacts of previous IBR algorithms and propose an algorithm that builds upon previous work to remove them. The algorithm utilizes surface appearance in order to treat view-dependent regions differently from diffuse regions. Our Hybrid-IBR algorithm performs favorably against classical and modern IBR approaches for a wide variety of scenes in terms of quality and/or speed. While IBR provides solutions to render a scene, editing it is hard. Editing scenes requires estimating a scene's geometry, material appearance and illumination. As our second contribution, we explicitly estimate scene-scale material parameters from a set of captured photographs to enable scene editing. While commercial photogrammetry solutions recover diffuse texture to aid 3D artists in generating material assets manually, we aim to automatically create material texture atlases from captured images of a scene. We take advantage of the visual cues provided by the multi-view observations: feeding them to a Convolutional Neural Network (CNN), we obtain material maps for each view. Using the predicted maps, we create multi-view consistent material texture atlases by aggregating the information in texture space. Using our automatically generated material texture atlases, we demonstrate relighting and object insertion in real scenes. Learning-based tasks require large amounts of varied data to be learned efficiently. Training on synthetic datasets is the norm, but using traditional rendering to produce large datasets is time-consuming and provides limited variability. We propose a new neural-rendering-based approach that learns a neural scene representation with variability and uses it to generate large amounts of data on the fly at a significantly faster rate. We demonstrate the advantage of using neural rendering compared to traditional rendering in terms of the speed of dataset generation, as well as for learning auxiliary tasks given the same computational budget.
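    The texture-space aggregation of per-view CNN material predictions described above can be illustrated with a minimal sketch. This is not the thesis implementation; the weighting scheme, array layouts and function name are assumptions, and in practice the per-pixel UV coordinates would come from the reconstructed mesh's parameterization.

```python
import numpy as np

def aggregate_material_atlas(per_view_maps, per_view_texel_uv, per_view_weights,
                             atlas_res=1024, n_channels=4):
    """Fuse per-view material predictions into one multi-view consistent atlas.

    per_view_maps     : list of (H, W, n_channels) CNN material predictions
    per_view_texel_uv : list of (H, W, 2) atlas coordinates (u, v in [0, 1))
                        for each image pixel, e.g. from the mesh UV mapping
    per_view_weights  : list of (H, W) confidence weights, e.g. favouring
                        views that see the surface head-on
    """
    atlas = np.zeros((atlas_res, atlas_res, n_channels))
    weight_sum = np.zeros((atlas_res, atlas_res, 1))

    for maps, uv, w in zip(per_view_maps, per_view_texel_uv, per_view_weights):
        cols = np.clip((uv[..., 0] * atlas_res).astype(int), 0, atlas_res - 1)
        rows = np.clip((uv[..., 1] * atlas_res).astype(int), 0, atlas_res - 1)
        # Weighted scatter-add of every pixel's prediction into its texel.
        np.add.at(atlas, (rows, cols), maps * w[..., None])
        np.add.at(weight_sum, (rows, cols), w[..., None])

    # Normalize by accumulated weight; untouched texels stay zero.
    return atlas / np.maximum(weight_sum, 1e-8)
```

    Weighted averaging in texture space is one simple way to reconcile disagreeing per-view predictions; the resulting atlas can then be used for relighting or object insertion as described above.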

    Algorithm theoretical basis document
