    Semantic Visual Localization

    Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.
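    A minimal sketch of the retrieval step such a method could build on: once descriptors are learned, localization reduces to matching a query descriptor against the mapped ones. The generative descriptor model itself is not reproduced here, and all names are hypothetical.

        import numpy as np

        def localize(query_desc, map_descs, map_poses):
            """Return the mapped pose whose descriptor best matches the query.

            query_desc: (D,) descriptor of the current observation
            map_descs:  (N, D) descriptors of mapped locations
            map_poses:  length-N list of poses for those locations
            """
            # Cosine similarity is a common choice for learned descriptors.
            q = query_desc / np.linalg.norm(query_desc)
            m = map_descs / np.linalg.norm(map_descs, axis=1, keepdims=True)
            return map_poses[int(np.argmax(m @ q))]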

    The 27-28 October 1986 FIRE IFO cirrus case study: Cirrus parameter relationships derived from satellite and lidar data

    Cirrus cloud radiative and physical characteristics are determined using a combination of ground-based, aircraft, and satellite measurements taken as part of the First ISCCP Regional Experiment (FIRE) Cirrus Intensive Field Observations (IFO) during October and November 1986. Lidar backscatter data are used to define cloud base, center, and top heights and the corresponding temperatures. Coincident GOES 4 km visible (0.65 microns) and 8 km infrared window (11.5 microns) radiances are analyzed to determine cloud emittances and reflectances. Infrared optical depth is computed from the emittance results. Visible optical depth is derived from reflectance using a theoretical ice crystal scattering model and an empirical bidirectional reflectance model. No clouds with visible optical depths greater than 5 or infrared optical depths less than 0.1 were used in the analysis. Average cloud thickness ranged from 0.5 km to 8 km for the 71 scenes. An average visible scattering efficiency of 2.1 was found for this data set. The results reveal a significant dependence of scattering efficiency on cloud temperature.
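    The optical-depth relations above can be made concrete. Assuming the standard emittance relation for a cloud layer, tau_ir = -ln(1 - eps), and taking the reported average visible scattering efficiency of 2.1 as the ratio tau_vis / tau_ir (an assumption about how the study defines it), a short sketch:

        import numpy as np

        def ir_optical_depth(emittance):
            # Standard relation eps = 1 - exp(-tau) for an absorbing layer.
            return -np.log(1.0 - emittance)

        def visible_optical_depth(tau_ir, scattering_efficiency=2.1):
            # 2.1 is the data-set average reported in the abstract.
            return scattering_efficiency * tau_ir

        print(visible_optical_depth(ir_optical_depth(0.6)))  # ~1.92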

    Study of time-lapse processing for dynamic hydrologic conditions

    The usefulness of dynamic display techniques in exploiting the repetitive nature of ERTS imagery was investigated. A specially designed Electronic Satellite Image Analysis Console (ESIAC) was developed and employed to process data for seven ERTS principal investigators studying dynamic hydrological conditions for diverse applications. These applications include measurement of snowfield extent and sediment plumes from estuary discharge, playa lake inventory, and monitoring of phreatophyte and other vegetation changes. The ESIAC provides facilities for storing registered image sequences in a magnetic video disc memory for subsequent recall, enhancement, and animated display in monochrome or color. The most distinctive feature of the system is its capability to time-lapse the imagery and analytic displays of the imagery. Data products included quantitative measurements of distances and areas, binary thematic maps based on monospectral or multispectral decisions, radiance profiles, and movie loops. Applications of animation for uses other than creating time-lapse sequences are identified. Input to the ESIAC can be either digital data or photographic transparencies.
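    As an illustration of the binary thematic maps and area measurements described above, a hedged numpy sketch (the function names and threshold decision are hypothetical, not the ESIAC's actual processing):

        import numpy as np

        def thematic_map(band, threshold):
            # Monospectral decision: a single radiance threshold yields a
            # binary map (e.g. snow / not snow).
            return band > threshold

        def mapped_area(binary_map, pixel_area_km2):
            # Quantitative area estimate: "on" pixels times pixel footprint.
            return binary_map.sum() * pixel_area_km2

        def time_lapse(frames):
            # Registered frames stacked in acquisition order form a movie loop.
            return np.stack(frames, axis=0)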

    Learning geometric and lighting priors from natural images

    Understanding images is needed for a plethora of tasks, from compositing to image relighting, including 3D object reconstruction. These tasks allow visual artists to realize masterpieces or help operators make decisions safely based on visual stimuli. For many of these tasks, the physical and geometric models that the scientific community has developed give rise to ill-posed problems with several solutions, only one of which is generally reasonable. Resolving these indeterminations requires reasoning about the visual and semantic context of a scene, which is usually left to an artist or an expert who draws on experience to carry out the work; this is because obtaining plausible and appreciable results generally requires reasoning about the scene globally. Would it be possible to model this experience from visual data and partly or fully automate these tasks? This is the topic of this thesis: modeling priors using deep machine learning to solve typically ill-posed problems. More specifically, we cover three research axes: 1) surface reconstruction using photometric cues, 2) outdoor illumination estimation from a single image, and 3) camera calibration estimation from a single image with generic content. These three topics are addressed from a data-driven perspective. Each of these axes includes in-depth performance analyses and, despite the reputation for opacity of deep machine learning algorithms, we offer studies on the visual cues captured by our methods.

    The analysis of polar clouds from AVHRR satellite data using pattern recognition techniques

    The cloud cover in a set of summertime and wintertime AVHRR data from the Arctic and Antarctic regions was analyzed using a pattern recognition algorithm. The data were collected by the NOAA-7 satellite on 6 to 13 Jan. and 1 to 7 Jul. 1984 between 60 deg and 90 deg north and south latitude in 5 spectral channels, at the Global Area Coverage (GAC) resolution of approximately 4 km. These data formed a Polar Cloud Pilot Data Set that was analyzed by a number of research groups as part of a polar cloud algorithm intercomparison study. This study was intended to determine whether the additional information contained in the AVHRR channels (beyond the standard visible and infrared bands on geostationary satellites) could be effectively utilized in cloud algorithms to resolve some of the cloud detection problems caused by low visible and thermal contrasts in the polar regions. The analysis described makes use of a pattern recognition algorithm that estimates the surface and cloud classification, cloud fraction, and surface and cloudy visible (channel 1) albedo and infrared (channel 4) brightness temperatures on a 2.5 x 2.5 deg latitude-longitude grid. In each grid box several spectral and textural features were computed from the calibrated pixel values in the multispectral imagery, then used to classify the region into one of eighteen surface and/or cloud types using the maximum likelihood decision rule. A slightly different version of the algorithm was used for each season and hemisphere because of differences in categories and because of the lack of visible imagery during winter. The classification of the scene is used to specify the optimal AVHRR channel for separating clear and cloudy pixels using a hybrid histogram-spatial coherence method. This method estimates values for cloud fraction, clear and cloudy albedos, and brightness temperatures in each grid box. The choice of a class-dependent AVHRR channel allows for better separation of clear and cloudy pixels than does a global choice of a visible and/or infrared threshold. The classification also prevents erroneous estimates of large fractional cloudiness in areas of cloud-free snow and sea ice. The hybrid histogram-spatial coherence technique and the advantages of first classifying a scene in the polar regions are detailed. The complete Polar Cloud Pilot Data Set was analyzed and the results are presented and discussed.
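    The classification step applies the maximum likelihood decision rule to the spectral and textural features. Under the usual Gaussian assumption (the abstract does not state the class-conditional model, so this is assumed), that rule looks like the following sketch, with per-class means and covariances estimated from labeled training regions:

        import numpy as np

        def ml_classify(x, means, covs, priors=None):
            """Gaussian maximum-likelihood decision rule over K classes.

            x: (D,) feature vector; means: (K, D); covs: (K, D, D).
            """
            K = len(means)
            priors = np.full(K, 1.0 / K) if priors is None else priors
            scores = []
            for k in range(K):
                diff = x - means[k]
                _, logdet = np.linalg.slogdet(covs[k])
                maha = diff @ np.linalg.solve(covs[k], diff)
                # Log-likelihood up to an additive constant, plus log prior.
                scores.append(np.log(priors[k]) - 0.5 * (logdet + maha))
            return int(np.argmax(scores))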

    Colour vision model-based approach for segmentation of traffic signs

    This paper presents a new approach to segmenting traffic signs from the rest of a scene via CIECAM, a colour appearance model. This approach not only takes CIECAM into practical application for the first time since it was standardised in 1998, but also introduces a new way of segmenting traffic signs in order to improve the accuracy of colour-based approaches. Comparison with other CIE spaces, including CIELUV and CIELAB, and the RGB colour space is also carried out. The results show that CIECAM performs better than the other three spaces, with accuracy rates of 94%, 90%, and 85% for sunny, cloudy, and rainy days, respectively. The results also confirm that CIECAM predicts colour appearance similarly to average observers.
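    Colour-appearance-based segmentation of this kind amounts to classifying pixels by their coordinates in a perceptual colour space. CIECAM is not available in common imaging libraries, so the sketch below uses CIELAB (one of the comparison spaces in the paper) as a stand-in, with hypothetical thresholds for red sign regions rather than the paper's actual decision rule:

        import numpy as np
        from skimage.color import rgb2lab

        def segment_red_signs(rgb_image):
            # Keep pixels with strongly positive a* (red-green axis) and
            # moderate lightness; thresholds here are illustrative only.
            lab = rgb2lab(rgb_image)
            L, a = lab[..., 0], lab[..., 1]
            return (a > 25.0) & (L > 20.0) & (L < 90.0)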

    Combining visible and infrared radiometry and lidar data to test simulations in clear and ice cloud conditions

    Measurements taken during the 2003 Pacific THORPEX Observing System Test (P-TOST) by the MODIS Airborne Simulator (MAS), the Scanning High-resolution Interferometer Sounder (S-HIS) and the Cloud Physics Lidar (CPL) are compared to simulations performed with a line-by-line and multiple scattering modeling methodology (LBLMS). Formerly used for infrared hyper-spectral data analysis, LBLMS has been extended to the visible and near infrared with the inclusion of surface bi-directional reflectance properties. A number of scenes are evaluated: two clear scenes, one with nadir geometry and one cross-track encompassing sun glint, and three cloudy scenes, all with nadir geometry. CPL data are used to estimate the particulate optical depth at 532 nm for the clear and cloudy scenes and the cloud upper and lower boundaries. Cloud optical depth is retrieved from S-HIS infrared window radiances, and it agrees with CPL values to within natural variability. MAS data are simulated by convolving high resolution radiances. The paper discusses the results of the comparisons for the clear and cloudy cases. LBLMS clear-sky simulations agree with MAS data to within 20% in the shortwave (SW) and near infrared (NIR) spectrum and within 2 K in the infrared (IR) range. It is shown that cloudy-sky simulations using cloud parameters retrieved from IR radiances systematically underestimate the measured radiance in the SW and NIR by nearly 50%, although the IR-retrieved optical thickness agrees with that measured by CPL. MODIS radiances measured from Terra are also compared to LBLMS simulations in cloudy conditions, using cloud optical depth and effective radius retrieved from MODIS, to understand the origin of the observed discrepancies. It is shown that the simulations agree with measurements, to within natural variability, in selected MODIS SW bands. The impact on the results of the assumed particle size distribution and vertical profile of ice content is evaluated; the sensitivity is much smaller than the differences between measured and simulated radiances in the SW and NIR. The paper discusses a possible explanation of these contradictory results, involving the phase function of ice particles in the shortwave.
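    The infrared comparisons quoted in kelvin rely on equivalent brightness temperature, i.e. inverting the Planck function at the channel wavelength. A self-contained sketch of that standard inversion (not LBLMS itself):

        import numpy as np

        H = 6.62607015e-34   # Planck constant, J s
        C = 2.99792458e8     # speed of light, m/s
        KB = 1.380649e-23    # Boltzmann constant, J/K

        def brightness_temperature(radiance, wavelength):
            """Temperature whose blackbody radiance matches the measurement.

            radiance: spectral radiance in W m^-2 sr^-1 m^-1
            wavelength: metres (e.g. 11.5e-6 for an IR window channel)
            """
            return (H * C / (KB * wavelength)) / np.log(
                1.0 + 2.0 * H * C**2 / (radiance * wavelength**5))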

    Study of the effect of cloud inhomogeneity on the earth radiation budget experiment

    The Earth Radiation Budget Experiment (ERBE) is the most recent and probably the most intensive mission designed to gather precise measurements of the Earth's radiation components. The data obtained from ERBE are of great importance for future climatological studies. A statistical study reveals that the ERBE scanner data are highly correlated and that instantaneous measurements corresponding to neighboring pixels contain almost the same information. It is therefore suggested that only a fraction of the data set be analyzed when sampling, and applications of this strategy are given for the calculation of the Earth's albedo and of the cloud forcing over ocean.
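    The case for analyzing only a fraction of correlated scanner data can be made concrete with the effective sample size of an autocorrelated series. For a first-order autoregressive model with lag-one correlation rho (an assumed model, not necessarily the statistic used in the study), n correlated measurements carry roughly n(1 - rho)/(1 + rho) independent samples:

        def effective_sample_size(n, rho):
            # With rho = 0.9, 1000 correlated pixels are worth only about
            # 53 independent samples, so subsampling loses little information.
            return n * (1.0 - rho) / (1.0 + rho)

        print(effective_sample_size(1000, 0.9))  # ~52.6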

    An Analysis of the Radiometric Quality of Small Unmanned Aircraft System Imagery

    In recent years, significant advancements have been made in both sensor technology and small Unmanned Aircraft Systems (sUAS). Improved sensor technology has provided users with cheaper, lighter, and higher resolution imaging tools, while new sUAS platforms have become cheaper, more stable, and easier to navigate both manually and programmatically. These enhancements have provided remote sensing solutions for both commercial and research applications that were previously unachievable. However, they have also given non-scientific practitioners access to technology and techniques previously available only to remote sensing professionals, sometimes leading to improper diagnoses and results. The work accomplished in this dissertation demonstrates the impact of proper calibration and reflectance correction on the radiometric quality of sUAS imagery. The first part of this research conducts an in-depth investigation into a proposed technique for radiance-to-reflectance conversion. Previous techniques utilized reflectance conversion panels in-scene, which, while providing accurate results, required extensive time in the field to position and measure the panels. We have positioned sensors on board the sUAS to record the downwelling irradiance, which can then be used to produce reflectance imagery without the use of these reflectance conversion panels. The second part of this research characterizes and calibrates a MicaSense RedEdge-3, a multispectral imaging sensor. This particular sensor comes pre-loaded with metadata values for dark-level bias, vignette and row-gradient correction, and radiometric calibration, which are never recalibrated. These characterization and calibration studies were performed to demonstrate the importance of recalibrating any sensor over time. In addition, an error propagation was performed to identify the largest contributors of error in the production of radiance and reflectance imagery. Finally, a study of the inherent reflectance variability of vegetation was performed. In other words, this study attempts to determine how accurate the digital-count-to-radiance calibration and the radiance-to-reflectance conversion have to be: can we lower our accuracy standards for radiance and reflectance imagery because the target itself is too variable to measure? For this study, six coneflower plants were analyzed, as a surrogate for other cash crops, under different illumination conditions, at different times of the day, and at different ground sample distances (GSDs).
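    The panel-free radiance-to-reflectance conversion described in the first part follows, for a Lambertian target, from R = pi * L / E_d, where L is at-sensor radiance and E_d is the downwelling irradiance recorded on board. A minimal sketch under that assumption (the dissertation's actual workflow may include additional corrections):

        import numpy as np

        def radiance_to_reflectance(radiance, downwelling_irradiance):
            # radiance: W m^-2 sr^-1 (per band); downwelling_irradiance: W m^-2,
            # from the sUAS-mounted irradiance sensor. Lambertian assumption.
            return np.pi * radiance / downwelling_irradiance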
    • 

    corecore