
    A model-based approach to correcting spectral irradiance data using an upward-looking airborne sensor (CASI ILS)

    A number of aircraft sensors can measure spectral downwelling irradiance using a sensor mounted on the roof of the aircraft, but these data are rarely used for atmospheric correction. Part of the problem is that the attitude of the airborne platform changes continually during flight, even in stable conditions, so direct use of data from an incident light sensor (ILS) can introduce errors into atmospheric correction methods. Here, the continual motion of the ILS is turned to advantage, as a means of fitting the sky radiance distribution model of Brunger and Hooper (1993) to data from the Itres Instruments CASI ILS. The inclination of the ILS sensor, caused by changing aircraft attitude, is treated as the slope plane in the model. The fitted model coefficients correspond to parameterised atmospheric conditions and represent atmospheric transmission and the ratio of direct to diffuse flux. The method was used to correct CASI ILS data acquired over a site in southern England. Comparison with spectral irradiance measured simultaneously on the ground shows that the method reduced the variability of the ILS data and compensated for the effect of different flight directions. The model also calculates the sky radiance distribution at sensor level, which shows the characteristics of the sky conditions at the time of each flight.
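
    The fitting step lends itself to a short sketch. The following minimal Python illustration is not the authors' implementation: it uses a Brunger-Hooper-style radiance form with coefficients a0..a3, numerically integrates it over the sky hemisphere onto the tilted sensor plane, and fits the coefficients by least squares to ILS samples taken at varying aircraft attitudes. The function names, grid resolution, starting values, and the reduction of roll/pitch to a single sensor-normal vector are all assumptions for illustration.

```python
# Hedged sketch (not the authors' code): fit a Brunger-Hooper-style sky
# radiance distribution to irradiance samples from a tilting sensor.
import numpy as np
from scipy.optimize import least_squares

def sky_radiance(theta, psi, a):
    """Relative sky radiance at zenith angle theta, sun-relative angle psi."""
    a0, a1, a2, a3 = a
    return a0 + a1 * np.cos(theta) + a2 * np.exp(-a3 * psi)

def tilted_irradiance(a, normal, sun_zen, sun_az, n=90):
    """Integrate radiance over the sky hemisphere onto a tilted plane
    whose orientation is given by its unit normal (assumed input)."""
    theta = np.linspace(0, np.pi / 2, n)        # sky-element zenith angle
    phi = np.linspace(0, 2 * np.pi, 2 * n)      # sky-element azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    sky = np.stack([np.sin(T) * np.cos(P),      # unit vectors of sky elements
                    np.sin(T) * np.sin(P),
                    np.cos(T)])
    sun = np.array([np.sin(sun_zen) * np.cos(sun_az),
                    np.sin(sun_zen) * np.sin(sun_az),
                    np.cos(sun_zen)])
    nrm = normal / np.linalg.norm(normal)
    psi = np.arccos(np.clip(np.einsum("ijk,i->jk", sky, sun), -1, 1))
    cos_inc = np.clip(np.einsum("ijk,i->jk", sky, nrm), 0, None)
    dA = np.sin(T) * (theta[1] - theta[0]) * (phi[1] - phi[0])
    return np.sum(sky_radiance(T, psi, a) * cos_inc * dA)

def fit_coefficients(normals, sun_zens, sun_azs, measured):
    """Least-squares fit of a0..a3 to ILS samples at varying attitudes."""
    def residual(a):
        model = [tilted_irradiance(a, nv, z, s)
                 for nv, z, s in zip(normals, sun_zens, sun_azs)]
        return np.asarray(model) - measured
    return least_squares(residual, x0=[0.2, 0.5, 1.0, 3.0],
                         bounds=(0, np.inf)).x
```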

    Helmet-mounted pilot night vision systems: Human factors issues

    Helmet-mounted displays of forward-looking infrared (FLIR) imagery allow helicopter pilots to perform low-level missions at night and in low visibility. However, pilots experience high visual and cognitive workload during these missions, and their performance capabilities may be reduced. Human factors problems inherent in existing systems stem from three primary sources: the nature of thermal imagery; the characteristics of specific FLIR systems; and the difficulty of using FLIR systems for flying and/or visually acquiring and tracking objects in the environment. The pilot night vision system (PNVS) in the Apache AH-64 provides a monochrome, 30 by 40 deg helmet-mounted display of infrared imagery. Thermal imagery is inferior to television imagery in both resolution and contrast ratio. Gray shades represent temperature differences rather than brightness variability, and images undergo significant changes over time. The limited field of view, displacement of the sensor from the pilot's eye position, and monocular presentation of a bright FLIR image (while the other eye remains dark-adapted) are all potential sources of disorientation, limitations in depth and distance estimation, sensations of apparent motion, and difficulties in target and obstacle detection. Insufficient information about human perceptual and performance limitations restricts the ability of human factors specialists to provide significantly improved specifications, training programs, or alternative designs. Additional research is required to determine the most critical problem areas and to propose solutions that consider the human as well as the development of the technology.

    Semantic Cross-View Matching

    Matching cross-view images is challenging because their appearance and viewpoints differ significantly. While low-level features based on gradient orientations or filter responses can vary drastically with such changes in viewpoint, the semantic information of an image is largely invariant to them. Consequently, semantically labeled regions can be used for cross-view matching. In this paper, we explore this idea and propose an automatic method for detecting and representing the semantic information of an RGB image, with the goal of matching it against a (non-RGB) geographic information system (GIS). A segmented image forms the input to our system, with segments assigned to semantic concepts such as traffic signs, lakes, roads, foliage, etc. We design a descriptor that robustly captures both the presence of semantic concepts and the spatial layout of the corresponding segments. Pairwise distances between the descriptors extracted from the GIS map and the query image are then used to generate a shortlist of the most promising locations with similar semantic concepts in a consistent spatial layout. An experimental evaluation with challenging query images and a large urban area shows promising results.
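
    A minimal sketch of this kind of descriptor, assuming a small fixed concept set, a coarse spatial grid, and an L2 distance (the paper's actual descriptor and distance measure may differ):

```python
# Hedged sketch (not the paper's exact method): encode a semantically
# segmented image as per-cell concept histograms on a coarse grid, then
# shortlist candidate GIS locations by descriptor distance.
import numpy as np

CONCEPTS = 4   # e.g. road, water, foliage, building (assumed concept set)
GRID = 4       # coarse spatial grid capturing segment layout

def descriptor(labels):
    """labels: (H, W) int array of per-pixel semantic concept ids."""
    H, W = labels.shape
    cells = []
    for gy in range(GRID):
        for gx in range(GRID):
            cell = labels[gy * H // GRID:(gy + 1) * H // GRID,
                          gx * W // GRID:(gx + 1) * W // GRID]
            # normalized concept histogram: presence + spatial layout
            hist = np.bincount(cell.ravel(), minlength=CONCEPTS)[:CONCEPTS]
            cells.append(hist / max(cell.size, 1))
    return np.concatenate(cells)

def shortlist(query_labels, gis_tiles, k=10):
    """Rank candidate GIS tiles by L2 distance to the query descriptor."""
    q = descriptor(query_labels)
    d = [np.linalg.norm(q - descriptor(t)) for t in gis_tiles]
    return np.argsort(d)[:k]   # indices of the most promising locations
```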

    Learning geometric and lighting priors from natural images

    Understanding images is needed for a plethora of tasks, from digital compositing to image relighting to 3D object reconstruction. These tasks allow visual artists to realize masterpieces and help operators make safe decisions based on visual stimuli. For many of these tasks, the physical and geometric models developed by the scientific community give rise to ill-posed problems with several solutions, of which generally only one is reasonable. To resolve these indeterminations, reasoning about the visual and semantic context of a scene is usually delegated to an artist or an expert who draws on experience to carry out the work, because obtaining plausible and appreciable results generally requires reasoning about the scene globally. Would it be possible to model this experience from visual data and partly or fully automate these tasks? That is the topic of this thesis: modeling priors with deep machine learning to solve typically ill-posed problems. More specifically, we cover three research axes: 1) surface reconstruction from photometric cues, 2) outdoor illumination estimation from a single image, and 3) camera calibration estimation from a single image with generic content. These three topics are addressed from a data-driven perspective. Each axis includes in-depth performance analyses and, despite the reputation for opacity of deep machine learning algorithms, we offer studies of the visual cues captured by our methods.

    Efficient intra- and inter-night linking of asteroid detections using kd-trees

    Get PDF
    The Panoramic Survey Telescope And Rapid Response System (Pan-STARRS) under development at the University of Hawaii's Institute for Astronomy is creating the first fully automated end-to-end Moving Object Processing System (MOPS) in the world. It will be capable of identifying detections of moving objects in our solar system, linking those detections within and between nights, attributing those detections to known objects, calculating initial and differentially corrected orbits for linked detections, precovering detections when they exist, and performing orbit identification. Here we describe new kd-tree and variable-tree algorithms that allow fast, efficient, scalable linking of intra- and inter-night detections. Using a pseudo-realistic simulation of the Pan-STARRS survey strategy that incorporates weather, astrometric accuracy, and false detections, we have achieved nearly 100% efficiency and accuracy for intra-night linking and nearly 100% efficiency for inter-night linking within a lunation. At realistic sky-plane densities for both real and false detections, the inter-night linking of detections into 'tracks' currently has an accuracy of 0.3%. Successful tests of the MOPS on real source detections from the Spacewatch asteroid survey indicate that the MOPS is capable of identifying asteroids in real data.
    Comment: Accepted to Icarus.
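
    The intra-night step can be illustrated with a short sketch. This is a simplified, hypothetical example of kd-tree tracklet formation, not the MOPS code: detections from a later exposure are indexed in a kd-tree, and detections from an earlier exposure are paired with every neighbour whose implied motion stays below an assumed maximum rate. The flat-sky small-angle treatment and the rate threshold are simplifying assumptions.

```python
# Hedged sketch (not the MOPS implementation): link detections from two
# exposures in one night into candidate tracklets via a kd-tree range query.
import numpy as np
from scipy.spatial import cKDTree

def link_tracklets(pos1, t1, pos2, t2, max_rate=1.0):
    """pos1, pos2: (N, 2) sky positions in degrees for two exposures;
    t1, t2: exposure times in days; max_rate: motion limit in deg/day."""
    dt = t2 - t1
    radius = max_rate * dt                  # farthest plausible displacement
    tree = cKDTree(pos2)                    # index the later exposure
    tracklets = []
    for i, p in enumerate(pos1):
        for j in tree.query_ball_point(p, r=radius):
            tracklets.append((i, j))        # candidate pairings to vet
    return tracklets
```

    A production system would work in proper spherical coordinates and vet candidate pairs against velocity-consistency and morphology criteria before promoting them to tracklets; the range query above is only the core kd-tree idea.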