3,508 research outputs found

    Experimental study of ocular behavior in administrative workers as an indicator of visual comfort in glare-risk situations

    Get PDF
    The impact of daylight on the visual environment is fundamental in visual display terminal (VDT) work. Visual performance and visual comfort should be given equal consideration. The study (n=16) was performed at an experimental lighting laboratory. Office work with a VDT was evaluated using the Stroop task in two orientations: with and without the sun in the visual field. Our hypothesis is that a relationship exists between workers' ocular behavior and their visual comfort. An eye-tracker was employed to record the ocular gestural parameters: blinks, gaze direction, eye aperture (degree of eye openness), and pupil size, which were correlated with the vertical illuminance at the eye. Visual comfort was assessed with the Glare Sensation Vote. Results indicate a strong negative linear correlation between eye illuminance and the degree of eye openness in both the direct-sunlight scenario (p=-0.636; s=0.008) and the diffuse-light scenario (p=-0.661; s=0.005), which could be the main predictor of visual discomfort. This experiment allowed us to explore eye-behavior patterns that could serve as visual comfort indices under glare-risk situations.
    Fil: Yamin Garretón, Julieta Alejandra. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Mendoza. Instituto de Ciencias Humanas, Sociales y Ambientales; Argentina. Fil: Rodriguez, Roberto Germán. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Mendoza. Instituto de Ciencias Humanas, Sociales y Ambientales; Argentina. Fil: Pattini, Andrea Elvira. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Mendoza. Instituto de Ciencias Humanas, Sociales y Ambientales; Argentina.
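The reported relationship (higher illuminance at the eye, lower degree of eye openness) is at heart a correlation between two per-trial measurements. A minimal sketch of such a computation, using invented illuminance and openness values rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical readings: vertical illuminance at the eye (lux) and
# degree of eye openness (0..1). Higher illuminance tends to produce
# squinting, so a negative correlation is expected, as the study reports.
illuminance = [500, 1200, 2500, 4000, 6000, 9000]
openness    = [0.95, 0.90, 0.78, 0.70, 0.55, 0.40]

r = pearson_r(illuminance, openness)
print(round(r, 3))  # strongly negative
```

The study itself reports a linear correlation with a significance level; a rank-based statistic such as Spearman's rho would be a natural alternative when the relationship is monotonic but not linear.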

    A new method to determine multi-angular reflectance factor from lightweight multispectral cameras with sky sensor in a target-less workflow applicable to UAV

    Full text link
    A new physically based method is presented to estimate the hemispherical-directional reflectance factor (HDRF) from lightweight multispectral cameras equipped with a downwelling irradiance sensor. It combines radiometry with photogrammetric computer vision to derive geometrically and radiometrically accurate data purely from the images, without requiring reflectance targets or any additional information beyond the imagery. The sky-sensor orientation is initially computed using photogrammetric computer vision and then refined with a non-linear regression combining radiometric and photogrammetry-derived information. The method works under both clear-sky and overcast conditions. In a ground-based test, a Spectralon target was observed from different viewing directions and under different sun positions with a typical multispectral sensor configuration, for both clear and overcast skies; both the overall value and the directionality of the reflectance factor were retrieved in good agreement with the literature, with an RMSE of 3% for clear sky and up to 5% for overcast sky.
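As a rough sketch of the radiometry underlying such a target-less workflow: for a near-Lambertian surface, the reflectance factor can be approximated as R = pi * L / E, where L is the band radiance seen by the camera and E is the downwelling irradiance from the sky sensor. The function and values below are illustrative assumptions, not the paper's full method (which additionally refines the sky-sensor orientation photogrammetrically):

```python
import math

def reflectance_factor(radiance, downwelling_irradiance):
    """Reflectance factor under a Lambertian assumption:
    R = pi * L / E, with L the target radiance in a camera band
    and E the total downwelling irradiance from the sky sensor."""
    if downwelling_irradiance <= 0:
        raise ValueError("irradiance must be positive")
    return math.pi * radiance / downwelling_irradiance

# Hypothetical band readings in arbitrary but consistent units:
L = 0.159  # target radiance measured by the camera
E = 1.0    # downwelling irradiance from the sky sensor
print(round(reflectance_factor(L, E), 3))  # roughly 0.5, a mid-grey target
```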

    Field research on the spectral properties of crops and soils, volume 1

    Get PDF
    The experiment design, data acquisition and preprocessing, database management, analysis results, and instrumentation development for the AgRISTARS Supporting Research Project, Field Research task, are described. Results of several investigations of the spectral reflectance of corn and soybean canopies as influenced by cultural practices, development stage, and nitrogen nutrition are reported, as are results of analyses of the spectral properties of crop canopies as a function of canopy geometry, row orientation, sensor view angle, and solar illumination angle. The objectives, experiment designs, and data acquired in 1980 for the field research experiments are described. The development and performance characteristics of a prototype multiband radiometer, data logger, and aerial tower for field research are discussed.

    Vision technology/algorithms for space robotics applications

    Get PDF
    Automation and robotics for space applications have been proposed to increase productivity, improve reliability, increase flexibility, and enhance safety: automating time-consuming tasks, increasing the productivity and performance of crew-accomplished tasks, and performing tasks beyond the capability of the crew. This paper reviews efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. Future vision/sensing is projected to evolve toward the fusion of multiple sensors, ranging from microwave to optical, with multimode capability covering position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.

    Probeless Illumination Estimation for Outdoor Augmented Reality

    Get PDF

    CalFUSE v3: A Data-Reduction Pipeline for the Far Ultraviolet Spectroscopic Explorer

    Full text link
    Since its launch in 1999, the Far Ultraviolet Spectroscopic Explorer (FUSE) has made over 4600 observations of some 2500 individual targets. The data are reduced by the Principal Investigator team at the Johns Hopkins University and archived at the Multimission Archive at Space Telescope (MAST). The data-reduction software package, called CalFUSE, has evolved considerably over the lifetime of the mission. The entire FUSE data set has recently been reprocessed with CalFUSE v3.2, the latest version of this software. This paper describes CalFUSE v3.2, the instrument calibrations upon which it is based, and the format of the resulting calibrated data files. Comment: To appear in PASP; 29 pages, 13 figures, uses aastex, emulateap

    Learning geometric and lighting priors from natural images

    Get PDF
    Understanding images is crucial for a plethora of tasks, from digital compositing to image relighting to 3D object reconstruction. These tasks allow visual artists to create masterpieces or help operators make safe decisions based on visual stimuli. For many of these tasks, the physical and geometric models that the scientific community has developed give rise to ill-posed problems with several solutions, of which generally only one is reasonable. To resolve these indeterminations, reasoning about the visual and semantic context of a scene is usually delegated to an artist or expert who draws on experience to carry out the work, since it is generally necessary to reason about the scene globally to obtain plausible and appealing results. Would it be possible to model this experience from visual data and partly or fully automate these tasks? That is the topic of this thesis: modeling priors with deep learning to solve typically ill-posed problems. More specifically, we cover three research axes: 1) surface reconstruction from photometric cues, 2) outdoor illumination estimation from a single image, and 3) camera calibration estimation from a single image with generic content. These three topics are addressed from a data-driven perspective. Each axis includes an in-depth performance analysis and, despite the reputation for opacity of deep learning algorithms, we offer studies of the visual cues captured by our methods.
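The first axis, surface reconstruction from photometric cues, builds on classical photometric stereo: under a Lambertian model, three images taken under known light directions determine the albedo-scaled normal at each pixel by solving a small linear system. A minimal single-pixel sketch (the light directions and albedo below are invented for illustration; the thesis learns priors on top of such physical models):

```python
def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]       # replace column i with b
        for r in range(3):
            Ai[r][i] = b[r]
        xs.append(det3(Ai) / d)
    return xs

def photometric_stereo(lights, intensities):
    """Recover g = albedo * normal from Lambertian intensities
    I_k = albedo * dot(l_k, n) under three known light directions."""
    g = solve3(lights, intensities)
    albedo = sum(c * c for c in g) ** 0.5
    normal = [c / albedo for c in g]
    return albedo, normal

# Hypothetical pixel: true normal (0, 0, 1), albedo 0.8, three lights.
lights = [[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [0.0, 0.6, 0.8]]
I = [0.8 * lz for _, _, lz in lights]    # Lambertian shading of that pixel
albedo, n = photometric_stereo(lights, I)
```

With noisy real images and more than three lights, the same system is solved in a least-squares sense; ill-posedness (unknown lighting, shadows, non-Lambertian materials) is where learned priors come in.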

    Incorporation of shuttle CCT parameters in computer simulation models

    Get PDF
    Computer simulations of shuttle missions have become increasingly important in recent years. The complexity of mission planning for satellite launch and repair operations, which usually involve EVA, has created a need for accurate visibility and access studies. The PLAID modeling package used in the Man-Systems Division at Johnson currently has the necessary capabilities for such studies. In addition, the package is used for the spatial location and orientation of shuttle components in film-overlay studies, such as the current investigation of the hydrogen leaks found on the shuttle. However, there are a number of differences between the simulation studies and actual mission viewing. These include image blur caused by the finite resolution of the CCT monitors in the shuttle and signal noise from the video tubes of the cameras. In the course of this investigation, the shuttle CCT camera and monitor parameters were incorporated into the existing PLAID framework. These parameters are specific to certain camera/lens combinations, and the SNR characteristics of these combinations are included in the noise models. The monitor resolution is incorporated using a Gaussian spread function, such as that produced by the screen phosphors in the shuttle monitors. Another difference between traditional PLAID-generated images and actual mission viewing lies in the lack of shadows and of light reflected from surfaces. Ray tracing of the scene explicitly includes the lighting and material characteristics of surfaces. The results of some preliminary studies combining ray-traced image generation with the camera and monitor effects are also reported.
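The monitor-blur and noise steps described above amount to convolving the rendered image with a Gaussian spread function and adding zero-mean sensor noise. A one-dimensional sketch on a single scanline, with invented sigma values rather than the actual shuttle camera/monitor parameters:

```python
import math
import random

def gaussian_kernel(sigma, radius):
    """Discrete Gaussian spread function, normalized to sum to 1."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def simulate_monitor(scanline, sigma=1.0, noise_sigma=0.0, seed=0):
    """Blur a scanline with a Gaussian spread (finite monitor resolution)
    and add zero-mean Gaussian noise (video-tube signal noise)."""
    rng = random.Random(seed)
    radius = max(1, int(3 * sigma))
    psf = gaussian_kernel(sigma, radius)
    out = []
    for i in range(len(scanline)):
        acc = 0.0
        for j, w in enumerate(psf):
            # clamp indices at the scanline edges
            idx = min(max(i + j - radius, 0), len(scanline) - 1)
            acc += w * scanline[idx]
        out.append(acc + rng.gauss(0.0, noise_sigma))
    return out

# Hypothetical sharp edge: the blur softens the step, as on the monitor.
edge = [0.0] * 8 + [1.0] * 8
blurred = simulate_monitor(edge, sigma=1.5)
```

A full 2D version would convolve with the separable kernel along rows and columns before adding the per-pixel noise drawn from the measured SNR model.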

    The SWAP EUV Imaging Telescope Part I: Instrument Overview and Pre-Flight Testing

    Full text link
    The Sun Watcher with Active Pixels and Image Processing (SWAP) is an EUV solar telescope on board ESA's Project for Onboard Autonomy 2 (PROBA2) mission launched on 2 November 2009. SWAP has a spectral bandpass centered on 17.4 nm and provides images of the low solar corona over a 54x54 arcmin field of view with 3.2 arcsec pixels and an imaging cadence of about two minutes. SWAP is designed to monitor all space-weather-relevant events and features in the low solar corona. Given the limited resources of the PROBA2 microsatellite, the SWAP telescope is designed with various innovative technologies, including an off-axis optical design and a CMOS-APS detector. This article provides reference documentation for users of the SWAP image data. Comment: 26 pages, 9 figures, 1 movi