Regional Visibility Statistics in the United States: Natural and Transboundary Pollution Influences, and Implications for the Regional Haze Rule
The Regional Haze Rule of the US Environmental Protection Agency mandates reduction in US anthropogenic emissions to achieve linear improvement of visibility in wilderness areas over the 2004–18 period toward an endpoint of natural visibility conditions by 2064. Linear improvement is to apply to the mean visibility degradation on the statistically 20% worst days, measured as a Haze Index in units of deciviews (log of aerosol extinction). We use a global chemical transport model (GEOS-Chem) with 1°×1° horizontal resolution to simulate present-day visibility statistics in the USA, compare them to observations from the Interagency Monitoring of Protected Visual Environments (IMPROVE) surface network, and provide natural and background visibility statistics for application of the Regional Haze Rule. Background is defined by suppression of US anthropogenic emissions but allowance for present-day foreign emissions and associated import of pollution. Our model is highly successful at reproducing the observed variability of visibility statistics for present-day conditions, including the low tail of the frequency distribution that is most representative of natural or background conditions. We find considerable spatial and temporal variability in natural visibility over the USA, especially due to fires in the west. A major uncertainty in estimating natural visibility is the sensitivity of biogenic organic aerosol formation to the availability of preexisting anthropogenic aerosol. Background visibility is more variable than natural visibility, and the 20% worst days show large contributions from Canadian and Mexican pollution. Asian pollution, while degrading mean background visibility, is relatively less important on the worst days. Recognizing the influence of uncontrollable transboundary pollution in the Regional Haze Rule would substantially decrease the schedule of emission reductions required in the 2004–18 implementation phase. Meaningful application of the Rule in the future will require projections of future trends in foreign anthropogenic emissions, wildfire frequency, and climate variables.
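For reference, the Haze Index in deciviews is computed from the total light extinction coefficient b_ext (in inverse megameters) as HI = 10 ln(b_ext / 10), so that clean-air Rayleigh extinction of roughly 10 Mm⁻¹ maps to 0 dv. A minimal Python sketch of this definition and of the mean-over-worst-20%-days statistic the Rule tracks; the lognormal daily extinction values are purely illustrative, not IMPROVE data.

```python
import numpy as np

def haze_index_dv(b_ext_Mm):
    """Haze Index in deciviews from total light extinction (Mm^-1).

    By convention, 10 Mm^-1 (about Rayleigh scattering in clean air)
    corresponds to 0 dv; the scale is logarithmic in extinction.
    """
    return 10.0 * np.log(np.asarray(b_ext_Mm) / 10.0)

def worst_20pct_mean_dv(daily_b_ext_Mm):
    """Mean Haze Index over the statistically 20% worst (haziest) days."""
    dv = np.sort(haze_index_dv(daily_b_ext_Mm))
    n_worst = max(1, int(round(0.2 * dv.size)))
    return dv[-n_worst:].mean()

# Illustrative only: one year of synthetic daily extinction values.
daily_b_ext = np.random.lognormal(mean=3.0, sigma=0.5, size=365)  # Mm^-1
print(f"mean dv on 20% worst days: {worst_20pct_mean_dv(daily_b_ext):.1f}")
```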
Electricity deregulation and the valuation of visibility loss in wilderness areas: A research note.
Visibility in most wilderness areas in the northeastern United States has declined substantially since the 1970s. As noted by Hill et al. (2000), despite the 1977 Clean Air Act and subsequent amendments, human-induced smog conditions are becoming increasingly worse. Average visibility in class I airsheds, such as the Great Gulf Wilderness in New Hampshire's White Mountains, is now about one-third of natural conditions. A particular concern is that deregulation of electricity production could result in further degradation because consumers may switch to lower-cost fossil fuel generation (Harper 2000). To the extent that this system reduces electricity costs, it may also affect firm location decisions (Halstead and Deller 1997). Yet little is known about the extent to which consumers are likely to make tradeoffs between electric bills and reduced visibility in nearby wilderness areas. This applied research uses a contingent valuation approach in an empirical case study of consumers' tradeoffs between cheaper electric bills and reduced visibility in New Hampshire's White Mountains. We also examine some of the problems associated with uncertainty in this type of analysis; that is, how confident respondents are in their answers to the valuation questions. Finally, policy implications of decreased visibility due to electricity deregulation are discussed.
Contrastive Learning for Lane Detection via Cross-Similarity
Detecting road lanes is challenging due to intricate markings that are vulnerable to unfavorable conditions. Lane markings have strong shape priors, but their visibility is easily compromised: lighting, weather, vehicles, pedestrians, and aging colors all hinder detection. Because there are numerous lane shapes and natural variations, a large amount of data is required to train a lane detection approach that can withstand the variations caused by low visibility. Our solution, Contrastive Learning for Lane Detection via cross-similarity (CLLD), is a self-supervised learning method that tackles this challenge by enhancing lane detection models' resilience to real-world conditions that reduce lane visibility. CLLD is a novel multitask contrastive learning method that trains lane detection approaches to detect lane markings even in low-visibility situations by integrating local feature contrastive learning (CL) with our newly proposed cross-similarity operation. Local feature CL focuses on extracting features for small image parts, which is necessary to localize lane segments, while cross-similarity captures global features to detect obscured lane segments from their surroundings. We enhance cross-similarity by randomly masking parts of input images for augmentation. Evaluated on benchmark datasets, CLLD outperforms state-of-the-art contrastive learning methods, especially in visibility-impairing conditions like shadows. Compared to supervised learning, CLLD also excels in scenarios like shadows and crowded scenes.
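The abstract does not give the implementation, but the two ingredients it names, random masking as augmentation and a similarity objective between local features of the original and masked views, can be sketched. The following hypothetical PyTorch fragment is an assumption-laden illustration: function names, the patch size, the mask ratio, and the cosine formulation are choices of ours, not the authors' method.

```python
import torch
import torch.nn.functional as F

def random_patch_mask(images, patch=32, mask_ratio=0.25):
    """Zero out randomly chosen square patches of an image batch.

    A simple stand-in for the masking augmentation described in the
    abstract; patch size and ratio are illustrative, not the paper's.
    """
    b, c, h, w = images.shape
    masked = images.clone()
    n_patches = int(mask_ratio * (h // patch) * (w // patch))
    for i in range(b):
        for _ in range(n_patches):
            y = torch.randint(0, h - patch + 1, (1,)).item()
            x = torch.randint(0, w - patch + 1, (1,)).item()
            masked[i, :, y:y + patch, x:x + patch] = 0.0
    return masked

def cross_similarity_loss(feats_a, feats_b):
    """Cosine similarity between local feature maps of two views.

    feats_*: (B, C, H, W) encoder outputs for the original and masked
    images; maximizing per-location similarity pushes the model to
    infer occluded lane segments from their surroundings.
    """
    a = F.normalize(feats_a.flatten(2), dim=1)  # (B, C, H*W)
    b = F.normalize(feats_b.flatten(2), dim=1)
    return 1.0 - (a * b).sum(dim=1).mean()
```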
Daylighting Performance of Solar Control Films for Hospital Buildings in a Mediterranean Climate
One of the main retrofitting strategies in warm climates is the reduction of the effects of solar radiation. Cooling loads, and in turn cooling consumption, can be reduced through the implementation of reflective materials such as solar control films. However, these devices may also negatively affect daylight illuminance conditions and the electric consumption of artificial lighting systems. In a hospital building, it is crucial to meet daylighting requirements for indoor illuminance levels and visibility from the inside, as these have a significant impact on health outcomes. The aim of this paper is to evaluate the influence of a solar control film, installed on the windows of a public hospital building in a Mediterranean climate, on natural illuminance conditions. To this end, a hospital room, with and without solar film, was monitored for a whole year. A descriptive statistical analysis was conducted on the use of artificial lighting, illuminance levels, and rolling shutter aperture levels, together with an analysis of natural illuminance and the electric consumption of the artificial lighting system. The addition of a solar control film to the external surface of the window, in combination with the user-controlled rolling shutter aperture levels, reduced the electric consumption of the artificial lighting system by 12.2%. Likewise, the solar control film increased the percentage of annual hours with natural illuminance levels in the 100–300 lux range.
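As an illustration of the annual-hours statistic reported above, a short pandas sketch; the CSV file name and column names are hypothetical, and hourly logging of illuminance is assumed.

```python
import pandas as pd

def pct_annual_hours_in_band(illuminance, low=100.0, high=300.0):
    """Percentage of logged hours with illuminance inside [low, high] lux.

    `illuminance` is assumed to be an hourly pandas Series of measured
    indoor illuminance (lux) for one monitored room over a year.
    """
    in_band = illuminance.between(low, high)
    return 100.0 * in_band.mean()

# Hypothetical usage with a year of hourly readings:
# df = pd.read_csv("room_monitoring.csv", parse_dates=["timestamp"])
# series = df.set_index("timestamp")["illuminance_lux"]
# print(pct_annual_hours_in_band(series))
```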
Visibility metrics and their applications in visually lossless image compression
Visibility metrics are image metrics that predict the probability that a human observer can detect differences between a pair of images. These metrics can provide localized information in the form of visibility maps, in which each value represents a probability of detection. An important application of visibility metrics is visually lossless image compression, which aims at compressing a given image to the lowest possible number of bits per pixel while keeping the compression artifacts invisible.
In previous works, most visibility metrics were modeled based on largely simplified assumptions and mathematical models of the human visual system. This approach generally fits experimental data measured with simple stimuli, such as Gabor patches, well. However, it cannot predict complex non-linear effects, such as contrast masking in natural images, particularly well. To predict the visibility of image differences accurately, we collected the largest visibility dataset under fixed viewing conditions for calibrating existing visibility metrics and proposed a deep neural network-based visibility metric. We demonstrated in our experiments that the deep neural network-based visibility metric significantly outperformed existing visibility metrics.
However, the deep neural network-based visibility metric cannot predict visibility under varying viewing conditions, such as display brightness and viewing distance, which have a great impact on the visibility of distortions. To extend the deep neural network-based visibility metric to varying viewing conditions, we collected the largest visibility dataset under varying display brightness and viewing distances. We proposed incorporating white-box modules, namely luminance masking and viewing distance adaptation, into the black-box deep neural network, and we found that the combination of white-box modules and black-box deep neural networks could generalize our proposed visibility metric to varying viewing conditions.
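A rough sketch, assuming a PyTorch setting, of how white-box modules can sit in front of a black-box network. The log-luminance encoding and the interpolation-based viewing distance adaptation below are placeholder choices of ours, not the thesis's actual transforms; the class and argument names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridVisibilityMetric(nn.Module):
    """White-box preprocessing in front of a black-box CNN (illustrative)."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # any CNN mapping 2-channel input to a map

    @staticmethod
    def luminance_encode(luminance):
        # Simple log-like encoding: contrast detectability depends roughly
        # on relative, not absolute, luminance differences.
        return torch.log1p(luminance)

    def forward(self, ref_lum, test_lum, viewing_scale=1.0):
        ref = self.luminance_encode(ref_lum)
        test = self.luminance_encode(test_lum)
        x = torch.cat([ref, test], dim=1)  # (B, 2, H, W)
        if viewing_scale != 1.0:
            # Larger viewing distance -> fewer pixels per visual degree.
            x = F.interpolate(x, scale_factor=viewing_scale,
                              mode="bilinear", align_corners=False)
        return torch.sigmoid(self.backbone(x))  # per-pixel detection prob.
```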
To demonstrate the application of our proposed deep neural network-based visibility metric to visually lossless image compression, we collected a visually lossless image compression dataset under fixed viewing conditions and significantly improved the visibility metric's accuracy at predicting the visually lossless compression threshold by pre-training it with a synthetic dataset generated by the state-of-the-art white-box visibility metric, HDR-VDP (Mantiuk et al. 2011). In a large-scale study of 1000 images, we found that with our improved visibility metric we can save around 60% to 70% of bits in visually lossless image compression encoding as compared to the default visually lossless quality level of 90.
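One plausible way to apply such a metric to visually lossless compression is a search over the codec's quality setting. The sketch below uses a binary search with Pillow's JPEG encoder; `is_visible` is a hypothetical wrapper around any visibility metric, and monotonicity of artifact visibility in the quality setting is assumed, so this is not the thesis's procedure, only an illustration of the idea.

```python
import io
from PIL import Image

def lowest_lossless_quality(image, is_visible, lo=1, hi=100):
    """Binary search for the lowest JPEG quality whose artifacts the
    visibility metric judges invisible.

    `is_visible(original, compressed)` returns True if any difference
    is detectable; visibility is assumed monotone in JPEG quality.
    """
    while lo < hi:
        mid = (lo + hi) // 2
        buf = io.BytesIO()
        image.save(buf, format="JPEG", quality=mid)
        compressed = Image.open(io.BytesIO(buf.getvalue()))
        if is_visible(image, compressed):
            lo = mid + 1  # artifacts visible: need higher quality
        else:
            hi = mid      # invisible: try lower quality
    return lo
```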
Because predicting image visibility and predicting image quality are closely related research topics, we also proposed a trained perceptually uniform transform for high dynamic range image and video quality assessment by training a perceptual encoding function on a set of subjective quality assessment datasets. We have shown that combining the trained perceptual encoding function with standard dynamic range image quality metrics, such as peak signal-to-noise ratio (PSNR), achieves better performance than the untrained version.
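A minimal sketch of the pipeline described above: encode absolute luminance with a perceptual function, then apply an ordinary SDR metric such as PSNR in the encoded domain. The log transform below is purely a placeholder for the trained encoding function, which the abstract does not specify.

```python
import numpy as np

def psnr(a, b, peak):
    """Standard PSNR in dB for arrays on a common peak scale."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def pu_psnr(ref_lum, test_lum, encode=np.log10):
    """PSNR computed in a perceptually uniform(ish) domain.

    `encode` stands in for the trained perceptual encoding function;
    inputs are assumed to be absolute luminance maps (cd/m^2).
    """
    e_ref, e_test = encode(ref_lum), encode(test_lum)
    return psnr(e_ref, e_test, peak=e_ref.max())
```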
Deep visible and thermal image fusion for enhanced pedestrian visibility
Reliable vision in challenging illumination conditions is one of the crucial requirements of future autonomous automotive systems. In the last decade, thermal cameras have become more easily accessible to a larger number of researchers. This has resulted in numerous studies confirming the benefits of thermal cameras in limited visibility conditions. In this paper, we propose a learning-based method for visible and thermal image fusion that focuses on generating fused images with high visual similarity to regular truecolor (red-green-blue or RGB) images, while introducing new informative details in pedestrian regions. The goal is to create natural, intuitive images that would be more informative to a human driver in challenging visibility conditions than a regular RGB camera. The main novelty of this paper is the idea of relying on two types of objective functions for optimization: a similarity metric between the RGB input and the fused output to achieve a natural image appearance, and an auxiliary pedestrian detection error to help define relevant features of the human appearance and blend them into the output. We train a convolutional neural network using image samples from variable conditions (day and night) so that the network learns the appearance of humans in the different modalities and creates more robust results applicable in realistic situations. Our experiments show that the visibility of pedestrians is noticeably improved, especially in dark regions and at night. Compared to existing methods, we can better learn context and define fusion rules that focus on the pedestrian appearance, whereas that is not guaranteed with methods that focus on low-level image quality metrics.
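A schematic version of the two-term objective described above, assuming a PyTorch training loop. The L1 appearance term and the `det_loss_fn` hook are stand-ins for the paper's actual similarity metric and pedestrian detector; the weighting `alpha` is likewise an assumption.

```python
import torch

def fusion_loss(fused, rgb, det_loss_fn, targets, alpha=0.5):
    """Combined objective for visible/thermal fusion (illustrative).

    Two terms, following the idea in the abstract:
      * similarity of the fused output to the RGB input, so the result
        keeps a natural true-color appearance (L1 as a placeholder);
      * an auxiliary pedestrian detection loss on the fused image, so
        thermal detail is blended in where people appear.
    """
    appearance = torch.nn.functional.l1_loss(fused, rgb)
    detection = det_loss_fn(fused, targets)  # hypothetical detector hook
    return appearance + alpha * detection
```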
Visual Advantage of Enhanced Flight Vision System During NextGen Flight Test Evaluation
Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities. Nine test flights were flown in Gulfstream's G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast). Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view. The EFVS allowed pilots to view the runway environment, specifically runway lights, before they could see it OTW with natural vision.
Enhanced Flight Vision Systems Operational Feasibility Study Using Radar and Infrared Sensors
Approach and landing operations during periods of reduced visibility have plagued aircraft pilots since the beginning of aviation. Although techniques are currently available to mitigate some of the visibility conditions, these operations are still ultimately limited by the pilot's ability to "see" required visual landing references (e.g., markings and/or lights of threshold and touchdown zone) and require significant and costly ground infrastructure. Certified Enhanced Flight Vision Systems (EFVS) have shown promise to lift the obscuration veil. They allow the pilot to operate with enhanced vision, in lieu of natural vision, in the visual segment to enable equivalent visual operations (EVO). An aviation standards document was developed with industry and government consensus for using an EFVS for approach, landing, and rollout to a safe taxi speed in visibilities as low as 300 feet runway visual range (RVR). These new standards establish performance, integrity, availability, and safety requirements to operate in this regime without reliance on a pilot's or flight crew's natural vision by use of a fail-operational EFVS. A pilot-in-the-loop high-fidelity motion simulation study was conducted at NASA Langley Research Center to evaluate the operational feasibility, pilot workload, and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 feet RVR by use of vision system technologies on a head-up display (HUD) without need or reliance on natural vision. Twelve crews flew various landing and departure scenarios in 1800, 1000, 700, and 300 feet RVR. This paper details the non-normal results of the study, including objective and subjective measures of performance and acceptability. The study validated the operational feasibility of approach and departure operations, and success was independent of visibility conditions. Failures were handled within the lateral confines of the runway for all conditions tested. The fail-operational concept with the pilot in the loop needs further study.