How well can people use different color attributes?
Two psychophysical experiments were conducted to analyze the role of color attributes in simple tasks involving color matching and discrimination. In Experiment I, observers made color matches using three different adjustment control methods. The results showed that the Lightness, Chroma, Hue (LCH) and the Lightness, redness/greenness, blueness/yellowness ({L, r/g, y/b}) adjustment controls elicited significantly better performance than the display RGB controls in terms of both accuracy and time, but were not significantly different from each other. Expert observers performed significantly better than naive observers in terms of accuracy. Experiment II was a replication and extension of Melgosa et al.'s experiment, in which observers judged differences and similarities for color attributes in pairs of colored patches. At a 95% confidence level, the results from judging differences were significantly better than those from judging similarities. Hue and Lightness were significantly more identifiable than Chroma, r/g, and y/b. For all observers, lightness differences were more easily detected for less chromatic pairs than for more chromatic ones. With respect to the size of the color differences, larger hue differences were more easily identifiable than smaller ones. Experts could more readily identify constant lightness and chroma for large color differences, while constant hue was more identifiable for small color differences. No significant differences were found between males and females. These results indicate that people do not have ready access to lower-level color descriptors such as the common attributes used to define color spaces, and that higher-level psychological processing involving cognition and language may be necessary for even apparently simple tasks involving color matching and describing color differences.
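The LCH and {L, r/g, y/b} controls compared above are two coordinatizations of the same CIELAB space: chroma and hue are just the polar form of the opponent a* (redness/greenness) and b* (yellowness/blueness) axes. A minimal sketch of that relationship (standard colorimetry, not code from the study):

```python
import math

def lab_to_lch(L, a, b):
    """Convert CIELAB (L*, a*, b*) to cylindrical LCH coordinates."""
    C = math.hypot(a, b)                        # chroma: radial distance in the a*-b* plane
    h = math.degrees(math.atan2(b, a)) % 360.0  # hue angle in degrees, [0, 360)
    return L, C, h

def lch_to_lab(L, C, h):
    """Invert the transform: hue/chroma back to the opponent axes a*, b*."""
    a = C * math.cos(math.radians(h))
    b = C * math.sin(math.radians(h))
    return L, a, b
```

So an observer turning the "chroma" knob moves radially in the a*–b* plane, while the r/g and y/b knobs move along its Cartesian axes; lightness is shared by both control sets.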
Lightness Dependencies and the Effect of Texture on Suprathreshold Lightness Tolerances
A psychophysical experiment was performed to determine the effects of lightness dependency on suprathreshold lightness tolerances. Using a pass/fail method of constant stimuli, lightness tolerance thresholds were measured with achromatic stimuli centered at CIELAB L* = 10, 20, 40, 60, 80, and 90, using 44 observers. In addition to measuring tolerance thresholds for uniform samples, lightness tolerances were measured using stimuli with a simulated texture of thread wound on a card. A texture intermediate between the wound thread and the uniform stimuli was also used. A computer-controlled CRT was used to perform the experiments. Lightness tolerances were found to increase with increasing lightness of the test stimuli. For the uniform stimuli this effect was only evident at the higher lightnesses; for the textured stimuli, the trend was evident throughout the whole lightness range. Texture increased the tolerance thresholds by a factor of almost 2 compared to the uniform stimuli, and the intermediate texture produced thresholds between those of the uniform and fully textured stimuli. Transforming the results into a plot of threshold vs. intensity produced results that were more uniform across the three conditions. This may indicate that CIELAB is not the best space in which to model these effects.
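The "threshold vs. intensity" replot mentioned above amounts to converting CIELAB L* back to relative luminance Y via the standard CIE inverse. A small sketch of that conversion (the CIE formula itself, not the study's analysis code):

```python
def lstar_to_Y(L_star, Y_n=100.0):
    """CIE 1976 lightness L* to relative luminance Y (the 'intensity' axis
    used when replotting thresholds). Y_n is the white-point luminance."""
    eps = 216.0 / 24389.0   # (6/29)**3, the linear/cubic crossover
    kappa = 24389.0 / 27.0  # ~903.3, slope of the linear segment
    f = (L_star + 16.0) / 116.0
    Y = f ** 3 if f ** 3 > eps else L_star / kappa
    return Y * Y_n
```

Because L* compresses luminance roughly as a cube root, equal steps in L* at the dark end (L* = 10) correspond to far smaller intensity steps than at the light end (L* = 90), which is why the three texture conditions can look more uniform on an intensity axis.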
Perceptual Display Strategies of Hyperspectral Imagery Based on PCA and ICA
This study investigated appropriate methodologies for displaying hyperspectral imagery based on knowledge of human color vision as applied to Hyperion and AVIRIS data. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) were used to reduce the data dimensionality in order to make the data more amenable to visualization in three-dimensional color space. In addition, these two methods were chosen because of their underlying relationships to the opponent color model of human color perception. PCA- and ICA-based visualization strategies were then explored by mapping the first three PCs or ICs to several opponent color spaces, including CIELAB, HSV, YCbCr, and YUV. The gray world assumption, which states that given an image with a sufficient amount of color variation the average color should be gray, was used to set the mapping origins. The rendered images are well color balanced and can offer a first-look capability or initial classification for a wide variety of spectral scenes.
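The strategy described above can be sketched compactly. The code below is an illustrative reconstruction, not the paper's implementation: it assumes PC1 drives lightness and PCs 2–3 drive the opponent axes, and the particular rescaling choices are my own. Centering PCs 2–3 at zero enforces the gray-world assumption, since a* = b* = 0 is neutral gray in CIELAB.

```python
import numpy as np

def pca_to_lab_display(cube):
    """Map a hyperspectral cube (rows, cols, bands) to a CIELAB-like image
    by assigning the first three principal components to L*, a*, b*."""
    r, c, nb = cube.shape
    X = cube.reshape(-1, nb).astype(float)
    X -= X.mean(axis=0)                     # center spectra (zero-mean scores)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = U[:, :3] * S[:3]                  # scores of the first three PCs
    # PC1 -> L* in [0, 100]; PCs 2-3 -> symmetric opponent ranges about zero
    L = 100.0 * (pcs[:, 0] - pcs[:, 0].min()) / np.ptp(pcs[:, 0])
    ab = 100.0 * pcs[:, 1:3] / np.abs(pcs[:, 1:3]).max(axis=0)
    return np.column_stack([L, ab]).reshape(r, c, 3)
```

Because the centered PC scores have zero mean, the average of the a* and b* channels is gray by construction, which is the mapping-origin choice the abstract describes. An ICA variant would substitute independent components for the PC scores.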
Spatio-Velocity CSF as a Function of Retinal Velocity Using Unstabilized Stimuli
LCD televisions have LC response times and hold-type data cycles that contribute to the appearance of blur when objects are in motion on the screen. New algorithms based on studies of the human visual system's sensitivity to motion are being developed to compensate for these artifacts. This paper describes a series of experiments that incorporate eye tracking in the psychophysical determination of spatio-velocity contrast sensitivity, in order to build on the 2D spatio-velocity contrast sensitivity function (CSF) model first described by Kelly and later refined by Daly. We explore whether the velocity of the eye has an additional effect on sensitivity and whether the model can be used to predict sensitivity to more complex stimuli. A total of five experiments were performed in this research. The first four utilized Gabor patterns with three different spatial and temporal frequencies and were used to investigate and/or populate the 2D spatio-velocity CSF. The fifth utilized a disembodied edge and was used to validate the model. All experiments used a two-interval forced choice (2IFC) method of constant stimuli guided by a QUEST routine to determine thresholds. The results showed that sensitivity to motion was determined by the retinal velocity produced by the Gabor patterns, regardless of the type of eye movement. Based on the results of these experiments, the parameters of the spatio-velocity CSF model were optimized to our experimental conditions.
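The Kelly model as parameterized by Daly treats contrast sensitivity as a function of spatial frequency and retinal velocity. The sketch below uses the commonly quoted nominal constants (s1, s2, p1); these are assumptions on my part, and in the paper's workflow the scaling parameters c0–c2 would be re-optimized to the experimental conditions, as the abstract states.

```python
import math

def spatio_velocity_csf(rho, v_r, c0=1.0, c1=1.0, c2=1.0):
    """Kelly/Daly 2D spatio-velocity CSF (sketch with nominal constants).

    rho : spatial frequency (cycles/deg)
    v_r : retinal velocity (deg/s)
    c0-c2 : Daly's scaling constants, refit per experiment."""
    s1, s2, p1 = 6.1, 7.3, 45.9           # nominal Kelly/Daly values (assumed)
    v = max(c2 * v_r, 1e-6)               # guard the log at zero velocity
    k = s1 + s2 * abs(math.log10(v / 3.0)) ** 3
    rho_max = p1 / (v + 2.0)              # peak frequency falls as velocity rises
    return k * c0 * v * (c1 * 2.0 * math.pi * rho) ** 2 \
        * math.exp(-c1 * 4.0 * math.pi * rho / rho_max)
```

For a fixed retinal velocity the function is band-pass in spatial frequency, and increasing velocity shifts the sensitive band toward lower frequencies, which is the behavior the eye-tracking experiments probe.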
Visualization of High-dimensional Remote-Sensing Data Products
This study investigated appropriate methodologies for displaying hyperspectral imagery based on knowledge of human color vision as applied to Hyperion and AVIRIS data. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) were used to reduce the data dimensionality; these two methods were also chosen because of their underlying relationships to the opponent color model of human color perception. PCA- and ICA-based strategies were then explored by mapping the first three PCs or ICs to several opponent color spaces, including CIELAB, HSV, YCbCr, and YIQ. The gray world assumption, which states that given an image with a sufficient amount of color variation the average color should be gray, was used to set the mapping origins. The rendered images are well color balanced and can offer a first-look capability or initial classification for a wide variety of spectral scenes.
Pervasive gaps in Amazonian ecological research
Biodiversity loss is one of the main challenges of our time,1,2 and attempts to address it require a clear understanding of how ecological communities respond to environmental change across time and space.3,4 While the increasing availability of global databases on ecological communities has advanced our knowledge of biodiversity sensitivity to environmental changes,5–7 vast areas of the tropics remain understudied.8–11 In the American tropics, Amazonia stands out as the world's most diverse rainforest and the primary source of Neotropical biodiversity,12 but it remains among the least known forests in America and is often underrepresented in biodiversity databases.13–15 To worsen this situation, human-induced modifications16,17 may eliminate pieces of the Amazon's biodiversity puzzle before we can use them to understand how ecological communities are responding. To increase generalization and applicability of biodiversity knowledge,18,19 it is thus crucial to reduce biases in ecological research, particularly in regions projected to face the most pronounced environmental changes. We integrate ecological community metadata of 7,694 sampling sites for multiple organism groups in a machine learning model framework to map the research probability across the Brazilian Amazonia, while identifying the region's vulnerability to environmental change. 15%–18% of the most neglected areas in ecological research are expected to experience severe climate or land use changes by 2050. This means that unless we take immediate action, we will not be able to establish their current status, much less monitor how it is changing and what is being lost.
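The core modeling step here is mapping "research probability": predicting, from environmental covariates, how likely a grid cell is to contain a sampling site. The abstract does not specify the algorithm, so the sketch below is a minimal stand-in using logistic regression on synthetic covariates; the covariate meanings and all numbers are illustrative assumptions, not the paper's model.

```python
import numpy as np

def fit_research_probability(X, sampled, lr=0.1, steps=2000):
    """Fit P(cell contains a sampling site) from covariate matrix X
    via logistic regression trained with batch gradient descent.
    (Stand-in for the paper's unspecified ML framework.)"""
    Xb = np.column_stack([np.ones(len(X)), X])       # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))            # predicted probabilities
        w -= lr * Xb.T @ (p - sampled) / len(Xb)     # gradient of log-loss
    def predict(Xnew):
        Xn = np.column_stack([np.ones(len(Xnew)), Xnew])
        return 1.0 / (1.0 + np.exp(-Xn @ w))
    return predict
```

Once fit, the model is evaluated over every grid cell of the region; cells with low predicted research probability but high projected climate or land-use change are the "neglected yet vulnerable" areas the abstract quantifies.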