
    Colour Text Segmentation in Web Images Based on Human Perception

    There is a significant need to extract and analyse the text in images on Web documents, for effective indexing, semantic analysis and even presentation by non-visual means (e.g., audio). This paper argues that the challenging segmentation stage for such images benefits from a human perspective of colour perception, in preference to RGB colour space analysis. The proposed approach enables the segmentation of text in complex situations, such as in the presence of varying colour and texture (in both characters and background). More precisely, characters are segmented as distinct regions with separate chromaticity and/or lightness by performing a layer decomposition of the image. The method described here results from the authors' systematic approach to approximating the characteristics of human colour perception for the identification of character regions. In this instance, the image is decomposed by performing histogram analysis of Hue and Lightness in the HLS colour space, then merging using information on human discrimination of wavelength and luminance.
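The hue-histogram stage of the layer decomposition can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the bin count, the minimum-coverage threshold, and the function name are all assumptions for illustration.

```python
# Sketch of hue-based layer decomposition via histogram analysis.
# Bin count and min_fraction threshold are illustrative assumptions,
# not values from the paper.
import colorsys
from collections import Counter

def hue_layers(pixels, hue_bins=36, min_fraction=0.05):
    """Group RGB pixels (0-255 tuples) into layers by dominant hue bin."""
    hist = Counter()
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
        hist[int(h * hue_bins) % hue_bins] += 1
    total = len(pixels)
    # Keep hue bins covering at least min_fraction of the image as layers.
    return sorted(b for b, n in hist.items() if n / total >= min_fraction)

# Example: red "text" pixels against a blue "background".
pixels = [(200, 30, 30)] * 60 + [(30, 30, 200)] * 40
print(hue_layers(pixels))  # two hue layers: one red, one blue
```

A full implementation would repeat the analysis on the Lightness channel and merge bins whose hue or lightness differences fall below human discrimination thresholds, as the abstract describes.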

    Potential of wind turbines to elicit seizures under various meteorological conditions

    Purpose: To determine the potential risk of epileptic seizures from wind turbine shadow flicker under various meteorological conditions. Methods: We extend a previous model to include attenuation of sunlight by the atmosphere using the libradtran radiative transfer code. Results: Under conditions in which observers look toward the horizon with their eyes open, we find that there is risk when the observer is closer than 1.2 times the total turbine height on land, and 2.8 times the total turbine height in marine environments, the risk being limited by the size of the image of the sun's disc on the retina. When looking at the ground, where the shadow of the blade is cast, observers are at risk only at a distance of less than 36 times the blade width, the risk being limited by image contrast. If the observer views the horizon and closes their eyes, however, the stimulus size and contrast ratio are epileptogenic for solar elevation angles down to approximately 5°. Discussion: Large turbines rotate at a rate below that at which the flicker is likely to present a risk, although there is a risk from smaller turbines that interrupt sunlight more than three times per second. For the scenarios considered, we find the risk is negligible at a distance of more than about nine times the maximum height reached by the turbine blade, a distance similar to that in guidance from the United Kingdom planning authorities. © 2009 International League Against Epilepsy.
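The screening thresholds reported above can be encoded directly. The function name and interface below are assumptions for illustration; only the numeric thresholds (1.2× and 2.8× total height, 36× blade width, three interruptions per second) come from the abstract.

```python
# Screening check based on the distance thresholds stated in the abstract.
# This is an illustrative sketch, not the paper's radiative-transfer model.
def flicker_risk(distance_m, total_height_m, blade_width_m,
                 interruptions_per_s, marine=False, looking_at_ground=False):
    """Return True if the stated screening thresholds indicate possible risk."""
    if interruptions_per_s <= 3:
        return False  # slow flicker: below the rate likely to present risk
    if looking_at_ground:
        return distance_m < 36 * blade_width_m
    # Horizon viewing: 1.2x total height on land, 2.8x in marine settings.
    factor = 2.8 if marine else 1.2
    return distance_m < factor * total_height_m

print(flicker_risk(100, 120, 4, 5))  # 100 m < 1.2 * 120 m on land -> True
print(flicker_risk(200, 120, 4, 5))  # beyond the 144 m threshold -> False
```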

    Multidimensional scaling reveals a color dimension unique to 'color-deficient' observers

    Normal color vision depends on the relative rates at which photons are absorbed in three types of retinal cone: short-wave (S), middle-wave (M) and long-wave (L) cones, maximally sensitive near 430, 530 and 560 nm, respectively. But 6% of men exhibit an X-linked variant form of color vision called deuteranomaly [1]. Their color vision is thought to depend on S cones and two forms of long-wave cone (L, L′) [2,3]. The two types of L cone contain photopigments that are maximally sensitive near 560 nm, but their spectral sensitivities are different enough that the ratio of their activations gives a useful chromatic signal.
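The final point, that two pigments peaking near the same wavelength can still carry a chromatic signal, can be illustrated numerically. The Gaussian sensitivity model, the 4 nm peak offset, and the bandwidth below are illustrative assumptions, not values from the paper.

```python
# Illustrative model: two slightly offset Gaussian spectral sensitivities.
# Their activation ratio varies with wavelength, so it encodes chromatic
# information even though both peaks lie near 560 nm.
import math

def sensitivity(wavelength_nm, peak_nm, width_nm=40.0):
    """Toy Gaussian spectral sensitivity (illustrative assumption)."""
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))

def activation_ratio(wavelength_nm, peak_l=560.0, peak_lp=556.0):
    """Ratio of L to L' activation at a given wavelength."""
    return sensitivity(wavelength_nm, peak_l) / sensitivity(wavelength_nm, peak_lp)

# The ratio differs on either side of the peaks, so it signals wavelength.
print(round(activation_ratio(540), 3), round(activation_ratio(580), 3))
```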

    Colour displays for categorical images

    We propose a method for identifying a set of colours for displaying 2-D and 3-D categorical images when the categories are unordered labels. The principle is to find maximally distinct sets of colours. We either generate colours sequentially, to maximise the dissimilarity or distance between a new colour and the set of colours already chosen, or use a simulated annealing algorithm to find a set of colours of specified size. In both cases, we use a Euclidean metric on the perceptual colour space, CIE-LAB, to specify distances.
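The sequential strategy can be sketched as a greedy farthest-point selection in Lab coordinates. The candidate grid and its ranges are illustrative assumptions; the simulated-annealing alternative mentioned above is not shown.

```python
# Greedy farthest-point selection of maximally distinct colours in CIE-LAB.
# The candidate grid is an illustrative assumption, not the paper's sampling.
import itertools
import math

def distinct_colours(candidates, k):
    """Sequentially pick k colours maximising the minimum pairwise distance."""
    chosen = [candidates[0]]
    while len(chosen) < k:
        # Add the candidate farthest (Euclidean in L*, a*, b*) from the set.
        best = max(candidates,
                   key=lambda c: min(math.dist(c, s) for s in chosen))
        chosen.append(best)
    return chosen

# Coarse grid over a plausible Lab range (illustrative values).
grid = list(itertools.product(range(10, 100, 30),    # L*
                              range(-80, 81, 40),    # a*
                              range(-80, 81, 40)))   # b*
palette = distinct_colours(grid, 5)
print(palette)
```

Greedy selection has the property the abstract describes: each new colour is chosen for maximum dissimilarity to those already in the set, so earlier colours never change when the palette grows.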

    Lights, Camera, Action! Exploring Effects of Visual Distractions on Completion of Security Tasks

    Human errors in performing security-critical tasks are typically blamed on the complexity of those tasks. However, such errors can also occur because of (possibly unexpected) sensory distractions. A sensory distraction that produces negative effects can be abused by an adversary that controls the environment. Meanwhile, a distraction with positive effects can be artificially introduced to improve user performance. The goal of this work is to explore the effects of visual stimuli on the performance of security-critical tasks. To this end, we experimented with a large number of subjects who were exposed to a range of unexpected visual stimuli while attempting to perform Bluetooth pairing. Our results clearly demonstrate substantially increased task completion times and markedly lower task success rates. These negative effects are noteworthy, especially when contrasted with prior results on audio distractions, which had positive effects on performance of similar tasks. Experiments were conducted in a novel (fully automated and completely unattended) experimental environment. This yielded more uniform experiments, better scalability, and significantly lower financial and logistical burdens. We discuss this experience, including the benefits and limitations of the unattended automated experiment paradigm.

    Modern technologies for the production of cheese enriched with Omega-3 fatty acids

    Thermochromic films of MgxV1-xO2 were made by reactive dc magnetron sputtering onto heated glass. The metal-insulator transition temperature decreased by ∼3 K/at.% Mg, while the optical transmittance increased concomitantly. Specifically, the transmittance of visible light and of solar radiation was enhanced by ∼10% when the Mg content was ∼7 at.%. Our results point to the usefulness of these films for energy-efficient fenestration.

    A comparative evaluation of interactive segmentation algorithms

    In this paper we present a comparative evaluation of four popular interactive segmentation algorithms. The evaluation was carried out as a series of user experiments, in which participants were tasked with extracting 100 objects from a common dataset: 25 with each algorithm, constrained within a time limit of 2 min for each object. To facilitate the experiments, a "scribble-driven" segmentation tool was developed to enable interactive image segmentation by simply marking areas of foreground and background with the mouse. As the participants refined and improved their respective segmentations, the corresponding updated segmentation mask was stored along with the elapsed time. We then collected and evaluated each recorded mask against a manually segmented ground truth, thus allowing us to gauge segmentation accuracy over time. Two benchmarks were used for the evaluation: the well-known Jaccard index for measuring object accuracy, and a new fuzzy metric, proposed in this paper, designed for measuring boundary accuracy. Analysis of the experimental results demonstrates the effectiveness of the suggested measures and provides valuable insights into the performance and characteristics of the evaluated algorithms.
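The Jaccard index used for object accuracy is standard and easy to sketch on binary masks, here flattened to 0/1 lists; the fuzzy boundary metric proposed in the paper is not reproduced.

```python
# Jaccard index (intersection over union) between a predicted segmentation
# mask and a ground-truth mask, both as flat binary lists of equal length.
def jaccard(mask_a, mask_b):
    """Intersection over union; 1.0 by convention when both masks are empty."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(jaccard(pred, truth))  # 2 overlapping pixels / 4 in union = 0.5
```

Tracking this score against elapsed time for each stored mask yields the accuracy-over-time curves the evaluation describes.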