Reading in the presence of macular disease: a mini-review.
Purpose: Reading is vital to full participation in modern society. To the millions of people suffering from macular disease that results in a central scotoma, reading is difficult and inefficient, making reading the primary goal for most patients seeking low vision rehabilitation. The goals of this review are to summarize the dependence of reading speed on several key visual and typographical factors, and to survey current methods and technologies for improving reading performance in people with macular disease. Important findings: In general, reading speed for people with macular disease depends on print size, text contrast, size of the visual span, temporal processing of letters, and oculomotor control. Attempts at improving reading speed by reducing the crowding effect between letters, words, or lines, or by optimizing properties of the typeface such as the presence of serifs or stroke-width thickness, have yielded little, with any improvement being modest at best. Currently, the most promising method for improving reading speed in people with macular disease is training, including perceptual learning or oculomotor training. Summary: The limitation on reading speed for people with macular disease is likely to be multi-factorial. Future studies should try to understand how different factors interact to limit reading speed, and whether different methods could be combined to produce a much greater benefit.
Improving Texture Categorization with Biologically Inspired Filtering
Within the domain of texture classification, a lot of effort has been spent on local descriptors, leading to many powerful algorithms. However, preprocessing techniques have received much less attention despite their important potential for improving overall classification performance. We address this question by proposing a novel, simple, yet very powerful biologically-inspired filtering (BF) scheme which simulates the behavior of the human retina. In the proposed approach, given a texture image, we apply a DoG filter to detect the edges and then split the filtered image into two "maps" corresponding to the two sides of those edges. The feature extraction step is then carried out on the two maps instead of the input image. Our algorithm has several advantages, including simplicity, robustness to illumination and noise, and discriminative power. Experimental results on three large texture databases show that, at an extremely low computational cost, the proposed method significantly improves the performance of many texture classification systems, notably in noisy environments. The source code of the proposed algorithm can be downloaded from https://sites.google.com/site/nsonvu/code.
Comment: 11 pages
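The DoG-plus-split construction described above can be sketched in a few lines. This is an illustrative reading of the abstract, not the authors' released code: the Gaussian scales and the half-wave-rectified split into an "on" and an "off" map are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bf_maps(image, sigma1=1.0, sigma2=2.0):
    """Biologically-inspired filtering sketch: a DoG filter detects
    the edges, then the response is half-wave rectified into two
    "maps", one per side of the edges. sigma1/sigma2 are illustrative
    values, not the paper's parameters."""
    img = image.astype(float)
    dog = gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)
    on_map = np.maximum(dog, 0.0)    # responses on one side of the edges
    off_map = np.maximum(-dog, 0.0)  # responses on the other side
    return on_map, off_map

# Toy usage: a vertical step edge yields two complementary maps,
# which downstream descriptors would process instead of the raw image.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
on_map, off_map = bf_maps(img)
```

Feature extraction (e.g. a local descriptor) is then run on `on_map` and `off_map` rather than on `img`.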
Review of Face Detection Systems Based on Artificial Neural Network Algorithms
Face detection is one of the most relevant applications of image processing and biometric systems. Artificial neural networks (ANNs) have long been used in image processing and pattern recognition, yet there is a lack of literature surveys giving an overview of the studies and research on the use of ANNs in face detection. This paper therefore provides a general review of face detection studies and systems based on different ANN approaches and algorithms, including the strengths and limitations of the surveyed work.
Comment: 16 pages, 12 figures, 1 table, IJMA Journal
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149-164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
Retinal Vessel Segmentation Using the 2-D Morlet Wavelet and Supervised Classification
We present a method for automated segmentation of the vasculature in retinal
images. The method produces segmentations by classifying each image pixel as
vessel or non-vessel, based on the pixel's feature vector. Feature vectors are
composed of the pixel's intensity and continuous two-dimensional Morlet wavelet
transform responses taken at multiple scales. The Morlet wavelet is capable of
tuning to specific frequencies, thus allowing noise filtering and vessel
enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding fast classification while still modeling complex decision surfaces, and compare its performance with that of the linear minimum squared error classifier. The probability distributions are estimated from a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on the publicly available DRIVE and STARE databases of manually labeled non-mydriatic images. On the DRIVE database it achieves an area under the receiver operating characteristic (ROC) curve of 0.9598, slightly superior to that reported for the method of Staal et al.
Comment: 9 pages, 7 figures and 1 table. Accepted for publication in IEEE Trans Med Imag; added copyright notice.
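The Bayesian pixel-classification step can be sketched as follows. For brevity each class-conditional likelihood here is a single Gaussian rather than the paper's Gaussian mixture, and the toy 2-D features stand in for the intensity-plus-Morlet feature vectors; all names and parameters are illustrative.

```python
import numpy as np

class GaussianBayesPixelClassifier:
    """Simplified sketch of Bayesian pixel classification: fit one
    Gaussian per class (vessel / non-vessel) from labeled pixels,
    then assign each pixel to the class with the highest posterior."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            # log prior + log normalization constant, precomputed
            logw = (np.log(len(Xc) / len(X))
                    - 0.5 * np.log(np.linalg.det(cov)))
            self.params_[c] = (mu, np.linalg.inv(cov), logw)
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, prec, logw = self.params_[c]
            d = X - mu
            # log posterior up to a constant: prior minus half the
            # Mahalanobis distance to the class mean
            scores.append(logw - 0.5 * np.einsum('ij,jk,ik->i', d, prec, d))
        return self.classes_[np.argmax(scores, axis=0)]

# Toy training set: class 1 ("vessel") features centered at 4, class 0 at 0.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = GaussianBayesPixelClassifier().fit(X, y)
```

In the paper's setting, each row of `X` would be a pixel's feature vector (intensity plus multiscale Morlet responses) and the mixture densities would be fitted with EM.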
Fuzzy spectral and spatial feature integration for classification of nonferrous materials in hyperspectral data
Hyperspectral data allows the construction of more elaborate models for sampling the properties of nonferrous materials than the standard RGB color representation. In this paper, nonferrous waste materials are studied, as they cannot be sorted by classical procedures due to their color, weight and shape similarities. The experimental results presented in this paper reveal that factors such as the varying levels of oxidization of the waste materials and the slight differences in their chemical composition preclude the use of spectral features in a simplistic manner for robust material classification. To address these problems, the proposed FUSSER (fuzzy spectral and spatial classifier) algorithm detailed in this paper merges the spectral and spatial features into a combined feature vector that samples the properties of the nonferrous materials better than single-pixel spectral features when used to construct multivariate Gaussian distributions. This approach allows the implementation of statistical region merging techniques to increase the performance of the classification process. To achieve an efficient implementation, the dimensionality of the hyperspectral data is reduced by constructing bio-inspired spectral fuzzy sets that minimize the amount of redundant information contained in adjacent hyperspectral bands. The experimental results indicate that the proposed algorithm increased the overall classification rate from 44% using RGB data to 98% when the spectral-spatial features are used for nonferrous material classification.
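The spectral-spatial combination can be illustrated roughly as follows. The crisp averaging of adjacent bands below is only a stand-in for the paper's bio-inspired fuzzy spectral sets, and the group and window sizes are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_spatial_features(cube, group=4, win=3):
    """Illustrative sketch of the spectral-spatial idea behind FUSSER
    (not the paper's exact algorithm): reduce dimensionality by
    averaging groups of adjacent, highly-correlated bands, then
    concatenate each pixel's reduced spectrum with its local spatial
    mean to form the combined feature vector."""
    h, w, b = cube.shape
    b_trim = (b // group) * group          # drop bands that don't fill a group
    reduced = cube[:, :, :b_trim].reshape(h, w, b_trim // group, group).mean(axis=3)
    # spatial component: per-band local mean over a win x win neighborhood
    spatial = np.stack([uniform_filter(reduced[:, :, i], size=win)
                        for i in range(reduced.shape[2])], axis=2)
    return np.concatenate([reduced, spatial], axis=2)

# Toy hyperspectral cube: 16x16 pixels, 32 bands -> 16-D combined features.
cube = np.random.rand(16, 16, 32)
feats = spectral_spatial_features(cube)
```

The combined vectors would then feed the multivariate Gaussian models and region-merging stage described in the abstract.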
Engineering data compendium. Human perception and performance. User's guide
The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.
Improved wolf algorithm on document images detection using optimum mean technique
Detecting text in handwritten historical documents provides high-level features for the challenging problem of handwriting recognition. Such handwriting often contains noise, faint or incomplete strokes, strokes with gaps, and competing lines when embedded in a table or form, making it unsuitable for local line-following algorithms or the associated binarization schemes. In this paper, a method based on an optimum threshold value, named the Optimum Mean method, is presented. The Wolf method fails to detect thin text in non-uniform input images; the proposed method overcomes this problem by setting a maximum threshold value using the optimum mean. In the reported evaluation, the proposed method obtained a higher F-measure (74.53) and PSNR (14.77) and a lower NRM (0.11) than the Wolf method. In conclusion, the proposed method effectively solves the Wolf method's problem and produces a high-quality output image.
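The baseline being modified here is the Wolf-Jolion local threshold. The abstract does not give the Optimum Mean formula itself, so the sketch below shows only the standard Wolf threshold, with illustrative window size and k value.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wolf_threshold(img, win=15, k=0.5):
    """Wolf-Jolion local binarization (the baseline the paper improves
    on): T = m - k*(1 - s/R)*(m - M), where m and s are the local mean
    and standard deviation, R is the maximum s over the image, and M is
    the image minimum. win and k are typical, not tuned, values."""
    img = img.astype(float)
    m = uniform_filter(img, win)
    s = np.sqrt(np.maximum(uniform_filter(img ** 2, win) - m ** 2, 0.0))
    R = s.max() if s.max() > 0 else 1.0
    M = img.min()
    T = m - k * (1.0 - s / R) * (m - M)
    return img > T  # True where the pixel is above its local threshold

# Toy image: a faint dark stroke on a brighter page.
page = np.full((32, 32), 200.0)
page[10:12, 5:25] = 40.0
binary = wolf_threshold(page)
```

The paper's modification replaces part of this computation with an optimum-mean-based maximum threshold so that thin, faint strokes in non-uniform images are still detected.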
Light environment - A. Visible light. B. Ultraviolet light
Visible and ultraviolet light environment as related to human performance and safety during space missions.