21,428 research outputs found

    Scale and Translation Invariant Methods for Enhanced Time-Frequency Pattern Recognition

    Time-frequency (t-f) analysis has clearly reached a certain maturity: one can now routinely produce striking visual representations of the joint time-frequency energy distribution of a signal. However, it has been difficult to exploit this rich source of information about the signal, especially for multidimensional signals. Properly constructed time-frequency distributions enjoy many desirable properties, yet attempts to incorporate t-f analysis results into pattern recognition schemes have not been notably successful to date. Aided by Cohen's scale transform, one may construct representations from the t-f results that are highly useful in pattern classification. Such methods can produce two-dimensional representations that are invariant to time shift, frequency shift, and scale changes. In addition, two-dimensional objects such as images can be represented in a like manner in a four-dimensional form. Even so, remaining extraneous variations often defeat the pattern classification approach. This paper presents a method based on noise subspace concepts. The noise subspace enhancement allows one to separate the desired invariant forms from extraneous variations, yielding much improved classification results. Examples from sound classification are discussed.
    Peer Reviewed. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/47350/1/11045_2004_Article_181150.pd
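    The scale invariance referred to in this abstract rests on Cohen's scale transform. As a brief sketch in standard notation (written here for orientation, not reproduced from the paper), the transform and its key property are:

        D_f(c) = \frac{1}{\sqrt{2\pi}} \int_0^{\infty} \frac{f(t)}{\sqrt{t}}\, e^{-i c \ln t}\, dt,
        \qquad
        g(t) = \sqrt{a}\, f(a t) \;\Rightarrow\; D_g(c) = e^{\,i c \ln a}\, D_f(c),
        \quad \text{so } |D_g(c)| = |D_f(c)|.

    The magnitude of the scale transform is therefore unchanged by time scaling, which is what allows scale-invariant features to be built from a time-frequency distribution.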

    Pigment Melanin: Pattern for Iris Recognition

    Iris recognition based on Visible Light (VL) imaging is a difficult problem because of light reflections from the cornea. Nonetheless, pigment melanin provides a rich feature source in VL that is unavailable in Near-Infrared (NIR) imaging. This is due to the biological spectroscopy of eumelanin, a chemical that is not stimulated in NIR. A plausible way to observe such patterns is an adaptive procedure that applies a variational technique to the image histogram. To describe the patterns, a shape analysis method is used to derive a feature code for each subject. An important question is to what extent the melanin patterns extracted from VL are independent of the iris texture in NIR. With this question in mind, the present investigation proposes fusion of features extracted from NIR and VL to boost recognition performance. We have collected our own database (UTIRIS), consisting of both NIR and VL images of 158 eyes of 79 individuals. This investigation demonstrates that the proposed algorithm is highly sensitive to the patterns of chromophores and improves the iris recognition rate.
    Comment: To be published in the Special Issue on Biometrics, IEEE Transactions on Instrumentation and Measurement, Volume 59, Issue 4, April 2010.
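    The abstract mentions fusing features extracted from NIR and VL imaging but does not spell out the scheme. The snippet below is only a generic illustration of feature-level fusion by per-modality normalization and concatenation, not the authors' method; the function names, the cosine matcher, and the feature lengths are all hypothetical.

        import numpy as np

        def fuse_features(nir_features: np.ndarray, vl_features: np.ndarray) -> np.ndarray:
            """Feature-level fusion: z-score normalize each modality, then concatenate."""
            def zscore(v: np.ndarray) -> np.ndarray:
                return (v - v.mean()) / (v.std() + 1e-8)
            return np.concatenate([zscore(nir_features), zscore(vl_features)])

        def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
            """Match two fused feature vectors; higher means more similar."""
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

        # Usage with random stand-in feature vectors (real systems would use iris feature codes)
        probe = fuse_features(np.random.rand(256), np.random.rand(256))
        gallery = fuse_features(np.random.rand(256), np.random.rand(256))
        score = cosine_similarity(probe, gallery)

    Normalizing each modality before concatenation keeps one feature source from dominating the match score, which is the usual motivation for this kind of fusion.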

    A survey of visual preprocessing and shape representation techniques

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and, most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

    DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks

    Despite a rapid rise in the quality of built-in smartphone cameras, their physical limitations (small sensor size, compact lenses, and the lack of specific hardware) prevent them from achieving the quality of DSLR cameras. In this work we present an end-to-end deep learning approach that bridges this gap by translating ordinary photos into DSLR-quality images. We propose learning the translation function using a residual convolutional neural network that improves both color rendition and image sharpness. Since the standard mean squared loss is not well suited for measuring perceptual image quality, we introduce a composite perceptual error function that combines content, color, and texture losses. The first two losses are defined analytically, while the texture loss is learned in an adversarial fashion. We also present DPED, a large-scale dataset consisting of real photos captured by three different phones and one high-end reflex camera. Our quantitative and qualitative assessments reveal that the enhanced image quality is comparable to that of DSLR-taken photos, while the methodology generalizes to any type of digital camera.
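    The composite perceptual error function is described as a weighted combination of content, color, and adversarial texture terms. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation: feature_net (a pretrained feature extractor such as a VGG layer), discriminator, and the loss weights are all assumptions.

        import torch
        import torch.nn.functional as F

        def gaussian_blur(x: torch.Tensor, ksize: int = 21, sigma: float = 3.0) -> torch.Tensor:
            """Blur each channel so that only low-frequency color information remains."""
            coords = torch.arange(ksize, dtype=x.dtype, device=x.device) - ksize // 2
            g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
            g = g / g.sum()
            kernel = torch.outer(g, g).view(1, 1, ksize, ksize).repeat(x.shape[1], 1, 1, 1)
            return F.conv2d(x, kernel, padding=ksize // 2, groups=x.shape[1])

        def composite_loss(enhanced, target, feature_net, discriminator,
                           w_content=1.0, w_color=0.1, w_texture=0.4):
            """Weighted sum of content, color, and adversarial texture terms (weights illustrative)."""
            # Content: distance between deep features of the enhanced and target images
            content = F.mse_loss(feature_net(enhanced), feature_net(target))
            # Color: MSE between blurred images, which ignores fine texture detail
            color = F.mse_loss(gaussian_blur(enhanced), gaussian_blur(target))
            # Texture: adversarial term; the generator tries to make its output look "real"
            logits = discriminator(enhanced.mean(dim=1, keepdim=True))
            texture = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
            return w_content * content + w_color * color + w_texture * texture

    Consistent with the abstract, the content and color terms are computed analytically, while the texture term depends on a discriminator that would be trained jointly in an adversarial setup; the specific weights here are placeholders.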