
    Adaptive restoration of text images containing touching and broken characters

    For document processing systems, automated data entry is generally performed by optical character recognition (OCR) systems. To make such systems practical, reliable OCR is essential. However, distortions in document images cause character recognition errors, thereby reducing the accuracy of OCR systems. In document images, most OCR errors are caused by broken and touching characters. This thesis presents an adaptive system to restore text images distorted by touching and broken characters. The adaptive system uses the distorted text image and the output from an OCR system to generate a training character image. Using the training image and the distorted image, the system trains an adaptive restoration filter and then uses the trained filter to restore the distorted text image. To demonstrate its performance, the technique was applied to several distorted images containing touching or broken characters. The results show that this technique can improve both the pixel and OCR accuracy of distorted text images containing touching or broken characters.
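    The training-and-restoration loop described above can be illustrated with a simple least-squares (Wiener-style) linear filter. This is only a hedged sketch of the idea, not the thesis's actual filter design; the window size, the binarization threshold, and the function names are assumptions.

    ```python
    import numpy as np

    def train_restoration_filter(distorted, training, win=5):
        """Fit a linear restoration filter by least squares.

        distorted, training: 2-D float arrays of the same shape
        (the distorted text image and the OCR-derived training image).
        Returns win*win filter coefficients.
        """
        pad = win // 2
        padded = np.pad(distorted, pad, mode='edge')
        rows = [padded[i:i + win, j:j + win].ravel()
                for i in range(distorted.shape[0])
                for j in range(distorted.shape[1])]
        A = np.asarray(rows)          # one local window per pixel
        b = training.ravel()          # desired (training) pixel values
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return coeffs

    def apply_restoration_filter(distorted, coeffs, win=5):
        """Restore an image with the trained filter and binarize at 0.5."""
        pad = win // 2
        padded = np.pad(distorted, pad, mode='edge')
        out = np.empty(distorted.shape, dtype=float)
        for i in range(distorted.shape[0]):
            for j in range(distorted.shape[1]):
                out[i, j] = padded[i:i + win, j:j + win].ravel() @ coeffs
        return (out > 0.5).astype(float)
    ```

    In practice the training image would be rendered from the OCR output and aligned with the distorted image before the fit, as the abstract describes.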

    Modeling Light-Extraction Characteristics of Packaged Light-Emitting Diodes

    We employ a Monte Carlo ray-tracing technique to model light-extraction characteristics of light-emitting diodes. By relaxing restrictive assumptions on photon traversal history, our method improves upon available analytical models for estimating light-extraction efficiencies from bare LED chips, and enhances modeling capabilities by realistically treating the various processes that photons can encounter in a packaged LED. Our method is not only capable of calculating extraction efficiencies, but can also provide extensive statistical information on photon extraction processes and predict LED spatial emission characteristics.
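    As an illustration of the Monte Carlo approach, the sketch below estimates the extraction efficiency of a bare chip under strong simplifying assumptions: isotropic emission, a single planar top surface, unpolarized Fresnel transmission, a fixed bulk-absorption probability per pass, and directions re-randomized at every internal reflection (as for a scattering surface). It is not the paper's packaged-LED model, and all parameter values are placeholders.

    ```python
    import numpy as np

    def extraction_efficiency(n_chip=3.5, n_out=1.0, absorb_per_pass=0.1,
                              n_photons=100_000, max_bounces=50, rng=None):
        """Monte Carlo estimate of light-extraction efficiency for a bare chip
        (simplified illustration; all values are assumptions)."""
        rng = np.random.default_rng() if rng is None else rng
        sin_crit = n_out / n_chip                  # sine of the TIR critical angle
        escaped = 0
        for _ in range(n_photons):
            for _ in range(max_bounces):
                if rng.random() < absorb_per_pass:     # absorbed in the bulk
                    break
                cos_i = rng.random()                   # isotropic: cos(theta) uniform
                sin_i = np.sqrt(1.0 - cos_i**2)
                if sin_i >= sin_crit:                  # total internal reflection
                    continue
                # Snell's law and unpolarized Fresnel transmittance
                sin_t = sin_i * n_chip / n_out
                cos_t = np.sqrt(1.0 - sin_t**2)
                rs = ((n_chip * cos_i - n_out * cos_t) /
                      (n_chip * cos_i + n_out * cos_t)) ** 2
                rp = ((n_chip * cos_t - n_out * cos_i) /
                      (n_chip * cos_t + n_out * cos_i)) ** 2
                if rng.random() < 1.0 - 0.5 * (rs + rp):
                    escaped += 1                       # photon leaves the chip
                    break
        return escaped / n_photons

    print(extraction_efficiency(n_photons=20_000))
    ```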

    On the Computation of the Kullback-Leibler Measure for Spectral Distances

    Efficient algorithms for the exact and approximate computation of the symmetrical Kullback-Leibler measure for spectral distances are presented for linear predictive coding (LPC) spectra. An interpretation of this measure is given in terms of the poles of the spectra. The performance of the algorithms in terms of accuracy and computational complexity is assessed for the application of computing concatenation costs in unit-selection-based speech synthesis. With the same complexity and storage requirements, the exact method is superior in terms of accuracy.
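    For illustration, the symmetrical measure can be approximated numerically on a frequency grid directly from the LPC coefficients. The sketch below is this straightforward grid approximation, not the exact pole-based algorithm the paper derives; the normalization and grid size are assumptions.

    ```python
    import numpy as np
    from scipy.signal import freqz

    def lpc_power_spectrum(a, gain=1.0, n_freq=512):
        """Power spectrum of an all-pole LPC model gain/A(z) on a frequency grid."""
        w, h = freqz([gain], a, worN=n_freq)
        return w, np.abs(h) ** 2

    def symmetric_kl_distance(a1, a2, n_freq=512):
        """Grid-based approximation of the symmetrical Kullback-Leibler
        measure between two LPC spectra, each normalized to unit sum."""
        _, p = lpc_power_spectrum(a1, n_freq=n_freq)
        _, q = lpc_power_spectrum(a2, n_freq=n_freq)
        p /= p.sum()
        q /= q.sum()
        return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

    # Example: two second-order all-pole spectra with slightly shifted poles.
    a1 = np.poly([0.9 * np.exp(1j * 0.6), 0.9 * np.exp(-1j * 0.6)]).real
    a2 = np.poly([0.9 * np.exp(1j * 0.7), 0.9 * np.exp(-1j * 0.7)]).real
    print(symmetric_kl_distance(a1, a2))
    ```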

    Adaptive restoration of speckled SAR images


    Impact Ionization and Hot-Electron Injection Derived Consistently from Boltzmann Transport

    We develop a quantitative model of the impact-ionization and hot-electron-injection processes in MOS devices from first principles. We begin by modeling hot-electron transport in the drain-to-channel depletion region using the spatially varying Boltzmann transport equation, and we analytically find a self-consistent distribution function in a two-step process. From the electron distribution function, we calculate the probabilities of impact ionization and hot-electron injection as functions of channel current, drain voltage, and floating-gate voltage. We compare our analytical model results to measurements in long-channel devices. The model simultaneously fits both the hot-electron-injection and impact-ionization data. These analytical results yield an energy-dependent impact-ionization collision rate that is consistent with numerically calculated collision rates reported in the literature.
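    A toy numerical illustration of the last step: with an assumed exponential high-energy tail for the electron distribution and a Keldysh-style threshold form for the ionization collision rate, an impact-ionization probability per carrier can be estimated by a weighted integral over energy. This is emphatically not the paper's self-consistent Boltzmann-transport solution; the distribution, the rate form, and every constant below are assumptions.

    ```python
    import numpy as np

    def impact_ionization_probability(kTe_eV=0.3, E_th=1.7, E_max=6.0,
                                      keldysh_p=2.0, prefactor=1.0, n_pts=2000):
        """Toy estimate of the impact-ionization probability per carrier,
        assuming f(E) ~ exp(-E/kTe) and r(E) ~ (E - E_th)**p above threshold."""
        E = np.linspace(0.0, E_max, n_pts)
        f = np.exp(-E / kTe_eV)                                  # assumed tail
        r = np.where(E > E_th, prefactor * (E - E_th) ** keldysh_p, 0.0)
        # rate-weighted fraction of the carrier population above threshold
        return float(np.trapz(r * f, E) / np.trapz(f, E))

    # Rough trend with effective electron temperature (a proxy for drain bias):
    for kTe in (0.2, 0.3, 0.4):
        print(kTe, impact_ionization_probability(kTe_eV=kTe))
    ```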

    New adaptive iterative image restoration algorithm


    Likelihood-Ratio-Based Biometric Verification

    The paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that, for single-user verification, the likelihood ratio is optimal. Second, we show that, under some general conditions, decisions based on posterior probabilities and likelihood ratios are equivalent and result in the same receiver operating characteristic (ROC) curve. However, in a multi-user situation, these two methods lead to different average error rates. As a third result, we prove theoretically that, for multi-user verification, the use of the likelihood ratio is optimal in terms of average error rates. The superiority of this method is illustrated by experiments in fingerprint verification. It is shown that error rates below 10⁻³ can be achieved when using multiple fingerprints for template construction.
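    A minimal sketch of likelihood-ratio verification for fixed-length feature vectors, assuming Gaussian models for the genuine-user and background classes. The Gaussian assumption, the threshold, and all numbers below are illustrative, not the paper's models.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def log_likelihood_ratio(x, user_mean, user_cov, bg_mean, bg_cov):
        """log p(x | genuine user) - log p(x | background population)."""
        return (multivariate_normal.logpdf(x, user_mean, user_cov) -
                multivariate_normal.logpdf(x, bg_mean, bg_cov))

    def verify(x, user_model, bg_model, threshold=0.0):
        """Accept the claimed identity if the log-likelihood ratio exceeds a
        threshold chosen for the desired false-accept/false-reject trade-off."""
        return log_likelihood_ratio(x, *user_model, *bg_model) > threshold

    # Toy example with 2-D feature vectors.
    rng = np.random.default_rng(0)
    user_model = (np.array([1.0, 1.0]), 0.1 * np.eye(2))   # genuine-user model
    bg_model = (np.array([0.0, 0.0]), 1.0 * np.eye(2))     # background model
    genuine = rng.multivariate_normal(*user_model)
    impostor = rng.multivariate_normal(*bg_model)
    print(verify(genuine, user_model, bg_model),
          verify(impostor, user_model, bg_model))
    ```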