298 research outputs found

    Accurate and robust image superresolution by neural processing of local image representations

    Image superresolution involves processing an image sequence to generate a still image with higher resolution. Classical approaches, such as Bayesian MAP methods, require iterative minimization procedures with high computational costs. Recently, the authors proposed a method to tackle this problem based on a hybrid MLP-PNN architecture. In this paper, we present a novel superresolution method, based on an evolution of this concept, that incorporates local image models. A neural processing stage receives as input the values of model coefficients on local windows. The dimensionality of the data is first reduced by applying PCA. An MLP, trained on synthetic sequences with various amounts of noise, estimates the high-resolution image data. The effect of varying the dimension of the network input space is examined, showing a complex, structured behavior. Quantitative results are presented showing the accuracy and robustness of the proposed method.
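
    A minimal sketch of the PCA-plus-MLP pipeline the abstract describes, using scikit-learn on toy data; the window size, PCA dimension, and network shape below are illustrative assumptions, not the authors' settings.

```python
# Sketch: PCA-reduced local windows fed to an MLP regressor (illustrative
# parameters; the paper's actual window size, PCA dimension and network
# architecture are not reproduced here).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy training data: flattened 5x5 low-resolution windows -> one
# high-resolution target value per window.
X_windows = rng.normal(size=(1000, 25))
y_highres = rng.normal(size=1000)

# Reduce the dimensionality of the window data with PCA.
pca = PCA(n_components=8)
X_reduced = pca.fit_transform(X_windows)

# Train an MLP on the reduced coefficients (the paper trains on synthetic
# sequences with varying noise levels).
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X_reduced, y_highres)

# Estimate high-resolution values for new windows.
X_new = rng.normal(size=(10, 25))
y_est = mlp.predict(pca.transform(X_new))
print(y_est.shape)
```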

    Analysis of eigendecomposition for sets of correlated images at different resolutions

    Eigendecomposition is a common technique performed on sets of correlated images in a number of computer vision and robotics applications. Unfortunately, computing an eigendecomposition can become prohibitively expensive when dealing with very high resolution images. While reducing the resolution of the images reduces the computational expense, it is not known how this affects the quality of the resulting eigendecomposition. The work presented here gives the theoretical background for quantifying the effects of varying image resolution on the eigendecomposition computed from those images. A computationally efficient algorithm for this eigendecomposition is proposed using the derived analytical expressions. Examples show that this algorithm performs very well on arbitrary video sequences. This work was supported by the National Imagery and Mapping Agency under contract no. NMA201-00-1-1003 and through collaborative participation in the Robotics Consortium sponsored by the U.S. Army Research Laboratory under the Collaborative Technology Alliance Program, Cooperative Agreement DAAD19-01-2-0012.
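
    A rough sketch of the setting the abstract studies: the eigendecomposition of a correlated image set, computed at full and at reduced resolution so the two can be compared. The toy data and the 2x2 block-averaging are assumptions; the paper's analytical expressions relating the two decompositions are not reproduced here.

```python
# Sketch: eigendecomposition of an image set via SVD at two resolutions.
import numpy as np

rng = np.random.default_rng(0)
n_images, h, w = 50, 64, 64
images = rng.normal(size=(n_images, h, w))

def eigendecompose(imgs):
    """Return eigenimages and eigenvalues of a mean-centred image set."""
    X = imgs.reshape(len(imgs), -1)
    X = X - X.mean(axis=0)
    # SVD of the data matrix yields the eigenvectors of the sample covariance.
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    return vt, s**2 / (len(imgs) - 1)

eig_full, vals_full = eigendecompose(images)

# Reduce resolution by averaging 2x2 pixel blocks, then decompose again.
low = images.reshape(n_images, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
eig_low, vals_low = eigendecompose(low)

# Compare the leading eigenvalue spectra at the two resolutions.
print(vals_full[:5])
print(vals_low[:5])
```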

    Quadtree-based eigendecomposition for pose estimation in the presence of occlusion and background clutter

    Eigendecomposition-based techniques are popular for a number of computer vision problems, e.g., object and pose estimation, because they are purely appearance based and require few on-line computations. Unfortunately, they also typically require an unobstructed view of the object whose pose is being detected. The presence of occlusion and background clutter precludes the use of the normalizations that are typically applied and significantly alters the appearance of the object under detection. This work presents an algorithm based on applying eigendecomposition to a quadtree representation of the image dataset used to describe the appearance of an object. This allows decisions concerning the pose of an object to be based on only those portions of the image in which the algorithm has determined that the object is not occluded. The accuracy and computational efficiency of the proposed approach are evaluated on 16 different objects with up to 50% of the object being occluded and on images of ships in a dockyard.
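
    An illustrative sketch of the blockwise idea, at a single quadtree level: an eigenbasis is learned per image block, and a test block with a large reconstruction residual can be flagged as likely occluded and excluded from the pose decision. Block size, subspace dimension, and the toy data are assumptions, not the paper's configuration.

```python
# Sketch: per-block eigen-projection for occlusion screening (one quadtree
# level; the paper's full recursive quadtree is not implemented here).
import numpy as np

rng = np.random.default_rng(0)
n_train, size, block = 40, 32, 16
train = rng.normal(size=(n_train, size, size))

def blocks(img, b):
    """Split an image into non-overlapping b x b blocks (one quadtree level)."""
    return [img[i:i + b, j:j + b].ravel()
            for i in range(0, img.shape[0], b)
            for j in range(0, img.shape[1], b)]

# Learn a mean and a small eigenbasis for each block position.
bases = []
for idx in range((size // block) ** 2):
    X = np.array([blocks(im, block)[idx] for im in train])
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    bases.append((mu, vt[:5]))        # 5-dimensional subspace per block

# Score a test image block by block; an unusually large residual suggests
# the block is occluded and should not inform the pose decision.
test = train[0] + rng.normal(scale=0.1, size=(size, size))
test[:16, :16] += 5.0                 # simulate an occluder in one quadrant
for idx, v in enumerate(blocks(test, block)):
    mu, B = bases[idx]
    d = v - mu
    resid = np.linalg.norm(d - B.T @ (B @ d))
    print(f"block {idx}: residual {resid:.2f}")
```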

    A Font Search Engine for Large Font Databases

    A search engine for font recognition is presented and evaluated. The intended usage is searching very large font databases. The input to the search engine is an image of a text line, and the output is the name of the font used when rendering the text. After pre-processing and segmentation of the input image, a local approach is used, where features are calculated for individual characters. The method is based on eigenimages calculated from edge-filtered character images, which enables compact feature vectors that can be computed rapidly. In this study, the database contains 2763 different fonts for the English alphabet. To resemble a real-life situation, the proposed method is evaluated with printed and scanned text lines and character images. Our evaluation shows that for 99.1% of the queries, the correct font name can be found within the five best matches.
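
    A minimal sketch of the feature pipeline the abstract outlines: edge-filter each character image, project onto eigenimages for a compact feature vector, and return the nearest fonts. The Sobel filter, PCA dimension, and toy one-glyph-per-font database are assumptions; the real system adds pre-processing, segmentation, and the 2763-font database.

```python
# Sketch: eigenimage features from edge-filtered glyphs, queried with
# nearest neighbours (toy data; parameters are illustrative).
import numpy as np
from scipy.ndimage import sobel
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_fonts, h, w = 100, 32, 32
glyphs = rng.random((n_fonts, h, w))          # one toy glyph image per font

def edge_features(img):
    """Edge-filter the character image before the eigen-analysis."""
    gx, gy = sobel(img, axis=0), sobel(img, axis=1)
    return np.hypot(gx, gy).ravel()

X = np.array([edge_features(g) for g in glyphs])
pca = PCA(n_components=20)                    # compact feature vectors
feats = pca.fit_transform(X)

index = NearestNeighbors(n_neighbors=5).fit(feats)

# Query with a noisy "scanned" glyph; the five nearest fonts are returned,
# mirroring the paper's top-five evaluation.
query = glyphs[7] + rng.normal(scale=0.05, size=(h, w))
_, nearest = index.kneighbors(pca.transform([edge_features(query)]))
print("best font matches:", nearest[0])
```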

    Clinical Adoption of CAD: Exploration of the Barriers to Translation through an Example Application

    Computer-aided diagnosis (CAD) software is not yet widely used in the clinic. This paper aims to identify possible reasons why. Firstly, the technical maturity of CAD is explored through analysis of diagnostic accuracy metrics in one example application, the automated classification of Ioflupane I123 (DaTSCAN) images. Software is developed for image classification based on well-established eigenimage techniques. Using a publicly available database of images, an area under the receiver operating characteristic curve (AUROC) of 0.980 is achieved. Given these impressive results, the main blockage to clinical adoption, both in DaTSCAN classification and potentially in other applications, is likely to relate to wider issues. These are explored with reference to the demands of the National Institute for Health and Care Excellence (NICE) evaluation processes. It is postulated that in order to enable wider adoption, a greater focus on proving the safety, efficacy and cost-effectiveness of CAD may be required.
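
    A minimal sketch of the evaluation pattern described: eigenimage (PCA) features, a simple classifier, and AUROC scoring. The synthetic stand-in data, logistic regression, and parameter choices are assumptions; nothing here reproduces the paper's 0.980 result on real DaTSCAN images.

```python
# Sketch: eigenimage features plus a linear classifier, scored with AUROC.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 200, 64 * 64
X = rng.normal(size=(n, d))                  # flattened scan images (toy)
y = rng.integers(0, 2, size=n)               # 1 = abnormal, 0 = normal
X[y == 1, :50] += 0.3                        # inject a weak class signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Project onto the leading eigenimages of the training set, then classify.
pca = PCA(n_components=10).fit(X_tr)
clf = LogisticRegression(max_iter=1000).fit(pca.transform(X_tr), y_tr)

scores = clf.predict_proba(pca.transform(X_te))[:, 1]
print("AUROC:", roc_auc_score(y_te, scores))
```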

    Vectorizing Face Images by Interpreting Shape and Texture Computations

    The correspondence problem in computer vision is essentially a matching task between two or more sets of features. In this paper, we introduce a vectorized image representation, a feature-based representation where correspondence has been established with respect to a reference image. This representation has two components: (1) shape, or (x, y) feature locations, and (2) texture, defined as the image grey levels mapped onto the standard reference image. This paper explores an automatic technique for "vectorizing" face images. Our face vectorizer alternates between computation steps for shape and texture, and a key idea is to structure the two computations so that each one uses the output of the other. A hierarchical coarse-to-fine implementation is discussed, and applications are presented to the problems of facial feature detection and registration of two arbitrary faces.
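
    A sketch of the alternating structure only: shape (a dense (x, y) correspondence field) and texture (the image warped onto the reference) are each refined from the other's latest output. The warp and the gradient-based shape update below are crude placeholders on toy data, not the paper's hierarchical coarse-to-fine implementation.

```python
# Sketch: alternate texture (warp with current shape) and shape (nudge the
# correspondence field against the reference) steps.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

rng = np.random.default_rng(0)
reference = gaussian_filter(rng.random((64, 64)), 3)   # smooth toy image
face = np.roll(reference, (2, 3), axis=(0, 1))         # toy "novel" face

ys, xs = np.mgrid[0:64, 0:64].astype(float)
shape = np.zeros((2, 64, 64))                          # (dy, dx) per pixel

for it in range(5):
    # Texture step: map the face onto the reference frame with current shape.
    texture = map_coordinates(face, [ys + shape[0], xs + shape[1]], order=1)
    # Shape step: per-pixel normal-flow update that reduces the residual
    # against the reference (a stand-in for a proper optical-flow estimator).
    resid = texture - reference
    gy, gx = np.gradient(texture)
    denom = gy**2 + gx**2 + 1e-6
    shape[0] -= resid * gy / denom
    shape[1] -= resid * gx / denom
    print(f"iter {it}: mean residual {np.abs(resid).mean():.4f}")
```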

    Bright lesion detection in retinal images

    Master's thesis (Master of Science).