
    Alternative visual units for an optimized phoneme-based lipreading system

    Lipreading is understanding speech from observed lip movements. An observed series of lip motions is an ordered sequence of visual lip gestures. These gestures are commonly known as 'visemes', although they have not yet been formally defined. In this article, we describe a structured approach for creating speaker-dependent visemes with a fixed number of visemes within each set. We create sets of visemes of sizes 2 to 45. Each set is based upon clustering phonemes, so each set has a unique phoneme-to-viseme mapping. We first present an experiment using these maps and the Resource Management Audio-Visual (RMAV) dataset which shows the effect of changing the viseme map size in speaker-dependent machine lipreading, and demonstrates that word recognition with phoneme classifiers is possible. Furthermore, we show that there are intermediate units between visemes and phonemes which are better still. Second, we present a novel two-pass training scheme for phoneme classifiers. This approach uses the new intermediate visual units from our first experiment as classifiers in the first pass; we then use the phoneme-to-viseme maps to retrain these into phoneme classifiers. This method significantly improves on previous lipreading results for RMAV speakers.
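    The clustering step this abstract describes can be sketched as follows: given a phoneme confusion matrix, hierarchical clustering yields a phoneme-to-viseme map of any chosen size. This is a minimal illustration assuming confusion counts as the similarity measure; the toy phoneme set and random values are placeholders, not the RMAV statistics.

```python
# Minimal sketch of building a phoneme-to-viseme map by clustering
# phonemes, assuming a phoneme confusion matrix as the similarity
# measure; the toy phoneme set and random counts are placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

phonemes = ["p", "b", "m", "f", "v", "t", "d", "s", "z", "k"]  # toy subset

# Assumed input: a symmetric confusion matrix in which visually
# confusable phonemes score high (dummy values stand in for real counts).
rng = np.random.default_rng(0)
confusion = rng.random((len(phonemes), len(phonemes)))
confusion = (confusion + confusion.T) / 2.0

distance = 1.0 - confusion          # more confusable -> closer together
np.fill_diagonal(distance, 0.0)
condensed = squareform(distance, checks=False)

n_visemes = 4  # any size from 2 up to the number of phonemes
labels = fcluster(linkage(condensed, method="average"),
                  t=n_visemes, criterion="maxclust")

phoneme_to_viseme = {p: f"V{c}" for p, c in zip(phonemes, labels)}
print(phoneme_to_viseme)  # each phoneme mapped to one of 4 viseme classes
```

    Sweeping `n_visemes` from 2 up to the full phoneme count reproduces the range of map sizes the first experiment varies.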

    RELLISUR: A Real Low-Light Image Super-Resolution Dataset

    The RELLISUR dataset contains real low-light, low-resolution images paired with normal-light, high-resolution reference counterparts. The dataset aims to fill the gap between low-light image enhancement and low-resolution image enhancement (super-resolution, SR), which are currently addressed only separately in the literature, even though the visibility of real-world images is often limited by both low light and low resolution. The dataset contains 12,750 paired images at different resolutions and degrees of low-light illumination, to facilitate the learning of deep-learning-based models that can perform a direct mapping from degraded images with low visibility to high-quality, detail-rich images of high resolution.
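    As a sketch of how such paired data is typically consumed, the loader below pairs each degraded image with its reference so a model can learn the direct degraded-to-clean mapping; the directory layout and file naming are assumptions for illustration, not the published RELLISUR structure.

```python
# Minimal sketch of a paired low-light/low-res -> normal-light/high-res
# dataset loader; the directory layout and file naming are assumptions,
# not the published RELLISUR structure.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF

class PairedLLLRDataset(Dataset):
    def __init__(self, root: str):
        # assumed layout: root/lllr/*.png (degraded), root/nlhr/*.png (reference)
        self.lllr = sorted(Path(root, "lllr").glob("*.png"))
        self.nlhr = sorted(Path(root, "nlhr").glob("*.png"))
        assert len(self.lllr) == len(self.nlhr), "unpaired images found"

    def __len__(self):
        return len(self.lllr)

    def __getitem__(self, i):
        lr = TF.to_tensor(Image.open(self.lllr[i]).convert("RGB"))
        hr = TF.to_tensor(Image.open(self.nlhr[i]).convert("RGB"))
        return lr, hr  # (degraded input, high-quality target) pair
```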

    Deep Mean-Shift Priors for Image Restoration

    In this paper we introduce a natural image prior that directly represents a Gaussian-smoothed version of the natural image distribution. We include our prior in a formulation of image restoration as a Bayes estimator that also allows us to solve noise-blind image restoration problems. We show that the gradient of our prior corresponds to the mean-shift vector on the natural image distribution. In addition, we learn the mean-shift vector field using denoising autoencoders, and use it in a gradient descent approach to perform Bayes risk minimization. We demonstrate competitive results for noise-blind deblurring, super-resolution, and demosaicing. (NIPS 2017)
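    The restoration loop the abstract describes can be sketched as follows: the residual of a denoising autoencoder (DAE) trained at noise level sigma approximates the mean-shift vector, i.e. the gradient of the smoothed prior, and is combined with the data-term gradient in a descent step. The `dae` and `degrade` callables and all step sizes below are stand-ins, not the authors' implementation.

```python
# Sketch of gradient-descent restoration with a DAE-based prior: the
# DAE residual approximates the mean-shift vector (the gradient of the
# Gaussian-smoothed prior). `degrade`, `dae`, and all step sizes are
# illustrative stand-ins, not the authors' implementation.
import torch

def restore(y, degrade, dae, sigma=0.1, lam=1.0, lr=0.05, steps=200):
    """y: degraded observation; degrade: forward model A(x);
    dae: denoising autoencoder trained at noise level sigma."""
    x = y.clone().requires_grad_(True)
    for _ in range(steps):
        data_term = lam * ((degrade(x) - y) ** 2).sum()   # likelihood term
        data_grad, = torch.autograd.grad(data_term, x)
        with torch.no_grad():
            # (x - dae(x)) / sigma^2 is the negative mean-shift vector,
            # i.e. the gradient of the negative log smoothed prior
            prior_grad = (x - dae(x)) / sigma ** 2
            x -= lr * (data_grad + prior_grad)            # descent step
    return x.detach()
```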

    Multi-Modal Deep Hand Sign Language Recognition in Still Images Using Restricted Boltzmann Machine

    In this paper, a deep learning approach, the Restricted Boltzmann Machine (RBM), is used to perform automatic hand sign language recognition from visual data. We evaluate how the RBM, as a deep generative model, is capable of generating the distribution of the input data for enhanced recognition of unseen data. Two modalities, RGB and depth, are considered as model input in three forms: original image, cropped image, and noisy cropped image. Five crops of the input image are used, and the hands in these crops are detected using a Convolutional Neural Network (CNN). After that, three types of detected hand image are generated for each modality and input to RBMs. The outputs of the RBMs for the two modalities are fused in another RBM in order to recognize the sign label of the input image. The proposed multi-modal model is trained on all and on part of the American alphabet and digits of four publicly available datasets. We also evaluate the robustness of the proposal against noise. Experimental results show that the proposed multi-modal model, using crops and the RBM fusion methodology, achieves state-of-the-art results on the Massey University Gesture Dataset 2012, the American Sign Language (ASL) and Fingerspelling Dataset from the University of Surrey's Center for Vision, Speech and Signal Processing, NYU, and ASL Fingerspelling A datasets.
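    A rough sketch of the fusion idea, with scikit-learn's BernoulliRBM standing in for the paper's RBMs: one RBM per modality, a fusion RBM over the concatenated hidden codes, and a classifier on top. The dummy arrays replace the CNN-detected hand crops, which are omitted here.

```python
# Rough sketch of two-stream RBM fusion: one RBM per modality, then a
# fusion RBM over the concatenated hidden codes, with a classifier on
# top. Dummy data stands in for the CNN-detected hand crops.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_rgb = rng.random((500, 32 * 32))    # flattened RGB hand crops (dummy)
X_depth = rng.random((500, 32 * 32))  # flattened depth hand crops (dummy)
y = rng.integers(0, 24, size=500)     # sign labels (dummy)

rbm_rgb = BernoulliRBM(n_components=128, random_state=0).fit(X_rgb)
rbm_depth = BernoulliRBM(n_components=128, random_state=0).fit(X_depth)

# fuse the two modality-specific hidden representations in a third RBM
H = np.hstack([rbm_rgb.transform(X_rgb), rbm_depth.transform(X_depth)])
rbm_fuse = BernoulliRBM(n_components=128, random_state=0).fit(H)

clf = LogisticRegression(max_iter=1000).fit(rbm_fuse.transform(H), y)
print("train accuracy:", clf.score(rbm_fuse.transform(H), y))
```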

    Random Forest with Adaptive Local Template for Pedestrian Detection

    Pedestrian detection with large intra-class variations is still a challenging task in computer vision. In this paper, we propose a novel pedestrian detection method based on the Random Forest. First, we generate a few local templates with different sizes and locations from positive exemplars. Then we build the Random Forest, whose splitting functions are optimized by maximizing the class purity of matching the local templates to the training samples. To improve classification accuracy, we adopt a boosting-like algorithm that updates the weights of the training samples in a layer-wise fashion. During detection, the trained Random Forest votes on the category of each input sliding window. Our contributions are splitting functions based on local template matching with adaptive size and location, and an iterative weight-updating method. We evaluate the proposed method on two well-known challenging datasets: TUD pedestrians and INRIA pedestrians. The experimental results demonstrate that our method achieves state-of-the-art or competitive performance.
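    A splitting function of the kind described can be sketched as a stored template patch with its size and location, routing a sample by thresholding a match score; normalized cross-correlation is an assumed similarity measure here, not necessarily the authors' exact choice.

```python
# Illustrative sketch of a Random Forest split node based on matching an
# adaptive local template: the node stores a patch plus its location and
# routes a sample window by thresholding the match score.
import numpy as np

class TemplateSplit:
    def __init__(self, template, y0, x0, threshold):
        self.template = template        # patch sampled from a positive exemplar
        self.y0, self.x0 = y0, x0       # patch location within the window
        self.h, self.w = template.shape
        self.threshold = threshold

    def match(self, window):
        patch = window[self.y0:self.y0 + self.h, self.x0:self.x0 + self.w]
        # normalized cross-correlation as an assumed similarity measure
        p = (patch - patch.mean()) / (patch.std() + 1e-8)
        t = (self.template - self.template.mean()) / (self.template.std() + 1e-8)
        return float((p * t).mean())

    def route(self, window):
        # send high-similarity samples left, the rest right
        return "left" if self.match(window) > self.threshold else "right"
```

    Training would pick the template, location, and threshold at each node that maximize class purity, with the boosting-like reweighting applied layer by layer.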

    Between images and built form: Automating the recognition of standardised building components using deep learning

    Building on the richness of recent contributions in the field, this paper presents a state-of-the-art CNN analysis method for automating the recognition of standardised building components in modern heritage buildings. At the turn of the twentieth century, manufactured building components became widely advertised for specification by architects. Consequently, a form of standardisation across various typologies began to take place. During this era of rapid economic and industrialised growth, many forms of public building were erected. This paper seeks to demonstrate a method for informing the recognition of such elements, using deep learning to recognise 'families' of elements across a range of buildings in order to retrieve and recognise their technical specifications from the contemporary trade literature. The method is illustrated through the case of Carnegie Public Libraries in the UK, which provide a unique but ubiquitous platform from which to explore the potential for the automated recognition of manufactured standard architectural components. The aim of enhancing this knowledge base is to use the degree to which these components were originally standardised as a means to inform and so support their ongoing care, as well as that of many other contemporary buildings. Although these libraries are numerous, they are maintained at a local level and, as such, their shared maintenance challenges remain unknown to one another. Additionally, this paper presents a methodology to indirectly retrieve useful indicators and semantics relating to emerging HBIM families by applying deep learning to a varied range of architectural imagery.
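    As a minimal sketch of the recognition component, a pretrained CNN backbone can be fine-tuned to classify component 'families'; the backbone, class count, and freezing schedule below are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal transfer-learning sketch: fine-tune a pretrained backbone to
# classify standardised-component 'families'. The ResNet-18 choice,
# class count, and head-only freezing are assumptions for illustration.
import torch.nn as nn
from torchvision import models

n_families = 12  # hypothetical number of component families
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, n_families)

# train only the new classification head first; deeper layers can be
# unfrozen later for full fine-tuning on the architectural imagery
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
```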