
    Machine Learning Methods for Automatic Silent Speech Recognition Using a Wearable Graphene Strain Gauge Sensor.

    Silent speech recognition is the ability to recognise intended speech without audio information. Useful applications can be found in situations where sound waves are not produced or cannot be heard, for example speakers with physical voice impairments or environments in which audio transmission is not reliable or secure. A device that can detect non-auditory signals and map them to intended phonation could therefore assist in such situations. In this work, we propose a graphene-based strain gauge sensor which can be worn on the throat and detect small muscle movements and vibrations. Machine learning algorithms then decode the non-audio signals and predict the intended speech. The proposed strain gauge sensor is highly wearable, exploiting graphene's unique and beneficial properties, including its strength, flexibility and high conductivity. A highly flexible and wearable sensor able to pick up small throat movements is fabricated by screen printing graphene onto lycra fabric. A framework for interpreting this information is proposed which explores the use of several machine learning techniques to predict intended words from the signals. A dataset of 15 unique words and four movements, each with 20 repetitions, was developed and used to train the machine learning algorithms. The results demonstrate the ability of such sensors to predict spoken words: we achieved a word accuracy of 55% on the word dataset and 85% on the movements dataset. This work is a proof of concept for the viability of combining a highly wearable graphene strain gauge with machine learning methods to automate silent speech recognition. (Grant: EP/S023046/)
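    The abstract does not include the authors' code, so the following is a minimal, illustrative sketch of the decoding stage only: windowed strain-gauge recordings are reduced to simple time-domain features and classified with an SVM via scikit-learn. The feature set, the classifier choice, and the randomly generated stand-in data (15 words with 20 repetitions each, matching the dataset described above) are all assumptions, not the paper's pipeline.

```python
# Illustrative sketch only: sensor interface, features, and classifier
# are assumptions, since the paper's code is not published here.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(signal: np.ndarray) -> np.ndarray:
    """Simple time-domain statistics for one strain-gauge recording."""
    return np.array([
        signal.mean(),
        signal.std(),
        signal.max() - signal.min(),               # peak-to-peak amplitude
        np.abs(np.diff(signal)).sum(),             # total variation ("activity")
        np.argmax(np.abs(signal)) / len(signal),   # relative position of the peak
    ])

# Placeholder data mirroring the paper's setup: 15 words x 20 repetitions,
# each a fixed-length 1-D strain signal (here random noise, for illustration).
rng = np.random.default_rng(0)
n_words, n_reps, n_samples = 15, 20, 500
recordings = rng.standard_normal((n_words * n_reps, n_samples))
labels = np.repeat(np.arange(n_words), n_reps)

X = np.vstack([extract_features(r) for r in recordings])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated word accuracy: {scores.mean():.2%}")
```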

    Roadmap on printable electronic materials for next-generation sensors

    The dissemination of sensors is key to realizing a sustainable, ‘intelligent’ world, where everyday objects and environments are equipped with sensing capabilities to advance the sustainability and quality of our lives (e.g., via smart homes, smart cities, smart healthcare, smart logistics, Industry 4.0, and precision agriculture). The realization of the full potential of these applications critically depends on the availability of easy-to-make, low-cost sensor technologies. Sensors based on printable electronic materials offer the ideal platform: they can be fabricated through simple methods (e.g., printing and coating) and are compatible with high-throughput roll-to-roll processing. Moreover, printable electronic materials often allow the fabrication of sensors on flexible, stretchable, or biodegradable substrates, thereby enabling the deployment of sensors in unconventional settings. Fulfilling the promise of printable electronic materials for sensing will require materials and device innovations to enhance their ability to transduce external stimuli: light, ionizing radiation, pressure, strain, force, temperature, gas, vapours, humidity, and other chemical and biological analytes. This Roadmap brings together the viewpoints of experts in various printable sensing materials, and devices thereof, to provide insights into the status and outlook of the field. Alongside recent materials and device innovations, the Roadmap discusses the key outstanding challenges pertaining to each printable sensing technology. Finally, the Roadmap points to promising directions to overcome these challenges and thus enable ubiquitous sensing for a sustainable, ‘intelligent’ world.

    AMD classification in choroidal OCT using hierarchical texton mining

    In this paper, we propose a multi-step textural feature extraction and classification method which uses the feature-learning ability of Convolutional Neural Networks (CNNs) to extract a set of low-level primitive filter kernels, captures spatial information using clustering and Local Binary Patterns (LBP), and then generalizes the discriminative power by forming a histogram-based descriptor. It integrates the concepts of hierarchical texton mining and data-driven kernel learning into a unified framework. The proposed method is applied to a practical medical diagnosis problem: classifying different stages of Age-Related Macular Degeneration (AMD) using a dataset of long-wavelength Optical Coherence Tomography (OCT) images of the choroid. The results demonstrate the feasibility of classifying different AMD stages using the textural information of the choroidal region.
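    As a rough illustration of the pipeline described above, the sketch below builds a combined texton/LBP histogram descriptor for a single image patch: filter kernels produce response maps, per-pixel response vectors are clustered into textons, and a uniform LBP histogram contributes spatial texture information. The random kernels and patch, the cluster count, and the per-image clustering (in practice the texton dictionary would be mined over the whole training set) are simplifying assumptions, not the paper's exact configuration.

```python
# Sketch of the texton-mining pipeline as described in the abstract; kernel
# source, cluster count, and LBP settings are illustrative assumptions.
import numpy as np
from scipy.ndimage import convolve
from skimage.feature import local_binary_pattern
from sklearn.cluster import KMeans

def texton_lbp_descriptor(image, kernels, n_textons=32, lbp_points=8, lbp_radius=1):
    """Histogram descriptor combining texton assignments with an LBP histogram."""
    # 1. Filter-bank responses: one response map per (CNN-learned) kernel.
    responses = np.stack([convolve(image, k) for k in kernels], axis=-1)
    # 2. Texton mining: cluster per-pixel response vectors into a texton dictionary
    #    (done per image here for brevity; normally learned over the training set).
    pixels = responses.reshape(-1, responses.shape[-1])
    textons = KMeans(n_clusters=n_textons, n_init=5, random_state=0).fit_predict(pixels)
    texton_hist, _ = np.histogram(textons, bins=n_textons, range=(0, n_textons))
    # 3. Spatial texture: uniform LBP histogram over the raw image.
    lbp = local_binary_pattern(image, lbp_points, lbp_radius, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=lbp_points + 2, range=(0, lbp_points + 2))
    # 4. Concatenate and L1-normalise to form the final descriptor.
    desc = np.concatenate([texton_hist, lbp_hist]).astype(float)
    return desc / desc.sum()

# Stand-ins for a choroidal OCT patch and eight CNN-learned 5x5 kernels.
rng = np.random.default_rng(0)
oct_patch = rng.random((64, 64))
learned_kernels = rng.standard_normal((8, 5, 5))
print(texton_lbp_descriptor(oct_patch, learned_kernels).shape)  # (32 + 10,)
```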

    Learning feature extractors for AMD classification in OCT using convolutional neural networks

    In this paper, we propose a two-step textural feature extraction method which uses the feature-learning ability of Convolutional Neural Networks (CNNs) to extract a set of low-level primitive filter kernels, and then generalizes the discriminative power by forming a histogram-based descriptor. The proposed method is applied to a practical medical diagnosis problem: classifying different stages of Age-Related Macular Degeneration (AMD) using a dataset of long-wavelength Optical Coherence Tomography (OCT) images of the choroid. The experimental results show that the proposed method extracts more discriminative features than those learnt by the CNN alone, and suggest the feasibility of classifying different AMD stages using the textural information of the choroidal region.
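    The following is a hedged sketch of the two-step scheme: the first convolutional layer of a (here untrained) PyTorch CNN supplies the learned filter bank, and each filter's response map is pooled into a histogram, with the concatenated histograms forming the descriptor passed to a conventional classifier. The architecture, bin count, and normalisation are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch of the two-step scheme: (1) take the first-layer kernels of a
# CNN as a learned filter bank, (2) summarise filter responses as a histogram
# descriptor for a conventional classifier. The CNN here is untrained and the
# binning scheme is an assumption; the paper's architecture is not reproduced.
import numpy as np
import torch
import torch.nn as nn

# In the real pipeline, `conv` would come from a CNN trained on labelled OCT data.
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=5, bias=False)

def response_histogram(image: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Step 2: pool each filter's response map into a per-filter histogram."""
    with torch.no_grad():
        x = torch.from_numpy(image).float()[None, None]   # shape (1, 1, H, W)
        responses = conv(x)[0].numpy()                    # shape (8, H', W')
    hists = [np.histogram(r, bins=n_bins, range=(r.min(), r.max()))[0]
             for r in responses]
    desc = np.concatenate(hists).astype(float)
    return desc / desc.sum()                              # L1-normalised descriptor

oct_patch = np.random.default_rng(0).random((64, 64))
print(response_histogram(oct_patch).shape)  # (8 filters x 16 bins,) = (128,)
```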