8 research outputs found
Machine Learning Methods for Automatic Silent Speech Recognition Using a Wearable Graphene Strain Gauge Sensor.
Silent speech recognition is the ability to recognise intended speech without audio information. Useful applications can be found in situations where sound waves are not produced or cannot be heard, for example speakers with physical voice impairments, or environments in which audio transmission is not reliable or secure. A device that detects non-auditory signals and maps them to intended phonation could assist in such situations. In this work, we propose a graphene-based strain gauge sensor which can be worn on the throat and detect small muscle movements and vibrations. Machine learning algorithms then decode the non-audio signals and predict the intended speech. The proposed strain gauge sensor is highly wearable, exploiting graphene's unique and beneficial properties, including strength, flexibility, and high conductivity. A highly flexible and wearable sensor able to pick up small throat movements is fabricated by screen printing graphene onto lycra fabric. A framework for interpreting this information is proposed which explores the use of several machine learning techniques to predict intended words from the signals. A dataset of 15 unique words and four movements, each with 20 repetitions, was developed and used for training the machine learning algorithms. The results demonstrate that such sensors can predict spoken words: we achieved a word accuracy rate of 55% on the word dataset and 85% on the movements dataset. This work is a proof of concept for the viability of combining a highly wearable graphene strain gauge with machine learning methods to automate silent speech recognition.
Grant: EP/S023046/
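The pipeline the abstract describes, windowed strain signals in, a predicted word out, can be sketched end to end on synthetic data. Everything below is an illustrative stand-in, not the paper's actual method: the signal model, the hand-crafted features (RMS, zero crossings, dominant FFT bin), and the nearest-centroid classifier are all assumptions chosen for a minimal, runnable example. Only the "20 repetitions per class" detail is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 100          # assumed sampling rate (Hz); not stated in the abstract
N = FS            # one-second window per utterance

def synth_signal(word_id):
    """Hypothetical throat-strain trace: each 'word' gets its own
    dominant vibration frequency and amplitude, plus sensor noise."""
    t = np.arange(N) / FS
    f = 4 + 4 * word_id                      # toy frequency separation between classes
    a = 0.5 + 0.5 * word_id                  # toy amplitude separation
    return a * np.sin(2 * np.pi * f * t) + 0.05 * rng.standard_normal(N)

def features(x):
    """Simple hand-crafted features: RMS energy, zero-crossing count,
    dominant FFT bin (ignoring DC)."""
    rms = np.sqrt(np.mean(x ** 2))
    zc = np.sum(np.abs(np.diff(np.sign(x)))) / 2
    peak = np.argmax(np.abs(np.fft.rfft(x))[1:]) + 1
    return np.array([rms, zc, peak], dtype=float)

WORDS, REPS = 4, 20                          # 20 repetitions, as in the paper's dataset
X = np.array([features(synth_signal(w)) for w in range(WORDS) for _ in range(REPS)])
y = np.repeat(np.arange(WORDS), REPS)

# Train on the first 15 repetitions of each word, test on the last 5.
train = np.tile(np.arange(REPS) < 15, WORDS)

# Standardise features with training statistics, then classify by nearest centroid.
mu, sd = X[train].mean(axis=0), X[train].std(axis=0) + 1e-9
Xn = (X - mu) / sd
centroids = np.array([Xn[train & (y == w)].mean(axis=0) for w in range(WORDS)])
dists = np.linalg.norm(Xn[~train][:, None] - centroids[None], axis=2)
pred = np.argmin(dists, axis=1)
acc = np.mean(pred == y[~train])
print(f"test accuracy: {acc:.2f}")
```

On this toy data the classes are well separated, so accuracy is high; the real signals are far noisier, which is why the paper compares several learning algorithms rather than a fixed rule.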
Roadmap on printable electronic materials for next-generation sensors
The dissemination of sensors is key to realizing a sustainable, ‘intelligent’ world, where everyday objects and environments are equipped with sensing capabilities to advance the sustainability and quality of our lives—e.g., via smart homes, smart cities, smart healthcare, smart logistics, Industry 4.0, and precision agriculture. The realization of the full potential of these applications critically depends on the availability of easy-to-make, low-cost sensor technologies. Sensors based on printable electronic materials offer the ideal platform: they can be fabricated through simple methods (e.g., printing and coating) and are compatible with high-throughput roll-to-roll processing. Moreover, printable electronic materials often allow the fabrication of sensors on flexible/stretchable/biodegradable substrates, thereby enabling the deployment of sensors in unconventional settings. Fulfilling the promise of printable electronic materials for sensing will require materials and device innovations to enhance their ability to transduce external stimuli—light, ionizing radiation, pressure, strain, force, temperature, gas, vapours, humidity, and other chemical and biological analytes. This Roadmap brings together the viewpoints of experts in various printable sensing materials—and devices thereof—to provide insights into the status and outlook of the field. Alongside recent materials and device innovations, the Roadmap discusses the key outstanding challenges pertaining to each printable sensing technology. Finally, the Roadmap points to promising directions to overcome these challenges and thus enable ubiquitous sensing for a sustainable, ‘intelligent’ world.
AMD classification in choroidal OCT using hierarchical texton mining
In this paper, we propose a multi-step textural feature extraction and classification method, which utilizes the feature learning ability of Convolutional Neural Networks (CNNs) to extract a set of low-level primitive filter kernels, extracts spatial information using clustering and Local Binary Patterns (LBP), and then generalizes the discriminative power by forming a histogram-based descriptor. It integrates the concepts of hierarchical texton mining and data-driven kernel learning into a unified framework. The proposed method is applied to a practical medical diagnosis problem: classifying different stages of Age-Related Macular Degeneration (AMD) using a dataset comprising long-wavelength Optical Coherence Tomography (OCT) images of the choroid. The results demonstrate the feasibility of our method for classifying different AMD stages using the textural information of the choroidal region.
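The texton-mining stage of the pipeline above can be sketched as follows: convolve the image with a bank of small kernels, treat each pixel's vector of filter responses as a point, cluster those points into "textons", and histogram the texton labels into a descriptor. In the paper the kernels are learned by a CNN and spatial structure is further encoded with LBP; here random kernels stand in for the learned ones and the LBP step is omitted, so this is only an illustrative skeleton, not the published method.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32))          # stand-in for an OCT choroid patch

# 1) Filter bank. The paper learns these kernels with a CNN;
#    random 5x5 kernels are a placeholder for the learned ones.
K, ksize = 8, 5
kernels = rng.standard_normal((K, ksize, ksize))

def convolve2d(im, k):
    """'valid' 2-D correlation, loop-based for clarity."""
    out = np.empty((im.shape[0] - ksize + 1, im.shape[1] - ksize + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(im[i:i + ksize, j:j + ksize] * k)
    return out

# 2) Per-pixel response vector across the whole bank.
resp = np.stack([convolve2d(img, k) for k in kernels], axis=-1)
pixels = resp.reshape(-1, K)

# 3) Cluster response vectors into T textons (tiny k-means).
T = 6
centers = pixels[rng.choice(len(pixels), T, replace=False)]
for _ in range(10):
    labels = np.argmin(((pixels[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    for t in range(T):
        if np.any(labels == t):
            centers[t] = pixels[labels == t].mean(axis=0)

# 4) Normalised texton-frequency histogram = the final descriptor.
descriptor, _ = np.histogram(labels, bins=np.arange(T + 1))
descriptor = descriptor / descriptor.sum()
print(descriptor)
```

The descriptor is what gets fed to the downstream AMD-stage classifier; its length depends only on the number of textons, not on image size, which is what makes the representation comparable across patches.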
Learning feature extractors for AMD classification in OCT using convolutional neural networks
In this paper, we propose a two-step textural feature extraction method, which utilizes the feature learning ability of Convolutional Neural Networks (CNNs) to extract a set of low-level primitive filter kernels, and then generalizes the discriminative power by forming a histogram-based descriptor. The proposed method is applied to a practical medical diagnosis problem of classifying different stages of Age-Related Macular Degeneration (AMD) using a dataset comprising long-wavelength Optical Coherence Tomography (OCT) images of the choroid. The experimental results show that the proposed method extracts more discriminative features than those learnt through the CNN alone. They also suggest the feasibility of classifying different AMD stages using the textural information of the choroidal region.
Ultrasensitive Textile Strain Sensors Redefine Wearable Silent Speech Interfaces with High Machine Learning Efficiency
This work introduces a silent speech interface (SSI), proposing a few-layer graphene (FLG) strain sensing mechanism based on through cracks, and AI-based self-adaptation capabilities that overcome the limitations of state-of-the-art technologies by simultaneously achieving high accuracy, high computational efficiency, and fast decoding speed, while maintaining excellent user comfort. We demonstrate its application in a biocompatible textile-integrated ultrasensitive strain sensor embedded into a smart choker, which conforms to the user’s throat. Thanks to the structure of ordered through cracks in the graphene-coated textile, the proposed strain gauge achieves a gauge factor of 317 at less than 5% strain, corresponding to a 420% improvement over existing textile strain sensors fabricated by printing and coating technologies. Its high sensitivity allows it to capture subtle throat movements, simplifying signal processing and enabling the use of a computationally efficient neural network. The resulting network, based on a one-dimensional convolutional model, reduces computational load by 90% while maintaining a remarkable 95.25% accuracy in speech decoding. The synergy of sensor design and neural network optimization offers a promising solution for practical, wearable SSI systems, paving the way for seamless, natural silent communication in diverse settings.
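The reported gauge factor can be unpacked with the standard definition GF = (ΔR/R₀)/ε. A quick calculation, using only the figures quoted in the abstract, shows the resistance swing such a sensor produces and how it compares to a conventional metal-foil gauge (GF ≈ 2, a textbook value, not from this work):

```python
# Gauge factor relates relative resistance change to applied strain:
#   GF = (dR / R0) / strain
GF = 317           # gauge factor reported in the abstract
strain = 0.05      # 5% strain, the upper end of the reported range

dR_over_R0 = GF * strain                 # relative resistance change of the FLG sensor
foil = 2 * strain                        # same strain on a typical metal-foil gauge
print(f"FLG sensor:  dR/R0 = {dR_over_R0:.2f}")
print(f"metal foil:  dR/R0 = {foil:.2f}")
```

A ~16x resistance change against ~0.1x for a foil gauge is what lets subtle throat movements produce signals large enough for a lightweight 1-D convolutional network to decode.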
Ultrasensitive textile strain sensors redefine wearable silent speech interfaces with high machine learning efficiency
Acknowledgements: C.T. was supported by Endoenergy Systems (grant no. G119004); M.X. was supported by a CSC-Cambridge International Scholarship; W.Y. was supported by Pragmatic Semiconductor (grant no. G117793); E.O. was supported by the UKRI Centre for Doctoral Training in AI for Healthcare (grant no. EP/S023283/1); D.R. was supported by the EPSRC Centre for Doctoral Training in Sensor Technologies and Applications (grant no. EP/L015889/1); S.L. acknowledges funding from a National Research Foundation of Korea grant funded by the Korean Government (NRF-2018R1A6A1A03025761); S.G. acknowledges funding from the National Natural Science Foundation of China (grant no. 62171014); L.G.O. acknowledges funding from the EPSRC (grants no. EP/K03099X/1, EP/L016087/1, EP/W024284/1, EP/P027628/1), the EU Graphene Flagship Core 3 (grant no. 881603), and Haleon (grant no. G110480). We would like to extend our sincere gratitude to Prof. George Malliaras for his invaluable guidance and mentorship throughout this work as the PhD advisor to C.T. and M.X.
Funder: Pragmatic Semiconductor (grant G117793); Haleon (grant G110480); Endoenergy Systems (grant G119004)
Roadmap on Printable Electronic Materials for Next-Generation Sensors