Machine learning methods for automatic silent speech recognition using a wearable graphene strain gauge sensor
Silent speech recognition is the ability to recognise intended speech without audio information. Useful applications can be found in situations where sound waves are not produced or cannot be heard, for example speakers with physical voice impairments or environments in which audio transfer is not reliable or secure. A device that can detect non-auditory signals and map them to intended phonation could assist in such situations. In this work, we propose a graphene-based strain gauge sensor which can be worn on the throat and detect small muscle movements and vibrations. Machine learning algorithms then decode the non-audio signals and predict the intended speech. The proposed strain gauge sensor is highly wearable, utilising graphene's unique and beneficial properties including strength, flexibility and high conductivity. A highly flexible and wearable sensor able to pick up small throat movements is fabricated by screen printing graphene onto lycra fabric. A framework for interpreting this information is proposed which explores the use of several machine learning techniques to predict intended words from the signals. A dataset of 15 unique words and four movements, each with 20 repetitions, was developed and used to train the machine learning algorithms. The results demonstrate that such sensors can predict spoken words: we achieved a word accuracy rate of 55% on the word dataset and 85% on the movements dataset. This work demonstrates a proof of concept for the viability of combining a highly wearable graphene strain gauge and machine learning methods to automate silent speech recognition. EP/S023046/
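As an illustration of the kind of classification pipeline the abstract describes, the sketch below maps strain-gauge recordings to word labels using simple hand-crafted time-domain features and a support vector machine. The feature set, the classifier, and the `extract_features`/`evaluate` helpers are hypothetical choices for illustration, not the authors' published method.

```python
# Hypothetical sketch of a strain-gauge word classifier; features and model
# are assumptions, not the method from the paper above.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def extract_features(signal: np.ndarray) -> np.ndarray:
    """Summarise one 1-D strain-gauge recording with time-domain statistics."""
    return np.array([
        signal.mean(),
        signal.std(),
        signal.max() - signal.min(),      # peak-to-peak amplitude
        np.abs(np.diff(signal)).mean(),   # mean absolute first difference
    ])

def evaluate(recordings, labels, folds=5) -> float:
    """Cross-validated word accuracy over a set of labelled recordings."""
    X = np.vstack([extract_features(s) for s in recordings])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, labels, cv=folds)
    return scores.mean()
```

In practice the 15-word / 4-movement dataset mentioned above (20 repetitions each) would supply `recordings` and `labels`; richer features or sequence models could replace the summary statistics used here.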
Structural analysis of the O-acetylated O-polysaccharide isolated from Salmonella Paratyphi A and used for vaccine preparation
Salmonella Paratyphi A is increasingly recognized as a common cause of enteric fever cases, and there are no licensed vaccines against this infection. Antibodies directed against the O-polysaccharide of the Salmonella lipopolysaccharide are protective, and conjugation of the O-polysaccharide to a carrier protein represents a promising strategy for vaccine development. O-Acetylation of the S. Paratyphi A O-polysaccharide is considered important for the immunogenicity of S. Paratyphi A conjugate vaccines. Here, as part of a programme to produce a bivalent conjugate vaccine against both S. Typhi and S. Paratyphi A diseases, we have fully elucidated the O-polysaccharide structure of S. Paratyphi A by use of HPLC–SEC, HPAEC–PAD/CD, GLC, GLC–MS, and 1D and 2D NMR spectroscopy. In particular, chemical and NMR studies identified the presence of O-acetyl groups on C-2 and C-3 of rhamnose in the lipopolysaccharide repeating unit, at variance with previous reports of O-acetylation at a single position. Moreover, HR-MAS NMR analysis performed directly on bacterial pellets from several strains of S. Paratyphi A also showed O-acetylation on C-2 and C-3 of rhamnose, indicating that this pattern is common and not an artefact of O-polysaccharide purification. Conjugation of the O-polysaccharide to the carrier protein had little impact on O-acetylation and therefore should not adversely affect the immunogenicity of the vaccine.
Perceive and predict: self-supervised speech representation based loss functions for speech enhancement
Recent work in the domain of speech enhancement has explored the use of self-supervised speech representations to aid in the training of neural speech enhancement models. However, much of this work focuses on using the deepest or final outputs of self-supervised speech representation models, rather than the earlier feature encodings, and the use of self-supervised representations in this way is often not fully motivated. In this work it is shown that the distance between the feature encodings of clean and noisy speech correlates strongly with psychoacoustically motivated measures of speech quality and intelligibility, as well as with human Mean Opinion Score (MOS) ratings. Experiments using this distance as a loss function are performed, and improved performance over an STFT spectrogram distance based loss, as well as other common loss functions from the speech enhancement literature, is demonstrated using objective measures such as perceptual evaluation of speech quality (PESQ) and short-time objective intelligibility (STOI).
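A minimal sketch of the loss idea described above, assuming PyTorch and a frozen self-supervised feature encoder (any module mapping a waveform batch to feature frames, e.g. the convolutional feature extractor of a wav2vec 2.0-style model). The class name, the choice of L1 distance, and the frozen-encoder setup are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch: perceptual loss as distance between self-supervised feature encodings
# of enhanced and clean speech. "encoder" is a placeholder for a pretrained,
# frozen feature extractor supplied by the caller (an assumption here).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureEncodingLoss(nn.Module):
    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder.eval()
        for p in self.encoder.parameters():   # frozen: acts only as a fixed feature map
            p.requires_grad_(False)

    def forward(self, enhanced: torch.Tensor, clean: torch.Tensor) -> torch.Tensor:
        # enhanced, clean: (batch, samples) waveforms at the encoder's sample rate
        feats_enhanced = self.encoder(enhanced)        # gradients flow to the enhancer
        with torch.no_grad():
            feats_clean = self.encoder(clean)          # clean reference, no gradients
        return F.l1_loss(feats_enhanced, feats_clean)
```

Usage would follow the familiar pattern: `loss = FeatureEncodingLoss(ssl_encoder)(enhancer(noisy), clean)` followed by `loss.backward()`, optionally combined with a spectrogram or waveform loss term.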
The eventization of leisure and the strange death of alternative Leeds
The communicative potential of city spaces as leisure spaces is a central assumption of political activism and the creation of alternative, counter-cultural and subcultural scenes. However, such potential for city spaces is limited by the gentrification, privatization and eventization of city centres in the wake of wider societal and cultural struggles over leisure, work and identity formation. In this paper, we present research on alternative scenes in the city of Leeds to argue that the eventization of the city centre has led to a marginalization of alternative scenes on the fringes of the city. Such marginalization has not caused the death of alternative Leeds or of the political activism associated with those scenes, but it has changed the leisure spaces (physical, political and social) in which alternative scenes contest the mainstream.
Arsenic exposure and outcomes of antimonial treatment in visceral leishmaniasis patients in Bihar, India: a retrospective cohort study
Funding: This work was supported by a Clinical PhD Fellowship to MRP (090665) and a Principal Research Fellowship to AHF (079838) from the Wellcome Trust (http://www.wellcome.ac.uk). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.