Confusion modelling for lip-reading
Lip-reading is mostly used as a means of communication by people with hearing difficulties. Recent work has explored the automation of this process, with the aim of building a speech recognition system entirely driven by lip movements. However, this work has so far produced poor results because of factors such as high variability of speaker features, difficulties in mapping from visual features to speech sounds, and high co-articulation of visual features.
The motivation for the work in this thesis is inspired by previous work in dysarthric speech recognition [Morales, 2009]. Dysarthric speakers have poor control over their articulators, often leading to a reduced phonemic repertoire. The premise of this thesis is that recognition of the visual speech signal is a similar problem to recognition of dysarthric speech, in that some information about the speech signal has been lost in both cases, and this brings about a systematic pattern of errors in the decoded output.
This work attempts to exploit the systematic nature of these errors by modelling them in the framework of a weighted finite-state transducer cascade. Results indicate that the technique can achieve slightly lower error rates than the conventional approach. In addition, it explores some more general questions for automated lip-reading.
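The error-modelling idea can be sketched in miniature: compose a confusion model (how likely the recognizer is to output each observed unit given the true unit) with a prior over true units, and decode by maximizing their product. This is a one-state stand-in for the thesis's weighted finite-state transducer cascade, not its actual model; all symbols and probabilities below are illustrative.

```python
# Hypothetical visual confusion model: P(observed unit | true unit).
# Bilabials (p, b, m) are notoriously hard to tell apart on the lips.
confusion = {
    "p": {"p": 0.6, "b": 0.3, "m": 0.1},
    "b": {"p": 0.3, "b": 0.5, "m": 0.2},
    "a": {"a": 0.9, "e": 0.1},
    "t": {"t": 0.7, "d": 0.3},
}

# Unigram prior over true units, standing in for the language-model
# side of the transducer cascade.
prior = {"p": 0.3, "b": 0.2, "a": 0.3, "t": 0.2}

def decode(observed):
    """For each observed unit, pick the true unit that maximizes
    prior(true) * P(observed | true) -- a unit-by-unit cascade."""
    best = []
    for obs in observed:
        candidates = [
            (prior[t] * probs.get(obs, 0.0), t)
            for t, probs in confusion.items()
        ]
        best.append(max(candidates)[1])
    return best
```

A real WFST cascade would score whole sequences (so context can override a locally likely confusion), typically via composition and shortest-path search in a toolkit such as OpenFst.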
Building Security Protocols Against Powerful Adversaries
As our sensitive data is increasingly carried over the Internet and stored remotely, security in communications becomes a fundamental requirement. Yet today's security practices are designed around assumptions whose validity is being challenged. In this thesis we design new security mechanisms for scenarios where traditional security assumptions do not hold. First, we design secret-agreement protocols for wireless networks in which the security of the secrets does not depend on assumptions about the computational limitations of adversaries. Our protocols leverage intrinsic characteristics of the wireless channel to enable nodes to agree on common pairwise secrets that are secure against computationally unconstrained adversaries. Through testbed and simulation experimentation, we show that it is feasible in practice to create thousands of secret bits per second. Second, we propose a traffic anonymization scheme for wireless networks. Our protocol aims to provide anonymity in a fashion similar to Tor, yet remains resilient to computationally unbounded adversaries by exploiting the security properties of our secret-agreement protocols. Our analysis and simulation results indicate that our scheme can offer a level of anonymity comparable to that of Tor. Third, we design a lightweight data encryption protocol for protecting against computationally powerful adversaries in wireless sensor networks. Our protocol aims to strengthen the inherently weak security that network coding naturally offers, at a low extra overhead. Our extensive simulation results demonstrate the additional security benefits of our approach. Finally, we present a steganographic mechanism for secret message exchange over untrustworthy messaging service providers. Our scheme masks secret messages as innocuous texts, aiming to hide the fact that secret message exchange is taking place. Our results indicate that our scheme succeeds in communicating hidden information at non-negligible rates.
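A common way to derive secrets from "intrinsic characteristics of the wireless channel" is to exploit channel reciprocity: both endpoints observe nearly the same fading process and quantize it into bits. The simulation below is a minimal sketch of that idea under assumed Gaussian fading and measurement noise; the function names, noise levels, and guard band are illustrative, not the thesis's actual protocol (which would add information reconciliation and privacy amplification).

```python
import random

def rssi_measurements(n, noise=0.5, seed=0):
    """Simulate reciprocal channel observations: Alice and Bob see the
    same underlying fading samples plus independent measurement noise."""
    rng = random.Random(seed)
    channel = [rng.gauss(0.0, 1.0) for _ in range(n)]
    alice = [c + rng.gauss(0.0, noise) for c in channel]
    bob = [c + rng.gauss(0.0, noise) for c in channel]
    return alice, bob

def quantize(samples, guard=0.5):
    """Threshold quantizer with a guard band: samples too close to the
    median are dropped (None) to reduce bit disagreements."""
    med = sorted(samples)[len(samples) // 2]
    bits = []
    for s in samples:
        if s > med + guard:
            bits.append(1)
        elif s < med - guard:
            bits.append(0)
        else:
            bits.append(None)
    return bits

alice, bob = rssi_measurements(1000)
a_bits, b_bits = quantize(alice), quantize(bob)
# Keep only positions where both sides produced a bit.
pairs = [(a, b) for a, b in zip(a_bits, b_bits)
         if a is not None and b is not None]
agreement = sum(a == b for a, b in pairs) / len(pairs)
```

An eavesdropper at a different location sees an essentially independent channel, which is what makes the agreed bits secret without any computational hardness assumption.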
The Roles of Language Models and Hierarchical Models in Neural Sequence-to-Sequence Prediction
With the advent of deep learning, research in many areas of machine learning is converging towards the same set of methods and models. For example, long short-term memory networks are not only popular for various tasks in natural language processing (NLP) such as speech recognition, machine translation, handwriting recognition, and syntactic parsing, but they are also applicable to seemingly unrelated fields such as robot control, time series prediction, and bioinformatics. Recent advances in contextual word embeddings like BERT boast state-of-the-art results on 11 NLP tasks with the same model. Before deep learning, a speech recognizer and a syntactic parser had little in common, as systems were much more tailored towards the task at hand.
At the core of this development is the tendency to view each task as yet another data mapping problem, neglecting the particular characteristics and (soft) requirements that tasks often have in practice. This often goes along with a sharp break between deep learning methods and previous research in the specific area. This work can be understood as an antithesis to this paradigm. We show how traditional symbolic statistical machine translation models can still improve neural machine translation (NMT) while reducing the risk of common pathologies of NMT such as hallucinations and neologisms. Other external symbolic models such as spell checkers and morphology databases help neural grammatical error correction. We also focus on language models that often do not play a role in vanilla end-to-end approaches and apply them in different ways to word reordering, grammatical error correction, low-resource NMT, and document-level NMT. Finally, we demonstrate the benefit of hierarchical models in sequence-to-sequence prediction. Hand-engineered covering grammars are effective in preventing catastrophic errors in neural text normalization systems. Our operation sequence model for interpretable NMT represents translation as a series of actions that modify the translation state, and can also be seen as a derivation in a formal grammar.

EPSRC grant EP/L027623/1
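One standard way an external language model enters a neural sequence-to-sequence decoder is shallow fusion: candidates are ranked by the translation model's log-probability plus a weighted language-model log-probability. The sketch below illustrates only that scoring rule; the candidate words, probabilities, and weight are toy values, not from the thesis.

```python
import math

# Toy next-token distributions: the translation model slightly prefers
# "cap", but the external LM knows "cat" is far more plausible here.
p_tm = {"cat": 0.30, "cap": 0.45, "car": 0.25}
p_lm = {"cat": 0.60, "cap": 0.05, "car": 0.35}

def shallow_fusion(p_tm, p_lm, lam=0.3):
    """Rank candidates by log p_tm(y) + lam * log p_lm(y), the usual
    shallow-fusion interpolation; lam controls the LM's influence."""
    scored = {
        y: math.log(p_tm[y]) + lam * math.log(p_lm.get(y, 1e-9))
        for y in p_tm
    }
    return max(scored, key=scored.get)
```

With `lam=0` the decoder follows the translation model alone; a modest weight lets the language model overturn a narrow translation-model preference, which is exactly the kind of external influence the thesis studies in more principled forms.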
EPSRC Tier-2 capital grant EP/P020259/
Normalization of linguistic variants using phonological and morphological inference
221 p.
This thesis belongs to the field of language analysis and processing and was developed within the research line on non-standard texts, with the normalization of non-standard Basque texts as its main topic. Compared with standard texts, non-standard texts have distinctive characteristics at the lexical, morphological, and phonological levels, and processing them is a new challenge. Such texts generally cannot be processed in the usual way, because most language processing tools (NLP, Natural Language Processing tools) have been developed to process texts written in standard language varieties, and their performance drops sharply when they are applied to non-standard texts. Interest in processing such texts, however, has grown considerably in recent years: digital libraries, digital humanities, computational sociolinguistics, opinion analysis, and so on. Once non-standard texts are normalized, NLP tools can be applied to them, and for that it is essential to carry out the normalization as effectively as possible. This thesis proposes machine-learning-based methods to solve the normalization task on non-standard Basque texts. In addition, the results obtained by these methods are compared with those reported in other studies, in order to assess their suitability. For this comparison, Spanish and Slovene corpora were used, in collaboration with other researchers.
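A bare-bones way to frame lexical normalization is as a noisy channel: score standard-form candidates for a non-standard token by a frequency prior minus a penalty proportional to edit distance. The sketch below shows only this framing; the tiny lexicon, the example token, and the weighting are hypothetical and much simpler than the machine-learning methods the thesis actually proposes.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via single-row dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,       # deletion
                                     dp[j - 1] + 1,   # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

# Tiny standard-form lexicon with made-up relative frequencies.
lexicon = {"etxea": 0.6, "etzea": 0.1, "eta": 0.3}

def normalize(token, lam=1.0):
    """Pick the candidate maximizing prior(w) - lam * dist(token, w)."""
    return max(lexicon,
               key=lambda w: lexicon[w] - lam * edit_distance(token, w))
```

Phonological and morphological inference, as in the thesis title, would replace the uniform edit costs with variant-specific rewrite rules, so that systematic dialectal alternations are cheap and arbitrary edits remain expensive.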
Natural Language Processing: Emerging Neural Approaches and Applications
This Special Issue highlights the most recent research being carried out in the NLP field and discusses related open issues, with a particular focus both on emerging approaches for language learning, understanding, production, and grounding, interactively or autonomously from data, in cognitive and neural systems, and on their potential or real applications in different domains.