15 research outputs found

    Modelling hierarchical musical structures with composite probabilistic networks

    The thesis is organised as follows:
    • Chapter 2 provides background information on existing research in the field of computational music harmonisation and generation, as well as some theoretical background on musical structures. The chapter concludes with an outline of the scope and aims of this research.
    • Chapter 3 provides a short overview of the field of Machine Learning, explaining concepts such as entropy measures and smoothing. The definitions of Markov chains and Hidden Markov models are introduced together with their methods of inference.
    • Chapter 4 begins with the definition of Hierarchical Hidden Markov models and techniques for linear-time inference. It continues by introducing the new concept of Input-Output HHMMs, an extension of the hierarchical models derived from Input-Output HMMs.
    • Chapter 5 is a short chapter that shows the importance of the music representation and model structures for this research, and gives details of the representation.
    • Chapter 6 outlines the design of the software used for the HHMM modelling, and gives details of the software implementation and use.
    • Chapter 7 describes how dynamic networks of models were used to generate new pieces of music using a "random walk" approach. Several different types of networks are presented, exploring the different possibilities of layering the musical structures and organising the networks.
    • Chapter 8 evaluates musical examples that were generated with several different types of networks. The evaluation is both subjective and objective, using the results of a listening experiment as well as cross-entropy measures and music-theoretic rules.
    • Chapter 9 offers a discussion of the methodology of the approach, the configuration and design of networks and models, as well as the learning and generation of the new musical structures.
    • Chapter 10 concludes the thesis by summarising the research's contributions, evaluating whether the project scope has been fulfilled and the major goals of the research have been met.
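The "random walk" generation described for Chapter 7 can be illustrated with a toy first-order Markov chain; the pitch-class states and transition probabilities below are invented for this sketch, whereas the thesis itself works with hierarchical HMMs and networks of models.

```python
import random

# Toy first-order Markov chain over pitch classes; the states and
# transition probabilities are invented for this illustration.
TRANSITIONS = {
    "C": {"E": 0.5, "G": 0.3, "C": 0.2},
    "E": {"G": 0.6, "C": 0.4},
    "G": {"C": 0.7, "E": 0.3},
}

def random_walk(start, length, seed=0):
    """Generate a note sequence by repeatedly sampling the next state."""
    rng = random.Random(seed)
    seq = [start]
    while len(seq) < length:
        options = TRANSITIONS[seq[-1]]
        nxt = rng.choices(list(options), weights=list(options.values()))[0]
        seq.append(nxt)
    return seq

melody = random_walk("C", 8)
print(melody)
```

In the hierarchical setting, each sampled state would itself expand into a sub-model rather than a single note.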

    On the Challenges of Fully Incremental Neural Dependency Parsing

    Since the popularization of BiLSTMs and Transformer-based bidirectional encoders, state-of-the-art syntactic parsers have lacked incrementality, requiring access to the whole sentence and deviating from human language processing. This paper explores whether fully incremental dependency parsing with modern architectures can be competitive. We build parsers combining strictly left-to-right neural encoders with fully incremental sequence-labeling and transition-based decoders. The results show that fully incremental parsing with modern architectures considerably lags behind bidirectional parsing, highlighting the challenges of psycholinguistically plausible parsing.
    Comment: Accepted at IJCNLP-AACL 202
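The strict-incrementality constraint can be sketched in a few lines: each word is processed exactly once, left to right, and every attachment decision uses only the prefix seen so far. The attach-to-previous "oracle" below is a purely illustrative stand-in for the learned transition-based decoders the paper builds.

```python
# Minimal sketch of strictly incremental transition-based parsing:
# attachment decisions for word i may consult only words 0..i.
# The default decision rule (always attach to the previous word)
# is a stub standing in for a learned, prefix-conditioned decoder.
def parse_incremental(words, decide=lambda stack, word: "attach_prev"):
    heads = {}
    stack = []
    for i, w in enumerate(words):
        action = decide(stack, w)
        if action == "attach_prev" and stack:
            heads[i] = stack[-1]   # head of word i is the previous word
        else:
            heads[i] = -1          # provisional root
        stack.append(i)
    return heads

print(parse_incremental(["the", "cat", "sleeps"]))
```

A bidirectional parser, by contrast, may revise or delay decisions until the whole sentence is encoded, which is precisely what the paper's setting forbids.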

    Broad-coverage model of prediction in human sentence processing

    The aim of this thesis is to design and implement a cognitively plausible theory of sentence processing which incorporates a mechanism for modeling a prediction and verification process in human language understanding, and to evaluate the validity of this model on specific psycholinguistic phenomena as well as on broad-coverage, naturally occurring text. Modeling prediction is a timely and relevant contribution to the field because recent experimental evidence suggests that humans predict upcoming structure or lexemes during sentence processing. However, none of the current sentence processing theories capture prediction explicitly. This thesis proposes a novel model of incremental sentence processing that offers an explicit prediction and verification mechanism. In evaluating the proposed model, this thesis also makes a methodological contribution. The design and evaluation of current sentence processing theories are usually based exclusively on experimental results from individual psycholinguistic experiments on specific linguistic structures. However, a theory of language processing in humans should not only work in an experimentally designed environment, but should also have explanatory power for naturally occurring language. This thesis first shows that the Dundee corpus, an eye-tracking corpus of newspaper text, constitutes a valuable additional resource for testing sentence processing theories. I demonstrate that a benchmark processing effect (the subject/object relative clause asymmetry) can be detected in this data set (Chapter 4). I then evaluate two existing theories of sentence processing, Surprisal and Dependency Locality Theory (DLT), on the full Dundee corpus. This constitutes the first broad-coverage comparison of sentence processing theories on naturalistic text. I find that both theories can explain some of the variance in the eye-movement data, and that they capture different aspects of sentence processing (Chapter 5). 
In Chapter 6, I propose a new theory of sentence processing, which explicitly models prediction and verification processes, and aims to unify the complementary aspects of Surprisal and DLT. The proposed theory implements key cognitive concepts such as incrementality, full connectedness, and memory decay. The underlying grammar formalism is a strictly incremental version of Tree-adjoining Grammar (TAG), Psycholinguistically motivated TAG (PLTAG), which is introduced in Chapter 7. I then describe how the Penn Treebank can be converted into PLTAG format and define an incremental, fully connected broad-coverage parsing algorithm with an associated probability model for PLTAG. Evaluation of the PLTAG model shows that it achieves the broad coverage required for testing a psycholinguistic theory on naturalistic data. On the standardized Penn Treebank test set, it approaches the performance of incremental TAG parsers without prediction (Chapter 8). Chapter 9 evaluates the psycholinguistic aspects of the proposed theory by testing it both on a selection of established sentence processing phenomena and on the Dundee eye-tracking corpus. The proposed theory can account for a larger range of psycholinguistic case studies than previous theories, and is a significant positive predictor of reading times on broad-coverage text. I show that it can explain a larger proportion of the variance in reading times than either DLT integration cost or Surprisal.
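Surprisal, one of the two baseline theories evaluated here, is easy to state: the processing cost of a word is the negative log probability of that word given its context. A minimal sketch with an invented bigram table (the probabilities and floor value are illustrative only):

```python
import math

# Surprisal(w_i) = -log2 P(w_i | w_{i-1}); the bigram probabilities
# below are invented for illustration.
BIGRAM_P = {
    ("<s>", "the"): 0.4,
    ("the", "cat"): 0.2,
    ("cat", "sat"): 0.5,
}

def surprisal(sentence, floor=1e-6):
    """Per-word surprisal in bits under the toy bigram model."""
    scores, prev = [], "<s>"
    for w in sentence:
        p = BIGRAM_P.get((prev, w), floor)  # floor for unseen bigrams
        scores.append(-math.log2(p))
        prev = w
    return scores

vals = surprisal(["the", "cat", "sat"])
print(vals)
```

Less predictable words receive higher surprisal, which is then correlated against reading times; the thesis's contribution is a theory that adds explicit prediction and verification on top of such probabilistic measures.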

    Modelling Incremental Self-Repair Processing in Dialogue.

    Self-repairs, where speakers repeat themselves, reformulate or restart what they are saying, are pervasive in human dialogue. These phenomena provide a window into real-time human language processing. For explanatory adequacy, a model of dialogue must include mechanisms that account for them. Artificial dialogue agents also need this capability for more natural interaction with human users. This thesis investigates the structure of self-repair and its function in the incremental construction of meaning in interaction. A corpus study shows how the range of self-repairs seen in dialogue cannot be accounted for by looking at surface form alone. More particularly it analyses a string-alignment approach and shows how it is insufficient, provides requirements for a suitable model of incremental context and an ontology of self-repair function. An information-theoretic model is developed which addresses these issues along with a system that automatically detects self-repairs and edit terms on transcripts incrementally with minimal latency, achieving state-of-the-art results. Additionally it is shown to have practical use in the psychiatric domain. The thesis goes on to present a dialogue model to interpret and generate repaired utterances incrementally. When processing repaired rather than fluent utterances, it achieves the same degree of incremental interpretation and incremental representation. Practical implementation methods are presented for an existing dialogue system. Finally, a more pragmatically oriented approach is presented to model self-repairs in a psycholinguistically plausible way. This is achieved through extending the dialogue model to include a probabilistic semantic framework to perform incremental inference in a reference resolution domain.
The thesis concludes that at least as fine-grained a model of context as word-by-word is required for realistic models of self-repair, and that context must include linguistic action sequences and information update effects. The way dialogue participants process self-repairs to make inferences in real time, rather than filter out their disfluency effects, has been modelled formally and in practical systems.
    Funding: Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Account (DTA) scholarship from the School of Electronic Engineering and Computer Science at Queen Mary University of London.
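A purely surface-form baseline, of the kind the corpus study argues is insufficient on its own, can be sketched as a "rough copy" detector that flags a word when it repeats recently uttered material; the window size and example utterance are invented for illustration.

```python
# Naive surface-form self-repair detector: flag a word as a possible
# repair onset when it duplicates a word in the recent left context.
# This deliberately ignores syntax, semantics and dialogue context,
# which is why surface form alone cannot cover the full repair range.
def detect_rough_copies(words, window=3):
    flags = []
    for i, w in enumerate(words):
        flags.append(w in words[max(0, i - window):i])
    return flags

utterance = "I go I mean I went home".split()
print(detect_rough_copies(utterance))
```

Reformulations with no lexical overlap ("the red, uh, crimson one") defeat such a detector entirely, motivating the information-theoretic model developed in the thesis.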

    Inferring Complex Activities for Context-aware Systems within Smart Environments

    The rising ageing population worldwide and the prevalence of age-related conditions such as physical fragility, mental impairments and chronic diseases have significantly impacted quality of life and caused a shortage of health and care services. Over-stretched healthcare provision is driving a paradigm shift in public healthcare. Thus, Ambient Assisted Living (AAL) using Smart Home (SH) technologies has been rigorously investigated to help address the aforementioned problems. Human Activity Recognition (HAR) is a critical component of AAL systems which enables applications such as just-in-time assistance, behaviour analysis, anomaly detection and emergency notifications. This thesis investigates the challenges of accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR. The first study contributes a semantic-enabled data segmentation approach with user preferences. The second study takes the segmented sensor data and recognises human ADLs at two granularities: the coarse- and fine-grained action levels. At the coarse-grained level, semantic relationships between sensors, objects and ADLs are deduced, whereas at the fine-grained level, object usage above a satisfactory threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions. Moreover, to handle the imprecise/vague interpretations of multimodal sensors and the challenges of data fusion, fuzzy set theory and the fuzzy Web Ontology Language (fuzzy-OWL) are employed. The third study focuses on incorporating uncertainties in HAR caused by factors such as technological failure, object malfunction and human error.
Hence, uncertainty theories and approaches from existing studies are analysed and, based on the findings, a probabilistic-ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to AAL systems, and proposes a microservices architecture with off-the-shelf and bespoke sensor-based sensing methods. The initial semantic-enabled data segmentation study achieved 100% and 97.8% accuracy in segmenting sensor events under single and mixed activity scenarios, respectively. However, the average classification time per sensor event was high, at 3971 ms and 62183 ms for the single and mixed activity scenarios, respectively. The second study, detecting fine-grained user actions, was evaluated with 30 and 153 fuzzy rules to detect two fine-grained movements on a dataset pre-collected from the real-time smart environment. It achieved good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion-rule creation. The third study was evaluated by combining the PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity. The fourth study illustrated, through a case study, the extension of single-user activity recognition to multi-user recognition by combining discriminative sensors (RFID tags and fingerprint sensors) to identify and associate user actions with the aid of time-series analysis. The last study addresses the computational and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for AAL systems.
Future research towards adopting fog/edge computing paradigms alongside cloud computing is discussed, for higher availability, reduced network traffic, energy and cost, and a decentralised system. As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the level of fine-grained user actions. This framework integrates three complementary ontologies to conceptualise factual, fuzzy and uncertain aspects of the environment and ADLs, together with time-series analysis and a discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and supporting utility tools such as a simulator and a synthetic ADL data generator were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently supported by an Android mobile application and web-browser-based client interfaces for retrieving information such as live sensor events and HAR results.
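The fuzzy-set treatment of imprecise sensor readings can be illustrated with a standard trapezoidal membership function; the breakpoints and the "object in use" framing below are invented for this sketch, not taken from the thesis.

```python
# Trapezoidal fuzzy membership: degree rises from a to b, is fully
# satisfied between b and c, and falls from c to d. Such functions
# map a raw, imprecise sensor value to a graded truth degree that
# can then be combined by fuzzy rules during evidence fusion.
def trapezoid(x, a, b, c, d):
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical example: a normalised grip-pressure reading mapped to
# the degree to which a cup counts as "in use".
print(trapezoid(0.35, 0.1, 0.3, 0.7, 0.9))
```

Thresholding the raw reading would give a brittle yes/no answer; the graded degree instead lets borderline evidence contribute partially when fused with other sensors.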

    Master of Science in Nuclear Engineering

    The space radiation environment is a significant challenge to future manned and unmanned space travel. Future missions will rely increasingly on accurate simulations of radiation transport through spacecraft to predict astronaut dose and energy deposition within spacecraft electronics. The International Space Station provides long-term measurements of the radiation environment in Low Earth Orbit (LEO); however, only the Apollo missions provided dosimetry data beyond LEO. Thus dosimetry analysis for deep space missions is poorly supported by currently available data, and there is a need to develop dosimetry-predicting models for extended deep space missions. GEANT4, a Monte Carlo toolkit written in C++, enables simulation of radiation transport in arbitrary media, including spacecraft. The newest version of GEANT4 supports multithreading and MPI, enabling faster distributed processing of simulations in high-performance computing clusters. This thesis introduces a new application based on GEANT4 that greatly reduces computational time, using the Kingspeak and Ember computing clusters at the Center for High Performance Computing (CHPC) to simulate radiation transport through full spacecraft geometry and reducing simulation time to hours instead of weeks without post-simulation processing. Additionally, this thesis introduces a new set of detectors besides the historically used International Commission on Radiation Units (ICRU) spheres for calculating dose distribution, including a Thermoluminescent Detector (TLD), a Tissue Equivalent Proportional Counter (TEPC) and a human phantom, combined with a series of new primitive scorers in GEANT4 to calculate dose equivalence based on International Commission on Radiological Protection (ICRP) standards.
The models developed in this thesis predict dose depositions on the International Space Station and during the Apollo missions, showing good agreement with experimental measurements. From these models, the greatest contributor to radiation dose for the Apollo missions was Galactic Cosmic Rays, owing to the short time spent within the radiation belts. The Apollo 14 dose measurements were an order of magnitude higher than those of the other Apollo missions. The GEANT4 model of the Apollo Command Module shows consistent doses from Galactic Cosmic Rays and the radiation belts for all missions, with small variation in dose distribution across the capsule. The model also predicts well the dose depositions and equivalent dose values in various human organs for both the International Space Station and the Apollo Command Module.
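The Monte Carlo idea behind such transport codes can be shown in miniature: sample random interaction depths for many particles through a slab and tally the fraction that deposit energy. Everything here (one-dimensional geometry, exponential depth model, all numbers) is invented for illustration; real work of this kind uses the full GEANT4 toolkit.

```python
import random

# Toy 1-D Monte Carlo transport sketch: each particle's interaction
# depth is drawn from an exponential distribution with the material's
# mean free path; particles interacting inside the slab deposit a
# unit of energy. Invented numbers, illustrative only.
def simulate(n_particles, thickness_cm, mean_free_path_cm, seed=42):
    rng = random.Random(seed)
    deposited = 0
    for _ in range(n_particles):
        depth = rng.expovariate(1.0 / mean_free_path_cm)
        if depth < thickness_cm:       # interaction inside the slab
            deposited += 1
    return deposited / n_particles     # mean deposition per particle

frac = simulate(10_000, thickness_cm=5.0, mean_free_path_cm=10.0)
print(frac)
```

Analytically the interacting fraction is 1 - exp(-thickness/mean_free_path) ≈ 0.39 here; the sampled estimate converges to it as the particle count grows, which is the same statistical principle scaled up by the cluster runs described above.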

    Design and management of pervasive eCare services


    Concept of a Robust & Training-free Probabilistic System for Real-time Intention Analysis in Teams

    This work addresses the analysis of team intentions in Smart Environments (SE). Its central claim is that the development and integration of explicit models of user tasks can make an important contribution to the development of mobile and ubiquitous software systems. The work collects descriptions of human behaviour in both group and problem-solving situations. It examines how SE projects model a user's activities, and contributes a team-intention model for deriving and selecting planned team activities from observations of multiple users through noisy and heterogeneous sensors. To this end, an approach based on hierarchical dynamic Bayesian networks is adopted.
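The core of such probabilistic intention analysis is Bayesian filtering: maintain a belief over candidate team intentions and update it as noisy sensor observations arrive. The intentions, likelihoods and prior below are invented for this sketch; the actual system uses hierarchical dynamic Bayesian networks over many sensors.

```python
# Minimal single-step Bayesian belief update over team intentions.
# P(intention | obs) ∝ P(obs | intention) * P(intention).
PRIOR = {"presentation": 0.5, "discussion": 0.5}

# Invented observation likelihoods P(obs | intention).
LIKELIHOOD = {
    ("projector_on", "presentation"): 0.9,
    ("projector_on", "discussion"): 0.2,
}

def update(belief, obs):
    """Multiply in the observation likelihood and renormalise."""
    post = {i: belief[i] * LIKELIHOOD.get((obs, i), 1e-3)
            for i in belief}
    z = sum(post.values())
    return {i: p / z for i, p in post.items()}

belief = update(PRIOR, "projector_on")
print(belief)
```

Because the update is robust to likelihoods that are merely roughly right, no per-team training is needed, which is the "training-free" aspect the title refers to; chaining such updates over time, with intentions structured hierarchically, yields the dynamic Bayesian network approach.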