249 research outputs found

    Automatic Generation of Personalized Recommendations in eCoaching

    This thesis addresses eCoaching for personalized, real-time lifestyle support using information and communication technology. The challenge is to design, develop and technically evaluate a prototype of an intelligent eCoach that automatically generates personalized, evidence-based recommendations for a better lifestyle. The developed solution focuses on improving physical activity. The prototype uses wearable medical activity sensors. The collected data are represented semantically, and artificial intelligence algorithms automatically generate meaningful, personalized and context-based recommendations for reducing sedentary time. The thesis applies the well-established design science research methodology to develop theoretical foundations and practical implementations. Overall, this research focuses on technological verification rather than clinical evaluation.
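
    A minimal sketch of the kind of rule-based recommendation step described above (the data model, function names and the one-hour threshold are hypothetical, not taken from the thesis):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ActivitySample:
    """One interval of data from a wearable activity sensor (hypothetical model)."""
    timestamp: datetime
    sedentary_minutes: int  # minutes without movement in this interval

def recommend(samples: list[ActivitySample], max_sedentary: int = 60) -> str | None:
    """Return a context-based recommendation if recent sedentary time is too high."""
    recent = samples[-4:]  # e.g. the last hour of 15-minute samples
    total_sedentary = sum(s.sedentary_minutes for s in recent)
    if total_sedentary >= max_sedentary:
        return "You have been inactive for about an hour - consider a short walk."
    return None
```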

    Metal Cations in Protein Force Fields: From Data Set Creation and Benchmarks to Polarizable Force Field Implementation and Adjustment

    Metal cations are essential to life. About one-third of all proteins require metal cofactors to fold correctly or to function. Computer simulations using empirical parameters and classical molecular mechanics models (force fields) are the standard tool to investigate proteins’ structural dynamics and functions in silico. Despite many successes, the accuracy of force fields is limited when cations are involved. The focus of this thesis is the development of tools and strategies to create system-specific force field parameters that accurately describe cation-protein interactions. The accuracy of a force field mainly relies on (i) the parameters, derived from increasingly large quantum chemistry or experimental data sets, and (ii) the physics behind the energy formula.

    The first part of this thesis presents a large and comprehensive quantum chemistry data set on a consistent computational footing that can be used for force field parameterization and benchmarking. The data set covers dipeptides of the 20 proteinogenic amino acids with different possible side chain protonation states, 3 divalent cations (Ca2+, Mg2+, and Ba2+), and a wide relative energy range. Crucial properties related to force field development, such as partial charges and interaction energies, are also provided. To make the data available, the data set was uploaded to the NOMAD repository and its data structure was formalized in an ontology.

    Besides a proper data basis for parameterization, the physics covered by the terms of the additive force field formulation impacts its applicability. The second part of this thesis benchmarks three popular non-polarizable force fields and the polarizable Drude model against a quantum chemistry data set. After some adjustments, the Drude model was found to reproduce the reference interaction energies substantially better than the non-polarizable force fields, which shows the importance of explicitly addressing polarization effects. Tuning the Drude model involved Boltzmann-weighted fitting to optimize Thole factors and Lennard-Jones parameters. The obtained parameters were validated by (i) their ability to reproduce reference interaction energies and (ii) molecular dynamics simulations of the N-lobe of calmodulin. This work facilitates the improvement of polarizable force fields for cation-protein interactions through quantum chemistry-driven parameterization combined with molecular dynamics simulations in the condensed phase.

    While the Drude model shows its potential in simulating cation-protein interactions, it lacks a description of charge transfer effects, which are significant between cation and protein. The CTPOL model extends the classical force field formulation by charge transfer (CT) and polarization (POL). Since the CTPOL model is not readily available in any of the popular molecular dynamics packages, it was implemented in OpenMM. Furthermore, an open-source parameterization tool, called FFAFFURR, was implemented that enables the (system-specific) parameterization of OPLS-AA and CTPOL models. Following the method established in the previous part, the performance of FFAFFURR was evaluated by its ability to reproduce quantum chemistry energies and by molecular dynamics simulations of the zinc finger protein.
    In conclusion, this thesis steps towards the development of next-generation force fields that accurately describe cation-protein interactions by providing (i) reference data, (ii) a force field model that includes charge transfer and polarization, and (iii) a freely available parameterization tool.
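
    The Boltzmann-weighted fitting mentioned above can be read as a weighted least-squares problem in which low-energy configurations dominate the fit. A minimal sketch under that reading (the reference energies, the fixed electrostatic term and the single Lennard-Jones pair are hypothetical; this is not the thesis's FFAFFURR code):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical reference data: QM interaction energies (kcal/mol) for 4 configurations.
e_qm = np.array([-240.0, -232.5, -228.1, -210.4])
r = np.array([2.0, 2.2, 2.5, 3.0])            # cation-ligand distances (Angstrom)
e_elec = np.array([-250.0, -238.0, -230.0, -212.0])  # fixed electrostatic part

def e_ff(params):
    """Force-field interaction energy as a function of the fitted LJ pair."""
    eps, sigma = params
    lj = 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return e_elec + lj

# Boltzmann weights: configurations near the QM minimum count most in the fit.
kT = 0.593  # kcal/mol at ~298 K
w = np.exp(-(e_qm - e_qm.min()) / kT)
w /= w.sum()

def objective(params):
    return np.sum(w * (e_ff(params) - e_qm) ** 2)

result = minimize(objective, x0=[0.1, 3.0], method="Nelder-Mead")
print("fitted epsilon, sigma:", result.x)
```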

    Semantic Data Management in Data Lakes

    In recent years, data lakes emerged as a way to manage large amounts of heterogeneous data for modern data analytics. One way to prevent data lakes from turning into inoperable data swamps is semantic data management. Some approaches propose linking metadata to knowledge graphs based on the Linked Data principles to provide more meaning and semantics to the data in the lake. Such a semantic layer may be utilized not only for data management but also to tackle the problem of data integration from heterogeneous sources, making data access more expressive and interoperable. In this survey, we review recent approaches with a specific focus on their application within data lake systems and their scalability to Big Data. We classify the approaches into (i) basic semantic data management, (ii) semantic modeling approaches for enriching metadata in data lakes, and (iii) methods for ontology-based data access. In each category, we cover the main techniques and their background, and compare the latest research. Finally, we point out challenges for future work in this research area, which requires a closer integration of Big Data and Semantic Web technologies.
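
    A minimal sketch of such a semantic metadata layer using rdflib, where a data lake asset is linked to a knowledge graph following the Linked Data principles (the namespace, file name and vocabulary choices are illustrative assumptions):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCAT, DCTERMS, RDF

# Hypothetical namespace for assets stored in the data lake.
LAKE = Namespace("http://example.org/lake/")

g = Graph()
dataset = LAKE["sales_2023.parquet"]

# Attach metadata as typed links into shared vocabularies and knowledge
# graphs, rather than as opaque key-value pairs.
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Sales transactions 2023")))
g.add((dataset, DCTERMS.subject, URIRef := LAKE["topic/retail"]))

# The semantic layer makes data access more expressive: a SPARQL query
# finds all assets about a concept, regardless of file format or location.
results = g.query("""
    SELECT ?d WHERE {
        ?d <http://purl.org/dc/terms/subject> <http://example.org/lake/topic/retail> .
    }
""")
for row in results:
    print(row.d)
```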

    KLM-Style Defeasible Reasoning for Datalog

    In many problem domains, particularly those related to mathematics and philosophy, classical logic has enjoyed great success as a model of valid reasoning and discourse. For real-world reasoning tasks, however, an agent typically has only partial knowledge of its domain, and at most a statistical understanding of relationships between properties. In this context, classical inference is considered overly restrictive, and many systems for non-monotonic reasoning have been proposed in the literature to deal with these tasks. A notable example is the KLM framework, which describes an agent's defeasible knowledge qualitatively in terms of conditionals of the form “if A, then typically B”. The goal of this research project is to investigate KLM-style semantics for defeasible reasoning over Datalog knowledge bases. Datalog is a declarative logic programming language designed for querying large deductive databases. Syntactically, it can be viewed as a computationally feasible fragment of first-order logic, so this work continues a recent line of research in which the KLM framework is lifted to more expressive languages.
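
    A minimal sketch of classical bottom-up Datalog evaluation may help fix ideas; the closing comment indicates why defeasible conditionals do not fit this monotonic scheme (the facts, rule and encoding are hypothetical):

```python
# Hypothetical Datalog program: facts and a classical, monotonic rule.
# A rule is (head, body) with uppercase strings as variables.
facts = {("bird", "tweety"), ("penguin", "opus")}
rules = [
    (("bird", "X"), ("penguin", "X")),  # every penguin is a bird
]

def naive_eval(facts, rules):
    """Naive bottom-up Datalog materialization: apply rules until fixpoint.
    Restricted to unary predicates and single-atom bodies to stay short."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (head_pred, _), (body_pred, _) in rules:
            for pred, const in list(derived):
                if pred == body_pred and (head_pred, const) not in derived:
                    derived.add((head_pred, const))
                    changed = True
    return derived

print(naive_eval(facts, rules))  # adds ("bird", "opus")
# A KLM-style conditional such as "birds typically fly" is deliberately NOT
# a Datalog rule: adding flies(X) <- bird(X) would wrongly derive that the
# penguin opus flies. Defeasible semantics (e.g. rational closure) instead
# ranks conditionals so that the more specific "penguins typically do not
# fly" overrides the generic one.
```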

    A Creative Data Ontology for the Moving Image Industry

    The moving image industry produces an extremely large amount of data and associated metadata for each media creation project, often in the range of terabytes. The current methods used to organise, track and retrieve the metadata are inadequate, with metadata often being hard to find. The aim of this thesis is to explore whether there is a practical use case for using ontologies to manage metadata in the moving image industry, and to determine whether an ontology can be designed for such a purpose and used to manage metadata more efficiently and improve workflows. It presents a domain ontology, hereafter referred to as the Creative Data Ontology, engineered around a set of metadata fields provided by Evolutions, Double Negative (DNEG) and Pinewood Studios, and four use cases. The Creative Data Ontology is then evaluated using both quantitative methods and qualitative methods (via interviews) with domain and ontology experts. Our findings suggest that there is a practical use case for an ontology-based metadata management solution in the moving image industry. However, it would need to be presented carefully to non-technical users, such as domain experts, as they are likely to experience a steep learning curve. The Creative Data Ontology itself meets the criteria for a high-quality ontology for the sub-sectors of the moving image industry that it covers (i.e. scripted film and television, visual effects, and unscripted television), and it provides a good foundation for expanding into other sub-sectors of the industry, although it cannot yet be considered a "standard" ontology. Finally, the thesis presents the methodological process used to develop the Creative Data Ontology and the lessons learned during the ontology engineering process, which can be valuable guidance for designers and developers of future metadata ontologies. We believe such guidance could be transferable across many domains where an ontology of metadata is required, even those unrelated to the moving image industry. Future research may focus on helping non-technical users overcome the learning curve, which may also be applicable to other domains that choose to use ontologies in the future.
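
    As a purely hypothetical illustration of the kind of metadata modelling such a domain ontology enables, the sketch below uses rdflib; the class and property names are invented and are not taken from the Creative Data Ontology:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical namespace; the real Creative Data Ontology terms may differ.
CDO = Namespace("http://example.org/cdo#")

g = Graph()
# A minimal class hierarchy for moving-image assets.
g.add((CDO.Shot, RDF.type, RDFS.Class))
g.add((CDO.VFXShot, RDFS.subClassOf, CDO.Shot))

# Typed metadata attached to a particular shot.
shot = CDO["shot_042"]
g.add((shot, RDF.type, CDO.VFXShot))
g.add((shot, CDO.frameRate, Literal(24)))
g.add((shot, RDFS.label, Literal("Exterior chase, plate 042")))

# Metadata retrieval becomes a structured query instead of a file-system search.
for s in g.subjects(RDF.type, CDO.VFXShot):
    print(s, g.value(s, RDFS.label))
```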

    Meta-ontology fault detection

    Ontology engineering is the field, within knowledge representation, concerned with using logic-based formalisms to represent knowledge, typically in moderately sized knowledge bases called ontologies. How best to develop, use and maintain these ontologies has produced relatively large bodies of formal, theoretical and methodological research. One subfield of ontology engineering is ontology debugging, which is concerned with preventing, detecting and repairing errors (or, more generally, pitfalls, bad practices or faults) in ontologies. Due to the logical nature of ontologies and, in particular, entailment, these faults are often hard to prevent and detect, and have far-reaching consequences. This makes ontology debugging one of the principal challenges to a more widespread adoption of ontologies in applications. Another important subfield of ontology engineering is ontology alignment: combining multiple ontologies to produce more powerful results than the simple sum of the parts. Ontology alignment further increases the issues, difficulties and challenges of ontology debugging by introducing, propagating and exacerbating faults in ontologies. A relevant aspect of the field of ontology debugging is that, due to these challenges and difficulties, research within it is usually notably constrained in scope, focusing on particular aspects of the problem, or on the application to certain subdomains, or under specific methodologies. Similarly, the approaches are often ad hoc and related to other approaches only at a conceptual level. There are no well-established and widely used formalisms, definitions or benchmarks that form a foundation for the field of ontology debugging.

    In this thesis, I tackle the problem of ontology debugging from a more abstract point of view than usual, surveying the existing literature in the field, extracting common ideas and, especially, formulating them in a common language and under a common approach. Meta-ontology fault detection is a framework for detecting faults in ontologies that uses semantic fault patterns to express, in a systematic way, schematic entailments that typically indicate faults. The formalism that I developed to represent these patterns is called existential second-order query logic (abbreviated as ESQ logic). I further reformulated a large proportion of the ideas present in existing research into this framework, as patterns in ESQ logic, providing a pattern catalogue. Most of the work during my PhD was spent designing and implementing an algorithm to automatically and effectively detect arbitrary ESQ patterns in arbitrary ontologies. The result is what we call minimal commitment resolution for ESQ logic, an extension of first-order resolution that draws on important ideas from higher-order unification and implements a novel approach to unification problems using dependency graphs. I have proven important theoretical properties of this algorithm, such as its soundness, its termination (in a certain sense and under certain conditions) and its fairness or completeness in the enumeration of infinite spaces of solutions. Moreover, I have produced an implementation of minimal commitment resolution for ESQ logic in Haskell that passes all unit tests and produces non-trivial results on small examples. However, attempts to apply this algorithm to examples of a more realistic size have proven unsuccessful, with computation times that exceed our tolerance levels.

    In this thesis, I provide details of the challenges faced in this regard, as well as other, successful forms of qualitative evaluation of the meta-ontology fault detection approach. I also discuss what I believe are the main causes of the computational feasibility problems, ideas on how to overcome them, and directions for future work that could use the results in this thesis to contribute to foundational formalisms, ideas and approaches to ontology debugging that can properly combine existing constrained research. It is unclear to me whether minimal commitment resolution for ESQ logic can, in its current shape, be implemented efficiently, but I believe that, at the very least, the theoretical and conceptual underpinnings presented in this thesis will be useful for producing more foundational results in the field.
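
    A rough illustration of what detecting a schematic-entailment fault pattern means in practice (the triple representation and the pattern are invented for this sketch; they are not the ESQ syntax of the thesis):

```python
# Entailed axioms of an ontology, represented as triples (hypothetical format).
# "Nothing" stands for the bottom concept (owl:Nothing).
entailments = {
    ("Penguin", "subClassOf", "Bird"),
    ("Penguin", "subClassOf", "Nothing"),   # an unsatisfiable class
    ("Bird", "subClassOf", "Animal"),
}

def unsatisfiable_class_pattern(entailments):
    """A simple semantic fault pattern: any named class entailed to be a
    subclass of the bottom concept is unsatisfiable and likely a fault."""
    return [c for (c, rel, d) in entailments
            if rel == "subClassOf" and d == "Nothing"]

print(unsatisfiable_class_pattern(entailments))  # ['Penguin']
```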

    Journalistic Knowledge Platforms: from Idea to Realisation

    Journalistic Knowledge Platforms (JKPs) are a type of intelligent information system designed to augment news creation processes by combining big data, artificial intelligence (AI) and knowledge bases to support journalists. Despite their potential to revolutionise the field of journalism, the adoption of JKPs has been slow, with scholars and large news outlets involved in their research and development. The slow adoption can be attributed to the technical complexity of JKPs, which has led news organisations to rely on multiple independent, task-specific production systems. This situation can increase the resource and coordination footprint and costs, while at the same time posing the threat of losing control over data and facing vendor lock-in scenarios. The technical complexities remain a major obstacle, as there is no existing well-designed system architecture that would facilitate the realisation and integration of JKPs in a coherent manner over time. This PhD Thesis contributes to the theory and practice of knowledge-graph-based JKPs by studying and designing a software reference architecture to facilitate the instantiation of concrete solutions and the adoption of JKPs. The first contribution of this PhD Thesis provides a thorough and comprehensible analysis of the idea of JKPs, from their origins to their current state. This analysis provides the first-ever study of the factors that have contributed to the slow adoption, including the complexity of their social and technical aspects, and identifies the major challenges and future directions of JKPs. The second contribution presents the software reference architecture, which provides a generic blueprint for designing and developing concrete JKPs. The proposed reference architecture also defines two novel types of components intended to maintain and evolve AI models and knowledge representations. The third contribution presents an instantiation example of the software reference architecture and details a process for improving the efficiency of information extraction pipelines. This framework facilitates a flexible, parallel and concurrent integration of natural language processing techniques and AI tools. Additionally, this Thesis discusses the implications of recent AI advances for JKPs and diverse ethical aspects of using JKPs. Overall, this PhD Thesis provides a comprehensive and in-depth analysis of JKPs, from the theory to the design of their technical aspects. This research aims to facilitate the adoption of JKPs and advance research in this field.
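
    The flexible, parallel and concurrent integration of extraction components described in the third contribution can be sketched with Python's standard concurrency tools (the extractor functions and their outputs are hypothetical stand-ins for real NLP models):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical, independent information-extraction components.
def extract_entities(text: str) -> dict:
    return {"entities": ["Oslo"]}  # stand-in for a real NER model

def extract_topics(text: str) -> dict:
    return {"topics": ["politics"]}  # stand-in for a topic classifier

def extract_quotes(text: str) -> dict:
    return {"quotes": []}  # stand-in for a quote-attribution tool

def run_pipeline(article: str) -> dict:
    """Run independent extractors concurrently and merge their outputs
    into a single record that can be pushed to the knowledge base."""
    extractors = [extract_entities, extract_topics, extract_quotes]
    record: dict = {}
    with ThreadPoolExecutor() as pool:
        for partial in pool.map(lambda f: f(article), extractors):
            record.update(partial)
    return record

print(run_pipeline("The parliament in Oslo debated the new media bill."))
```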