4 research outputs found

    Modeling Weather Impact on a Secondary Electrical Grid

    Weather can cause problems for underground electrical grids by increasing the probability of serious “manhole events” such as fires and explosions. In this work, we compare a model that incorporates weather features associated with the dates of serious events into a single logistic regression with a more complex approach that uses three interdependent log-linear models for weather, baseline manhole vulnerability, and the vulnerability of manholes to weather. The latter approach more naturally captures the dependencies between weather, structure properties, and structure vulnerability.
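The simpler of the two approaches can be sketched as an ordinary logistic regression over per-day weather covariates. The sketch below is a minimal, hypothetical illustration, not the paper's model: the feature names, synthetic data, and coefficients are all assumptions.

```python
# Hypothetical sketch of a single logistic regression on weather features
# for serious-event prediction. Features and labels are synthetic; the
# covariate names (precipitation, temperature, wind) are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# 500 synthetic days, 3 standardized weather covariates
X = rng.normal(size=(500, 3))
# Synthetic labels: higher values of the first covariate raise event probability
logits = 1.5 * X[:, 0] - 0.5
y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
probs = model.predict_proba(X)[:, 1]  # estimated event probability per day
```

A single model like this treats weather as just another covariate; the paper's three-model alternative instead factors weather, baseline vulnerability, and weather sensitivity into separate but interdependent components.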

    Algorithms for information extraction and signal annotation on long-term biosignals using clustering techniques

    Dissertation submitted for the degree of Master in Biomedical Engineering. One of the biggest challenges when analysing data is to extract information from it, especially when dealing with very large datasets, which bring a new set of barriers to overcome. The extracted information can be used to aid physicians in their diagnoses, since biosignals often carry vital information about the subjects. In this research work, we present a signal-independent algorithm with two main goals: to detect events in biosignals and, from those events, to extract information using a set of distance measures that serve as input to a parallel version of the k-means clustering algorithm. The first goal is achieved using two different approaches: events can be found by peak detection, through an adaptive threshold defined as the signal’s root mean square (RMS), or by morphological analysis, through computation of the signal’s meanwave. The second goal is achieved by dividing the distance measures into n parts and running k-means on each individually; parallel computing techniques were applied to improve speed. For this study, a set of signals of different types was acquired and annotated by our algorithm. By visual inspection, the L1 and L2 Minkowski distances allowed clustering of signal cycles with efficiencies of 97.5% and 97.3%, respectively; using the meanwave distance, our algorithm achieved an accuracy of 97.4%. For ECGs downloaded from the PhysioNet databases, the developed algorithm detected 638 of the 644 events manually annotated by physicians. Because this algorithm can be applied to long-term raw biosignals without requiring any prior information about them, it is an important contribution to biosignal information extraction and annotation.
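The RMS-threshold detection step described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the dissertation's implementation: the synthetic signal, peak spacing, and function name are all invented here, and the clustering stage is omitted.

```python
# Minimal sketch of event detection via an adaptive threshold set to the
# signal's root mean square (RMS), as the abstract outlines. The ECG-like
# test signal is synthetic and purely illustrative.
import numpy as np

def detect_events(signal):
    """Return indices where the signal first rises above its RMS threshold."""
    rms = np.sqrt(np.mean(signal ** 2))
    above = signal > rms
    # keep only rising edges so each event is counted once
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Synthetic signal: low-amplitude baseline with 10 evenly spaced peaks
t = np.arange(2000)
signal = 0.02 * np.sin(0.1 * t)
signal[100::200] += 1.0  # peaks at samples 100, 300, ..., 1900

events = detect_events(signal)  # one index per peak
```

Because the peaks dominate the RMS, the threshold sits well above the baseline, so each peak produces exactly one rising-edge crossing; the per-event feature vectors would then feed the k-means stage.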

    Automated Detection of Anomalous Patterns in Validation Scores for Protein X-Ray Structure Models

    Structural bioinformatics is a subdomain of data mining focused on identifying structural patterns relevant to functional attributes in repositories of biological macromolecular structure models. This research focused on structures determined via x-ray crystallography and deposited in the Protein Data Bank (PDB). Protein structures deposited in the PDB are products of experimental processes, and only approximately model physical reality. Structural biologists address accuracy and precision concerns via community-enforced consensus standards of accepted practice for proper building, refinement, and validation of models. Validation scores are quantitative partial indicators of the likelihood that a model contains serious systematic errors. The PDB recently convened a panel of experts, which placed renewed emphasis on troubling anomalies among deposited structure models. This study set out to detect such anomalies. I hypothesized that community consensus standards would be evident in patterns of validation scores, and that deviations from those standards would appear as unusual combinations of validation scores. Validation attributes were extracted from PDB entry headers and multiple software tools (e.g., WhatCheck, SFCheck, and MolProbity). Independent component analysis (ICA) was used for attribute transformation to increase contrast between inliers and outliers. Unusual patterns were sought in regions of locally low density in the space of validation score profiles, using a novel standardization of Local Outlier Factor (LOF) scores. Validation score profiles associated with the most extreme outlier scores were demonstrably anomalous according to domain theory. Among these were documented fabrications, possible annotation errors, and complications in the underlying experimental data. Analysis of deep inliers revealed promising support for the hypothesized link between consensus standard practices and common validation score values. Unfortunately, with numerical anomaly-detection methods that operate simultaneously on numerous continuous-valued attributes, it is often difficult to know why a case receives a particular outlier score. I therefore hypothesized that IF-THEN rules could be used to post-process outlier scores to make them comprehensible and explainable. Inductive rule extraction was performed using RIPPER. Results were mixed, but they represent a promising proof of concept. The methods explored are general and applicable beyond this problem; indeed, they could be used to detect structural anomalies from physical attributes.
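The density-based outlier scoring described above can be sketched with an off-the-shelf LOF implementation. This is a hedged illustration, not the dissertation's pipeline: the data are synthetic, and a plain z-score stands in for the author's novel LOF standardization.

```python
# Sketch of LOF-based outlier scoring on a matrix of validation-score
# profiles. The inlier/outlier data are synthetic; the z-score step is a
# simple stand-in for the dissertation's novel standardization of LOF scores.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=(200, 4))   # typical validation profiles
outliers = rng.normal(6.0, 1.0, size=(5, 4))    # anomalous profiles
X = np.vstack([inliers, outliers])

lof = LocalOutlierFactor(n_neighbors=20)
lof.fit(X)
scores = -lof.negative_outlier_factor_          # higher = more anomalous
z = (scores - scores.mean()) / scores.std()     # standardized outlier scores

top5 = np.argsort(z)[-5:]                       # most extreme profiles
```

Standardizing the scores makes profiles comparable across runs; the IF-THEN rule extraction with RIPPER would then post-process such scores to explain which attributes drove each extreme value.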

    Efficient Reinforcement Learning for Autonomous Navigation

    More and more authors regard the concept of rational agents as central to the study of artificial intelligence. The goal of this work was to advance that approach: to design, implement, and test a rational robot agent in several real-world environments. The robot agent must learn, on its own, the solution to a challenging navigation problem. The focus is not on building a map of the environment, but on developing methods that allow the agent to solve the navigation problem autonomously in different environments and to continually improve the solutions it finds. Many methods of modern artificial intelligence, such as neural networks, evolutionary algorithms, and reinforcement learning, are employed in this work. The well-known reinforcement learning method is applied in developing the agents. By incorporating available but previously unused information, the learning process is made more efficient. Furthermore, the architecture employed in the rational agent significantly reduces the number of decision steps required to solve the task, which further increases the efficiency of the learning process. Equipped with a suitable architecture and efficient learning methods, the rational agent can learn its route directly in the real world and improve it after every run.
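The trial-and-error improvement loop the abstract describes can be illustrated with tabular Q-learning on a toy navigation task. This is not the dissertation's agent or architecture: the corridor environment, reward shaping, and hyperparameters below are all assumptions chosen for a minimal, runnable sketch.

```python
# Illustrative tabular Q-learning on a tiny 1-D corridor: the agent starts
# at cell 0 and must learn to walk right to the goal cell, improving its
# policy after every run. Environment and hyperparameters are invented here.
import numpy as np

n_states, n_actions = 10, 2            # corridor cells; actions: 0=left, 1=right
goal = n_states - 1
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.95, 0.1     # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != goal:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s2 == goal else -0.01   # small step cost favors short paths
        # standard Q-learning update toward the bootstrapped target
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

greedy_path = [int(Q[s].argmax()) for s in range(goal)]  # learned policy
```

After enough episodes the greedy policy moves right in every cell, mirroring the abstract's point that the agent both solves the task autonomously and keeps refining the solution across runs.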