
    A comparison of statistical machine learning methods in heartbeat detection and classification

    In health care, patients with heart problems require quick responsiveness in a clinical setting or in the operating theatre. Towards that end, automated classification of heartbeats is vital, as some heartbeat irregularities are time-consuming to detect. Therefore, analysis of electrocardiogram (ECG) signals is an active area of research. The methods proposed in the literature depend on the structure of a heartbeat cycle. In this paper, we use interval- and amplitude-based features together with a few samples from the ECG signal as a feature vector. We studied a variety of classification algorithms, focusing especially on a type of arrhythmia known as the ventricular ectopic beat (VEB). We compare the performance of the classifiers against algorithms proposed in the literature and make recommendations regarding features, sampling rate, and choice of classifier for a real-time clinical setting. The extensive study is based on the MIT-BIH arrhythmia database. Our main contributions are the evaluation of existing classifiers over a range of sampling rates, the recommendation of a detection methodology to employ in a practical setting, and the extension of the notion of a mixture of experts to a larger class of algorithms.
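    As a hedged illustration of the feature-vector construction described above, the sketch below combines RR intervals, R-wave amplitude, and a few raw samples per beat and feeds them to an off-the-shelf classifier. The specific features, window size, and the choice of scikit-learn's RandomForestClassifier are assumptions for illustration, not the paper's exact setup.

        # Minimal sketch: interval/amplitude features plus raw ECG samples per beat.
        # beat_features() and its feature choices are illustrative assumptions.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def beat_features(signal, r_peaks, i, n_samples=8):
            """Feature vector for the i-th beat: RR intervals, R amplitude, raw samples."""
            rr_prev = r_peaks[i] - r_peaks[i - 1]      # preceding RR interval
            rr_next = r_peaks[i + 1] - r_peaks[i]      # following RR interval
            r_amp = signal[r_peaks[i]]                 # R-wave amplitude
            half = n_samples // 2
            window = signal[r_peaks[i] - half : r_peaks[i] + half]
            return np.concatenate(([rr_prev, rr_next, r_amp], window))

        # signal: 1-D ECG trace; r_peaks: annotated R-peak indices; labels: beat
        # classes (e.g. normal vs. ventricular ectopic) from an annotated record.
        def train_beat_classifier(signal, r_peaks, labels):
            X = np.array([beat_features(signal, r_peaks, i)
                          for i in range(1, len(r_peaks) - 1)])
            y = labels[1 : len(r_peaks) - 1]
            return RandomForestClassifier(n_estimators=100).fit(X, y)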

    A Centralized Energy Management System for Wireless Sensor Networks

    This document presents the Centralized Energy Management System (CEMS), a dynamic fault-tolerant reclustering protocol for wireless sensor networks. CEMS reconfigures a homogeneous network both periodically and in response to critical events (e.g. cluster head death). A global TDMA schedule prevents costly retransmissions due to collision, and a genetic algorithm running on the base station computes cluster assignments in concert with a head selection algorithm. CEMS' performance is compared to the LEACH-C protocol in both normal and failure-prone conditions, with an emphasis on each protocol's ability to recover from unexpected loss of cluster heads.
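    To make the base-station side concrete, here is a minimal sketch of a genetic algorithm searching over node-to-cluster-head assignments. The fitness function (total squared node-to-head distance as an energy proxy) and all parameter values are assumptions for illustration, not CEMS' actual objective.

        # Illustrative GA over cluster assignments; fitness is a stand-in energy proxy.
        import random

        def fitness(assignment, nodes, heads):
            # Lower is better: sum of squared node-to-head distances.
            return sum((nodes[i][0] - heads[a][0]) ** 2 +
                       (nodes[i][1] - heads[a][1]) ** 2
                       for i, a in enumerate(assignment))

        def evolve(nodes, heads, pop_size=50, generations=200, mut_rate=0.05):
            pop = [[random.randrange(len(heads)) for _ in nodes]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda a: fitness(a, nodes, heads))
                survivors = pop[: pop_size // 2]           # elitist selection
                children = []
                while len(survivors) + len(children) < pop_size:
                    p1, p2 = random.sample(survivors, 2)
                    cut = random.randrange(len(nodes))
                    child = p1[:cut] + p2[cut:]            # one-point crossover
                    child = [random.randrange(len(heads))  # per-gene mutation
                             if random.random() < mut_rate else g
                             for g in child]
                    children.append(child)
                pop = survivors + children
            return min(pop, key=lambda a: fitness(a, nodes, heads))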

    Sports Data Mining Technology Used in Basketball Outcome Prediction

    Driven by increasingly comprehensive sports datasets and the success of data mining techniques in other areas, sports data mining has emerged, enabling us to find hidden knowledge that can impact the sports industry. Predicting the outcomes of sporting events has always been challenging and attractive work, and it has therefore drawn wide research interest. This project focuses on using machine learning algorithms to build a model for predicting NBA game outcomes; the algorithms involved are the Simple Logistic classifier, artificial neural networks, SVM, and Naïve Bayes. To obtain a convincing result, data from five regular NBA seasons was collected for model training, and data from one regular NBA season was used as the scoring dataset. After automated data collection and cloud-enabled data management, a data mart containing NBA statistics was built. The machine learning models mentioned above were then trained and tested on data from the data mart. After applying the scoring dataset to evaluate model accuracy, the Simple Logistic classifier yielded the best result, with an accuracy of 69.67%. The results obtained were compared to other methods from different sources; the results of this project are arguably more persuasive given the large quantity of data applied, and they can serve as a reference for future work.
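    As a hedged sketch of the modelling step, the snippet below trains a logistic classifier on per-game team statistics and scores it on a held-out season. The column names are hypothetical stand-ins; the project's actual data mart schema is not shown in the abstract.

        # Train on five seasons, score on one; FEATURES are illustrative columns.
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score

        FEATURES = ["home_fg_pct", "away_fg_pct", "home_rebounds",
                    "away_rebounds", "home_turnovers", "away_turnovers"]

        def train_and_score(train_df: pd.DataFrame, score_df: pd.DataFrame) -> float:
            model = LogisticRegression(max_iter=1000)
            model.fit(train_df[FEATURES], train_df["home_win"])  # 1 = home team won
            preds = model.predict(score_df[FEATURES])
            return accuracy_score(score_df["home_win"], preds)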

    Investigation of Secure Health Monitoring System Using IoT

    The rapid progress of technology, particularly the Internet of Things (IoT), has introduced exciting opportunities for transforming the healthcare sector. One significant area where IoT has made an impact is the creation of secure health monitoring systems. These systems use IoT devices and sensors to gather and transmit live health data, enabling remote monitoring and individualized healthcare. The integration of IoT in healthcare monitoring offers numerous benefits, including improved patient outcomes, enhanced access to care, and increased efficiency in healthcare delivery. To develop such a system, one would typically follow a research methodology with several key steps: clearly state the research objectives, such as designing and implementing a secure health monitoring system using IoT; specify the aspects to focus on, such as data privacy, authentication, encryption, or device communication; and develop a high-level system architecture that defines the components, their functionalities, and how they interact, taking into account security aspects such as secure data transmission, authentication, access control, and data storage. By multiplying each goal by a user-provided weight, a collection of goals can be scaled into a single objective using the weighted sum approach, one of the most popular strategies; finding the appropriate weight for each goal is the main concern when using it. Five systems, HMS1 through HMS5, were taken as the alternatives, and Portability, Round-The-Clock Health Surveillance, Ease of Use, and Reliability were taken as the evaluation parameters. HMS1 performed well compared to the others, so HMS1 is the preferred secure health monitoring system using IoT.
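    The weighted sum approach described above reduces to score(alternative) = sum over criteria of weight_j * value_j. A minimal sketch follows, with placeholder weights and criterion scores rather than the paper's data:

        # Weighted-sum scalarization over the four evaluation parameters.
        criteria = ["Portability", "Round-The-Clock Surveillance",
                    "Ease of Use", "Reliability"]
        weights = [0.3, 0.3, 0.2, 0.2]        # user-supplied; should sum to 1

        scores = {                            # hypothetical criterion scores in [0, 1]
            "HMS1": [0.9, 0.8, 0.7, 0.9],
            "HMS2": [0.6, 0.7, 0.8, 0.6],
            "HMS3": [0.7, 0.6, 0.9, 0.7],
        }

        def weighted_sum(values, weights):
            return sum(v * w for v, w in zip(values, weights))

        ranking = sorted(scores, reverse=True,
                         key=lambda alt: weighted_sum(scores[alt], weights))
        print(ranking)   # alternatives ordered best-first under these weights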

    The application of data mining techniques to interrogate Western Australian water catchment data sets

    Current environmental challenges such as increasing dryland salinity, waterlogging, eutrophication, and high nutrient runoff in the south-western regions of Western Australia may have both cultural and environmental implications in the near future. Advances in computer science, more specifically data mining techniques and geographic information services, provide the means to conduct longitudinal climate studies to predict changes in the water catchment areas of Western Australia. This research proposes to utilise existing spatial data mining techniques in conjunction with modern open-source geospatial tools to interpret trends in Western Australian water catchment land use. This will be achieved through the development of an innovative data mining interrogation tool that measures and validates the effectiveness of data mining methods on a sample water catchment data set from the Peel-Harvey region of WA. In doing so, current and future statistical evaluation of potential dryland salinity trends can be elucidated. The interrogation tool will incorporate different modern geospatial data mining techniques to discover meaningful and useful patterns specific to the current agricultural problem domain of dryland salinity. Large GIS data sets of the water catchments in the Peel-Harvey region have been collected by the state government's Shared Land Information Platform in conjunction with the LandGate agency. The proposed tool will provide an interface for analysis of water catchment data sets by benchmarking measures using the chosen data mining techniques, such as classical statistical methods, cluster analysis, and principal component analysis. The outcome of the research will be an innovative data mining tool for interrogating salinity issues in Western Australian water catchments, with a user-friendly interface for use by government agencies, such as researchers at the Department of Agriculture and Food of Western Australia, and other agricultural industry stakeholders.
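    As a hedged sketch of the benchmarking idea, the snippet below standardizes catchment attributes, reduces them with principal component analysis, and clusters the result, two of the techniques the abstract names. The attribute layout and parameter choices are assumptions for illustration.

        # PCA + k-means over a catchment attribute matrix (rows = spatial units).
        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        def cluster_catchment(X: np.ndarray, n_components: int = 2, k: int = 4):
            """X: rows are catchment cells, columns are attributes
            (e.g. salinity, land use, runoff -- hypothetical here)."""
            X_std = StandardScaler().fit_transform(X)   # put attributes on one scale
            X_pca = PCA(n_components=n_components).fit_transform(X_std)
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(X_pca)
            return X_pca, labels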

    Design Space Exploration and Resource Management of Multi/Many-Core Systems

    The increasing demand for processing a greater number of applications and related data on computing platforms has led to reliance on multi-/many-core chips, as they facilitate parallel processing. However, these platforms need to be energy-efficient and reliable, and to perform secure computations in the interest of the whole community. This book provides perspectives on these aspects from leading researchers, in terms of state-of-the-art contributions and upcoming trends.

    Big Earth Data and Machine Learning for Sustainable and Resilient Agriculture

    Big streams of Earth images from satellites or other platforms (e.g., drones and mobile phones) are becoming increasingly available at low or no cost and with enhanced spatial and temporal resolution. This thesis recognizes the unprecedented opportunities offered by the high-quality, open-access Earth observation data of our times and introduces novel machine learning and big data methods to properly exploit them towards developing applications for sustainable and resilient agriculture. The thesis addresses three distinct thematic areas, i.e., the monitoring of the Common Agricultural Policy (CAP), the monitoring of food security, and applications for smart and resilient agriculture. The methodological innovations in the three thematic areas address the following issues: i) the processing of big Earth Observation (EO) data, ii) the scarcity of annotated data for machine learning model training, and iii) the gap between machine learning outputs and actionable advice. This thesis demonstrates how big data technologies such as data cubes, distributed learning, linked open data, and semantic enrichment can be used to exploit the data deluge and extract knowledge to address real user needs. Furthermore, this thesis argues for the importance of semi-supervised and unsupervised machine learning models that circumvent the ever-present challenge of scarce annotations and thus allow for model generalization in space and time. Specifically, it is shown how merely a few ground truth data are needed to generate high-quality crop type maps and crop phenology estimations. Finally, this thesis argues that there is considerable distance in value between model inferences and decision making in real-world scenarios, and thereby showcases the power of causal and interpretable machine learning in bridging this gap.
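    As a hedged illustration of the few-labels argument, the sketch below uses scikit-learn's self-training wrapper, which propagates a base classifier's confident predictions to unlabelled samples (marked -1). This is a generic stand-in, not the thesis's actual models.

        # Semi-supervised crop-type classification with very few labels.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.semi_supervised import SelfTrainingClassifier

        # X: per-pixel EO features; y: crop-type labels, -1 for unlabelled pixels.
        def fit_with_few_labels(X: np.ndarray, y: np.ndarray):
            base = LogisticRegression(max_iter=1000)
            model = SelfTrainingClassifier(base, threshold=0.9)  # keep confident pseudo-labels
            return model.fit(X, y)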

    Machine learning based anomaly detection for industry 4.0 systems.

    This thesis studies anomaly detection in industrial systems using technologies from the Fourth Industrial Revolution (4IR), such as the Internet of Things, Artificial Intelligence, 3D Printing, and Augmented Reality. The goal is to provide tools that can be used in real-world scenarios to detect system anomalies, with the intent of improving production and maintenance processes. The thesis investigates the applicability and implementation of 4IR technology architectures, AI-driven machine learning systems, and advanced visualization tools to support decision-making based on the detection of anomalies. The work covers a range of topics, including the conception of a 4IR system based on a generic architecture, the design of a data acquisition system for analysis and modelling, the creation of ensemble supervised and semi-supervised models for anomaly detection, the detection of anomalies through frequency analysis, and the visualization of associated data using Visual Analytics. The results show that the proposed methodology for integrating anomaly detection systems into new or existing industries is valid, and that combining 4IR architectures, ensemble machine learning models, and Visual Analytics tools significantly enhances the anomaly detection processes for industrial systems. Furthermore, the thesis presents a guiding framework for data engineers and end-users.
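    As a hedged sketch of the frequency-analysis route, the snippet below learns a baseline spectrum from healthy operation and flags windows whose band energies deviate strongly from it. The z-score thresholding rule is an assumption for illustration, not the thesis's exact method.

        # FFT band-energy baseline with a simple deviation test.
        import numpy as np

        def band_energy(window: np.ndarray) -> np.ndarray:
            spectrum = np.abs(np.fft.rfft(window))   # magnitude spectrum
            return spectrum ** 2                     # per-bin energy

        def fit_baseline(healthy_windows):
            energies = np.array([band_energy(w) for w in healthy_windows])
            return energies.mean(axis=0), energies.std(axis=0)

        def is_anomalous(window, mean, std, z_max=4.0) -> bool:
            z = np.abs(band_energy(window) - mean) / (std + 1e-9)
            return bool(z.max() > z_max)             # any band deviating strongly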

    High-Performance approaches for Phylogenetic Placement, and its application to species and diversity quantification

    In recent years, advances in high-throughput gene sequencing, combined with the continued exponential growth and availability of computational resources, have led to fundamentally new analytical approaches in biology. It is now possible to comprehensively sequence the genetic content of entire communities of organisms from individual environmental samples. Such methods are particularly relevant to microbiology, which was previously largely restricted to the study of microbes that could be cultivated in the laboratory (i.e., in vitro), covering only a small fraction of the diversity found in nature. By contrast, high-throughput sequencing now enables the direct capture of the genetic sequences of a microbiome as it occurs in its natural environment (i.e., in situ). A typical goal of microbiome studies is the taxonomic classification of the sequences contained in a sample (query sequences). Phylogenetic methods are commonly used to determine detailed taxonomic relationships between query sequences and trusted reference sequences obtained from already classified organisms. Due to the high volume (10^6 to 10^9) of query sequences that can be generated from a microbiome sample via high-throughput sequencing, accurate phylogenetic tree reconstruction is no longer computationally feasible. Moreover, currently used sequencing technologies typically produce comparatively short sequences that carry a limited phylogenetic signal, leading to instability in the phylogenies inferred from them. Another typical goal of microbiome studies is the quantification of diversity within a sample, or between several samples, for which phylogenetic methods are also commonly used. These methods often require the inference of a phylogenetic tree comprising either all sequences or a clustered subset of them. As with taxonomic identification, analyses based on this kind of tree inference can produce inaccurate results and/or be computationally infeasible. In contrast to comprehensive phylogenetic inference, phylogenetic placement is a method that determines the phylogenetic context of a query sequence within an established reference tree. This procedure typically treats the reference tree as immutable, i.e., the reference tree is not altered before, during, or after the placement of a sequence. This allows the phylogenetic placement of a sequence to be performed in linear time with respect to the size of the reference tree. Combined with taxonomic information about the reference sequences, phylogenetic placement thus enables the taxonomic identification of a sequence. Moreover, phylogenetic placement opens up a variety of additional analyses, for example relating the composition of human microbiomes to clinical-diagnostic properties. In this dissertation, I present my work on the design, implementation, and improvement of EPA-ng, a high-performance implementation of maximum-likelihood phylogenetic placement. EPA-ng was designed to scale to billions of query sequences and to run on thousands of cores in shared- and distributed-memory systems. EPA-ng also accelerates single-core processing speed by up to 30-fold compared to its direct competitors. Recently, we introduced an additional mode for EPA-ng that enables placement on substantially larger reference trees. For this, we use an active memory management approach that trades reduced memory consumption for longer execution times. In addition, I present a massively parallel approach for quantifying the diversity of a sample based on phylogenetic placement results. This software, called SCRAPP, combines current methods for maximum-likelihood-based phylogenetic inference with methods for molecular species delimitation. The result is a distribution of species counts over the branches of a reference tree for a given sample. Furthermore, I describe a novel approach for clustering placement results that allows the user to reduce the computational effort.
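    To make the linear-time property concrete, here is a toy sketch of the placement loop that EPA-ng implements at scale: each query is scored against every branch of the fixed reference tree and attached where the score is best. The score_fn placeholder stands in for the actual maximum-likelihood computation.

        # One pass over the reference branches per query: linear in tree size.
        def place_query(query, reference_edges, score_fn):
            best_edge, best_score = None, float("-inf")
            for edge in reference_edges:
                s = score_fn(query, edge)   # log-likelihood of attaching here
                if s > best_score:
                    best_edge, best_score = edge, s
            return best_edge, best_score

        def place_all(queries, reference_edges, score_fn):
            # The reference tree is never modified, so queries are independent
            # and can be processed in parallel across cores or nodes.
            return [place_query(q, reference_edges, score_fn) for q in queries]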