121,671 research outputs found

    The LSST Data Mining Research Agenda

    Full text link
    We describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; design of a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; indexing of multi-attribute, multi-dimensional astronomical databases (beyond spatial indexing) for rapid querying of petabyte databases; and more.
    Comment: 5 pages. Presented at the "Classification and Discovery in Large Astronomical Surveys" meeting, Ringberg Castle, 14-17 October, 200
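    The indexing and outlier-identification items above lend themselves to a small illustration. Below is a minimal sketch, not from the paper, of multi-attribute indexing with a k-d tree over catalog features, with the k-nearest-neighbour distance used as a crude anomaly score; the column meanings, catalog size, and scoring rule are all assumptions made for illustration.

```python
import numpy as np
from sklearn.neighbors import KDTree
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in catalog: a few magnitudes, colours, and a variability index per object.
catalog = rng.normal(size=(20_000, 5))

features = StandardScaler().fit_transform(catalog)   # put attributes on a comparable scale
tree = KDTree(features)                               # multi-attribute index, not just RA/Dec

# Distance to the 10th nearest neighbour as a crude anomaly score.
dist, _ = tree.query(features, k=11)                  # k=11 -> 10 neighbours plus the point itself
score = dist[:, -1]
candidates = np.argsort(score)[-100:]                 # the 100 most isolated objects
print(f"flagged {candidates.size} candidate anomalies")
```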

    EventNet: Detecting Events in EEG

    Full text link
    Neurologists are often looking for various "events of interest" when analyzing EEG. To support them in this task, various machine-learning-based algorithms have been developed. Most of these algorithms treat the problem as classification, thereby independently processing signal segments and ignoring temporal dependencies inherent to events of varying duration. At inference time, the predicted labels for each segment then have to be post-processed to detect the actual events. We propose an end-to-end event detection approach (EventNet), based on deep learning, that directly works with events as learning targets, stepping away from ad-hoc postprocessing schemes to turn model outputs into events. We compare EventNet with a state-of-the-art approach for artefact and epileptic seizure detection, two event types with highly variable durations. EventNet shows improved performance in detecting both event types. These results show the power of treating events as direct learning targets, instead of using ad-hoc postprocessing to obtain them. Our event detection framework can easily be extended to other event detection problems in signal processing, since the deep learning backbone does not depend on any task-specific features.
    Comment: This work has been submitted to the IEEE for possible publication.
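    For contrast with the end-to-end approach described above, the sketch below shows the kind of ad-hoc post-processing that segment-based classifiers require: merging per-segment binary labels into (start, end) event intervals. The segment length and the merging rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

def labels_to_events(labels, seg_len_s=1.0):
    """Merge consecutive positive per-segment labels into (start_s, end_s) events."""
    padded = np.concatenate([[0], np.asarray(labels), [0]])
    edges = np.diff(padded)
    starts = np.where(edges == 1)[0]       # 0 -> 1 transitions: event onsets
    ends = np.where(edges == -1)[0]        # 1 -> 0 transitions: event offsets
    return [(s * seg_len_s, e * seg_len_s) for s, e in zip(starts, ends)]

# Per-segment predictions from a hypothetical classifier (1 = event present).
print(labels_to_events([0, 1, 1, 0, 1, 0, 0, 1]))   # [(1.0, 3.0), (4.0, 5.0), (7.0, 8.0)]
```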

    Classification methods for noise transients in advanced gravitational-wave detectors

    Get PDF
    Noise of non-astrophysical origin will contaminate science data taken by the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) and Advanced Virgo gravitational-wave detectors. Prompt characterization of instrumental and environmental noise transients will be critical for improving the sensitivity of the advanced detectors in the upcoming science runs. During the science runs of the initial gravitational-wave detectors, noise transients were manually classified by visually examining the time-frequency scan of each event. Here, we present three new algorithms designed for the automatic classification of noise transients in advanced detectors. Two of these algorithms are based on Principal Component Analysis: Principal Component Analysis for Transients (PCAT) and an adaptation of LALInference Burst (LIB). The third algorithm combines an event generator called Wavelet Detection Filter (WDF) with machine learning techniques for classification. We test these algorithms on simulated data sets, and we show their ability to automatically classify transients by frequency, SNR, and waveform morphology.
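    As a rough illustration of the PCA-plus-classifier idea (not the published PCAT, LIB, or WDF pipelines), the sketch below reduces fixed-length whitened transient snippets with PCA and classifies the component coefficients; the snippet length, number of classes, and choice of a random forest classifier are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_events, n_samples = 2000, 512
X = rng.normal(size=(n_events, n_samples))   # stand-in for whitened transient snippets
y = rng.integers(0, 3, size=n_events)        # stand-in glitch class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pca = PCA(n_components=20).fit(X_tr)                       # keep the leading principal components
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(pca.transform(X_tr), y_tr)                         # classify PC coefficients
print("held-out accuracy:", clf.score(pca.transform(X_te), y_te))
```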

    Towards a smart fall detection system using wearable sensors

    Get PDF
    Empirical thesis. "A thesis submitted as part of a cotutelle programme in partial fulfilment of Coventry University's and Macquarie University's requirements for the degree of Doctor of Philosophy" -- title page. Bibliography: pages 183-205.
    Contents: 1. Introduction -- 2. Literature review -- 3. Falls and activities of daily living datasets -- 4. An analysis of fall-detection approaches -- 5. Event-triggered machine-learning approach (EvenT-ML) -- 6. Genetic-algorithm-based feature-selection technique for fall detection (GA-Fade) -- 7. Conclusions and future work -- References -- Appendices.

    A fall-detection system is employed to monitor an older person or infirm patient and alert their carer when a fall occurs. Some studies use wearable-sensor technologies to detect falls, as those technologies are getting smaller and cheaper. To date, wearable-sensor-based fall-detection approaches are categorised into threshold-based and machine-learning-based approaches. A high number of false alarms and a high computational cost are issues faced by the threshold- and machine-learning-based approaches, respectively. The goal of this thesis is to address those issues by developing a novel low-computational-cost machine-learning-based approach for fall detection using accelerometer sensors.

    Toward this goal, existing fall-detection approaches (both threshold- and machine-learning-based) are explored and evaluated using publicly accessible datasets: Cogent, SisFall, and FARSEEING. Four machine-learning algorithms are implemented in this study: Classification and Regression Tree (CART), k-Nearest Neighbour (k-NN), Logistic Regression (LR), and Support Vector Machine (SVM). The experimental results show that using the correct size and type of sliding window to segment the data stream can give the machine-learning-based approach a better detection rate than the threshold-based approach, though the difference between the two is not significant in some cases.

    To further improve the performance of the machine-learning-based approaches, fall stages (pre-impact, impact, and post-impact) are used as the basis for feature extraction. A novel event-triggered machine-learning approach for fall detection (EvenT-ML) is proposed, which correctly aligns fall stages to a data segment and extracts features based on those stages. Correctly aligning the stages to a data segment is difficult because multiple high peaks, which would usually indicate the impact stage, often occur during the pre-impact stage. EvenT-ML significantly improves the detection rate and reduces the computational cost of existing machine-learning-based approaches, achieving an F-score of up to 97.6% and reducing the computational cost of feature extraction by a factor of up to 80. This technique also significantly outperforms the threshold-based approach in all cases.

    Finally, to reduce the computational cost of EvenT-ML even further, the number of features is reduced through a feature-selection process. A novel genetic-algorithm-based feature-selection technique (GA-Fade) is proposed, which uses multiple selection criteria: the detection rate, the computational cost, and the number of sensors used. GA-Fade is able to reduce the number of features by 60% on average while achieving an F-score of up to 97.7%. The selected features also give a significantly lower total computational cost than features selected by two single-criterion feature-selection techniques: SelectKBest and Recursive Feature Elimination.

    In summary, the techniques presented in this thesis significantly increase the detection rate of the machine-learning-based approach, so that a more reliable fall-detection system can be achieved. As an additional advantage, these techniques significantly reduce the computational cost of the machine-learning approach, indicating that the proposed approach is better suited to a small wearable device with limited resources (e.g., computing power and battery capacity) than existing machine-learning-based approaches.
    Mode of access: World Wide Web. 1 online resource (xx, 211 pages): diagrams, graphs, tables.
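    A rough sketch of the event-triggered, stage-based idea follows: the impact is located as the peak of the acceleration magnitude, pre-impact/impact/post-impact windows are aligned around it, and simple per-stage features feed an SVM. The sampling rate, window lengths, and feature set are assumptions and do not reproduce the thesis implementation.

```python
import numpy as np
from sklearn.svm import SVC

FS = 50                                     # sampling rate in Hz (assumed)
PRE, IMPACT, POST = 2 * FS, 1 * FS, 2 * FS  # stage lengths in samples (assumed)

def stage_features(acc_xyz):
    """acc_xyz: (n_samples, 3) accelerometer trace holding one candidate event."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    peak = int(np.argmax(mag))              # treat the highest-magnitude sample as the impact
    stages = [mag[max(peak - PRE, 0):peak],             # pre-impact
              mag[peak:peak + IMPACT],                   # impact
              mag[peak + IMPACT:peak + IMPACT + POST]]   # post-impact
    feats = []
    for s in stages:
        s = s if s.size else np.zeros(1)    # guard against an empty stage at the trace edges
        feats += [s.mean(), s.std(), s.max(), s.min()]
    return np.array(feats)

# Toy usage: random traces stand in for labelled fall / non-fall windows.
rng = np.random.default_rng(2)
X = np.array([stage_features(rng.normal(size=(6 * FS, 3))) for _ in range(200)])
y = rng.integers(0, 2, size=200)            # 1 = fall, 0 = activity of daily living
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```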

    Improving Patient Care with Machine Learning: A Game-Changer for Healthcare

    Get PDF
    Machine learning has revolutionized the field of healthcare by offering tremendous potential to improve patient care across various domains. This research study aimed to explore the impact of machine learning in healthcare and identify key findings in several areas.

    Machine learning algorithms demonstrated the ability to detect diseases at an early stage and facilitate accurate diagnoses by analyzing extensive medical data, including patient records, lab results, imaging scans, and genetic information. This capability holds the potential to improve patient outcomes and increase survival rates.

    The study highlighted that machine learning can generate personalized treatment plans by analyzing individual patient data, considering factors such as medical history, genetic information, and treatment outcomes. This personalized approach enhances treatment effectiveness, reduces adverse events, and contributes to improved patient outcomes.

    Predictive analytics utilizing machine learning techniques showed promise in patient monitoring by leveraging real-time data such as vital signs, physiological information, and electronic health records. With such early warnings, healthcare providers can intervene proactively, preventing adverse events and enhancing patient safety.

    Machine learning played a significant role in precision medicine and drug discovery. By analyzing vast biomedical datasets, including genomics, proteomics, and clinical trial information, machine learning algorithms identified novel drug targets, predicted drug efficacy and toxicity, and optimized treatment regimens. This accelerated drug discovery process holds the potential to provide more effective and personalized treatment options.

    The study also emphasized the value of machine learning in pharmacovigilance and adverse event detection. By analyzing the FDA Adverse Event Reporting System (FAERS) big data, machine learning algorithms uncovered hidden associations between drugs, medical products, and adverse events, aiding in early detection and monitoring of drug-related safety issues. This finding contributes to improved patient safety and reduced occurrences of adverse events.

    The research demonstrated the remarkable potential of machine learning in medical imaging analysis. Deep learning algorithms trained on large datasets were able to detect abnormalities in various medical images, facilitating faster and more accurate diagnoses. This technology reduces human error and ultimately leads to improved patient outcomes.

    While machine learning offers immense benefits, ethical considerations such as patient privacy, algorithm bias, and transparency must be addressed for responsible implementation. Healthcare professionals should remain central to decision-making processes, utilizing machine learning as a tool to enhance their expertise rather than replace it. This study showcases the transformative potential of machine learning in revolutionizing healthcare and improving patient care.
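    As one concrete, hypothetical example of disproportionality analysis on spontaneous-report data such as FAERS, the sketch below computes a reporting odds ratio and its 95% confidence interval from a 2x2 drug/event contingency table; the counts are made up, and this is one standard pharmacovigilance measure rather than the specific method of the study above.

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """
    a: reports with drug X and event Y    b: drug X, other events
    c: other drugs, event Y               d: other drugs, other events
    Returns (ROR, lower 95% CI, upper 95% CI).
    """
    ror = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of ln(ROR)
    return ror, ror * math.exp(-1.96 * se_log), ror * math.exp(1.96 * se_log)

# Made-up counts for a hypothetical drug/adverse-event pair.
print(reporting_odds_ratio(a=40, b=960, c=200, d=98_800))
```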

    Real-Time Machine Learning for Quickest Detection

    Get PDF
    Safety-critical Cyber-Physical Systems (CPS) require real-time machine learning for control and decision making. One promising solution is to use deep learning to discover useful patterns for event detection from heterogeneous data. However, deep learning algorithms encounter challenges in CPS with assurability requirements: 1) decision explainability, 2) real-time and quickest event detection, and 3) time-efficient incremental learning. To address these obstacles, I developed a real-time Machine Learning framework for Quickest Detection (MLQD).

    Specifically, I first propose the zero-bias neural network, which removes decision bias and preferences from regular neural networks and provides an interpretable decision process. Second, I characterize the latent space of the zero-bias neural network and derive a method to mathematically convert a Deep Neural Network (DNN) classifier into a performance-assured binary abnormality detector. In this way, I can seamlessly integrate deep neural networks' data-processing capability with Quickest Detection (QD) and provide a real-time sequential event detection paradigm. Third, after discovering that a critical factor impeding the incremental learning of neural networks is concept interference (confusion) in latent space, I prove that, to minimize interference, the concept representation vectors (class fingerprints) within the latent space must be organized orthogonally, and I devise a new incremental learning strategy based on these findings, enabling deep neural networks in the CPS to evolve efficiently without retraining.

    All my algorithms are evaluated on real-world applications: ADS-B (Automatic Dependent Surveillance-Broadcast) signal identification and spoofing detection in the aviation communication system. Finally, I discuss current trends in MLQD and conclude the dissertation by presenting future research directions and applications.

    In summary, the innovations of this dissertation are as follows: i) I propose the zero-bias neural network, which provides transparent latent-space characteristics, and apply it to the wireless device identification problem; ii) I discover and prove the orthogonal memory organization mechanism in artificial neural networks and apply this mechanism to time-efficient incremental learning; iii) I discover and mathematically prove the converging-point theorem, with which the latent-space topological characteristics and the topological maturity of neural networks can be predicted; iv) I bridge the gap between machine learning and quickest detection with assurable performance.
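    A minimal sketch of the quickest-detection side follows: a textbook one-sided CUSUM rule applied to a stream of per-sample abnormality scores, such as those a DNN-derived binary detector might emit. This is standard CUSUM, not the dissertation's specific MLQD procedure; the drift and threshold values are assumptions.

```python
import numpy as np

def cusum_alarm(scores, drift=0.5, threshold=8.0):
    """Return the first index at which the one-sided CUSUM statistic crosses the threshold."""
    s = 0.0
    for t, x in enumerate(scores):
        s = max(0.0, s + x - drift)   # accumulate evidence exceeding the drift term
        if s > threshold:
            return t                  # alarm: a change is declared at time t
    return None                       # no change detected in the stream

# Toy stream: nominal scores, then elevated scores after the change at t = 300.
rng = np.random.default_rng(3)
stream = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(1.5, 1.0, 100)])
print("alarm raised at t =", cusum_alarm(stream))
```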