
    Differentiation of Patients with Balance Insufficiency (Vestibular Hypofunction) versus Normal Subjects Using a Low-Cost Small Wireless Wearable Gait Sensor

    Balance disorders present a significant healthcare burden due to the potential for hospitalization or complications, especially among the elderly, and due to intangible losses such as reduced quality of life, morbidity, and mortality. This work continues our earlier studies: we examine feature extraction methodology applied to Dynamic Gait Index (DGI) tests and machine learning classifiers to differentiate patients with balance problems from normal subjects in an expanded cohort of 60 patients. All data were obtained using our custom-designed, low-cost wireless gait analysis sensor (WGAS), which contains a basic inertial measurement unit (IMU) worn by each subject during the DGI tests. The raw gait data are wirelessly transmitted from the WGAS for real-time collection and analysis. Here we demonstrate predictive classifiers that achieve high accuracy, sensitivity, and specificity in distinguishing abnormal from normal gaits. These results show that gait data collected from our very low-cost wearable wireless gait sensor can effectively differentiate patients with balance disorders from normal subjects in real time using various classifiers. Our ultimate goal is to use a remote sensor such as the WGAS to accurately stratify an individual's risk for falls.
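    The abstract does not include the classification pipeline itself; the following is a minimal sketch of the general approach it describes, with placeholder IMU-derived features, labels, and an SVM classifier that are illustrative assumptions rather than the authors' actual WGAS pipeline.

        # Hypothetical sketch: classify balance-impaired vs. normal gait from IMU-derived features.
        import numpy as np
        from sklearn.model_selection import cross_validate
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # One row per subject; columns are summary statistics (mean, std, range, ...)
        # of tri-axial accelerometer/gyroscope signals over the DGI tasks (placeholder data).
        X = rng.normal(size=(60, 24))
        y = rng.integers(0, 2, size=60)   # 1 = balance disorder, 0 = normal (placeholder labels)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        scores = cross_validate(clf, X, y, cv=5, scoring=["accuracy", "recall", "precision"])
        print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})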

    A novel customer churn prediction model for the telecommunication industry using data transformation methods and feature selection

    Data transformation (DT) is a process that converts the original data into a form that supports a particular classification algorithm and helps analyze the data for a specific purpose. To improve prediction performance, we investigated various data transformation methods. This study is conducted in a customer churn prediction (CCP) context in the telecommunication industry (TCI), where customer attrition is a common phenomenon. We propose a novel approach that combines data transformation methods with machine learning models for the CCP problem. We conducted our experiments on publicly available TCI datasets and assessed performance in terms of widely used evaluation measures (e.g., AUC, precision, recall, and F-measure). We present comprehensive comparisons to affirm the effect of the transformation methods. The comparison results and statistical tests show that most of the proposed data transformation-based optimized models improve CCP performance significantly. Overall, an efficient and optimized CCP model for the telecommunication industry is presented in this manuscript.
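    As a concrete but hypothetical illustration of pairing a data transformation with a classifier for churn prediction, the sketch below log-transforms skewed usage features before fitting a model and reports AUC, precision, recall, and F-measure; the column names, transform, and model are assumptions, not the paper's exact setup.

        # Hypothetical sketch: data transformation + classifier for customer churn prediction.
        import numpy as np
        import pandas as pd
        from sklearn.compose import ColumnTransformer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import classification_report, roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import FunctionTransformer

        rng = np.random.default_rng(1)
        df = pd.DataFrame({
            "monthly_minutes": rng.exponential(300, 1000),   # skewed usage feature (synthetic)
            "support_calls": rng.poisson(2, 1000),
            "churn": rng.integers(0, 2, 1000),
        })
        X, y = df.drop(columns="churn"), df["churn"]

        transform = ColumnTransformer(
            [("log", FunctionTransformer(np.log1p), ["monthly_minutes", "support_calls"])])
        model = Pipeline([("dt", transform), ("clf", RandomForestClassifier(random_state=1))])

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
        model.fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
        print(classification_report(y_te, model.predict(X_te)))   # precision, recall, F-measure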

    Approach to a Decision Support Method for Feature Engineering of a Classification of Hydraulic Directional Control Valve Tests

    Advancing digitalization and high computing power are drivers for the progressive use of machine learning (ML) methods on manufacturing data. Using ML for predictive quality control of product characteristics helps prevent defects and streamline future manufacturing processes. Challenging decisions must be made before implementing ML applications. Production environments are dynamic systems whose boundary conditions change continuously. Accordingly, extensive feature engineering of the volatile database is required to guarantee high generalizability of the prediction model; all subsequent stages of the ML pipeline can then be optimized on a cleaned database. Various ML methods, such as gradient boosting, have achieved promising results in industrial hydraulic use cases so far. For every prediction task there is the challenge of choosing the most appropriate method and the hyperparameters that achieve the best predictions. The goal of this work is to develop a method for selecting the best feature engineering methods and hyperparameter combination of a predictive model for a dataset with temporal variability, treating both as equivalent parameters and optimizing them simultaneously. The optimization is done via a workflow that includes a random search. Applying this method yields a structured procedure for achieving significant leaps in performance metrics in the prediction of hydraulic test steps of directional valves.
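    One way to treat feature-engineering choices and model hyperparameters as equivalent parameters of a single search, as the abstract describes, is a random search over a pipeline; the steps, ranges, and gradient-boosting model below are illustrative assumptions rather than the paper's workflow.

        # Hypothetical sketch: joint random search over feature engineering and hyperparameters.
        import numpy as np
        from scipy.stats import randint, uniform
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import RandomizedSearchCV
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import RobustScaler, StandardScaler

        rng = np.random.default_rng(2)
        X = rng.normal(size=(500, 30))        # placeholder valve test-bench features
        y = rng.integers(0, 2, size=500)      # placeholder pass/fail labels

        pipe = Pipeline([
            ("scale", StandardScaler()),
            ("select", SelectKBest(f_classif)),
            ("model", GradientBoostingClassifier()),
        ])
        search_space = {
            "scale": [StandardScaler(), RobustScaler()],   # feature-engineering choice
            "select__k": randint(5, 30),                   # feature-engineering choice
            "model__n_estimators": randint(50, 400),       # model hyperparameters
            "model__learning_rate": uniform(0.01, 0.3),
            "model__max_depth": randint(2, 6),
        }
        search = RandomizedSearchCV(pipe, search_space, n_iter=30, cv=5, random_state=2)
        search.fit(X, y)
        print(search.best_params_, search.best_score_)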

    Learning to Estimate Driver Drowsiness from Car Acceleration Sensors using Weakly Labeled Data

    This paper addresses the learning task of estimating driver drowsiness from the signals of car acceleration sensors. Since even drivers themselves cannot perceive their own drowsiness in a timely manner unless they use burdensome invasive sensors, obtaining labeled training data for each timestamp is not a realistic goal. To deal with this difficulty, we formulate the task as weakly supervised learning: labels are needed only for each complete trip, not for every timestamp independently. By assuming that some aspects of driver drowsiness increase over time due to tiredness, we formulate an algorithm that can learn from such weakly labeled data. We derive a scalable stochastic optimization method as a way of implementing the algorithm. Numerical experiments on real driving datasets demonstrate the advantages of our algorithm over baseline methods.
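    The abstract's key assumption — that drowsiness tends to increase over a trip, so only trip-level labels are needed — can be sketched with a pairwise objective in which later windows of a trip should score no lower than earlier ones; the linear scorer, hinge loss, and features below are assumptions for illustration, not the paper's algorithm.

        # Hypothetical sketch of the weak-supervision idea: within a trip, later time windows
        # should receive a drowsiness score at least as high as earlier ones.
        import numpy as np

        rng = np.random.default_rng(3)
        n_trips, windows_per_trip, n_feat = 20, 30, 8
        # Acceleration-derived features per time window (placeholder data).
        trips = [rng.normal(size=(windows_per_trip, n_feat)) for _ in range(n_trips)]

        w = np.zeros(n_feat)                  # linear drowsiness scorer
        lr, margin = 0.01, 0.1
        for epoch in range(50):
            for X in trips:                   # one weak label per trip, not per timestamp
                i, j = sorted(rng.integers(0, windows_per_trip, size=2))
                if i == j:
                    continue
                diff = X[j] - X[i]
                if w @ diff < margin:         # want score(later) >= score(earlier) + margin
                    w += lr * diff            # stochastic sub-gradient step on the hinge loss

        print((trips[0] @ w)[:5])             # higher score = presumed drowsier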

    Novel Signal Reconstruction Techniques in Cyclotron Radiation Emission Spectroscopy for Neutrino Mass Measurement

    The Project 8 experiment is developing Cyclotron Radiation Emission Spectroscopy (CRES) on the beta-decay spectrum of tritium for the measurement of the absolute neutrino mass scale. CRES is a frequency-based technique that aims to probe the endpoint of the tritium energy spectrum with a final target sensitivity of 0.04 eV, pushing the limits beyond the inverted mass hierarchy. As a phased experiment, both the Phase I and Phase II efforts use a combination of 83mKr and molecular tritium (T_2) as source gases. The technique relies on an accurate, precise, and well-understood reconstructed beta-spectrum whose endpoint and spectral shape near the endpoint may be constrained by a kinematical model that uses the neutrino mass m_beta as a free parameter. Since the decays in the last eV of the tritium spectrum encompass O(10^(-13)) of all decays, and since the precise variation of the spectrum, distorted by the presence of a massive neutrino, is fundamental to the measurement, reconstruction techniques are needed that yield accurate measurements of the frequency (and therefore energy) of the signal and correctly separate signal from background. In this work, we discuss the open problem of the absolute neutrino mass scale, the fundamentals of measurements tailored to resolve it, the underpinnings and details of the CRES technology, and the measurement of the first-ever CRES tritium beta-spectrum. Finally, we focus on novel reconstruction techniques at both the signal and event levels using machine learning algorithms that allow us to adapt our technique to the complex dynamics of the electron inside our detector. We show that such methods can separate true events from backgrounds at > 94% accuracy and improve reconstruction efficiency by > 23% compared to traditional reconstruction methods.
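    As a loose illustration of the event-level classification step (separating true CRES events from background), the sketch below trains a generic binary classifier on hypothetical reconstructed-event features; the feature set and model are assumptions and do not reflect Project 8's actual reconstruction.

        # Hypothetical sketch: signal-vs-background classification on reconstructed event features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        # Placeholder event features: [track duration, frequency slope, power, number of tracks]
        X = rng.normal(size=(2000, 4))
        y = rng.integers(0, 2, size=2000)     # 1 = true event, 0 = background (placeholder)

        clf = RandomForestClassifier(n_estimators=200, random_state=4)
        print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())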

    Deep learning for identifying Lung Diseases

    Growing health problems such as lung diseases, especially among children and the elderly, require better diagnostic methods, including computer-based solutions, and it is crucial to detect and treat these problems early. The purpose of this article is to design and implement a new computer vision-based algorithm for lung disease diagnosis that performs better in lung disease recognition than previous models, in order to reduce lung-related health problems and costs. In addition, we have improved the accuracy of detecting five lung diseases, which helps doctors use computer-based tools to address the problem at an early stage.
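    The abstract does not specify the model architecture; as a hedged sketch of the kind of network a computer vision approach to five-class lung-disease recognition might use, the following defines a small convolutional classifier, with the layer sizes, input resolution, and class count chosen purely for illustration.

        # Hypothetical sketch: a small CNN for 5-class lung-disease image classification.
        import torch
        import torch.nn as nn

        class SmallLungCNN(nn.Module):
            def __init__(self, n_classes=5):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(32 * 56 * 56, n_classes),   # assumes 224x224 grayscale input
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        model = SmallLungCNN()
        dummy = torch.randn(2, 1, 224, 224)   # batch of placeholder chest images
        print(model(dummy).shape)             # torch.Size([2, 5])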

    UBI-XGB: IDENTIFICATION OF UBIQUITIN PROTEINS USING MACHINE LEARNING MODEL

    A recent line of research has focused on ubiquitination, a pervasive, proteasome-mediated protein degradation process that controls apoptosis and is a major factor in the breakdown of proteins and the development of cell disorders; protein turnover and ubiquitination are closely related processes. Because experimental identification of protein ubiquitination sites is typically labor- and time-intensive, reliable computational predictors are needed. In this paper we propose Ubipro-XGBoost, a multi-view feature-based machine learning method for predicting ubiquitination sites. We first encode protein sequence features into matrix form using the Dipeptide Deviation from Expected Mean (DDE) encoding technique, and we also propose a second feature extraction model based on dipeptide composition (DPC); these features are then fed into an extreme gradient boosting (XGBoost) classifier. Recent developments in proteomic technology have sparked renewed interest in the identification of ubiquitination sites in a number of human disorders studied experimentally and clinically, and as more experimentally verified ubiquitination sites become available, a predictive algorithm that can locate lysine ubiquitination sites in large-scale proteome data is needed. On 5-fold cross-validation, Ubipro-XGBoost achieved 0.914 accuracy, 0.836 sensitivity, 0.992 specificity, and 0.839 MCC with the DPC model, and 0.909 accuracy, 0.839 sensitivity, 0.979 specificity, and 0.829 MCC with the DDE model. The findings demonstrate that the proposed technique, Ubipro-XGBoost, outperforms conventional ubiquitination prediction methods and offers fresh guidance for ubiquitination site identification.
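    A minimal sketch of the described pipeline — dipeptide composition (DPC) features fed to an XGBoost classifier and evaluated with 5-fold cross-validation — is given below; the sequences, window length, and labels are synthetic placeholders, not the paper's dataset.

        # Hypothetical sketch: DPC feature encoding + XGBoost with 5-fold cross-validation.
        import numpy as np
        from itertools import product
        from sklearn.model_selection import cross_val_score
        from xgboost import XGBClassifier

        AA = "ACDEFGHIKLMNPQRSTVWY"
        DIPEPTIDES = ["".join(p) for p in product(AA, repeat=2)]   # all 400 dipeptides

        def dpc(seq):
            """Frequency of each dipeptide in the sequence (the DPC feature vector)."""
            counts = np.zeros(len(DIPEPTIDES))
            for a, b in zip(seq, seq[1:]):
                counts[DIPEPTIDES.index(a + b)] += 1
            return counts / max(len(seq) - 1, 1)

        rng = np.random.default_rng(5)
        seqs = ["".join(rng.choice(list(AA), size=21)) for _ in range(200)]  # windows around lysine
        y = rng.integers(0, 2, size=200)      # 1 = ubiquitination site, 0 = not (placeholder)
        X = np.array([dpc(s) for s in seqs])

        clf = XGBClassifier(n_estimators=200, eval_metric="logloss")
        print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())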