
    Enhancing Confidentiality and Privacy Preservation in e-Health for Enhanced Security

    Electronic health (e-health) systems are increasingly widely used; they have improved healthcare services significantly but have also raised concerns about the privacy and security of sensitive medical data. This research proposes a novel strategy that uses machine learning techniques to overcome these difficulties and strengthen the security of e-health systems while maintaining the privacy and confidentiality of patient data. We propose a comprehensive framework that strengthens the security layers of e-health systems by incorporating cutting-edge machine learning algorithms. The framework has three main elements: data encryption, access control, and anomaly detection. First, patient data is secured with state-of-the-art encryption techniques to prevent unauthorised access during transmission and storage. Second, access control mechanisms are strengthened with machine learning models that examine user behaviour patterns, ensuring that only authorised staff can access sensitive medical records. The most novel aspect of this research is its machine-learning-based anomaly detection: by training models on historical e-health data, the system can identify deviations from typical data access and usage patterns and quickly flag potential security breaches or unauthorised activity. This proactive strategy improves the system's capacity to address emerging threats. Extensive experiments on a broad dataset of real-world e-health scenarios were carried out to verify the efficacy of the proposed approach. The results showed a marked improvement in the protection of confidentiality and privacy, along with a considerable decline in security breaches and unauthorised access events.
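The anomaly-detection component described in this abstract could be sketched with an unsupervised outlier detector trained on historical access patterns. The example below is illustrative only: it uses scikit-learn's IsolationForest on simulated access-log features (hour of access, records viewed, session length) that are invented here, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated historical access-log features: [hour of access, records viewed, session minutes]
normal = np.column_stack([
    rng.normal(10, 2, 500),   # daytime access hours
    rng.poisson(5, 500),      # a handful of records per session
    rng.normal(15, 5, 500),   # moderate session length
])

# Train an isolation forest on "typical" access behaviour
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. session touching 200 records departs strongly from the training pattern
suspicious = np.array([[3, 200, 90]])
print(model.predict(suspicious))  # -1 marks an anomaly, 1 marks normal behaviour
```

In a real deployment, the features would come from audit logs and the flagged sessions would feed an incident-response workflow rather than a print statement.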

    Novel Machine Learning and Wearable Sensor Based Solutions for Smart Healthcare Monitoring

    The advent of the Internet of Things (IoT) has enabled the design of connected and integrated smart health monitoring systems, which can be used to monitor the mental and physical wellbeing of a person. Stress, anxiety, and hypertension are major factors responsible for a plethora of physical and mental illnesses. In this context, the older population demands special attention because of the several age-related complications that exacerbate the effects of stress, anxiety, and hypertension. Monitoring stress, anxiety, and blood pressure regularly can prevent long-term damage by initiating necessary intervention or clinical treatment beforehand. This improves quality of life and reduces both the burden on caregivers and the cost of healthcare. This thesis therefore explores novel technological solutions for real-time monitoring of stress, anxiety, and blood pressure using unobtrusive wearable sensors and machine learning techniques. The first contribution of this thesis is an experimental data collection involving 50 healthy older adults, on which the subsequent work on stress and anxiety detection is based. The data collection procedure lasted more than a year, during which we collected physiological signals, salivary cortisol samples, and self-reported questionnaire feedback. Salivary cortisol is an established clinical biomarker for physiological stress. Hence, a stress detection model trained to distinguish between the stressed and not-stressed states, as indicated by an increase in cortisol level, has the potential to facilitate clinical-level diagnosis of stress from the comfort of the user's own home. The second contribution of the thesis is the development of a stress detection model based on fingertip sensors.
We extracted features from Electrodermal Activity (EDA) and Blood Volume Pulse (BVP) signals, obtained from fingertip EDA and Photoplethysmogram (PPG) sensors, to train machine learning algorithms to distinguish between stressed and not-stressed states. We evaluated the performance of four traditional machine learning algorithms and one deep-learning-based Long Short-Term Memory (LSTM) classifier. Results and analysis showed that the proposed LSTM classifier performed as well as the traditional machine learning models. The third contribution of the thesis is the evaluation of an integrated system of wrist-worn sensors for stress detection. We evaluated four signal streams, EDA, BVP, Inter-Beat Interval (IBI), and Skin Temperature (ST), obtained from EDA, PPG, and ST sensors. A random forest classifier was used to distinguish between the stressed and not-stressed states. Results and analysis showed that incorporating features from different signals reduced the misclassification rate of the classifier. Further, we prototyped the integration of the proposed wristband-based stress detection system into a consumer end device with voice capabilities. The fourth contribution of the thesis is the design of an anxiety detection model that uses features from a single wearable sensor together with a context feature to improve the performance of the classification model. Using a context feature, instead of integrating additional physiological features, reduces the complexity and cost of the anxiety detection model. In the proposed work, we used a simple experimental context feature to highlight the importance of context in the accurate detection of anxious states. Our results and analysis showed that with the addition of the context-based feature, the classifier reduced misclassification by increasing the confidence of its decisions.
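As a rough illustration of the multi-signal stress classification described above, the sketch below trains a random forest on synthetic per-window features standing in for EDA level, heart rate (derived from BVP/IBI), and skin temperature. The feature choices and values are assumptions for illustration, not the thesis's actual data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
# Hypothetical per-window features: [mean EDA (uS), heart rate (bpm), skin temp (C)]
not_stressed = np.column_stack([rng.normal(2, 0.5, n), rng.normal(70, 5, n), rng.normal(33, 0.5, n)])
stressed     = np.column_stack([rng.normal(6, 0.5, n), rng.normal(95, 5, n), rng.normal(31, 0.5, n)])
X = np.vstack([not_stressed, stressed])
y = np.array([0] * n + [1] * n)  # 0 = not stressed, 1 = stressed

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=1, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")
```

Combining streams amounts to concatenating per-signal feature vectors before training, which is how a multi-sensor model can reduce the misclassification rate relative to any single stream.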
The fifth and final contribution of the thesis is the validation of a proposed computational framework for blood pressure estimation. The framework uses features from the PPG signal to estimate systolic and diastolic blood pressure values using advanced regression techniques.
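In outline, such a framework maps PPG-derived features to blood pressure values with a regression model. The sketch below uses synthetic features and ridge regression purely to illustrate the idea; the thesis's actual features and regression techniques are not specified here.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n = 300
# Hypothetical PPG-derived features, e.g. pulse-transit proxy, pulse width, amplitude ratio
X = rng.normal(size=(n, 3))
# Synthetic systolic blood pressure with a linear dependence on the features plus noise
sbp = 120 + 8 * X[:, 0] - 5 * X[:, 1] + 3 * X[:, 2] + rng.normal(0, 2, n)

# Fit on the first 200 windows, evaluate on the remaining 100
reg = Ridge(alpha=1.0).fit(X[:200], sbp[:200])
pred = reg.predict(X[200:])
mae = np.mean(np.abs(pred - sbp[200:]))
print(f"MAE: {mae:.1f} mmHg")
```

A diastolic model would be fit the same way on the same features, and a real evaluation would report error against cuff-based reference measurements rather than synthetic targets.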

    Doctor of Philosophy

    The primary objective of cancer registries is to capture clinical care data of cancer populations and to aid prevention, allow early detection, determine prognosis, and assess the quality of various treatments and interventions. Furthermore, the role of cancer registries is paramount in supporting cancer epidemiological studies and medical research. Existing cancer registries depend mostly on humans, known as Cancer Tumor Registrars (CTRs), to conduct manual abstraction of electronic health records to find reportable cancer cases and extract the other data elements required for regulatory reporting. This is often a time-consuming and laborious task prone to human error, affecting the quality, completeness, and timeliness of cancer registries. Central state cancer registries are responsible for consolidating the data received from multiple sources for each cancer case and assigning the most accurate information. The Utah Cancer Registry (UCR) at the University of Utah, for instance, leads and oversees more than 70 cancer treatment facilities in the state of Utah to collect data for each diagnosed cancer case and consolidate multiple sources of information. Although software tools that help with the manual abstraction process exist, they mainly focus on cancer case finding based on pathology reports and do not support automatic extraction of other data elements such as TNM cancer stage information, an important prognostic factor required before initiating clinical treatment. In this study, I present novel applications of natural language processing (NLP) and machine learning (ML) to automatically extract clinical and pathological TNM stage information from the unconsolidated clinical records of cancer patients available at the central Utah Cancer Registry. To further support CTRs in their manual efforts, I demonstrate a new machine-learning-based approach to consolidating TNM stages from multiple records at the patient level.
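One common NLP baseline for this kind of stage extraction is a bag-of-words text classifier. The sketch below, with invented toy snippets and labels rather than the study's actual annotated records or method, shows the general shape: TF-IDF features feeding a logistic regression that assigns a T-stage label to a free-text passage.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled snippets; a real system would train on annotated registry records
docs = [
    "tumor measures 1.5 cm, no invasion of surrounding tissue",
    "small lesion under 2 cm confined to the organ",
    "mass of 4.8 cm extending into adjacent structures",
    "large tumor greater than 5 cm with local invasion",
] * 10  # repeated so the classifier has enough examples to fit
labels = ["T1", "T1", "T3", "T3"] * 10

# TF-IDF over unigrams and bigrams, then a linear classifier over stage labels
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["lesion of 1.2 cm, confined, no invasion"]))
```

Patient-level consolidation could then aggregate the per-record predictions, for example by taking the most confident or most frequent stage across a patient's records.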

    Damage identification in structural health monitoring: a brief review from its implementation to the use of data-driven applications

    The damage identification process provides relevant information about the current state of a structure under inspection, and it can be approached from two different points of view. The first approach uses data-driven algorithms, which are usually associated with the collection of data using sensors; the data are subsequently processed and analyzed. The second approach uses models to analyze information about the structure; in this case, the overall performance depends on the accuracy of the model and the information used to define it. Although both approaches are widely used, data-driven algorithms are preferred in most cases because they can analyze data acquired from sensors and provide a real-time solution for decision making; however, they require high-performance processors due to their high computational cost. As a contribution to researchers working with data-driven algorithms and applications, this work presents a brief review of data-driven algorithms for damage identification in structural health monitoring applications. The review covers damage detection, localization, classification, extension, and prognosis, as well as the development of smart structures. The literature is systematically reviewed according to the natural steps of a structural health monitoring system. The review also includes information on the types of sensors used and on the development of data-driven algorithms for damage identification.
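A widely used data-driven baseline for the damage-detection step is to model healthy-state sensor data and flag departures from it. The sketch below, on simulated data not drawn from the review itself, fits PCA to baseline measurements and uses the reconstruction error against a healthy-state threshold as a damage indicator.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Baseline (healthy-state) feature vectors from an array of 8 sensors
healthy = rng.normal(0, 1, (500, 8))
pca = PCA(n_components=3).fit(healthy)

def reconstruction_error(x):
    # Distance between each sample and its projection onto the healthy-state subspace
    return np.linalg.norm(x - pca.inverse_transform(pca.transform(x)), axis=1)

# Threshold at the 99th percentile of healthy-state residuals
threshold = np.percentile(reconstruction_error(healthy), 99)

# Simulated damage shifts the sensor readings, inflating the residual
damaged = healthy[:50] + rng.normal(3, 1, (50, 8))
flagged = reconstruction_error(damaged) > threshold
print(f"fraction of damaged samples flagged: {flagged.mean():.2f}")
```

Localization and classification would then build on such detection residuals, for example by inspecting which sensors contribute most to the error.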

    Statistical Analysis and Machine Learning for Coal Classification for Rare Earth Elements + Y (REY)

    Due to their exceptional properties, rare earth elements (REEs) are critical to technological innovation in renewable energy production, electronics, health care, and national defense, making up key components for many applications in these areas. Many countries rely upon rare earth element imports, and the high demand has led to the development of alternative methods for exploration and capture. Coal has been labeled a viable potential source of rare earth elements and yttrium (REY). Statistical evaluation of REY concentrations and the properties of various coal samples is critical for successful characterization. The USGS COALQUAL database Version 3.0, an industry-standard database for coal research, contains 7658 non-weathered, full-bed coal samples from the United States, 5485 of which contain a full spectrum of REY concentrations. The data quality in the COALQUAL database is analyzed to ensure that the data are reliable, and sample characteristics are analyzed using conventional statistical methodology, including accounting for samples with REY concentrations below the lowest limits of detection. Mean concentrations for each REY are adjusted to fit a distribution of mean REY concentrations from the National Coal Resources Data System (NCRDS) normalized by the Upper Continental Crust standard dataset of REY mean concentrations. All samples are classified as unpromising or promising using the total rare earth oxide concentration and the ratio of critical to excess REYs, called the outlook coefficient. Machine learning is a powerful tool that can classify new data points added to a database based on their attributes. A machine learning model was developed using existing data from the COALQUAL database to train and test algorithms that classify coal samples as unpromising or promising based on the samples' ASTM ash percentage.
The 5485 adjusted coal samples from the COALQUAL database were subjected to the synthetic minority over-sampling technique (SMOTE) to eliminate label bias, and imputation methods were used to format the data for computational purposes. The adjusted coal samples were then tested across various machine learning algorithms, with accuracy and the number of false positives as the key performance indicators. The k-nearest neighbors (KNN) algorithm emerged as the best performer, with 92% accuracy and 2% false positives. A brief economic analysis is included to justify using the model to reduce the costs associated with obtaining trace element concentrations from laboratory analysis. Recommendations are given with details on how to utilize this research in future endeavors.
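The oversample-then-classify pipeline described above can be sketched as follows. For a self-contained example, the sketch uses a minimal hand-rolled SMOTE-style interpolation (production work would typically use the imbalanced-learn library) and scikit-learn's KNN on synthetic two-feature data; the features and class geometry are stand-ins, not the COALQUAL attributes.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
# Imbalanced toy data: many "unpromising" samples (class 0), few "promising" (class 1)
unpromising = rng.normal(0, 1, (200, 2))
promising = rng.normal(4, 1, (20, 2))

def oversample(X, n_new):
    # Minimal SMOTE-style step: interpolate between random pairs of minority samples
    i = rng.integers(0, len(X), n_new)
    j = rng.integers(0, len(X), n_new)
    t = rng.random((n_new, 1))
    return X[i] + t * (X[j] - X[i])

# Balance the classes before fitting so KNN's vote is not dominated by the majority
promising_aug = np.vstack([promising, oversample(promising, 180)])
X = np.vstack([unpromising, promising_aug])
y = np.array([0] * 200 + [1] * 200)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict([[4.2, 3.8], [0.1, -0.3]]))  # promising then unpromising
```

As in the study, accuracy alone is a poor yardstick on imbalanced data, which is why the false-positive count is tracked as a second performance indicator.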