
    Prediction and Detection of Rheumatoid Arthritis SNPs Using Neural Networks

    Abstract: Rheumatoid Arthritis (RA) is the most common disease found in the majority…

    Deep learning framework for detection of hypoglycemic episodes in children with type 1 diabetes

    © 2016 IEEE. Most Type 1 diabetes mellitus (T1DM) patients have problems with hypoglycemia. Low blood glucose, also known as hypoglycemia, can be dangerous and can result in unconsciousness, seizures and even death. In recent studies, heart rate (HR) and corrected QT interval (QTc) of the electrocardiogram (ECG) signal are found to be the physiological parameters most commonly affected by a hypoglycemic reaction. In this paper, a state-of-the-art intelligent technology, namely a deep belief network (DBN), is developed as an intelligent diagnostics system to recognize the onset of hypoglycemia. The proposed DBN provides superior classification performance with feature transformation on either processed or un-processed data. To illustrate the effectiveness of the proposed hypoglycemia detection system, 15 children with Type 1 diabetes volunteered for overnight studies. The experimental results showed that the proposed DBN outperformed several existing methodologies and achieved better classification performance.
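
    To make the approach concrete, below is a minimal sketch, not the authors' implementation, of a DBN-style hypoglycemia classifier: stacked restricted Boltzmann machine feature transforms feeding a logistic regression, trained on per-epoch heart rate and QTc features. The synthetic data, scaling and hyperparameters are illustrative assumptions.

    # Minimal sketch (not the authors' implementation): a DBN-style classifier
    # approximated by stacking RBM feature transforms before a logistic regression,
    # fed with per-epoch heart rate (HR) and corrected QT (QTc) features.
    # Feature extraction details and hyperparameters are illustrative assumptions.
    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler

    rng = np.random.default_rng(0)
    # Placeholder data: rows = overnight epochs, columns = [HR, QTc] (normalized units).
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # 1 = hypoglycemic epoch (synthetic label)

    dbn_like = Pipeline([
        ("scale", MinMaxScaler()),  # RBMs expect inputs in [0, 1]
        ("rbm1", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)),
        ("rbm2", BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    dbn_like.fit(X, y)
    print("training accuracy:", dbn_like.score(X, y))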

    EDMON - Electronic Disease Surveillance and Monitoring Network: A Personalized Health Model-based Digital Infectious Disease Detection Mechanism using Self-Recorded Data from People with Type 1 Diabetes

    Through time, we as a society have been tested with infectious disease outbreaks of different magnitudes, which often pose major public health challenges. To mitigate the challenges, research endeavors have focused on early detection mechanisms through identifying potential data sources, modes of data collection and transmission, and case and outbreak detection methods. Driven by the ubiquitous nature of smartphones and wearables, the current endeavor is targeted towards individualizing the surveillance effort through a personalized health model, where case detection is realized by exploiting self-collected physiological data from wearables and smartphones. This dissertation aims to demonstrate the concept of a personalized health model as a case detector for outbreak detection by utilizing self-recorded data from people with type 1 diabetes. The results have shown that infection onset triggers substantial deviations, i.e. prolonged hyperglycemia despite higher insulin injections and lower carbohydrate consumption. Per the findings, key parameters such as blood glucose level, insulin, carbohydrate, and insulin-to-carbohydrate ratio are found to carry high discriminative power. A personalized health model devised based on a one-class classifier and an unsupervised method using the selected parameters achieved promising detection performance. Experimental results show the superior performance of the one-class classifier, and models such as the one-class support vector machine, k-nearest neighbor, and k-means achieved better performance. Further, the results also revealed the effect of input parameters, data granularity, and sample sizes on model performance. The presented results have practical significance for understanding the effect of infection episodes amongst people with type 1 diabetes, and the potential of a personalized health model in outbreak detection settings. The added benefit of the personalized health model concept introduced in this dissertation lies in its usefulness beyond the surveillance purpose, i.e. to devise decision support tools and learning platforms for the patient to manage infection-induced crises.
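
    As a rough illustration of the personalized, one-class detection idea (not the EDMON implementation), the sketch below trains a one-class SVM on an individual's normal days, using daily aggregates of the parameters the abstract highlights; all values, units and thresholds are assumptions.

    # Minimal sketch (not the EDMON implementation): a personalized one-class detector
    # trained only on an individual's "normal" days, then used to flag anomalous days
    # that may indicate infection onset. Feature choice mirrors the abstract
    # (glucose, insulin, carbohydrate, insulin-to-carbohydrate ratio); data are synthetic.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(1)
    # Daily aggregates: [mean glucose (mmol/L), total insulin (U), total carbs (g), insulin/carb ratio]
    normal_days = np.column_stack([
        rng.normal(7.0, 0.8, 120),
        rng.normal(40.0, 5.0, 120),
        rng.normal(200.0, 30.0, 120),
        rng.normal(0.2, 0.03, 120),
    ])
    # A hypothetical infection day: elevated glucose despite more insulin and fewer carbs.
    infection_day = np.array([[11.5, 55.0, 140.0, 0.4]])

    scaler = StandardScaler().fit(normal_days)
    detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(normal_days))

    print("normal day flagged as anomaly:",
          detector.predict(scaler.transform(normal_days[:1]))[0] == -1)
    print("infection day flagged as anomaly:",
          detector.predict(scaler.transform(infection_day))[0] == -1)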

    Design of an artificial neural network and feature extraction to identify arrhythmias from ECG

    This paper presents the design of an artificial neural network (ANN) and feature extraction methods to identify two types of arrhythmias in datasets obtained through electrocardiography (ECG) signals, namely the arrhythmia dataset (AD) and the supraventricular arrhythmia dataset (SAD). No special ANN toolkit was used; instead, each neuron and the necessary calculations were modeled and individually programmed. Four temporal features are used: heart rate (HR), R-peaks root mean square (R-RMS), RR-peaks variance (RR-VAR), and QSR-complex standard deviation (QSR-SD). The network architecture has four neurons in the input layer, eight in the hidden layer, and two in the output layer. The proposed classification method uses the MIT-BIH Dataset (Massachusetts Institute of Technology-Beth Israel Hospital) for the training, validation, and test phases. Preliminary results show the high efficiency of the proposed ANN design and its classification method, reaching accuracies between 98.76% and 98.91% when distinguishing NSRD from arrhythmic ECG, and accuracies of 86.37% (AD) and 76.35% (SAD) when classifying only between the two arrhythmias.
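
    Below is a minimal sketch of the described 4-8-2 architecture, written in plain NumPy to mirror the no-toolkit approach; the training loop, synthetic data and hyperparameters are illustrative, not the paper's code.

    # Minimal sketch (not the paper's code) of the 4-8-2 architecture described above,
    # implemented with plain NumPy to mirror the "no ANN toolkit" approach.
    # Inputs are the four temporal features (HR, R-RMS, RR-VAR, QSR-SD); data are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                 # 200 segments x 4 features
    labels = (X[:, 0] + X[:, 2] > 0).astype(int)  # synthetic class labels
    Y = np.eye(2)[labels]                         # one-hot targets, e.g. [AD, SAD]

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # 4 -> 8 -> 2 fully connected network.
    W1, b1 = rng.normal(scale=0.5, size=(4, 8)), np.zeros(8)
    W2, b2 = rng.normal(scale=0.5, size=(8, 2)), np.zeros(2)

    lr = 0.5
    for epoch in range(2000):
        # Forward pass.
        H = sigmoid(X @ W1 + b1)
        out = sigmoid(H @ W2 + b2)
        # Backpropagation of the squared-error loss.
        d_out = (out - Y) * out * (1 - out)
        d_H = (d_out @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_H / len(X);  b1 -= lr * d_H.mean(axis=0)

    accuracy = (out.argmax(axis=1) == labels).mean()
    print(f"training accuracy: {accuracy:.2%}")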

    Quality and robustness improvement for real world industrial systems using a fuzzy particle swarm optimization

    This paper presents a novel fuzzy particle swarm optimization with cross-mutated (FPSOCM) operation, where a fuzzy logic system developed based on knowledge of swarm intelligence determines the inertia weight for the swarm movement of particle swarm optimization (PSO) and the control parameter of a newly introduced cross-mutated operation. Hence, the inertia weight of the PSO can adapt to the search progress. The new cross-mutated operation is intended to drive the solution out of local optima. A suite of benchmark test functions is employed to evaluate the performance of the proposed FPSOCM. Experimental results show empirically that the FPSOCM performs better than existing hybrid PSO methods in terms of solution quality, robustness, and convergence rate. The proposed FPSOCM is further evaluated by improving the quality and robustness of two real-world industrial systems, namely an economic load dispatch system and self-provisioning systems for communication network services. These two systems are employed to evaluate the effectiveness of the proposed FPSOCM as they are multi-optima and non-convex problems. The performance of FPSOCM is found to be significantly better than that of the existing hybrid PSO methods in a statistical sense. These results demonstrate that the proposed FPSOCM is a good candidate for solving product or service engineering problems which have multi-optima or non-convex natures.
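
    The sketch below is not FPSOCM itself, but it illustrates the underlying idea: a PSO loop whose inertia weight is adapted from recent search progress by a crude fuzzy-style rule, with a simple mutation step standing in for the cross-mutated operation; all rule shapes and rates are assumptions.

    # Minimal sketch (not FPSOCM): PSO with an inertia weight adapted by a crude
    # fuzzy-style rule on recent improvement, plus a mutation step standing in for
    # the cross-mutated operation. Rule shapes, bounds and rates are illustrative.
    import numpy as np

    def sphere(x):                       # benchmark objective: f(x) = sum(x_i^2)
        return np.sum(x * x, axis=-1)

    rng = np.random.default_rng(0)
    dim, n_particles, iters = 10, 30, 200
    pos = rng.uniform(-5, 5, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), sphere(pos)
    gbest = pbest[pbest_val.argmin()].copy()
    prev_best = pbest_val.min()

    w = 0.9                              # inertia weight, adapted each iteration
    for t in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
        pos = pos + vel

        # Stand-in for cross-mutation: occasionally perturb a particle to escape local optima.
        mutate = rng.random(n_particles) < 0.05
        pos[mutate] += rng.normal(scale=0.5, size=(mutate.sum(), dim))

        vals = sphere(pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

        # Fuzzy-style inertia adaptation: little progress -> keep exploring (higher w),
        # clear progress -> exploit (lower w).
        improvement = prev_best - pbest_val.min()
        w = np.clip(0.9 - 0.5 * np.tanh(10 * max(improvement, 0.0)), 0.4, 0.9)
        prev_best = pbest_val.min()

    print("best value found:", pbest_val.min())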

    Recent Advances in Embedded Computing, Intelligence and Applications

    The latest proliferation of Internet of Things deployments and edge computing combined with artificial intelligence has led to new exciting application scenarios, where embedded digital devices are essential enablers. Moreover, new powerful and efficient devices are appearing to cope with workloads formerly reserved for the cloud, such as deep learning. These devices allow processing close to where data are generated, avoiding bottlenecks due to communication limitations. The efficient integration of hardware, software and artificial intelligence capabilities deployed in real sensing contexts empowers the edge intelligence paradigm, which will ultimately foster the offloading of processing functionalities to the edge. In this Special Issue, researchers have contributed nine peer-reviewed papers covering a wide range of topics in the area of edge intelligence. Among them are hardware-accelerated implementations of deep neural networks, IoT platforms for extreme edge computing, neuro-evolvable and neuromorphic machine learning, and embedded recommender systems.

    A Flexible Fuzzy Regression Method for Addressing Nonlinear Uncertainty on Aesthetic Quality Assessments

    Development of new products or services requires knowledge and understanding of the aesthetic qualities that correlate with perceptual pleasure. As it is not practical to develop a survey to assess aesthetic quality for all objective features of a new product or service, it is necessary to develop a model to predict aesthetic qualities. In this paper, a fuzzy regression method is proposed to predict aesthetic quality from a given set of objective features and to account for uncertainty in human assessment. The proposed method overcomes a shortcoming of statistical regression, which can predict only quality magnitudes but not quality uncertainty. The proposed method also attempts to improve on traditional fuzzy regression, in which the estimated uncertainty can only increase with increasing magnitudes of the objective features. The proposed fuzzy regression method uses genetic programming to develop nonlinear model structures, and the model coefficients are determined by optimizing fuzzy criteria. Hence, the developed model can fit the nonlinearities of sample magnitudes and uncertainties. The effectiveness and performance of the proposed method are evaluated in a case study of perceptual images, which involves different sampling natures and different sample sizes. This case study attempts to address different characteristics of human assessments. The outcomes demonstrate that, when the model characteristics and fuzzy criteria are taken into account, more robust models can be developed by the proposed fuzzy regression method than by recently developed fuzzy regression methods.
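
    For orientation only, the sketch below implements classic Tanaka-style fuzzy linear regression, in which symmetric triangular fuzzy coefficients are fitted by linear programming so that the fuzzy output covers the observations; the proposed method extends this idea with genetic programming and nonlinear structures, which are not reproduced here.

    # Minimal sketch (not the proposed method): Tanaka-style fuzzy linear regression.
    # Centers c and non-negative spreads s of triangular fuzzy coefficients are found
    # by an LP that minimizes total spread while covering every observation at level h.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, h = 40, 0.5                                     # h = required membership level
    x = rng.uniform(0, 10, n)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5 + 0.1 * x)   # noise grows with x
    X = np.column_stack([np.ones(n), x])               # design matrix with intercept
    p = X.shape[1]

    # Variables: [c_0, c_1, s_0, s_1] with centers c (free) and spreads s >= 0.
    cost = np.concatenate([np.zeros(p), np.abs(X).sum(axis=0)])   # minimize total spread
    A_ub = np.vstack([
        np.hstack([-X, -(1 - h) * np.abs(X)]),   # c.x + (1-h) s.|x| >= y
        np.hstack([ X, -(1 - h) * np.abs(X)]),   # c.x - (1-h) s.|x| <= y
    ])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * p + [(0, None)] * p
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

    c, s = res.x[:p], res.x[p:]
    print("centers:", c)          # roughly the underlying intercept/slope
    print("spreads:", s)          # spread on x captures the growing uncertainty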

    Complex event recognition through wearable sensors

    Complex events are instrumental in understanding advanced behaviours and properties of a system. They can represent more meaningful events than simple events. In this thesis we propose to use wearable sensor signals to detect complex events. These signals pertain to the user's state and therefore allow us to understand advanced characteristics about her. We propose a hierarchical approach to detect simple events from the wearable sensor data and then build complex events on top of them. In order to address privacy concerns that arise from the use of sensitive signals, we propose to perform all the computation on device. While this ensures the privacy of the data, it poses the problem of having limited computational resources. This problem is tackled by introducing energy-efficient approaches based on incremental algorithms. A second challenge is the multiple levels of noise in the process. A first level of noise concerns the raw signals, which are inherently imprecise (e.g. inaccuracy in GPS readings). A second level of noise, which we call semantic noise, is present among the detected simple events: some of these simple events can disturb the detection of complex events, effectively acting as noise. We apply the hierarchical approach in two different contexts defining the two parts of our thesis. In the first part, we present a mobile system that builds a representation of the user's life. This system is based on the episodic memory model, which is responsible for the storage and recollection of past experiences. Following the hierarchical approach, the system processes raw signals to detect simple events, such as places where the user stayed a certain amount of time to perform an activity, thereby building sequences of detected activities. These activities are in turn processed to detect complex events that we call routines and that represent recurrent patterns in the life of the user. In the second part of this thesis, we focus on the non-invasive detection of glycemic events for type 1 diabetes patients. Diabetics are not able to properly regulate their glucose, leading to periods of high and low blood sugar. We leverage signals (electrocardiogram (ECG), accelerometer, breathing rate) from a sport belt to infer such glycemic events. We propose a physiological model based on the variations of the ECG when the patient has low blood sugar, and an energy-based model that computes the current glucose level of the user based on her glucose intake, insulin intake and glucose consumption via physical activity. For both contexts, we evaluate our systems in terms of accuracy, by assessing whether the detected routines are meaningful and whether the glycemic events are correctly detected, and in terms of mobile performance, which confirms the fitness of our approaches for mobile computation.
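
    As an illustration of the energy-based bookkeeping described above (not the thesis model), the sketch below raises an estimated glucose level with carbohydrate intake and lowers it with insulin and physical activity; all rate constants and units are assumptions.

    # Minimal sketch (not the thesis model): an energy-balance style estimate of blood
    # glucose that increases with carbohydrate intake and decreases with insulin and
    # physical activity (e.g. estimated from accelerometer-derived activity energy).
    # All rate constants and units below are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class GlucoseState:
        level: float = 5.5          # current estimate in mmol/L

        def step(self, carbs_g: float, insulin_units: float, activity_kcal: float,
                 minutes: float = 5.0) -> float:
            carb_effect = 0.02 * carbs_g            # mmol/L raised per gram of carbs
            insulin_effect = 0.3 * insulin_units    # mmol/L lowered per unit of insulin
            activity_effect = 0.005 * activity_kcal # mmol/L lowered per kcal of activity
            drift = 0.001 * minutes                 # slow basal drift from glucose production
            self.level += carb_effect - insulin_effect - activity_effect + drift
            return self.level

    state = GlucoseState()
    # One evening, in 5-minute steps: a meal, a bolus, then a walk.
    print(state.step(carbs_g=60, insulin_units=0, activity_kcal=0))    # after eating
    print(state.step(carbs_g=0, insulin_units=4, activity_kcal=0))     # after insulin
    print(state.step(carbs_g=0, insulin_units=0, activity_kcal=80))    # after activity
    print("possible hypoglycemic event:", state.level < 3.9)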

    Modularity in artificial neural networks

    Artificial neural networks are deep machine learning models that excel at complex artificial intelligence tasks by abstracting concepts through multiple layers of feature extraction. Modular neural networks are artificial neural networks composed of multiple subnetworks called modules. The study of modularity has a long history in the field of artificial neural networks, and many of the actively studied models in the domain have modular aspects. In this work, we aim to formalize the study of modularity in artificial neural networks and outline how modularity can be used to enhance some neural network performance measures. We conduct an extensive review of the current practices of modularity in the literature. Based on that, we build a framework that captures the essential properties characterizing the modularization process. Using this modularization framework as an anchor, we investigate the use of modularity to solve three different problems in artificial neural networks: balancing latency and accuracy, reducing model complexity, and increasing robustness to noise and adversarial attacks. Artificial neural networks are high-capacity models with high data and computational demands. This represents a serious problem for using these models in environments with limited computational resources. Using a differentiable architecture search technique, we guide the modularization of a fully-connected network into a modular multi-path network. By evaluating sampled architectures, we can establish a relation between latency and accuracy that can be used to meet a required soft balance between these conflicting measures. A related problem is reducing the complexity of neural network models while minimizing accuracy loss. CapsNet is a neural network architecture that builds on the ideas of convolutional neural networks. However, the original architecture is shallow and has wide layers that contribute significantly to its complexity. By replacing the early wide layers with parallel deep independent paths, we can significantly reduce the complexity of the model. Combining this modular architecture with max-pooling, DropCircuit regularization and a modified variant of the routing algorithm, we can achieve lower model latency with the same or better accuracy compared to the baseline. The last problem we address is the sensitivity of neural network models to random noise and to adversarial attacks, a highly disruptive form of engineered noise. Convolutional layers are the basis of state-of-the-art computer vision models and, much like other neural network layers, they suffer from sensitivity to noise and adversarial attacks. We introduce the weight map layer, a modular layer based on the convolutional layer, that can increase model robustness to noise and adversarial attacks. We conclude our work with a general discussion about the investigated relation between modularity and the addressed problems, and potential future research directions.
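
    A minimal sketch (not the thesis implementation) of the multi-path idea: a fully-connected block replaced by several narrower independent paths whose outputs are merged; path count and widths are illustrative, and the architecture search that selects such structures is not reproduced.

    # Minimal sketch (not the thesis implementation): a multi-path modular block in which
    # a fully-connected layer is replaced by several narrower independent paths whose
    # outputs are concatenated. Path count and widths are illustrative assumptions.
    import torch
    import torch.nn as nn

    class MultiPathBlock(nn.Module):
        def __init__(self, in_features: int, path_width: int, n_paths: int):
            super().__init__()
            # Each path is an independent module (subnetwork) over the same input.
            self.paths = nn.ModuleList(
                nn.Sequential(
                    nn.Linear(in_features, path_width), nn.ReLU(),
                    nn.Linear(path_width, path_width), nn.ReLU(),
                )
                for _ in range(n_paths)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Concatenate the path outputs; dropping whole paths trades accuracy for latency.
            return torch.cat([path(x) for path in self.paths], dim=-1)

    model = nn.Sequential(
        MultiPathBlock(in_features=784, path_width=64, n_paths=4),
        nn.Linear(4 * 64, 10),
    )
    logits = model(torch.randn(8, 784))   # batch of 8 flattened 28x28 inputs
    print(logits.shape)                   # torch.Size([8, 10])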