
    Network anomaly detection using management information base (MIB) network traffic variables

    In this dissertation, a hierarchical, multi-tier, multiple-observation-window network anomaly detection system (NADS) is introduced, namely the MIB Anomaly Detection (MAD) system, which is capable of detecting and diagnosing network anomalies (including network faults and Denial of Service attacks) proactively and adaptively. The MAD system uses statistical models and a neural network classifier to detect network anomalies by monitoring subtle changes in network traffic patterns. Network traffic patterns are measured by monitoring the Management Information Base (MIB) II variables supplied by the Simple Network Management Protocol (SNMP). The MAD system converts the values of each monitored MIB variable, collected during each observation window, into a probability density function (PDF), processes them statistically, intelligently combines the results for the individual variables, and derives the final decision. The MAD system has a distributed, hierarchical, multi-tier architecture through which it can provide the health status of each individual network element. The inter-tier communication requires low network bandwidth, making the system usable on capacity-challenged wireless as well as wired networks. Efficiently and accurately modeling network traffic behavior is essential for building a NADS. In this work, a novel approach to statistically modeling network traffic measurements with high variability is introduced: the measurements are divided into three frequency segments and the data in each segment are modeled separately. A new network traffic statistical model, the one-dimensional hyperbolic distribution, is also introduced.
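    The per-window PDF construction described above can be sketched in a few lines; the bin count, the shared value range, and the L1-style per-variable score below are illustrative assumptions, not details taken from the dissertation.

        import numpy as np

        def window_pdf(samples, edges):
            """One observation window of a MIB variable -> empirical PDF
            (normalized histogram over fixed bin edges)."""
            hist, _ = np.histogram(samples, bins=edges, density=True)
            return hist

        # Shared bin edges so PDFs from different windows are comparable.
        edges = np.linspace(0.0, 200.0, 33)

        rng = np.random.default_rng(0)
        baseline = window_pdf(rng.normal(100, 5, 1000), edges)  # normal traffic
        suspect = window_pdf(rng.normal(130, 20, 1000), edges)  # shifted pattern

        # Per-variable anomaly score (total-variation distance); a full system
        # would fuse such scores across variables before the final decision.
        score = 0.5 * np.abs(baseline - suspect).sum() * (edges[1] - edges[0])
        print(f"per-variable anomaly score: {score:.3f}")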

    Wavelet-based filtration procedure for denoising the predicted CO2 waveforms in smart home within the Internet of Things

    The operating cost of smart homes can be minimized by optimizing the management of the building's technical functions based on the current occupancy status of the individual monitored spaces. To respect the privacy of smart home residents, indirect methods (without cameras or microphones) can be used for occupancy recognition of spaces. This article describes a newly proposed indirect method that increases the accuracy of occupancy recognition in the monitored spaces of smart homes. The proposed procedure predicts the course of CO2 concentration from operationally measured quantities (indoor temperature and indoor relative humidity) using artificial neural networks with a multilayer perceptron algorithm. The wavelet transform is used to cancel additive noise from the predicted CO2 concentration signal, with the objective of increasing the accuracy of the prediction. The calculated accuracy of the CO2 concentration waveform prediction in the additive noise-canceling application was higher than 98% in selected experiments.
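    The wavelet filtration step can be sketched with PyWavelets; the wavelet family (db4), decomposition level, and universal soft threshold below are generic choices and not necessarily those used in the article.

        import numpy as np
        import pywt  # PyWavelets

        def wavelet_denoise(signal, wavelet="db4", level=4):
            """Soft-threshold wavelet denoising of a 1-D signal, e.g. a
            predicted CO2 concentration course."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            # Noise level estimated from the finest detail coefficients.
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
            coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(signal)]

        # Example: a noisy synthetic CO2-like trend (ppm scale).
        t = np.linspace(0, 1, 1024)
        clean = 600 + 300 * np.sin(2 * np.pi * t)
        noisy = clean + np.random.default_rng(1).normal(0, 25, t.size)
        denoised = wavelet_denoise(noisy)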

    Finite-window RLS algorithms

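    No abstract accompanies this entry. As an illustration of the technique named in the title only, the sketch below maintains a least-squares estimate over exactly the last `window` samples by combining the standard RLS rank-one update for the newest sample with a Sherman-Morrison downdate for the oldest; the initialization constant and all names are illustrative.

        import numpy as np

        class SlidingWindowRLS:
            """Finite-window RLS: the estimate uses exactly the last
            `window` samples via a rank-one update (newest sample)
            and a matching downdate (oldest sample)."""

            def __init__(self, n_params, window, delta=1e3):
                self.P = delta * np.eye(n_params)  # inverse correlation matrix
                self.w = np.zeros(n_params)        # parameter estimate
                self.window = window
                self.buf = []                      # stored (x, d) pairs

            def _rank_one(self, x, d, sign):
                # sign=+1 adds a sample, sign=-1 removes one (Sherman-Morrison).
                Px = self.P @ x
                g = Px / (1.0 + sign * (x @ Px))
                self.w += sign * g * (d - x @ self.w)
                self.P -= sign * np.outer(g, Px)

            def step(self, x, d):
                x = np.asarray(x, dtype=float)
                self.buf.append((x, d))
                self._rank_one(x, d, +1.0)
                if len(self.buf) > self.window:
                    x_old, d_old = self.buf.pop(0)
                    self._rank_one(x_old, d_old, -1.0)
                return self.w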

    Automated smoother for the numerical decoupling of dynamics models

    Background: Structure identification of dynamic models for complex biological systems is the cornerstone of their reverse engineering. Biochemical Systems Theory (BST) offers a particularly convenient solution because its parameters are kinetic-order coefficients, which directly identify the topology of the underlying network of processes. We have previously proposed a numerical decoupling procedure that allows the identification of multivariate dynamic models of complex biological processes. While described here within the context of BST, this procedure is generally applicable to signal extraction. Our original implementation relied on artificial neural networks (ANN), which caused a slight, undesirable bias during the smoothing of the time courses. As an alternative, we propose here an adaptation of Whittaker's smoother and demonstrate its role within a robust, fully automated structure identification procedure.

    Results: In this report we propose a robust, fully automated solution for signal extraction from time series, which is the prerequisite for the efficient reverse engineering of biological systems models. Whittaker's smoother is reformulated within the context of information theory and extended by the development of adaptive signal segmentation to account for heterogeneous noise structures. The resulting procedure can be used on arbitrary time series with a nonstationary noise process; it is illustrated here with metabolic profiles obtained from in-vivo NMR experiments. The smoothed solution, free of parametric bias, permits differentiation, which is crucial for the numerical decoupling of systems of differential equations.

    Conclusion: The method is applicable to signal extraction from time series with a nonstationary noise structure and to the numerical decoupling of systems of differential equations into algebraic equations, and thus constitutes a rather general tool for the reverse engineering of mechanistic model descriptions from multivariate experimental time series.
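    As a minimal sketch of the (non-adaptive) core of Whittaker's smoother, the penalized least-squares problem min ||y - z||^2 + lambda * ||D z||^2, with D a difference operator, reduces to one sparse linear solve; the adaptive segmentation and information-theoretic selection of lambda described in the abstract are not reproduced here.

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        def whittaker_smooth(y, lam=1e4, order=2):
            """Whittaker smoother: solve (I + lam * D'D) z = y, where D is
            the `order`-th difference operator."""
            n = len(y)
            D = sparse.eye(n, format="csr")
            for _ in range(order):
                D = D[1:] - D[:-1]  # build the difference operator row-wise
            A = (sparse.eye(n) + lam * (D.T @ D)).tocsc()
            return spsolve(A, np.asarray(y, dtype=float))

        # Example: smoothing a noisy time course before differentiation.
        t = np.linspace(0, 10, 500)
        noisy = np.sin(t) + np.random.default_rng(4).normal(0, 0.2, t.size)
        z = whittaker_smooth(noisy, lam=1e3)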

    Comparison of Methods for Smoothing Environmental Data with an Application to Particulate Matter PM10

    Data smoothing is often required in environmental data analysis, and a number of methods and algorithms for it have been proposed. This paper gives an overview of and compares the performance of different smoothing procedures that estimate the trend in environmental data from the surrounding noisy observations. The considered methods include kernel regression with both global and local bandwidth, moving average, exponential smoothing, robust repeated median regression, trend filtering, and approaches based on the discrete Fourier and discrete wavelet transforms. The methods are applied to real data obtained by measurement of PM10 concentrations and compared in a simulation study.
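    Three of the compared smoothers are simple enough to sketch directly; the window length, smoothing factor, and bandwidth below are arbitrary illustrative values, and the remaining methods (repeated median regression, trend filtering, Fourier- and wavelet-based smoothing) are omitted.

        import numpy as np

        def moving_average(y, k=5):
            """Centered moving average with window length k (k odd)."""
            return np.convolve(y, np.ones(k) / k, mode="same")

        def exponential_smoothing(y, alpha=0.3):
            """Simple exponential smoothing."""
            z = np.empty(len(y))
            z[0] = y[0]
            for i in range(1, len(y)):
                z[i] = alpha * y[i] + (1 - alpha) * z[i - 1]
            return z

        def kernel_regression(x, y, x_eval, bandwidth=1.0):
            """Nadaraya-Watson kernel regression with a Gaussian kernel
            and a single global bandwidth."""
            w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / bandwidth) ** 2)
            return (w @ y) / w.sum(axis=1)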

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. To study cardiac function, a variety of signals must be collected. In practice, several heart monitoring methods, such as electrocardiography (ECG) and photoplethysmography (PPG), are commonly used. Although there are several methods for monitoring fetal and maternal health, research is currently underway to enhance the mobility, accuracy, automation, and noise resistance of these methods so that they can be used extensively, even at home. Artificial Intelligence (AI) can help to design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research.

    The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and to extract the desired signal from a noisy one with negative SNR (i.e., where the power of the noise is greater than that of the signal). ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. The noise typically arises from power line interference, white noise, electrode contact noise, muscle contraction, baseline wandering, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure. Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a noise or desired-signal model, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the goal of this step is to develop a robust algorithm that can estimate noise, even when the SNR is negative, using the BSS method and remove it with an adaptive filter.

    The second objective concerns monitoring maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) to extract the fetal ECG (FECG). These methods need to be calibrated to generalize well: for each new subject, calibration against a trusted device is required, which is difficult, time-consuming, and susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks (CycleGAN), to map MECG to FECG and vice versa. The advantage of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, is that it generalizes well to unseen subjects. Moreover, it does not need calibration, is not sensitive to the heart rate variability of mother and fetus, and can handle low-SNR conditions.

    Third, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use. Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes these issues by using only the PPG signal. Using only PPG for blood pressure is more convenient since it requires only one electrode on the finger, where the acquisition is more resilient against movement-related error.

    The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for FECG, where few publicly available datasets are annotated for each FECG beat. Therefore, we utilize active learning and transfer learning to train an FECG anomaly detection system with the fewest training samples and high accuracy. A model is first trained to detect ECG anomalies in adults and is later fine-tuned to detect anomalies in FECG. Only the most influential samples from the training set are selected for training, which minimizes the labeling effort.

    Because of physician shortages and rural geography, remote monitoring might improve pregnant women's access to prenatal care, especially where such access is limited. Increased compliance with prenatal treatment and linked care among various providers are two possible benefits of remote monitoring. Maternal and fetal remote monitoring can be effective only if the recorded signals are transmitted correctly. Therefore, the last objective is to design a compression algorithm that can compress signals (such as ECG) with a higher ratio than the state of the art and decompress them quickly without distortion. The proposed compression is fast thanks to a time-domain B-spline approach, and the compressed data can be used for visualization and monitoring without decompression owing to the B-spline properties. Moreover, a stochastic optimization is designed to retain signal quality, so the signal is not distorted for diagnostic purposes while a high compression ratio is achieved.

    In summary, the components listed above can be combined into an end-to-end system for day-to-day maternal and fetal cardiac monitoring. PPG and ECG recorded from the mother can be denoised using the deconvolution strategy, and compression can then be employed to transmit the signals. The trained CycleGAN model can extract the FECG from the MECG, and the model trained with active transfer learning can detect anomalies in both MECG and FECG. Simultaneously, maternal blood pressure is retrieved from the PPG signal. This information can be used to monitor the cardiac status of mother and fetus and to fill in reports such as the partogram.
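    Of the components above, the B-spline compression step is the easiest to illustrate; the sketch below uses SciPy's smoothing-spline fit, with the smoothing factor standing in for the stochastic optimization described in the abstract, and all signal parameters are synthetic.

        import numpy as np
        from scipy.interpolate import splrep, splev

        def bspline_compress(signal, fs, smooth):
            """Fit a smoothing cubic B-spline; the knots and coefficients
            (tck) are the compressed representation."""
            t = np.arange(len(signal)) / fs
            tck = splrep(t, signal, s=smooth)  # (knots, coefficients, degree)
            return tck, t

        def bspline_decompress(tck, t):
            """Evaluate the stored spline to reconstruct the signal."""
            return splev(t, tck)

        # Example: `smooth` trades compression ratio against fidelity.
        fs = 250.0
        t = np.arange(0, 4, 1 / fs)
        sig = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 12 * t)
        tck, tt = bspline_compress(sig, fs, smooth=0.5)
        ratio = sig.size / (len(tck[0]) + len(tck[1]))
        rec = bspline_decompress(tck, tt)
        rmse = np.sqrt(np.mean((sig - rec) ** 2))
        print(f"compression ratio ~ {ratio:.1f}, RMSE = {rmse:.4f}")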

    ECG Biometric Authentication: A Comparative Analysis

    Robust authentication and identification methods have become an indispensable and urgent task to protect the integrity of devices and sensitive data. Passwords have long provided access control and authentication but have shown inherent vulnerabilities. Speed and convenience make biometrics an attractive authentication solution, as biometric traits have a low probability of circumvention. To overcome the limitations of traditional biometric systems, the electrocardiogram (ECG) has received the most attention from the biometrics community because ECG signals are highly individualized, ubiquitous, and difficult to counterfeit. However, one of the main challenges in ECG-based biometric development is the lack of large ECG databases. In this paper, we contribute a new large-gallery off-the-person ECG dataset that can provide new opportunities for the ECG biometric research community. We explore the impact of filtering type, segmentation, feature extraction, and health status on ECG biometrics using standard evaluation metrics. Our results show that our ECG biometric authentication outperforms existing methods that lack efficient feature extraction, filtering, segmentation, and matching. This is evident from the 100% accuracy obtained for the PTB, MIT-BIH, CEBSDB, CYBHI, ECG-ID, and in-house ECG-BG databases, in spite of noisy and unhealthy ECG signals, under five-fold cross-validation. In addition, an average EER of 2.11% across 1,694 subjects is obtained.
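    The equal error rate (EER) reported above is the operating point at which the false accept and false reject rates coincide; a common way to compute it from matcher scores is sketched below with scikit-learn, using synthetic genuine/impostor scores rather than any of the paper's data.

        import numpy as np
        from sklearn.metrics import roc_curve

        def equal_error_rate(genuine, impostor):
            """EER: threshold-free summary where false accept rate (FPR)
            equals false reject rate (FNR)."""
            y_true = np.concatenate([np.ones(len(genuine)), np.zeros(len(impostor))])
            y_score = np.concatenate([genuine, impostor])
            fpr, tpr, _ = roc_curve(y_true, y_score)
            fnr = 1.0 - tpr
            i = np.nanargmin(np.abs(fnr - fpr))
            return (fpr[i] + fnr[i]) / 2.0

        rng = np.random.default_rng(2)
        print(equal_error_rate(rng.normal(0.8, 0.10, 500),
                               rng.normal(0.4, 0.15, 500)))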

    Deciphering Surfaces of Trans-Neptunian and Kuiper Belt Objects using Radiative Scattering Models, Machine Learning, and Laboratory Experiments

    Decoding surface-atmospheric interactions and volatile transport mechanisms on trans-Neptunian objects (TNOs) and Kuiper Belt objects (KBOs) involves an in-depth understanding of physical and thermal properties and spatial distribution of surface constituents – nitrogen (N2), methane (CH4), carbon monoxide (CO), and water (H2O) ices. This thesis implements a combination of radiative scattering models, machine learning techniques, and laboratory experiments to investigate the uncertainties in grain size estimation of ices, the spatial distribution of surface compositions on Pluto, and the thermal properties of volatiles found on TNOs and KBOs. Radiative scattering models (Mie theory and Hapke approximations) were used to compare single scattering albedos of N2, CH4, and H2O ices from their optical constants at near-infrared wavelengths (1 – 5 µm). Based on the results of Chapters 2 and 3, this thesis recommends using the Mie model for unknown spectra of outer solar system bodies in estimating grain sizes of surface ices. When using an approximation for radiative transfer models (RTMs), we recommend using the Hapke slab approximation model over the internal scattering model. In Chapter 4, this thesis utilizes near-infrared (NIR) spectral observations of the LEISA/Ralph instrument onboard NASA’s New Horizons spacecraft. Hyperspectral LEISA data were used to map the geographic distribution of ices on Pluto’s surface by implementing the principal component reduced Gaussian mixture model (PC-GMM), an unsupervised machine learning technique. The distribution of ices reveals a latitudinal pattern with distinct surface compositions of volatiles. The PC-GMM method was able to recognize local-scale variations in surface compositions of geological features. The mapped distribution of surface units and their compositions are consistent with existing literature and help in an improved understanding of the volatile transport mechanism on the dwarf planet. In Chapter 5, we propose a method to estimate thermal conductivity, volumetric heat capacity, thermal diffusivity, and thermal inertia of N2, CH4, and CO ices, and mixtures thereof in a simulated laboratory setting at temperatures of 20 to 60 K – relevant to TNOs and KBOs. A new laboratory experimental facility – named the Outer Solar System Astrophysics Lab (OSSAL) – was built to implement the proposed method. This thesis provides detailed technical specifications of that laboratory with an emphasis on facilitating the design of similar cryogenic facilities in the future. Thus, this research was able to incorporate a set of methods, tools, and techniques for an improved understanding of ices found in the Kuiper Belt and to decipher surface-atmospheric interactions and volatile transport mechanisms on planetary bodies in the outer solar system
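    The PC-GMM mapping step admits a compact sketch: reduce per-pixel spectra with PCA, then cluster the principal-component scores with a Gaussian mixture so each pixel is assigned to a surface unit. The component counts and the synthetic "pixels" below are placeholders, not values from the thesis.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.mixture import GaussianMixture

        def pc_gmm_map(spectra, n_pcs=6, n_units=4, seed=0):
            """PC-GMM-style pipeline: PCA reduction of per-pixel spectra,
            then GMM clustering of the scores into surface units."""
            scores = PCA(n_components=n_pcs).fit_transform(spectra)
            gmm = GaussianMixture(n_components=n_units, random_state=seed)
            return gmm.fit_predict(scores)

        # Example: 10,000 synthetic pixels with 64 spectral channels.
        rng = np.random.default_rng(3)
        spectra = rng.normal(size=(10_000, 64))
        units = pc_gmm_map(spectra)
        print(np.bincount(units))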
