185 research outputs found

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of the fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. Studying cardiac function requires a variety of signals; in practice, several heart-monitoring methods, such as electrocardiography (ECG) and photoplethysmography (PPG), are commonly used. Although several methods exist for monitoring fetal and maternal health, research is underway to improve their mobility, accuracy, automation, and noise resistance so they can be used widely, even at home. Artificial intelligence (AI) can help design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research.

    The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored that improves the signal-to-noise ratio (SNR) and extracts the desired signal from a noisy recording even at negative SNR (i.e., when the noise power exceeds the signal power). ECG and PPG signals are sensitive to noise from many sources, which increases the risk of misinterpretation and interferes with diagnosis; typical sources include power-line interference, white noise, electrode contact noise, muscle contraction, baseline wander, instrument noise, motion artifacts, and electrosurgical noise. Even a slight distortion of the acquired ECG waveform can impair assessment of the patient's heart condition and affect treatment. Existing solutions, such as adaptive filters and blind source separation (BSS) algorithms, still have drawbacks: they may require a model of the noise or the desired signal, need tuning and calibration, and become inefficient on excessively noisy signals. The goal of this step is therefore a robust algorithm that estimates the noise with a BSS method, even at negative SNR, and removes it with an adaptive filter.

    The second objective concerns monitoring the maternal and fetal ECG. Previous non-invasive methods extracted the fetal ECG (FECG) from the maternal abdominal ECG (MECG), but they must be calibrated to generalize well: each new subject requires calibration against a trusted device, which is difficult, time-consuming, and error-prone. We explore deep learning (DL) models for domain mapping, such as cycle-consistent adversarial networks (CycleGAN), to map MECG to FECG and vice versa. Compared with state-of-the-art approaches such as adaptive filters or BSS, the proposed DL method generalizes well to unseen subjects, needs no calibration, is insensitive to the heart rate variability of mother and fetus, and can handle low-SNR conditions.

    Thirdly, an AI-based system is explored that measures continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use.
    Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and hard to synchronize. The proposed method overcomes these issues by using only the PPG signal. A PPG-only approach is more convenient because it needs a single sensor on the finger, where acquisition is more resilient to motion-related error.

    The fourth objective is to detect anomalies in FECG data. The need for thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for the FECG, for which few publicly available datasets are annotated beat by beat. We therefore use active learning and transfer learning to train an FECG anomaly detection system with the fewest possible training samples while maintaining high accuracy: a model is first trained to detect ECG anomalies in adults and is then fine-tuned to detect anomalies in the FECG, selecting only the most influential samples from the training set so that training requires the least effort.

    Because of physician shortages and rural geography, remote monitoring can improve pregnant women's access to prenatal care, especially where that access is limited; increased compliance with prenatal treatment and linked care among providers are two possible benefits. Remote maternal and fetal monitoring is effective only if the recorded signals are transmitted correctly. The last objective is therefore a compression algorithm that compresses signals such as the ECG at a higher ratio than the state of the art and decompresses quickly without distortion. The proposed compression is fast thanks to a time-domain B-spline approach, and, owing to B-spline properties, the compressed data can be used for visualization and monitoring without decompression. Moreover, a stochastic optimization is designed to retain signal quality, so the signal is not distorted for diagnostic purposes despite the high compression ratio.

    In summary, an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of the tasks above: PPG and ECG recorded from the mother are denoised with the deconvolution strategy; compression is employed to transmit the signals; the trained CycleGAN model extracts the FECG from the MECG; the model trained with active transfer learning detects anomalies in both the MECG and the FECG; and, simultaneously, maternal blood pressure is retrieved from the PPG signal. This information can be used to monitor the cardiac status of mother and fetus and to fill in reports such as a partogram.
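    As a concrete illustration of the adaptive-filter denoising step in the first objective, the sketch below implements a standard least-mean-squares (LMS) noise canceller in NumPy. This is a minimal sketch, not the thesis's algorithm: it assumes a separate noise-reference channel is available (in the proposed system such an estimate would come from the BSS stage), and all signal names and parameters are illustrative.

```python
import numpy as np

def lms_cancel(primary, noise_ref, n_taps=32, mu=0.01):
    """Adaptive noise cancellation with the LMS algorithm.

    primary   : noisy recording (desired signal + correlated noise)
    noise_ref : reference channel correlated with the noise only
                (e.g., a noise estimate produced by a BSS stage)
    Returns the cleaned signal (the adaptive filter's error output).
    """
    w = np.zeros(n_taps)                       # adaptive filter weights
    cleaned = np.zeros_like(primary, dtype=float)
    for n in range(n_taps - 1, len(primary)):
        x = noise_ref[n - n_taps + 1:n + 1][::-1]  # latest reference samples
        e = primary[n] - w @ x                 # error = primary - noise estimate
        w += 2 * mu * e * x                    # LMS weight update
        cleaned[n] = e                         # error converges to clean signal
    return cleaned

# Toy usage: a slow sinusoid (stand-in for the desired signal) buried in
# coloured noise that is a filtered version of the reference channel.
rng = np.random.default_rng(0)
t = np.arange(4000) / 500.0                          # 500 Hz sampling
signal = np.sin(2 * np.pi * 1.2 * t)
ref = rng.standard_normal(t.size)                    # noise reference channel
noise = np.convolve(ref, [0.6, 0.3, 0.1])[:t.size]   # causal noise path
clean = lms_cancel(signal + noise, ref)
```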

    Swarm Intelligence-Based Hybrid Models for Short-Term Power Load Prediction

    Swarm intelligence (SI) is widely and successfully applied to practical optimization problems in engineering, and hybrid models that combine SI algorithms with statistical models have been developed to further improve predictive ability. In this paper, hybrid intelligent forecasting models based on the cuckoo search (CS) together with singular spectrum analysis (SSA), time series methods, and machine learning are proposed for short-term power load prediction. The forecasting performance of the proposed models is augmented by a rolling multistep strategy over the prediction horizon. The test results demonstrate the effectiveness of SSA and CS in tuning the seasonal autoregressive integrated moving average (SARIMA) and support vector regression (SVR) models for improved load forecasting, indicating that both SSA-based data denoising and SI-based intelligent optimization can effectively improve predictive performance. Additionally, the proposed CS-SSA-SARIMA and CS-SSA-SVR models provide very impressive forecasting results, demonstrating their strong robustness and general forecasting capacity for short-term power load prediction 24 hours in advance.
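    To make the SSA denoising step concrete, the sketch below implements basic singular spectrum analysis in NumPy: embed the series in a trajectory (Hankel) matrix, take its SVD, and reconstruct the series from the leading components by anti-diagonal averaging. This is generic textbook SSA, not the paper's exact pipeline; the window length and number of retained components are illustrative choices that would need tuning for real load data.

```python
import numpy as np

def ssa_denoise(x, window=24, n_components=3):
    """Denoise a 1-D series with basic singular spectrum analysis (SSA)."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: column j holds x[j : j + window].
    X = np.column_stack([x[j:j + window] for j in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep only the leading components (trend / dominant oscillations).
    X_low = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # Reconstruct the series by averaging over anti-diagonals of X_low.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        recon[j:j + window] += X_low[:, j]
        counts[j:j + window] += 1
    return recon / counts

# Toy usage: hourly load-like series = daily cycle + noise.
t = np.arange(24 * 14)                      # two weeks of hourly data
rng = np.random.default_rng(1)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size)
smooth = ssa_denoise(load, window=24, n_components=3)
```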

    Polynomial fitting and total variation based techniques on 1-D and 2-D signal denoising

    Ankara: The Department of Electrical and Electronics Engineering and the Institute of Engineering and Sciences of Bilkent University, 2010. Thesis (M.S.) by Aykut Yıldız, Bilkent University, 2010. Includes bibliographical references (leaves 167-176).

    New techniques are developed for signal denoising and texture recovery. The geometrical theory of total variation (TV) is explored, and an algorithm based on quadratic programming is introduced for total variation reduction. To minimize the staircase effect associated with commonly used total variation techniques, robust algorithms are proposed for accurate localization of transition boundaries, and three such boundary detection techniques are proposed. In the first method, 1-D total variation is applied in the first-derivative domain. This technique relies on the fact that total variation produces piecewise-constant parts, and constant parts in the derivative domain correspond to lines in the time domain; the boundaries of these constant parts serve as the transition boundaries for line fitting. The second technique is wavelet based: since a mother wavelet can detect local abrupt changes, the Haar wavelet is used for boundary detection. Convolving a signal or its derivatives with the Haar mother wavelet yields responses that attain local maxima at edge locations, and a basic local maximization technique finds the boundary locations. The last boundary detection technique is the well-known particle swarm optimization (PSO): the boundary locations are randomly perturbed, yielding an error for each candidate set of boundaries, and, pursuing the personal and global best positions, the locations converge to a final set of boundaries. In all of the techniques, polynomial fitting is applied to the part of the signal between the edges. A more complicated scenario for 1-D signal denoising is texture recovery. The technique proposed in this thesis exploits the periodicity of the texture: periodic and non-periodic parts are distinguished by examining the total variation of the autocorrelation of the signal, the period size within the periodic parts is found by PSO, and all periods are averaged to remove the noise before the final signal is synthesized. For image denoising, optimal one-dimensional total variation minimization is carried to two dimensions by the Radon transform and a slicing method. In the proposed techniques, the stopping criterion is the error norm: the process stops when the residual norm is comparable to the noise standard deviation. 1-D and 2-D noise statistics estimation methods based on maximum likelihood estimation (MLE) are presented. The proposed denoising techniques are compared with the principal curve projection technique, total variation by Rudin et al., total variation by Willsky et al., and curvelets. Simulations show that the proposed techniques outperform these widely used techniques from the literature.
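    As background for the total variation machinery used throughout the thesis, the sketch below solves the standard 1-D Rudin-Osher-Fatemi problem, min over x of 0.5*||x - y||^2 + lam*||Dx||_1, by projected gradient ascent on its dual (a Chambolle-style iteration). It is a generic NumPy TV minimizer, not the thesis's quadratic-programming algorithm or any of its boundary detection methods; lam and the iteration count are illustrative.

```python
import numpy as np

def tv_denoise_1d(y, lam=2.0, n_iter=500):
    """1-D total variation (ROF) denoising:
        min_x 0.5 * ||x - y||^2 + lam * ||Dx||_1,  (Dx)_i = x[i+1] - x[i],
    solved by projected gradient ascent on the dual variable z, |z_i| <= lam.
    """
    z = np.zeros(len(y) - 1)                     # one dual var per difference
    step = 0.25                                  # 1 / ||D D^T||, which is <= 4
    x = y.astype(float)
    for _ in range(n_iter):
        x = y + np.diff(z, prepend=0, append=0)  # primal update: x = y - D^T z
        z = np.clip(z + step * np.diff(x), -lam, lam)  # dual ascent + project
    return x

# Toy usage: noisy piecewise-constant signal.
rng = np.random.default_rng(2)
clean = np.repeat([0.0, 2.0, -1.0, 1.0], 100)
noisy = clean + rng.normal(0, 0.3, clean.size)
denoised = tv_denoise_1d(noisy, lam=5.0)
```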

    Improved Wavelet Threshold for Image De-noising

    With the development of communication and network technology, as well as the rising popularity of digital electronic products, images have become an important carrier of information about the outside world. However, images are vulnerable to noise interference during collection, transmission, and storage, which degrades image quality; noise reduction is therefore necessary to obtain higher-quality images. Owing to its multi-resolution analysis, decorrelation, low entropy, and flexible choice of bases, the wavelet transform has become a powerful tool in the field of image de-noising and has developed rapidly in applied mathematics. De-noising methods based on the wavelet transform have been proposed and have achieved good results, but shortcomings remain. Traditional threshold functions have deficiencies in image de-noising: a hard threshold function is discontinuous, whereas a soft threshold function causes a constant deviation. To address these shortcomings, this paper proposes a method for removing image noise. First, the method decomposes the noisy image to obtain the wavelet coefficients. Second, the improved threshold function is applied to the high-frequency (detail) coefficients. Finally, the de-noised image is reconstructed from the estimated coefficients. Experimental results show that the method discussed in this paper outperforms traditional hard-threshold and soft-threshold de-noising in terms of both objective metrics and subjective visual effects.
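    As a point of reference for the classical baselines, the sketch below performs wavelet-threshold image de-noising with PyWavelets, using the universal (VisuShrink) threshold and soft thresholding of the detail coefficients. It illustrates the standard pipeline the paper improves upon, not the paper's improved threshold function; the wavelet, decomposition level, and threshold rule are conventional choices.

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=2, mode="soft"):
    """Classical wavelet-threshold image de-noising (VisuShrink baseline)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Estimate noise sigma from the finest diagonal detail band (robust MAD).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(image.size))   # universal threshold
    # Threshold only the detail (high-frequency) sub-bands; keep approximation.
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode=mode) for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

# Toy usage on a synthetic noisy image.
rng = np.random.default_rng(3)
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0
denoised = wavelet_denoise(img + rng.normal(0, 0.1, img.shape))
```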

    Wind power prediction based on WT-BiGRU-attention-TCN model

    Accurate wind power prediction is crucial for the safe and stable operation of the power grid. However, wind power generation is highly volatile and intermittent, which makes prediction difficult. To construct an effective prediction model for wind power generation and achieve stable grid dispatch once wind power is connected to the grid, a prediction model based on WT-BiGRU-Attention-TCN is proposed. First, the wavelet transform (WT) is used to reduce noise in the sample data. Then, a temporal attention mechanism is incorporated into a bi-directional gated recurrent unit (BiGRU) model to highlight the impact of key time steps on the prediction while fully extracting contextual temporal features. Finally, performance is enhanced by extracting higher-level temporal features with a temporal convolutional network (TCN). The results show that the proposed model outperforms the baseline models, achieving a root mean square error of 0.066 MW, a mean absolute percentage error of 18.876%, and a coefficient of determination (R2) of 0.976. This indicates that WT-based noise reduction significantly improves model performance, and that the temporal attention mechanism and TCN further improve prediction accuracy.
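    The sketch below shows one plausible PyTorch arrangement of the BiGRU, temporal attention, and TCN stages described above. The layer sizes, the attention form (a simple learned score over time steps), and the two-layer dilated Conv1d stack standing in for the TCN are all assumptions for illustration; the paper's exact architecture and hyperparameters may differ.

```python
import torch
import torch.nn as nn

class BiGRUAttnTCN(nn.Module):
    """Illustrative BiGRU -> temporal attention -> TCN -> regression head."""
    def __init__(self, n_features=1, hidden=64, tcn_ch=64):
        super().__init__()
        self.bigru = nn.GRU(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)   # attention score per time step
        self.tcn = nn.Sequential(               # stand-in dilated conv stack
            nn.Conv1d(2 * hidden, tcn_ch, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(tcn_ch, tcn_ch, 3, padding=2, dilation=2), nn.ReLU(),
        )
        self.head = nn.Linear(tcn_ch, 1)

    def forward(self, x):                  # x: (batch, time, features)
        h, _ = self.bigru(x)               # (batch, time, 2*hidden)
        attn = torch.softmax(self.score(h), dim=1)  # weights over time steps
        h = h * attn                       # emphasize key time steps
        h = self.tcn(h.transpose(1, 2))    # Conv1d expects (batch, ch, time)
        return self.head(h.mean(dim=2))    # (batch, 1) power prediction

# Toy usage: predict the next value from 48 past (e.g., WT-denoised) steps.
model = BiGRUAttnTCN()
y_hat = model(torch.randn(8, 48, 1))       # -> tensor of shape (8, 1)
```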

    Image processing and machine learning techniques used in computer-aided detection system for mammogram screening - a review

    This paper reviews previously developed computer-aided detection (CAD) systems for mammogram screening, because the increasing death rate of women from breast cancer is a global medical issue that can be controlled only by early detection through regular screening. To date, mammography is the most widely used breast imaging modality. Radiologists have adopted CAD systems to increase the accuracy of breast cancer diagnosis by avoiding errors related to human factors and experience. This study reveals that, despite the high accuracy reported for earlier CAD systems, they are not fully automated. Moreover, false-positive screening cases remain numerous, and over-diagnosis of breast cancer exposes patients to harmful overtreatment on which a huge amount of money is wasted. It has also been reported that mammogram screening results with and without CAD systems show no noticeable difference, while the number of cancer cases missed by CAD systems is increasing. Future research is therefore required to improve the performance of CAD systems for mammogram screening and to make them completely automated.

    Breast cancer diagnosis: a survey of pre-processing, segmentation, feature extraction and classification

    Machine learning methods have attracted interest in the medical field for many years and have achieved successful results in various areas of medical science. This paper examines the use of machine learning algorithms for the diagnosis and classification of breast cancer from mammography imaging data. Cancer diagnosis is the classification of images as cancerous or non-cancerous, and it involves image preprocessing, feature extraction, classification, and performance analysis. This article surveys 93 references from previous years in this field and attempts to identify an effective way to diagnose and classify breast cancer. Based on the results of this survey, it can be concluded that most of today's successful methods rely on deep learning. Finding a new method requires an overview of existing deep learning methods in order to enable comparison and case studies.
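    Since the surveyed pipeline (preprocessing, feature extraction, classification, performance analysis) is generic, the sketch below shows a minimal scikit-learn version of it. It is purely illustrative: it uses scikit-learn's built-in breast cancer feature dataset (pre-extracted features from digitized fine-needle-aspirate images, not mammograms) as a stand-in, and the scaler/PCA/SVM stages are conventional choices, not methods recommended by the survey.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: pre-extracted image features, labeled cancer/non-cancer.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = Pipeline([
    ("scale", StandardScaler()),        # preprocessing
    ("reduce", PCA(n_components=10)),   # feature extraction / reduction
    ("svm", SVC(kernel="rbf")),         # classification
])
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))  # performance analysis
```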