
    A Linear Multi-User Detector for STBC MC-CDMA Systems based on the Adaptive Implementation of the Minimum-Conditional Bit-Error-Rate Criterion and on Genetic Algorithm-assisted MMSE Channel Estimation

    The implementation of efficient baseband receivers with affordable computational load is a crucial point in the development of transmission systems exploiting diversity in different domains. In this paper, we propose a linear multi-user detector for MIMO MC-CDMA systems with Alamouti's Space-Time Block Coding, inspired by the concept of Minimum Conditional Bit-Error-Rate (MCBER) and relying on Genetic Algorithm (GA)-assisted MMSE channel estimation. The MCBER combiner is implemented adaptively using Least-Mean-Square (LMS) optimization. We first analyze the proposed adaptive MCBER MUD receiver assuming ideal knowledge of Channel State Information (CSI), and then consider the complete receiver structure, which also encompasses the non-ideal GA-assisted channel estimation. Simulation results show that the proposed MCBER receiver always outperforms state-of-the-art receivers based on the EGC and MMSE criteria exploiting the same degree of channel knowledge (i.e., ideal or estimated CSI).
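The adaptive combiner above is built on the standard LMS recursion. As a rough illustration only (not the authors' exact MCBER cost function, which conditions the bit-error probability on the channel estimate), a minimal complex-valued LMS combiner might look like this; the array shapes and training-symbol interface are assumptions:

```python
import numpy as np

def lms_combiner(X, d, mu=0.05):
    """Adapt linear combiner weights w so that w^H x_n tracks d_n.

    X  : (N, K) complex array, one received vector per symbol
    d  : (N,) training/decision symbols
    mu : step size (stability roughly requires mu < 2 / tr(E[x x^H]))
    """
    N, K = X.shape
    w = np.zeros(K, dtype=complex)
    for n in range(N):
        y = np.vdot(w, X[n])             # combiner output y_n = w^H x_n
        e = d[n] - y                     # error against the reference symbol
        w = w + mu * np.conj(e) * X[n]   # stochastic-gradient (LMS) update
    return w
```

In a training-aided receiver, `d` would hold pilot symbols first and hard decisions afterwards (decision-directed mode).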

    MIMO-aided near-capacity turbo transceivers: taxonomy and performance versus complexity

    In this treatise, we first review the associated Multiple-Input Multiple-Output (MIMO) system theory and survey the family of hard-decision and soft-decision based detection algorithms in the context of Spatial Division Multiplexing (SDM) systems. Our discussions culminate in the introduction of a range of powerful novel MIMO detectors, such as the Markov Chain assisted Minimum Bit-Error Rate (MC-MBER) detectors, which are capable of reliably operating in the challenging, high-importance rank-deficient scenarios where there are more transmitters than receivers and the resultant channel matrix hence becomes non-invertible. In such scenarios, conventional detectors exhibit a high residual error floor. We then invoke Soft-Input Soft-Output (SISO) MIMO detectors for creating turbo-detected two- or three-stage concatenated SDM schemes and investigate their attainable performance in the light of their computational complexity. Finally, we introduce the powerful design tools of EXtrinsic Information Transfer (EXIT) charts and characterize the achievable performance of the diverse near-capacity SISO detectors with the aid of EXIT charts.
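The rank-deficiency point can be made concrete with the linear MMSE detector that MBER-type schemes are usually benchmarked against. A minimal sketch, assuming a flat-fading model y = H s + n (the antenna counts and noise level below are illustrative):

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Linear MMSE detection: s_hat = (H^H H + sigma^2 I)^-1 H^H y.

    The sigma^2 I regulariser keeps the matrix invertible even when there
    are more transmit than receive antennas and H^H H is rank-deficient,
    which is exactly where plain zero-forcing breaks down and exhibits a
    high residual error floor.
    """
    Nt = H.shape[1]
    A = H.conj().T @ H + noise_var * np.eye(Nt)
    return np.linalg.solve(A, H.conj().T @ y)
```
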

    Assisted History Matching by Using Recursive Least Square and Discrete Cosine Transform

    History matching is the act of adjusting a reservoir model until it closely reproduces the past behavior of the reservoir. Before computers, history matching was done manually, by trial and error and personal judgment, and could only be performed by experienced engineers; manual history matching is therefore very time-consuming. This project carries out assisted history matching and determines whether Recursive Least Squares (RLS), as the optimization method, and the Discrete Cosine Transform (DCT), as the parameter-reduction method, can be combined. To achieve this objective, a number of steps were taken. First, a synthetic model was built by modifying the ODEH data. Two sets of permeability values were selected to obtain two data sets from the model: historical and simulated. Then, the fluid-flow equation was derived to obtain the forward model, which in turn was used to design the objective function. After the objective function was designed, the DCT was applied to the reservoir data to reduce the number of parameters, and RLS was then applied to the reduced parameters to optimize the data. These steps are repeated until the error falls below the set threshold. The RLS and DCT methods are compared against the literature to assess the success of this combination. In the first part of this final-year project, the forward model was derived. In the second part, a synthetic model was built and the objective function was designed from the forward model. Before applying RLS and DCT, the methods were illustrated to show how they work, and then applied to the reservoir data. The outcomes of this project are, first, the two data sets (historical and simulated) obtained from the synthetic model, and second, proposed DCT and RLS algorithms that can be applied to the history matching problem. The results show that the combination of RLS and DCT is successful and can be used for history matching.
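The two building blocks named above, the DCT for parameter reduction and RLS for optimization, can be sketched generically as follows. This is an illustration of the standard forms (orthonormal DCT-II basis, exponentially weighted RLS recursion), not the project's actual reservoir workflow; dimensions and forgetting factor are assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis: C @ x transforms x, C.T @ c inverts.
    Parameter reduction keeps only the first few rows' coefficients."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C

class RLS:
    """Exponentially weighted recursive least squares."""
    def __init__(self, n, lam=0.99, delta=100.0):
        self.w = np.zeros(n)          # parameter estimate
        self.P = delta * np.eye(n)    # inverse-correlation estimate
        self.lam = lam                # forgetting factor

    def update(self, x, d):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)  # gain vector
        e = d - self.w @ x            # a-priori error
        self.w = self.w + k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e
```

In an assisted-history-matching loop, the RLS state would hold the retained DCT coefficients of the permeability field, updated against the mismatch between simulated and historical data.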

    A Study of Model Predictive Control for Spark Ignition Engine Management and Testing

    Pressure to improve spark-ignition (SI) engine fuel economy has driven the development and integration of many control actuators, creating complex control systems. Integration of a high number of control actuators into traditional map-based controllers creates tremendous challenges, since each actuator exponentially increases calibration time and investment. Model Predictive Control (MPC) strategies have the potential to better manage this high complexity since they provide near-optimal control actions based on system models. This research work focuses on investigating some practical issues of applying MPC to SI engine control and testing. Starting from one-dimensional combustion phasing control using spark timing (SPKT), this dissertation discusses challenges of computing the optimal control actions with complex engine models. A nonlinear optimization is formulated to compute the desired spark timing in real time, while considering knock and combustion variation constraints. Three numerical approaches are proposed to directly utilize complex high-fidelity combustion models to find the optimal SPKT. A model-based combustion phasing estimator that considers the influence of cycle-by-cycle combustion variations is also integrated into the control system, making feedback and adaptation functions possible. An MPC-based engine management system with a higher number of control dimensions is also investigated. The control objective is manipulating the throttle, external EGR valve and SPKT to provide the demanded torque (IMEP) output with minimum fuel consumption. A cascaded control structure is introduced to simplify the formulation and solution of the MPC problem that solves for desired control actions. Sequential quadratic programming (SQP) MPC is applied to solve the nonlinear optimization problem in real time. A real-time linearization technique is used to formulate the sub-QP problems with the complex high-dimensional engine system.
Techniques to simplify the formulation of SQP and improve its convergence are also discussed in the context of tracking MPC. Strategies to accelerate online quadratic programming (QP) are explored. It is proposed to use pattern-recognition techniques to "warm-start" active-set QP algorithms for general linear MPC applications. The proposed linear time-varying (LTV) MPC is used in Engine-in-Loop (EIL) testing to mimic the pedal actuations of human drivers who foresee the incoming traffic conditions. For SQP applications, the MPC is initialized with optimal control actions predicted by an artificial neural network (ANN). Both proposed MPC methods significantly reduce execution time with minimal additional memory requirements.
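The tracking-MPC machinery discussed above rests on the batch formulation of a linear MPC problem. A minimal unconstrained sketch of one receding-horizon step (no engine model, no SQP or active-set solver; the system matrices below are illustrative placeholders):

```python
import numpy as np

def mpc_first_move(A, B, Q, R, x0, x_ref, N):
    """One unconstrained linear tracking-MPC step for x_{k+1} = A x_k + B u_k.

    Stacks the N-step predictions X = Phi x0 + Gamma U, minimises
    sum ||x_i - x_ref||_Q^2 + ||u_i||_R^2 in closed form, and returns only
    the first input (receding-horizon principle).
    """
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qb = np.kron(np.eye(N), Q)
    Rb = np.kron(np.eye(N), R)
    err0 = Phi @ x0 - np.tile(x_ref, N)       # tracking error if U = 0
    H = Gamma.T @ Qb @ Gamma + Rb             # QP Hessian
    g = Gamma.T @ Qb @ err0                   # QP gradient
    U = np.linalg.solve(H, -g)                # unconstrained QP solution
    return U[:m]                              # apply only the first move
```

A constrained version would hand `H` and `g` to an active-set QP solver; the warm-starting discussed above supplies that solver's initial active set or initial iterate.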

    Heart motion prediction based on adaptive estimation algorithms for robotic-assisted beating heart surgery

    Ankara: The Department of Electrical and Electronics Engineering and the Graduate School of Engineering and Science of Bilkent University, 2011. Thesis (Master's), Bilkent University, 2011. Includes bibliographical references (leaves 90-93). Author: Tuna, Eser Erdem (M.S.). Robotic-assisted beating heart surgery aims to allow surgeons to operate on a beating heart without stabilizers, as if the heart were stationary. The robot actively cancels heart motion by closely following a point of interest (POI) on the heart surface, a process called Active Relative Motion Canceling (ARMC). Due to the high bandwidth of the POI motion, the controller must be supplied with an estimate of the immediate future of the POI motion over a prediction horizon in order to achieve sufficient tracking accuracy. In this thesis, two prediction algorithms that use an adaptive filter to generate future position estimates are studied. In addition, the effect of heart rate variation on tracking performance is studied, and the prediction algorithms are evaluated using a 3-degrees-of-freedom test bed with prerecorded heart motion data. Besides this, a probabilistic robotics approach is followed to model and characterize the noise of the sensor system that collects the heart motion data used in this study. The generated model is employed to filter the noisy measurements collected from the sensor system, and the filtered sensor data is then used to accurately localize the POI on the heart surface. Finally, estimates obtained from the adaptive prediction algorithms are integrated into the generated measurement model with the aim of improving the performance of the presented approach.
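An adaptive-filter predictor of the kind studied here can be sketched with a normalized-LMS FIR predictor that estimates the POI position several samples ahead; the order, horizon, step size and sampling rate below are illustrative assumptions, not the thesis's tuned values:

```python
import numpy as np

def nlms_predict(s, order=8, horizon=5, mu=0.5, eps=1e-8):
    """Predict s[n] `horizon` samples ahead with a normalised-LMS FIR predictor.

    At step n the regressor ends `horizon` samples in the past, so pred[n]
    uses only data that would actually be available to the robot controller.
    """
    w = np.zeros(order)
    pred = np.zeros(len(s))
    for n in range(order + horizon, len(s)):
        x = s[n - horizon - order:n - horizon][::-1]   # newest sample first
        pred[n] = w @ x
        e = s[n] - pred[n]
        w = w + mu * e * x / (x @ x + eps)             # NLMS update
    return pred
```

For quasi-periodic heart motion, the filter keeps re-adapting as the heart rate drifts, which is the property that makes adaptive predictors attractive here.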

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of the fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. To study cardiac function, a variety of signals must be collected. In practice, several heart monitoring methods, such as electrocardiography (ECG) and photoplethysmography (PPG), are commonly performed. Although there are several methods for monitoring fetal and maternal health, research is currently underway to enhance the mobility, accuracy, automation, and noise resistance of these methods so that they can be used extensively, even at home. Artificial Intelligence (AI) can help to design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research. The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and to extract the desired signal from a noisy one with negative SNR (i.e., the power of the noise is greater than that of the signal). It is worth mentioning that ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. The noise typically arises from power line interference, white noise, electrode contact noise, muscle contraction, baseline wandering, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure.
Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a noise or desired-signal model, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the final goal of this step is to develop a robust algorithm that can estimate noise, even when the SNR is negative, using the BSS method and remove it with an adaptive filter. The second objective concerns monitoring maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) for extracting the fetal ECG (FECG). These methods need to be calibrated to generalize well: for each new subject, calibration with a trusted device is required, which is difficult, time-consuming, and susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks, to map MECG to FECG and vice versa. The advantage of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, is that it generalizes well to unseen subjects. Moreover, it does not need calibration, is not sensitive to the heart rate variability of mother and fetus, and can handle low signal-to-noise ratio (SNR) conditions. Thirdly, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use. Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes those issues and, in contrast to other solutions, uses only the PPG signal.
Using only PPG for blood pressure is more convenient, since a single sensor on the finger is involved and its acquisition is more resilient against movement error. The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for FECG, where few publicly available datasets are annotated for each FECG beat. Therefore, we utilize active learning and transfer learning concepts to train an FECG anomaly detection system with the fewest training samples and high accuracy. In this part, a model is first trained to detect ECG anomalies in adults and is later retrained to detect anomalies in FECG. We select only the more influential samples from the training set, which leads to training with the least effort. Because of physician shortages and rural geography, pregnant women's access to prenatal care might be improved through remote monitoring, especially where access to prenatal care is limited. Increased compliance with prenatal treatment and linked care amongst various providers are two possible benefits of remote monitoring. Maternal and fetal remote monitoring can be effective only if the recorded signals are transmitted correctly. Therefore, the last objective is to design a compression algorithm that can compress signals (such as ECG) with a higher ratio than the state of the art and perform decompression quickly and without distortion. The proposed compression is fast thanks to the time-domain B-spline approach, and the compressed data can be used for visualization and monitoring without decompression owing to the B-spline properties. Moreover, the stochastic optimization is designed to retain signal quality and does not distort the signal for diagnostic purposes while achieving a high compression ratio.
In summary, the components for creating an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of all the tasks listed above. PPG and ECG recorded from the mother can be denoised using the deconvolution strategy. Compression can then be employed for transmitting the signals. The trained CycleGAN model can be used for extracting the FECG from the MECG, and the model trained using active transfer learning can detect anomalies in both MECG and FECG. Simultaneously, maternal BP is retrieved from the PPG signal. This information can be used for monitoring the cardiac status of mother and fetus, and also for filling in reports such as partograms.
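The B-spline compression step can be illustrated with a plain least-squares fit onto an open-uniform cubic B-spline basis: the signal is stored as a small number of spline coefficients and reconstructed by evaluating the basis. This sketch omits the stochastic optimization of signal quality described above; the knot layout and coefficient count are assumptions:

```python
import numpy as np

def bspline_design(t, n_ctrl, k=3):
    """Design matrix of an open-uniform degree-k B-spline basis on [0, 1],
    built with the Cox-de Boor recursion. Shape: (len(t), n_ctrl)."""
    inner = np.linspace(0.0, 1.0, n_ctrl - k + 1)
    knots = np.concatenate([np.zeros(k), inner, np.ones(k)])
    t = np.minimum(t, 1.0 - 1e-12)            # keep t inside the last span
    B = np.zeros((len(t), len(knots) - 1))
    for i in range(len(knots) - 1):           # degree-0 indicator functions
        B[:, i] = (knots[i] <= t) & (t < knots[i + 1])
    for d in range(1, k + 1):                 # degree elevation
        Bn = np.zeros((len(t), len(knots) - 1 - d))
        for i in range(len(knots) - 1 - d):
            term = np.zeros(len(t))
            if knots[i + d] > knots[i]:
                term = (t - knots[i]) / (knots[i + d] - knots[i]) * B[:, i]
            if knots[i + d + 1] > knots[i + 1]:
                term = term + (knots[i + d + 1] - t) / (knots[i + d + 1] - knots[i + 1]) * B[:, i + 1]
            Bn[:, i] = term
        B = Bn
    return B

def bspline_compress(sig, n_ctrl):
    """Least-squares fit: keep only n_ctrl spline coefficients."""
    t = np.linspace(0.0, 1.0, len(sig))
    Bmat = bspline_design(t, n_ctrl)
    coef, *_ = np.linalg.lstsq(Bmat, sig, rcond=None)
    return coef, Bmat @ coef                  # coefficients, reconstruction
```

The compression ratio is simply len(sig) / n_ctrl, and, as noted above, the coefficient representation can be rendered directly for monitoring without full decompression.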

    ECG noise reduction technique using Antlion Optimizer (ALO) for heart rate monitoring devices

    The electrocardiogram (ECG) signal is susceptible to noise and artifacts, and it is essential to remove the noise in order to support decision making by specialists and automatic heart disorder diagnosis systems. In this paper, the use of the Antlion Optimizer (ALO) for identifying the optimal cutoff frequency for low-pass filtering of the ECG signal is investigated. The spectra of the ECG signals are extracted from two classes: arrhythmia and supraventricular. Baseline wander is removed using a moving-median filter. A dataset of the extracted features of the ECG spectra is used to train the ALO. The performance of the ALO with various parameters is investigated. The ALO-identified cutoff frequency is applied to a Finite Impulse Response (FIR) filter and the resulting signal is evaluated against the original clean and conventionally filtered ECG signals. The results show that the intelligent ALO-based system denoised the ECG signals more effectively than the conventional method, increasing accuracy by 2%.
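The FIR filtering stage can be sketched as follows. Reproducing the ALO metaheuristic is beyond a short example, so a brute-force search over candidate cutoffs stands in for the ALO-identified cutoff frequency; the sampling rate, tap count and candidate list are assumptions:

```python
import numpy as np

def fir_lowpass(cutoff_hz, fs, numtaps=101):
    """Windowed-sinc (Hamming) FIR low-pass filter with unity DC gain."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    fc = cutoff_hz / fs                       # normalised cutoff
    h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(numtaps)
    return h / h.sum()

def pick_cutoff(noisy, clean, fs, candidates):
    """Stand-in for the ALO search: score each candidate cutoff by MSE
    against a clean reference and return the best one."""
    mses = [np.mean((np.convolve(noisy, fir_lowpass(fc, fs), mode='same')
                     - clean) ** 2) for fc in candidates]
    return candidates[int(np.argmin(mses))]
```

In the paper, the ALO would explore the cutoff-frequency space guided by features of the ECG spectrum rather than by exhaustive evaluation.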

    Motion Artifact Processing Techniques for Physiological Signals

    The combination of a falling birth rate and increasing life expectancy continues to drive the demographic shift toward an ageing population, and this is placing an ever-increasing burden on our healthcare systems. The urgent need to address this so-called healthcare "time bomb" has led to rapid growth in research into ubiquitous, pervasive and distributed healthcare technologies, where recent advances in signal acquisition, data storage and communication are helping such systems become a reality. However, as with recordings performed in the hospital environment, artifacts continue to be a major issue for these systems. The magnitude and frequency of artifacts can vary significantly depending on the recording environment, with one of the major contributions due to the motion of the subject or the recording transducer. As such, this thesis addresses the challenge of removing motion artifact from various physiological signals. The preliminary investigations focus on artifact identification and the tagging of physiological signal streams with measures of signal quality. A new method for quantifying signal quality is developed based on the use of inexpensive accelerometers, which facilitates the appropriate use of artifact processing methods as needed. These artifact processing methods are thoroughly examined as part of a comprehensive review of the most commonly applicable methods. This review forms the basis for the comparative studies subsequently presented. Then, a simple but novel experimental methodology for the comparison of artifact processing techniques is proposed, designed and tested for algorithm evaluation. The method is demonstrated to be highly effective for the type of artifact challenges common in a connected health setting, particularly those concerned with brain activity monitoring.
This research primarily focuses on applying the techniques to functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) data due to their high susceptibility to contamination by subject-motion-related artifact. Using the novel experimental methodology, complemented with simulated data, a comprehensive comparison of a range of artifact processing methods is conducted, allowing the identification of the set of best performing methods. A novel artifact removal technique is also developed, namely ensemble empirical mode decomposition with canonical correlation analysis (EEMD-CCA), which provides the best results when applied to fNIRS data under particular conditions. Four of the best performing techniques were then tested on real ambulatory EEG data contaminated with movement artifacts comparable to those observed during in-home monitoring. It was determined that, when analysing EEG data, the Wiener filter is consistently the best performing artifact removal technique. However, when employing the fNIRS data, the best technique depends on a number of factors, including: 1) the availability of a reference signal and 2) whether or not the form of the artifact is known. It is envisaged that the use of physiological signal monitoring for patient healthcare will grow significantly over the coming decades, and it is hoped that this thesis will aid in the progression and development of artifact removal techniques capable of supporting this growth.
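The reference-based Wiener filtering that performed best on EEG can be sketched as a block least-squares (Wiener-Hopf) solution, assuming an auxiliary motion reference channel such as the accelerometer signal mentioned above; the FIR order and the linear mixing model are illustrative assumptions:

```python
import numpy as np

def wiener_remove(contaminated, ref, order=16):
    """FIR Wiener artifact canceller.

    Fits weights w minimising ||contaminated - X_ref w||^2, where X_ref
    holds lagged copies of the reference channel, then subtracts the
    estimated artifact (classic reference-based noise cancellation).
    """
    X = np.column_stack([np.roll(ref, k) for k in range(order)])
    X[:order, :] = 0.0                       # discard samples wrapped by roll
    w, *_ = np.linalg.lstsq(X, contaminated, rcond=None)
    return contaminated - X @ w              # cleaned signal
```

When no reference channel exists, this batch solution is unavailable, which is one reason the best fNIRS technique was found to depend on reference availability.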
