75 research outputs found

    Intelligent Pattern Analysis of the Foetal Electrocardiogram

    The aim of the project on which this thesis is based is to develop reliable techniques for foetal electrocardiogram (ECG) based monitoring, to reduce the incidence of unnecessary medical intervention and foetal injury during labour. Worldwide, electronic foetal monitoring is based almost entirely on the cardiotocogram (CTG), which is a continuous display of the foetal heart rate (FHR) pattern together with the contractions of the womb. Despite the widespread use of the CTG, there has been no significant improvement in foetal outcome. In the UK alone it is estimated that birth-related negligence claims cost the health authorities over £400M per annum. An expert system, known as INFANT, has recently been developed to assist CTG interpretation. However, the CTG alone does not always provide all the information required to improve the outcome of labour. The widespread use of ECG analysis has been hindered by poor signal quality and by the difficulty of applying, in an objective way, the specialised knowledge required for interpreting ECG patterns in association with other events in labour. A fundamental investigation and development of optimal signal enhancement techniques that maximise the available information in the ECG signal, along with different techniques for detecting individual waveforms from poor quality signals, has been carried out. To automate the visual interpretation of the ECG waveform, novel techniques have been developed that allow reliable extraction of key features and hence a detailed ECG waveform analysis. Fuzzy logic is used to classify the ECG waveform shape automatically from these features, using knowledge elicited from expert sources and derived from example data. This allows subtle changes in the ECG waveform to be detected automatically in relation to other events in labour, improving the clinician's position for making an accurate diagnosis.
To ensure the interpretation is based on reliable information and takes place in the proper context, a new and sensitive index for assessing the quality of the ECG has been developed. New techniques to capture, for the first time in machine form, the clinical expertise and guidelines for electronic foetal monitoring have been developed, based on fuzzy logic and finite state machines. The software model provides a flexible framework in which to further develop and optimise rules for ECG pattern analysis. The signal enhancement, QRS detection and pattern recognition of important ECG waveform shapes have been extensively tested and results are presented. The results show that no significant loss of information is incurred by the signal enhancement and feature extraction techniques.
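As a sketch of the fuzzy classification idea described above, the snippet below grades a single hypothetical waveform feature (a T/QRS-style ratio; the feature name, breakpoints and classes are illustrative assumptions, not the thesis's actual rule base) with triangular membership functions and picks the class with the highest membership:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_t_qrs(ratio):
    """Grade a hypothetical T/QRS ratio against fuzzy classes and return
    the winning class together with all membership degrees."""
    memberships = {
        "normal":   tri(ratio, -0.05, 0.10, 0.25),  # assumed breakpoints
        "elevated": tri(ratio, 0.15, 0.35, 0.60),   # assumed breakpoints
    }
    return max(memberships, key=memberships.get), memberships
```

In a full system each class would be driven by several features and expert-derived rules rather than a single ratio; the sketch only shows the membership-and-maximum mechanism.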

    The Practical Realization of Quantum Repeaters: An Exploration

    This thesis is an exploration of quantum repeaters from a practical point of view. Quantum repeaters are devices which help improve the quantum communication capacity of a lossy bosonic channel (which includes photonic channels such as optical fiber) beyond what is possible using a purely lossy channel. The analyses in this thesis involve modeling the experimental imperfections inherent in the various devices which comprise a quantum repeater, then combining these models to calculate various quantities of interest. Two systems are analyzed in this thesis. One is a simple quantum repeater, while the other is a potential building block for more complex quantum repeaters. The simple quantum repeater scheme can be implemented with currently available technology. In it, two parties perform quantum key distribution (QKD) by exchanging photons with two quantum memories placed between them. Its secret key rate ideally scales as the square root of the transmittivity of the optical channel, which is superior to QKD schemes based on direct transmission, because key rates for the latter scale at best linearly with transmittivity. Taking into account imperfections in the setup, such as detector efficiency and dark counts, we present parameter regimes in which our protocol outperforms protocols based on direct transmission. We find that implementing our scheme with trapped ions is a promising way to reach the necessary parameter regimes, and that the regimes are easier to reach if the optical channels are very lossy. The creation of entanglement between two quantum memories is an important building block in some quantum repeater schemes. We consider a specific quantum memory consisting of an atom trapped in a cavity. The system allows a CNOT operation to be performed between an atom and a photon.
We study three methods for taking advantage of this to entangle two atoms: (1) interacting a coherent pulse with each atom, then performing an entangling measurement on the pulses; (2) interacting a single coherent pulse with each atom sequentially; (3) emitting an entangled photon from one atom and interacting it with the other atom. The success probability of each method is compared, as well as the quality of the entangled states produced by each one, taking into account imperfections which appear in a specific experimental implementation of such memories. We find that there is a trade-off between success probability and entangled-state quality when coherent states are used, and that method (3) provides higher-quality entangled states than is possible with the other two methods.
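The scaling argument above can be made concrete with a toy comparison. The prefactors `r0` below are hypothetical placeholders lumping together detector and memory efficiencies, not values from the thesis; only the linear-versus-square-root dependence on transmittivity is the point:

```python
import math

def rate_direct(eta, r0=1.0):
    """Direct-transmission QKD: key rate scales at best linearly in transmittivity."""
    return r0 * eta

def rate_repeater(eta, r0=0.01):
    """Memory-assisted scheme: ideal key rate scales as sqrt(transmittivity),
    with a much smaller (assumed) prefactor from device imperfections."""
    return r0 * math.sqrt(eta)
```

At a very lossy channel, say eta = 1e-6, the repeater rate (1e-5) already exceeds the direct rate (1e-6) despite its 100x smaller prefactor, while at eta = 0.25 direct transmission still wins; this mirrors the observation that very lossy channels favour the repeater.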

    Design and Implementation of Complexity Reduced Digital Signal Processors for Low Power Biomedical Applications

    Wearable health monitoring systems can provide remote care with supervised, independent living, and are capable of signal sensing, acquisition, local processing and transmission. A generic biopotential signal (such as electrocardiogram (ECG) and electroencephalogram (EEG)) processing platform consists of four main functional components. The signals acquired by the electrodes are amplified and preconditioned by the (1) Analog Front-End (AFE), and are then digitized via the (2) Analog-to-Digital Converter (ADC) for further processing. The local digital signal processing is usually handled by a custom-designed (3) Digital Signal Processor (DSP), which is responsible for any one or a combination of signal processing algorithms such as noise detection, noise/artefact removal, feature extraction, classification and compression. The digitally processed data is then transmitted via the (4) transmitter, which is renowned as the most power-hungry block in the complete platform. All the aforementioned components of wearable systems must be designed and fitted into an integrated system where the area and power requirements are stringent. Therefore, the hardware complexity and power dissipation of each functional component are crucial aspects when designing and implementing a wearable monitoring platform. The work undertaken focuses on reducing the hardware complexity of a biosignal DSP and presents low hardware complexity solutions that can be employed in the aforementioned wearable platforms. A typical state-of-the-art system utilizes Sigma-Delta (Σ∆) ADCs incorporating a Σ∆ modulator and a decimation filter, whereas the state-of-the-art decimation filters employ linear-phase Finite-Impulse-Response (FIR) filters with high orders that increase the hardware complexity [1–5].
In this thesis, the novel use of minimum-phase Infinite-Impulse-Response (IIR) decimators is proposed, where the hardware complexity is massively reduced compared to the conventional FIR decimators. In addition, the non-linear phase effects of these filters are investigated, since phase non-linearity may distort the time-domain representation of the signal being filtered, which is an undesirable effect for biopotential signals, especially when the fiducial characteristics carry diagnostic importance. In the case of ECG monitoring systems, the effect of the IIR filter phase non-linearity is minimal and does not affect the diagnostic accuracy of the signals. The work undertaken also proposes two methods for reducing the hardware complexity of the popular biosignal processing tool, the Discrete Wavelet Transform (DWT). General-purpose multipliers are known to be hardware- and power-hungry in terms of the number of addition operations, or of their underlying building blocks such as the full adders or half adders required. A higher number of adders leads to an increase in power consumption, which is directly proportional to the clock frequency, supply voltage, switching activity and the resources utilized. A typical Field-Programmable Gate Array's (FPGA) resources are Look-Up Tables (LUTs), whereas a custom Digital Signal Processor's (DSP) are gate-level cells of standard cell libraries that are used to build adders [6]. One of the proposed methods is the replacement of the hardware- and power-hungry general-purpose multipliers and the coefficient memories with reconfigurable multiplier blocks that are composed of simple shift-add networks and multiplexers. This method substantially reduces the resource utilization as well as the power consumption of the system. The second proposed method is the design and implementation of the DWT filter banks using IIR filters, which require fewer arithmetic operations compared to the state-of-the-art FIR wavelets.
This reduces the hardware complexity of the analysis filter bank of the DWT and can be employed in applications where reconstruction is not required. However, the synthesis filter bank for the IIR wavelet transform has a higher computational complexity than the conventional FIR wavelet synthesis filter banks, since re-indexing of the filtered data sequence is required, which can only be achieved via the use of extra registers. This led to the proposal of a novel design which replaces the complex IIR-based synthesis filter banks with FIR filters that approximate the associated IIR filters. Finally, a comparative study is presented in which the hybrid IIR/FIR and FIR/FIR wavelet filter banks are deployed in a typical noise reduction scenario using wavelet thresholding techniques. It is concluded that the proposed hybrid IIR/FIR wavelet filter banks provide better denoising performance, reduced computational complexity and lower power consumption in comparison to their IIR/IIR and FIR/FIR counterparts.
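The multiplierless shift-add idea can be illustrated with a generic constant-coefficient decomposition: a fixed coefficient is broken into its power-of-two terms, so multiplication becomes a small network of shifts and adds. This is a minimal sketch of the principle, not the thesis's reconfigurable block design:

```python
def shift_add_terms(coeff):
    """Decompose a positive integer coefficient into the shift amounts of its
    set bits, e.g. 21 = 2^0 + 2^2 + 2^4 -> [0, 2, 4]."""
    terms, bit = [], 0
    while coeff:
        if coeff & 1:
            terms.append(bit)
        coeff >>= 1
        bit += 1
    return terms

def multiply_shift_add(x, coeff):
    """Multiply x by coeff using only shifts and adds (no general multiplier),
    mirroring how a hardware shift-add network would be wired."""
    return sum(x << s for s in shift_add_terms(coeff))
```

In hardware, each term is a hard-wired shift feeding an adder tree, and a multiplexer selecting between several such precomputed networks gives the reconfigurability between coefficients; canonical signed-digit recoding would further reduce the adder count.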

    Early Classification of Pathological Heartbeats on Wireless Body Sensor Nodes

    Smart Wireless Body Sensor Nodes (WBSNs) are a novel class of unobtrusive, battery-powered devices allowing the continuous monitoring and real-time interpretation of a subject's bio-signals, such as the electrocardiogram (ECG). These low-power platforms, while able to perform advanced signal processing to extract information on heart conditions, are usually constrained in terms of computational power and transmission bandwidth. It is therefore essential to identify at an early stage which parts of an ECG are critical for the diagnosis and, only in those cases, activate more detailed and computationally intensive analysis algorithms on demand. In this work, we present a comprehensive framework for real-time automatic classification of normal and abnormal heartbeats, targeting embedded and resource-constrained WBSNs. In particular, we provide a comparative analysis of different strategies to reduce the heartbeat representation dimensionality, and therefore the required computational effort. We then combine these techniques with a neuro-fuzzy classification strategy, which effectively discerns normal and pathological heartbeats with a minimal run-time and memory overhead. We prove that, by performing a detailed analysis only on the heartbeats that our classifier identifies as abnormal, a WBSN system can drastically reduce its overall energy consumption. Finally, we assess the choice of neuro-fuzzy classification by comparing its performance and workload with those of other state-of-the-art strategies. Experimental results using the MIT-BIH Arrhythmia database show energy savings of as much as 60% in the signal processing stage, and 63% in the subsequent wireless transmission, when a neuro-fuzzy classification structure is employed, coupled with a dimensionality reduction technique based on random projections.
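Dimensionality reduction by random projection, as mentioned above, can be sketched in a few lines: each heartbeat vector is projected onto k random directions, shrinking the representation from n to k values. The ±1 (Achlioptas-style) matrix and the sizes below are illustrative assumptions; the paper's exact projection is not reproduced here:

```python
import random

def random_projection(beat, k, seed=0):
    """Project a length-n heartbeat vector onto k fixed random +/-1 directions.
    The seed fixes the projection matrix so all beats share the same mapping;
    +/-1 entries avoid floating-point multipliers on embedded hardware."""
    rng = random.Random(seed)
    n = len(beat)
    projected = []
    for _ in range(k):
        row = [rng.choice((-1, 1)) for _ in range(n)]
        projected.append(sum(r * b for r, b in zip(row, beat)))
    return projected
```

The classifier then operates on the k-dimensional projection instead of the raw beat, which is where the computational savings come from; by the Johnson-Lindenstrauss argument, pairwise distances between beats are approximately preserved.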

    SPARSE RECOVERY BY NONCONVEX LIPSCHITZIAN MAPPINGS

    In recent years, the sparsity concept has attracted considerable attention in areas of applied mathematics and computer science, especially in signal and image processing. The general framework of sparse representation is now a mature concept with a solid basis in relevant mathematical fields, such as probability, the geometry of Banach spaces, harmonic analysis, theory of computability, and information-based complexity. Together with theoretical and practical advancements, several numerical methods and algorithmic techniques have also been developed in order to capture the complexity and the wide scope that the theory suggests. Sparse recovery relies on the fact that many signals can be represented in a sparse way, using only a few nonzero coefficients in a suitable basis or overcomplete dictionary. Unfortunately, this problem, also called ℓ0-norm minimization, is not only NP-hard, but also hard to approximate within an exponential factor of the optimal solution. Nevertheless, many heuristics for the problem have been developed and proposed for many applications. This thesis provides new regularization methods for the sparse representation problem, with application to face recognition and ECG signal compression. The proposed methods are based on a fixed-point iteration scheme which combines nonconvex Lipschitzian-type mappings with canonical orthogonal projectors. The former are aimed at uniformly enhancing the sparseness level through shrinking effects, the latter at projecting back onto the feasible space of solutions. In the second part of this thesis we study two applications in which sparseness has been successfully applied in recent areas of signal and image processing: the face recognition problem and the ECG signal compression problem.
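The shrinkage-plus-projection fixed point has the same shape as iterative hard thresholding, a standard scheme for this problem: take a gradient step toward fitting the measurements, then project onto the set of s-sparse vectors. The sketch below is that generic scheme on a tiny system, not the thesis's specific mappings:

```python
def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def hard_threshold(x, s):
    """Projection onto the s-sparse set: keep the s largest-magnitude entries."""
    keep = set(sorted(range(len(x)), key=lambda i: -abs(x[i]))[:s])
    return [x[i] if i in keep else 0.0 for i in range(len(x))]

def iht(A, y, s, iters=50, step=0.5):
    """Iterative hard thresholding for y = A x with x assumed s-sparse:
    a gradient step on ||y - A x||^2, then the sparse projection."""
    At = [list(col) for col in zip(*A)]  # transpose of A
    x = [0.0] * len(A[0])
    for _ in range(iters):
        residual = [yi - zi for zi, yi in zip(matvec(A, x), y)]
        grad = matvec(At, residual)
        x = hard_threshold([xi + step * gi for xi, gi in zip(x, grad)], s)
    return x
```

On the underdetermined system A = [[1,0,1],[0,1,1]], y = [0,3], the 1-sparse solution [0, 3, 0] is recovered even though the system has infinitely many dense solutions; the thresholding step is what plays the role of the nonconvex shrinkage mapping.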

    A survey of the application of soft computing to investment and financial trading


    Engineering studies of vectorcardiographs in blood pressure measuring systems, appendix 3

    The following subjects were covered: (1) the ASM80 manual, (2) signal preprocessing as an aid to on-line EKG analysis, and (3) high-speed evaluation of magnetic tape recordings of electrocardiograms. A description of the ASM80 symbolic assembly program for the INTEL 8080 microprocessor and a user's manual were presented. The capability of three redundancy reduction algorithms to produce adequate representations of electrocardiographic data was examined. A hardware device was constructed which carried out zero-order interpolation on a signal. Examination of the zero-order interpolator's reconstructed signal indicated that this representation was adequate for analysis of rhythm. A system to analyze magnetic tapes of electrocardiograms recorded over 24-hour intervals was designed. The recordings are sampled 200 times per second using a Nova computer and a special interface system. This system was tested on several recordings of clinical data, containing over 75 premature ventricular contractions, each one of which was flagged.
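A zero-order interpolator of the kind described performs redundancy reduction by holding the last emitted sample until the signal drifts outside a tolerance band; only the breakpoints are stored. A minimal software sketch (the tolerance-based rule is the standard formulation; parameter names are illustrative):

```python
def zero_order_compress(samples, tol):
    """Zero-order interpolation: emit (index, value) only when the signal
    deviates from the currently held value by more than tol."""
    if not samples:
        return []
    out = [(0, samples[0])]
    held = samples[0]
    for i, s in enumerate(samples[1:], 1):
        if abs(s - held) > tol:
            out.append((i, s))
            held = s
    return out

def zero_order_reconstruct(pairs, n):
    """Rebuild a length-n signal by holding each stored value until the
    next breakpoint; reconstruction error stays within tol per sample."""
    sig, j = [], 0
    for i in range(n):
        if j + 1 < len(pairs) and i >= pairs[j + 1][0]:
            j += 1
        sig.append(pairs[j][1])
    return sig
```

Flat ECG segments between beats collapse to a single breakpoint each, which is why the staircase reconstruction remains adequate for rhythm analysis while greatly reducing the data rate.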

    Automatic and robust detection of bursts in EEG of newborns with HIE: a tensor-based approach

    Hypoxic-Ischemic Encephalopathy (HIE) is an important cause of brain injury in the newborn, and can result in long-term devastating consequences. The burst-suppression pattern is one of several indicators of severe pathology in the EEG signal that may occur after brain damage caused by, for example, asphyxia around the time of birth. The goal of this thesis is to design a robust method to detect burst patterns automatically, regardless of the physiological and extra-physiological artifacts that may occur at any time. First, a pre-detector was designed to obtain potential burst candidates from different patients. Then, a post-classification stage was implemented, applying high-dimensional feature extraction methods, to identify the true burst patterns from these patients with high sensitivity.
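A crude pre-detector of the kind described might simply flag high-energy windows as burst candidates, leaving the harder discrimination to the feature-based post-classifier. The window length, threshold and mean-absolute-amplitude criterion below are illustrative assumptions, not the thesis's actual detector:

```python
def burst_candidates(eeg, win, thresh):
    """Flag window start indices whose mean absolute amplitude exceeds thresh.
    Returns candidate positions for a downstream feature-based classifier."""
    hits = []
    for start in range(0, len(eeg) - win + 1, win):
        window = eeg[start:start + win]
        if sum(abs(v) for v in window) / win > thresh:
            hits.append(start)
    return hits
```

Such a threshold stage is deliberately high-recall: artifacts will also be flagged, and it is the job of the high-dimensional (here, tensor-based) post-classification to reject them.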