
    Modeling and simulation of speed selection on left ventricular assist devices

    The control problem for LVADs is to set pump speed such that cardiac output and perfusion pressure remain within acceptable physiological ranges. However, current LVAD technology cannot provide a closed-loop control scheme that makes adjustments based on the patient's level of activity. In this context, the SensorART Speed Selection Module (SSM) integrates various hardware and software components in order to improve the quality of the patients' treatment and the workflow of the specialists. It enables specialists to better understand patient-device interactions and improve their knowledge. The SensorART SSM includes two tools of the Specialist Decision Support System (SDSS), namely the Suction Detection Tool and the Speed Selection Tool. A VAD Heart Simulation Platform (VHSP) is also part of the system. The VHSP enables specialists to simulate the behavior of a patient's circulatory system using different LVAD types and functional parameters. The SDSS is a web-based application that offers specialists a plethora of tools for monitoring, designing the best therapy plan, analyzing data, extracting new knowledge, and making informed decisions. In this paper, two of these tools, the Suction Detection Tool and the Speed Selection Tool, are presented. The former allows the analysis of simulation sessions from the VHSP and the identification of issues related to the suction phenomenon with high accuracy (93%). The latter provides specialists with powerful support in their attempt to effectively plan the treatment strategy, allowing them to draw conclusions about the most appropriate pump speed settings. Preliminary assessments connecting the Suction Detection Tool to the VHSP are presented in this paper.
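
    As a rough illustration of the speed-selection problem stated above (not the SensorART algorithm), the sketch below sweeps candidate pump speeds through a toy circulation model standing in for the VHSP and keeps only the speeds whose cardiac output and mean arterial pressure stay inside illustrative target ranges without suction. All coefficients, thresholds, and function names are invented for the example.

```python
# Minimal sketch of the speed-selection idea: sweep candidate pump speeds and
# keep those with acceptable cardiac output (CO) and mean arterial pressure
# (MAP) and no suction. The toy model below is a hypothetical stand-in for a
# VHSP simulation run; its coefficients are invented for illustration only.

def toy_circulation(speed_rpm):
    """Hypothetical circulation response: returns (CO [L/min], MAP [mmHg], suction?)."""
    co = 2.0 + 0.0012 * (speed_rpm - 8000)      # CO rises with pump speed
    map_ = 60.0 + 0.012 * (speed_rpm - 8000)    # so does MAP
    suction = speed_rpm > 10500                 # excessive speed over-unloads the ventricle
    return co, map_, suction

CO_RANGE = (4.0, 6.0)       # illustrative physiological targets, not clinical values
MAP_RANGE = (70.0, 90.0)

def acceptable_speeds(speeds):
    ok = []
    for s in speeds:
        co, map_, suction = toy_circulation(s)
        if CO_RANGE[0] <= co <= CO_RANGE[1] and MAP_RANGE[0] <= map_ <= MAP_RANGE[1] and not suction:
            ok.append((s, co, map_))
    return ok

for s, co, map_ in acceptable_speeds(range(8000, 12001, 250)):
    print(f"{s} rpm -> CO {co:.1f} L/min, MAP {map_:.0f} mmHg")
```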

    Analysis of biological tissues using biosignals: case of fetal electrocardiogram

    During pregnancy, the flow of oxygen and nutrients to the fetus and the removal of carbon dioxide and other waste gases from it are achieved through the placenta. Adequate blood flow to and from the placenta, in both the maternal and fetal circulations, is necessary for the baby to receive enough oxygen and to be able to expel carbon dioxide and other waste gases. Any alteration in placental function can reduce the delivery of oxygen to the baby, a condition known as fetal hypoxia. The motivation for monitoring the fetus during pregnancy is to recognize pathological conditions, typically decreased oxygen saturation, with sufficient warning to enable intervention by the clinician before irreversible changes take place. Scientists have been working for decades to develop new technologies for continuous intrapartum fetal monitoring. Early approaches monitored the fetal heart rate (fHR) and the mother's uterine contractions (cardiotocography, CTG). In CTG, fHR is monitored using an ultrasound transducer strapped to the mother's abdomen, while uterine activity is recorded from an external toco sensor. Monitoring using CTG mainly identifies fetuses affected by intrapartum asphyxia, resulting in early intervention and a reduction in cerebral palsy. Unfortunately, a large number of fetuses show fHR changes without being asphyxiated. Thus, electronic fHR monitoring based on CTG has poor specificity in detecting fetal hypoxia and cannot provide all the information that is required. This has created an increased rate of intervention and uncertainty about the clinical value of CTG. Doppler ultrasound is widely used by medical doctors to monitor fHR. However, beyond some specific disadvantages (e.g., the need for experienced personnel, specialized equipment, and use in hospital environments), the major limitation of Doppler ultrasound is its sensitivity to any movement. Movement of the mother can produce Doppler-shifted reflected waves that are stronger than the cardiac signal, so the technique is not suitable for long-term monitoring of fHR as it requires the patient to be bed-rested. In addition, a number of publications have linked diagnostic ultrasound to an increase in intrauterine growth restriction (IUGR) and to the stimulation of endothelial cell growth and the release of adenosine triphosphate (ATP), although the effect of ultrasound on the fetus is not completely clear. Important clinical studies support the incorporation of ST waveform analysis into fHR analysis for intrapartum monitoring, with reductions in the rates of neonatal metabolic acidosis as well as neonatal encephalopathy. ST waveform analysis is mainly performed on the fetal ECG (fECG) signal, recorded using a fetal scalp electrode. Repolarisation of myocardial (heart muscle) cells is very sensitive to metabolic dysfunction, which may be reflected in changes of the ST waveform. The changes in fECG associated with fetal hypoxia are either an increase in T-wave amplitude, quantified by the ratio of the T-wave to the QRS amplitude (T/QRS ratio), or a biphasic ST pattern; the combination of these features with fHR pattern analysis and additional clinical information can lead to accurate identification of hypoxia cases and to the avoidance of unnecessary interventions. However, application of a fetal scalp electrode carries the risk of maternal-to-fetal infection, which contraindicates any invasive monitoring technique.
In addition, as an invasive technique, this type of fetal monitoring is less acceptable to pregnant women and midwives than external monitoring. Also, as the system responds primarily to changes in the ST segment, if it is applied after such changes have already occurred, there is the possibility of a false-negative result (inappropriate reassurance about the fetal condition). Thus, automated assessment of fetal cardiac health status based on non-invasive monitoring techniques is an important issue that must be investigated.
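
    The T/QRS ratio mentioned above lends itself to a short sketch. The snippet below shows how the ratio could be computed for a single fECG beat, assuming fiducial points (R peak, T-wave peak, an isoelectric baseline sample) have already been located by an upstream detector that is not shown; the beat, indices, and amplitudes are invented for illustration and no clinical threshold is implied.

```python
import numpy as np

def t_qrs_ratio(beat, r_idx, t_idx, baseline_idx):
    """T/QRS ratio of one fECG beat, measured against an isoelectric baseline
    sample (e.g. on the PQ segment). Indices are assumed to come from an
    upstream fiducial-point detector."""
    baseline = beat[baseline_idx]
    qrs_amp = beat[r_idx] - baseline
    t_amp = beat[t_idx] - baseline
    return t_amp / qrs_amp

# toy beat: flat baseline, an R peak of 1.0 mV and a T-wave peak of 0.12 mV
beat = np.zeros(500)
beat[200] = 1.0           # R peak (hypothetical location)
beat[380] = 0.12          # T-wave peak (hypothetical location)
ratio = t_qrs_ratio(beat, r_idx=200, t_idx=380, baseline_idx=100)
print(f"T/QRS = {ratio:.2f}")  # rises relative to the fetus's own baseline are
                               # what ST analysis tracks over time
```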

    A Traffic-Load-Based Algorithm for Wireless Sensor Networks’ Lifetime Extension

    It has been shown in the literature that the lifetime of a wireless sensor network is heavily dependent on the number of transmissions that network nodes have to undertake. Considering this finding, along with the effects of the energy hole problem, where nodes closer to the sink node transmit more than the more distant ones, a node close to the sink will be the one that transmits the most, and it will also be the node that depletes its battery first. Taking into consideration that the failure of a single node to operate, due to a discharged battery, can cause the whole network to stop operating, the most energy-consuming node will also be the one responsible for the network's termination. In this sense, optimizing the energy consumption of the most energy-consuming node is the main focus of this paper. More specifically, it is first shown that the energy consumption of a wireless sensor network is closely related to each node's traffic load, that is, the transmissions of packets created or forwarded by that node. The minimization of the most energy-consuming node's energy consumption is studied, and the implementation of a traffic-load-based algorithm is proposed. Under the proposed algorithm, given a simple shortest-path approach that assigns a parent (i.e., the next hop towards the sink node) to each network node and the knowledge it provides about the distance (in hops, in this paper's case) of network nodes from the sink, the algorithm exploits the shortest-path results to discover, for every node, neighbors that are at the same distance from the sink as the initially assigned parent. If such neighbors exist, they are all burdened equally with the parenting role, so the traffic load is shared among them. To evaluate the proposed algorithm, simulation results are provided, showing that the goals set were achieved and the network lifetime was prolonged. In addition, it is shown that under the algorithm a fairer distribution of the traffic load takes place.
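
    A minimal sketch of the parent-sharing idea described above, assuming hop distances come from a simple BFS shortest-path pass. The toy topology and the round-robin load-sharing policy are illustrative choices, not the paper's exact implementation.

```python
from collections import deque
import itertools

def hop_distances(adj, sink):
    """BFS hop count of every node from the sink."""
    dist = {sink: 0}
    q = deque([sink])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def parent_sets(adj, dist):
    """All neighbours one hop closer to the sink, i.e. at the same distance as
    the initially assigned shortest-path parent, share the parenting role."""
    return {u: [v for v in adj[u] if dist.get(v) == dist[u] - 1]
            for u in adj if dist.get(u, 0) > 0}

def make_forwarder(parents):
    """Round-robin over the eligible parents so the traffic load is shared equally."""
    cycles = {u: itertools.cycle(ps) for u, ps in parents.items() if ps}
    def next_hop(u):
        return next(cycles[u])
    return next_hop

# toy 6-node topology; node 0 is the sink
adj = {0: [1, 2], 1: [0, 3, 4], 2: [0, 3, 4], 3: [1, 2, 5], 4: [1, 2, 5], 5: [3, 4]}
dist = hop_distances(adj, sink=0)
parents = parent_sets(adj, dist)
forward = make_forwarder(parents)
print(dist)                             # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}
print(parents)                          # e.g. node 3 may forward through 1 or 2
print([forward(5) for _ in range(4)])   # alternates between its two parents, 3 and 4
```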

    Characterization of an X-ray Source Generated by a Portable Low-Current X-Pinch

    An X-pinch scheme on a low-current generator (45 kA, 50 ns rise time) is characterized as a potentially efficient source of soft X-rays. The X-pinch target consists of wires 5 μm in diameter, made from either tungsten (W) or gold (Au)-plated W, loaded at two angles of 55° and 98° between the crossed wires. Time-resolved soft X-ray emission measurements are performed to provide a secure correlation with the optical probing results. A procedure for reconstructing the actual photodiode current profile was adopted, capable of overcoming the limitations imposed by the diodes' slow rise and fall times and by noise. The pure and Au-plated W wires deliver an average X-ray yield that depends only on the angle of the crossed wires and is measured to be ~50 mJ and ~70 mJ for the 98° and 55° crossed-wire angles, respectively. An additional experimental setup was developed to characterize the X-pinch as a source of X-rays with energies higher than ~6 keV, via time-integrated measurements. The X-ray emission spectrum was found to have an upper limit at 13 keV for the Au-plated W configuration at 55°. The portable tabletop X-pinch proved to be well suited to X-ray radiography applications, such as the detection of interior defects in biological samples.
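
    The abstract does not specify the photodiode reconstruction procedure, so the sketch below assumes a simple first-order (single-pole) diode response, tau * dy/dt + y = x, and inverts it after light smoothing to suppress noise. The time constant, pulse shape, and noise level are invented purely for the demonstration.

```python
import numpy as np

def reconstruct_xray_profile(diode_signal, dt, tau, smooth_n=5):
    """Illustrative reconstruction of the 'true' X-ray power profile from a slow
    photodiode trace, assuming a first-order diode response tau*dy/dt + y = x.
    A moving-average pre-filter tames noise before the derivative term is added."""
    kernel = np.ones(smooth_n) / smooth_n
    y = np.convolve(diode_signal, kernel, mode="same")  # noise suppression
    dydt = np.gradient(y, dt)                           # numerical derivative
    return y + tau * dydt                               # invert the single-pole response

# synthetic demo: a 5 ns Gaussian burst blurred by a 20 ns diode time constant
dt, tau = 0.5e-9, 20e-9
t = np.arange(0, 400e-9, dt)
true_x = np.exp(-((t - 100e-9) / 5e-9) ** 2)
y = np.zeros_like(t)
for i in range(1, len(t)):                              # simulate the slow diode
    y[i] = y[i - 1] + dt / tau * (true_x[i] - y[i - 1])
y += np.random.normal(0, 0.01, len(t))                  # add measurement noise
x_rec = reconstruct_xray_profile(y, dt, tau)
print(f"peak of blurred trace: {y.max():.2f}, reconstructed: {x_rec.max():.2f}")
```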

    Evaluating the Window Size’s Role in Automatic EEG Epilepsy Detection

    Electroencephalography (EEG) is one of the most commonly used methods for extracting information about the brain's condition and can be used for diagnosing epilepsy. The EEG signal's wave shape contains vital information about the brain's state, which can be challenging for a human observer to analyse and interpret. Moreover, the characteristic waveforms of epilepsy (sharp waves, spikes) can occur randomly over time. For all these reasons, automatic EEG signal extraction and analysis using computers can significantly improve the successful diagnosis of epilepsy. This research explores the impact of different window sizes on the classification accuracy of EEG signals using four machine learning classifiers: a neural network with ten hidden nodes trained with three different training algorithms, and the k-nearest neighbours classifier. The neural network training methods were the Broyden–Fletcher–Goldfarb–Shanno algorithm, the multistart method for global optimization problems, and a genetic algorithm. The study used the University of Bonn EEG dataset, divided into epochs with 50% overlap and window lengths ranging from 1 to 24 s. Statistical and spectral features were then extracted and used to train the four classifiers. The outcome of these experiments showed that large window sizes, of about 21 s, had a positive impact on classification accuracy across the compared methods.
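
    The windowing and feature-extraction pipeline described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: synthetic signals stand in for the Bonn recordings (assumed here to be sampled at the commonly cited 173.61 Hz), the feature set is a minimal example of "statistical and spectral" features, and only the k-NN classifier is shown.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

FS = 173.61  # assumed sampling rate of the Bonn recordings (Hz)

def windows(signal, win_s, fs=FS, overlap=0.5):
    """Split a 1-D EEG trace into windows of win_s seconds with 50% overlap."""
    size = int(win_s * fs)
    step = int(size * (1 - overlap))
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def features(epoch, fs=FS):
    """Statistical + spectral (band-power) features of one epoch."""
    f, pxx = welch(epoch, fs=fs, nperseg=min(256, len(epoch)))
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 45)]  # delta..gamma
    band_power = [pxx[(f >= lo) & (f < hi)].sum() for lo, hi in bands]
    return [epoch.mean(), epoch.std(), skew(epoch), kurtosis(epoch), *band_power]

# synthetic stand-in data: class 0 = background-like noise, class 1 = spiky
rng = np.random.default_rng(0)
signals = [rng.normal(0, 50, 4097) + cls * 300 * (rng.random(4097) > 0.99)
           for cls in (0, 1) for _ in range(20)]
labels = [cls for cls in (0, 1) for _ in range(20)]

win_s = 21  # the window length the study found to work well
X = [np.mean([features(w) for w in windows(s, win_s)], axis=0) for s in signals]
knn = KNeighborsClassifier(n_neighbors=3)
print("CV accuracy:", cross_val_score(knn, X, labels, cv=5).mean())
```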