
    Mathematical modelling of end-to-end packet delay in multi-hop wireless networks and their applications to QoS provisioning

    This thesis addresses the mathematical modelling of end-to-end packet delay for Quality of Service (QoS) provisioning in multi-hop wireless networks. Multi-hop wireless technology increases capacity and coverage in a cost-effective way and has been standardised in the Fourth-Generation (4G) standards. The effective capacity model approximates end-to-end delay performance, including the Complementary Cumulative Distribution Function (CCDF) of delay, average delay, and jitter. This model is first tested using an Internet traffic trace from a real gigabit Ethernet gateway. The effective capacity model was developed for single-hop, continuous-time communication systems, whereas a multi-hop wireless system is better described as multi-hop and time-slotted. The thesis extends the effective capacity model to take the multi-hop and time-slotted concepts into account, resulting in two new mathematical models: the multi-hop effective capacity model for multi-hop networks and the mixed continuous/discrete-time effective capacity model for time-slotted networks. Two scenarios, both assuming ideal wireless communications (the physical-layer instantaneous transmission rate equals the Shannon channel capacity), are considered to validate these models: 1) packets traverse multiple wireless network devices, and 2) packets are transmitted to or received from a wireless network device every Transmission Time Interval (TTI). The results from these two scenarios consistently show that the new mathematical models developed in the thesis characterise end-to-end delay performance accurately. Accurate and efficient estimators of end-to-end packet delay play a key role in QoS provisioning in modern communication systems. The estimators from the new effective capacity-based models are tested directly in two systems faithfully created using realistic simulation techniques: 1) IEEE 802.16-2004 networks and 2) wireless tele-ultrasonography medical systems. The results show that the estimation and simulation results are in good agreement in terms of end-to-end delay performance
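The exponential tail behaviour that effective capacity theory predicts for the delay CCDF, P(D > d) ≈ γ·e^(−θd), can be illustrated with a small sketch. All values below are synthetic, and the tail-fitting procedure is a generic illustration of the model's standard approximation, not the thesis's exact estimator:

```python
import numpy as np

def delay_ccdf(delays, targets):
    """Empirical CCDF P(D > d) of end-to-end packet delays."""
    delays = np.asarray(delays)
    return np.array([(delays > d).mean() for d in targets])

def fit_exponential_tail(delays, d_min):
    """Fit P(D > d) ~ gamma * exp(-theta * d) on the tail d >= d_min
    by linear regression on log P(D > d)."""
    ds = np.linspace(d_min, np.quantile(delays, 0.999), 50)
    ccdf = delay_ccdf(delays, ds)
    mask = ccdf > 0                       # keep only non-empty tail bins
    slope, intercept = np.polyfit(ds[mask], np.log(ccdf[mask]), 1)
    return -slope, np.exp(intercept)      # theta, gamma

rng = np.random.default_rng(0)
delays = rng.exponential(scale=2.0, size=100_000)  # synthetic delays, mean 2 time units
theta, gamma = fit_exponential_tail(delays, d_min=1.0)
print(theta, gamma)  # theta expected near 1/2.0 and gamma near 1 for this synthetic data
```

For truly exponential delays the fitted pair recovers the generating parameters; for a real multi-hop trace, θ and γ would be the quantities the effective capacity model supplies analytically.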

    Design and evaluation of echocardiograms codification and transmission for Teleradiology systems

    Cardiovascular diseases are the leading cause of death worldwide. Although most deaths from heart disease can be avoided, patients may die if preventive measures are inadequate, so the monitoring and diagnosis of patients with heart disease is very important. Many medical tests exist for diagnosing and monitoring cardiovascular disease, echocardiography being one of the most widely used. An echocardiogram consists of acquiring images of the heart with ultrasound. It has several advantages over other imaging tests: it is non-invasive, produces no ionising radiation, and is inexpensive. Telemedicine systems, meanwhile, have grown rapidly because they improve access to medical services, reduce costs, and improve the quality of care. Telemedicine provides medical services at a distance; these services are especially helpful in medical emergencies and in isolated areas far from hospitals and health centres. Tele-cardiology systems can be classified according to the type of test. This thesis focuses on tele-echocardiography systems, since echocardiograms are widely used and pose the greatest challenge, being the medical test with the highest data rate. The main challenges in tele-echocardiography are compression and transmission while guaranteeing that the same diagnosis is possible from the echocardiogram reproduced after compression and transmission as from the original. Echocardiograms must be compressed both for storage and for transmission, because their enormous data volume would overwhelm storage capacity and could not be transmitted efficiently over current networks.
However, compression introduces losses that can lead to an erroneous diagnosis from the compressed echocardiogram. When echocardiographic studies are to be archived, a clinical compression can be applied before storage. This clinical compression consists of keeping only the parts of the echocardiogram that matter for diagnosis, i.e., certain images and short videos of the beating heart containing 1 to 3 cardiac cycles. Clinical compression cannot be applied to real-time transmission, because it must be performed by the specialist cardiologist, who is at the receiving end viewing the transmitted echocardiogram. Regarding transmission, wireless networks pose a greater challenge than wired ones: they have limited bandwidth, are error-prone, and are time-varying, which can be problematic when the echocardiogram must be transmitted in real time. At the same time, wireless networks have developed enormously because they allow better access and mobility, and can therefore offer a wider service than wired networks. Two types of systems can be distinguished according to their challenges: store-and-forward systems and real-time systems. Store-and-forward systems consist of acquiring, storing, and later sending the echocardiogram without timing requirements. Clinical compression can be performed before storage; in addition, lossy compression is recommended to reduce storage space and transmission time without losing the diagnostic information of the study. Since there are no timing requirements, the transmission itself presents no difficulty.
Any reliable transport protocol can be used so that no image quality is lost in transmission; for these systems we therefore focus only on echocardiogram coding. Real-time systems transmit the echocardiogram while it is being acquired. Since clinical video streaming is one of the most bandwidth-demanding applications, compression is required for transmission while preserving the diagnostic quality of the image. Transmission over wireless channels can suffer errors that distort the quality of the echocardiogram reconstructed at the receiver, so error-control methods are required to minimise transmission errors and the delay they introduce. However, even if the echocardiogram is displayed with transmission errors, diagnosis may still be possible. Given these challenges, the following solutions for clinical evaluation, compression, and transmission are proposed: - To guarantee that the echocardiogram is displayed without losing diagnostic information, two tests were designed. The first test defines recommendations for echocardiogram compression. It consists of two phases, saving evaluation time without losing accuracy in the evaluation process. With this test the echocardiogram can be compressed as much as possible without losing diagnostic quality, thus using resources more efficiently. The second test defines recommendations for echocardiogram display: it defines time ranges during which the echocardiogram may be displayed at a quality below that established in the first test.
With this second test one can know whether the echocardiogram is displayed without loss of diagnostic quality when display errors occur, without needing a new evaluation for every transmitted video or every channel condition. This methodology can also be applied to evaluate other diagnostic imaging techniques. - For echocardiogram compression, two methods were designed, one for storage and one for transmission, since echocardiograms for the two purposes have different characteristics. Both methods exploit the image-segmentation facilities built into the acquisition devices and take the display characteristics of echocardiograms into account. For storage, a region-based storage format was designed that is easily integrated with DICOM and accounts for the data type and clinical importance of each region. DICOM is currently the most widely used format for storing and transmitting medical images. The proposed format saves up to 75% of storage space compared with JPEG 2000 compression, currently supported by DICOM, without losing diagnostic image quality. The compression ratios of the proposed format depend on the image layout, but for a database of 105 echocardiograms from 4 ultrasound scanners the ratios obtained range from 19 to 41. For real-time transmission, a region-based compression method was designed that takes the data type and display mode into account.
Two display modes are distinguished for compressing the region of greatest clinical importance (the ultrasound region): sweep modes and 2-D modes. The clinical evaluation designed for the compression recommendations was carried out by 3 cardiologists with 9 echocardiograms from different patients and 3 different ultrasound scanners. The recommended transmission rates were 200 kbps for 2-D modes and 40 kbps for sweep modes. Compared with previous solutions in the literature, a minimum saving of between 5% and 78% is obtained, depending on the mode. - For real-time transmission, an end-to-end protocol based on the region-based compression method was designed. This protocol, called ETP (Echocardiogram Transmission Protocol), compresses and transmits the regions separately, and can therefore apply different compression ratios and error protection to the different regions according to their diagnostic importance. With ETP the minimum recommended transmission rate for the proposed compression method can be used, exploiting bandwidth efficiently and being less sensitive to errors introduced by the network. ETP can be used over any network; for networks that introduce errors, an error-control method called SECM (State Error Control Method) was designed. SECM adapts to channel conditions, applying more protection as conditions worsen, thus using bandwidth efficiently. In addition, the clinical evaluation designed for the display recommendations was carried out on the database of the previous evaluation, so one can know whether the echocardiogram is displayed without diagnostic loss even when transmission errors occur.
This thesis therefore offers a solution for the real-time transmission and storage of echocardiograms that preserves diagnostic information and uses resources (storage space and transmission rate) efficiently. Special support is given to transmission over wireless networks, addressing the limitations they introduce. The proposed solutions were tested and compared with other techniques over a WiMAX mobile access network, showing that bandwidth is used efficiently and that the echocardiogram is correctly displayed according to the display recommendations given by the clinical evaluation
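The region-based transmission scheme above assigns different rates to regions according to clinical importance, with recommended rates of 200 kbps (2-D modes) and 40 kbps (sweep modes) for the ultrasound region. A minimal budget-allocation sketch; the side-region names and weights are hypothetical illustrations, not the thesis's actual regions:

```python
# Recommended rates for the ultrasound region, from the clinical evaluation above.
RECOMMENDED_US_KBPS = {"2d": 200, "sweep": 40}

def allocate_rates(side_regions, mode, budget_kbps):
    """Reserve the recommended rate for the ultrasound region, then split
    the remaining budget across side regions by clinical weight."""
    us = RECOMMENDED_US_KBPS[mode]
    if budget_kbps < us:
        raise ValueError("budget below the recommended ultrasound rate")
    remaining = budget_kbps - us
    total_w = sum(w for _, w in side_regions)
    rates = {"ultrasound": us}
    for name, w in side_regions:
        rates[name] = remaining * w / total_w
    return rates

# Hypothetical side regions: an ECG trace strip and a text overlay.
print(allocate_rates([("ecg_trace", 2), ("text_overlay", 1)], "2d", budget_kbps=260))
```

The design point is that the diagnostically critical region always receives its validated minimum rate, so network variation degrades only the less important regions first.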

    Electrodes' Configuration Influences the Agreement Between Surface EMG and B-Mode Ultrasound Detection of Motor Unit Fasciculation

    Muscle fasciculations, resulting from the spontaneous activation of motor neurons, may be associated with neurological disorders and are often assessed with intramuscular electromyography (EMG). Recently, however, both ultrasound (US) imaging and multichannel surface EMG have been shown to be more sensitive to fasciculations. In this study we combined these two techniques to compare their detection sensitivity to fasciculations occurring in different muscle regions and to investigate the effect of the EMG electrode configuration on their agreement. Monopolar surface EMGs were collected from the medial gastrocnemius and soleus with an array of 32 electrodes (10 mm Inter-Electrode Distance, IED), simultaneously with B-mode US images detected alongside either the proximal, central, or distal electrode groups. Fasciculation potentials (FPs) were identified from single-differential EMGs with 10 mm (SD1), 20 mm (SD2) and 30 mm (SD3) IEDs, and fasciculation events (FEs) from US image sequences. The number, location, and size of FEs and FPs in 10 healthy participants were analyzed. Overall, the two techniques showed similar sensitivities to muscle fasciculations. US was equally sensitive to FEs occurring in the proximal and distal calf regions, while the number of FPs revealed by EMG increased significantly with the IED and was larger distally, where the depth of FEs decreased. The agreement between the two techniques was relatively low, with the percentage of fasciculations classified as common ranging from 22% for the smallest IED to 68% for the largest. The considerable number of events uniquely detected by each technique is discussed in terms of the different spatial sensitivities of EMG and US, which suggests that a combination of US and EMG is likely to maximise the sensitivity to muscle fasciculations
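The single-differential montages (SD1, SD2, SD3) derived from a monopolar array are simply differences between electrodes one, two, or three positions apart along the array (10, 20, or 30 mm at 10 mm IED). A minimal sketch with synthetic data in place of real EMG recordings:

```python
import numpy as np

def single_differential(monopolar, order):
    """Single-differential montage: difference between electrodes `order`
    positions apart (order=1 -> SD1 at 10 mm IED, 2 -> SD2, 3 -> SD3)."""
    return monopolar[order:] - monopolar[:-order]

rng = np.random.default_rng(1)
emg = rng.standard_normal((32, 2048))  # 32 monopolar channels x samples (synthetic)
sd1 = single_differential(emg, 1)      # 31 channels, 10 mm IED
sd3 = single_differential(emg, 3)      # 29 channels, 30 mm IED
print(sd1.shape, sd3.shape)
```

Widening the IED enlarges the spatial pickup volume of each differential channel, which is consistent with the larger number of FPs detected at SD3 in the study.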

    Assessing patient and caregiver intent to use mobile device videoconferencing for remote mechanically-ventilated patient management

    The Michigan Medicine adult Assisted Ventilation Clinic (AVC) supports patients with neuromuscular disorders and spinal cord injuries and their caregivers at home, helping them avoid expensive emergency department visits, hospitalization, and unnecessary or excessive treatments. Mobile device videoconferencing provides an effective capability for remote mechanically-ventilated patient management, but it must rely on an unknown infrastructure comprising patient and caregiver mobile device ownership, connectivity, and experience, as well as intent to use the service if provided. The purpose of this study was to measure the extent of this infrastructure and the perceived ease of use, perceived usefulness, and intent to use this mobile device capability, using a questionnaire based on the technology acceptance model (TAM). Of 188 patients and caregivers asked, 153 (n = 153) respondents completed a questionnaire comprising 14 demographic and 24 Likert-type questions. Inferential results indicated a significant correlation between the perceived ease of use (PEU) and perceived usefulness (PU) of mobile devices in remote care and the intent to use them (p < .001). Mobile device ownership/access also correlated significantly with PEU and PU (p = .003 and .004, respectively), but not with intent to use. No single demographic variable (age, distance to the AVC, diagnosis, mobile device experience, tracheostomy, etc.) significantly correlated with intent to use. Descriptive results indicated a significant patient/caregiver-provided infrastructure: 96% have cellular/WiFi/Internet access, 91% own or have access to mobile devices, 77% have downloaded apps, 68% have used videoconferencing, and 80% own between two and five ICT devices
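Correlations between TAM constructs such as PEU and intent to use are typically Pearson coefficients computed over per-respondent construct scores (the mean of the relevant Likert items). The sketch below uses hypothetical scores, not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Hypothetical per-respondent construct scores (mean of the relevant Likert items).
peu    = [4.2, 3.8, 4.5, 2.1, 3.0, 4.8, 3.6]
intent = [4.0, 3.5, 4.7, 2.4, 2.9, 4.9, 3.8]
print(round(pearson_r(peu, intent), 3))  # strongly positive for these toy scores
```

A significance test (as reported in the study, p < .001) would additionally compare the coefficient against its sampling distribution for n = 153.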

    Joint source and channel coding


    Toward a Discourse Community for Telemedicine: A Domain Analytic View of Published Scholarship

    In the past 20 years, the use of telemedicine has increased, with telemedicine programs increasingly being conducted over Internet and ISDN technologies. The purpose of this dissertation is to examine the discourse community of telemedicine. This study examined the published literature on telemedicine as it pertains to quality of care, defined as correct diagnosis and treatment (Bynum and Irwin 2011). Content analysis and bibliometrics were conducted on the scholarly discourse, and the most prominent authors and journals were documented to depict the epistemological map of the discourse community of telemedicine. A taxonomy grounded in the scholarly literature was developed and validated against other existing taxonomies. Telemedicine has been found to increase the quality of and access to health care and to decrease health care costs (Heinzelmann, Williams, Lugn and Kvedar 2005; Wootton and Craig 1999). Patients in rural areas with no specialist, or patients who find it difficult to get to a doctor's office, benefit from telemedicine. Little research thus far has examined scholarly journals in order to aggregate and analyze the prevalent issues in the discourse community of telemedicine. This dissertation empirically documents the prominent topics and issues in telemedicine by examining the related published scholarly discourse during a snapshot in time. The study contributes to the field by offering a comprehensive taxonomy of the leading authors and journals in telemedicine, and informs clinicians, librarians, and other stakeholders, including those who may want to implement telemedicine in their institution, about issues in telemedicine
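At its core, the bibliometric identification of prominent authors reduces to frequency tabulation over extracted author lists. A minimal sketch with a small hypothetical sample (author names chosen to echo the works cited above, not the dissertation's actual corpus):

```python
from collections import Counter

# Hypothetical author lists extracted from a sample of telemedicine papers.
papers = [
    ["Wootton R", "Craig J"],
    ["Heinzelmann P", "Williams C", "Lugn N", "Kvedar J"],
    ["Wootton R"],
    ["Bynum A", "Irwin C"],
    ["Wootton R", "Bynum A"],
]
author_counts = Counter(author for paper in papers for author in paper)
print(author_counts.most_common(2))  # most prolific authors in this toy sample
```

The same tabulation applied to journals (and to coded content categories) yields the ranked lists from which the discourse community's epistemological map is drawn.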

    Optimization and validation of a new 3D-US imaging robot to detect, localize and quantify lower limb arterial stenoses

    Atherosclerosis is a disease in which the accumulation of lipid plaques causes the remodeling and hardening of the arterial wall and a progressive narrowing of the lumen. These lesions are generally located on the coronary, carotid, aortic, renal, digestive, and peripheral arteries; among peripheral vessels, the lower limb arteries are particularly often affected. The severity of arterial lesions is usually evaluated by the degree of stenosis (a reduction > 50% of the lumen diameter) using angiography, magnetic resonance angiography (MRA), computed tomography (CT), or ultrasound (US). However, to plan a surgical intervention, a 3D representation of the arterial geometry is preferable. Cross-sectional imaging methods (MRA and CT) generate three-dimensional images of good quality, but they are expensive and invasive for patients. 3D ultrasound is a promising imaging avenue for localizing and quantifying stenoses: it offers distinct advantages in convenience and low cost for a non-invasive diagnosis (no ionizing radiation and no nephrotoxic contrast agent), along with the option of Doppler analysis to quantify blood flow. Since medical robots have already been used successfully in surgery and orthopedics, our team designed a new 3D-US robotic imaging system to detect and quantify lower limb arterial stenoses. With this new technology, a radiologist manually teaches the robotic arm the ultrasound scanning path over the vessel of interest.
The robot then repeats the taught trajectory with very high precision, simultaneously controls the ultrasound image acquisition at a constant sampling step, and safely limits the force applied by the probe on the patient's skin. Consequently, reconstructing a 3D arterial geometry of the lower limbs from this system could allow highly reliable localization and quantification of stenoses. The objective of this research project was therefore to validate and optimize this 3D-US robotic imaging system. The reliability of a 3D geometry reconstructed from 2D-US images captured in a robotic reference frame depends considerably on the positioning accuracy and on the calibration procedure. The positioning accuracy of the robotic arm was thus evaluated throughout its workspace with a phantom specially designed to mimic the configuration of the lower limb arteries (article 1 - chapter 3). In addition, a Z-shaped crossed-wire phantom was designed to ensure a precise calibration of the robotic system (article 2 - chapter 4). These optimized methods were used to validate the system for clinical application and to find the transformation that converts the coordinates of a 2D ultrasound image into the Cartesian reference frame of the robotic arm. From these results, any object scanned by the robotic system can be adequately reconstructed in 3D. Multimodal vascular phantoms, compatible with several imaging modalities, were used to simulate different arterial representations of the lower limbs (article 2 - chapter 4, article 3 - chapter 5). The reconstructed geometries were validated by comparative analyses, comparing their surface points with those of the phantom manufacturing file. The accuracy in localizing and quantifying stenoses with the 3D-US robotic imaging system was also determined, and the same evaluations were performed in vivo to assess the potential of such a system for clinical use (article 3 - chapter 5)
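The calibration step that converts 2D ultrasound image coordinates into the robotic arm's Cartesian frame amounts to applying pixel-to-mm scaling followed by a homogeneous transform. A minimal sketch; the scale factors and transform values are hypothetical stand-ins for what the Z-phantom calibration would actually produce:

```python
import numpy as np

def image_to_robot(u, v, sx, sy, T_robot_image):
    """Map a 2D US pixel (u, v) into the robot base frame.
    sx, sy: pixel-to-mm scale factors from calibration;
    T_robot_image: 4x4 homogeneous transform from calibration."""
    p_image = np.array([u * sx, v * sy, 0.0, 1.0])  # image plane is z = 0
    return (T_robot_image @ p_image)[:3]

# Hypothetical calibration: 90-degree rotation about z plus a translation (mm).
T = np.array([[0., -1., 0., 100.],
              [1.,  0., 0.,  50.],
              [0.,  0., 1., 200.],
              [0.,  0., 0.,   1.]])
print(image_to_robot(320, 240, sx=0.1, sy=0.1, T_robot_image=T))
```

Stacking the transformed pixels of every image along the taught trajectory is what yields the 3D point cloud from which the arterial geometry is reconstructed.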

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of the fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. To study cardiac function, a variety of signals must be collected. In practice, several heart monitoring methods, such as electrocardiography (ECG) and photoplethysmography (PPG), are commonly performed. Although there are several methods for monitoring fetal and maternal health, research is currently underway to enhance the mobility, accuracy, automation, and noise resistance of these methods so they can be used extensively, even at home. Artificial Intelligence (AI) can help to design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research: The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and extract the desired signal from a noisy one with negative SNR (i.e., the power of the noise is greater than that of the signal). ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. The noise typically arises from power line interference, white noise, electrode contact noise, muscle contraction, baseline wandering, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure.
Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a noise or desired-signal model, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the final goal of this step is to develop a robust algorithm that can estimate noise, even when the SNR is negative, using the BSS method, and remove it with an adaptive filter. The second objective concerns monitoring the maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) to extract the fetal ECG (FECG). These methods need to be calibrated to generalize well: for each new subject, a calibration against a trusted device is required, which is difficult, time-consuming, and susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks, to map MECG to FECG and vice versa. The advantage of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, is that it generalizes well to unseen subjects. Moreover, it needs no calibration, is not sensitive to the heart rate variability of the mother and fetus, and can handle low signal-to-noise ratio (SNR) conditions. Thirdly, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use. Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes those issues and, unlike other solutions, uses only the PPG signal.
Using only PPG for blood pressure is more convenient, since it requires a single sensor on the finger, where acquisition is more resilient to movement-induced error. The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for FECG, where few publicly available datasets are annotated for each FECG beat. Therefore, we utilize active learning and transfer learning to train an FECG anomaly detection system with the fewest training samples and high accuracy. In this part, a model is first trained to detect ECG anomalies in adults and is later retrained to detect anomalies in FECG. We select only the most influential samples from the training set, so training requires the least effort. Because of physician shortages and rural geography, pregnant women's access to prenatal care might be improved through remote monitoring, especially where access is limited. Increased compliance with prenatal treatment and linked care among various providers are two possible benefits of remote monitoring. Maternal and fetal remote monitoring can be effective only if the recorded signals are transmitted correctly. Therefore, the last objective is to design a compression algorithm that compresses signals (such as ECG) at a higher ratio than the state of the art and decompresses quickly without distortion. The proposed compression is fast thanks to the time-domain B-spline approach, and the compressed data can be used for visualization and monitoring without decompression owing to the B-spline properties. Moreover, a stochastic optimization is designed to retain signal quality, achieving a high compression ratio without distorting the signal for diagnostic purposes.
In summary, the components for an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of the tasks listed above. PPG and ECG recorded from the mother can be denoised using the deconvolution strategy. Compression can then be employed to transmit the signals. The trained CycleGAN model can extract the FECG from the MECG, and the model trained with active transfer learning can detect anomalies in both the MECG and FECG. Simultaneously, maternal BP is retrieved from the PPG signal. This information can be used to monitor the cardiac status of mother and fetus, and also to fill in reports such as a partogram
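The core idea of time-domain B-spline compression is to keep a reduced set of samples as control points of a uniform cubic B-spline and reconstruct the signal between them. The sketch below is a simplified illustration of that idea with a synthetic smooth signal, not the thesis's optimized algorithm (which adds stochastic knot optimization):

```python
import numpy as np

# Uniform cubic B-spline basis matrix.
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  0,  3, 0],
              [ 1,  4,  1, 0]]) / 6.0

def bspline_reconstruct(ctrl, samples_per_span):
    """Evaluate a uniform cubic B-spline curve defined by control points `ctrl`."""
    out = []
    for i in range(1, len(ctrl) - 2):          # one span per interior control point
        P = ctrl[i - 1:i + 3]                  # the 4 control points of this span
        for t in np.linspace(0, 1, samples_per_span, endpoint=False):
            T = np.array([t**3, t**2, t, 1.0])
            out.append(T @ M @ P)
    return np.array(out)

n = 1024
x = np.sin(np.linspace(0, 8 * np.pi, n))       # synthetic smooth "ECG-like" signal
k = 8                                          # keep every 8th sample as a control point
ctrl = x[::k]
rec = bspline_reconstruct(ctrl, samples_per_span=k)
ratio = n / len(ctrl)                          # crude compression ratio: 8x here
rmse = np.sqrt(np.mean((rec - x[k:k + len(rec)])**2))
print(ratio, round(rmse, 4))
```

Because the stored control points themselves trace the signal shape, a monitor can display the compressed stream directly, which is the "visualization without decompression" property mentioned above.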

    Analysing and quantifying visual experience in medical imaging

    Healthcare professionals increasingly view medical images and videos in a variety of environments. The perception and interpretation of medical visual information across all specialties, career stages, and practice settings are critical to patient care and safety. However, medical images and videos are not self-explanatory and must be interpreted by humans, who are prone to errors caused by the inherent limitations of the human visual system. It is essential to understand how medical experts perceive visual content, and to use this knowledge to develop new solutions that improve clinical practice. Progress has been made in the literature towards such understanding; however, studies remain limited. This thesis investigates two aspects of human visual experience in medical imaging: visual quality assessment and visual attention. Visual quality assessment is important because diverse visual signal distortions may arise in medical imaging and affect the perceptual quality of visual content, potentially impacting diagnostic accuracy. We adapted existing qualitative and quantitative methods to evaluate the quality of distorted medical videos. We also analysed the impact of medical specialty on visual perception and found significant differences between specialty groups; e.g., sonographers were in general more bothered by visual distortions than radiologists. Visual attention has been studied in medical imaging using eye-tracking technology. In this thesis, we first investigated gaze allocation by radiologists analysing two-view mammograms and then assessed the impact of expertise and experience on gaze behaviour. We also evaluated the extent to which state-of-the-art visual attention models can predict radiologists' gaze behaviour, and showed the limitations of existing models.
This thesis provides new experimental designs and statistical processes to evaluate the perception of medical images and videos, which can be used to optimise the visual experience of image readers in clinical practice
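Evaluating how well a visual attention model predicts recorded gaze is commonly done with metrics such as the Normalized Scanpath Saliency (NSS): the mean z-scored saliency value at fixated pixels. A toy sketch with a synthetic saliency map and hypothetical fixation lists:

```python
import numpy as np

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: mean z-scored saliency at fixated (y, x) pixels."""
    s = (saliency - saliency.mean()) / saliency.std()
    return float(np.mean([s[y, x] for y, x in fixations]))

# Toy saliency map: a Gaussian peak at the centre of a 64x64 image.
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
sal = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 8.0 ** 2))

fix_on_peak = [(32, 32), (30, 33), (34, 31)]   # fixations landing on the predicted region
fix_random  = [(2, 2), (60, 5), (10, 55)]      # fixations far from it
print(nss(sal, fix_on_peak), nss(sal, fix_random))
```

An NSS well above zero means the model's salient regions coincide with where the radiologists actually looked; values near or below zero expose the kind of model limitation the thesis reports.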