
    Fog Computing in Medical Internet-of-Things: Architecture, Implementation, and Applications

    In an era when the Internet of Things (IoT) market segment tops the charts in business reports, medicine stands to gain a large benefit from the explosion of wearables and internet-connected sensors that surround us, acquiring and communicating unprecedented data on symptoms, medication, food intake, and daily-life activities that affect one's health and wellness. However, IoT-driven healthcare must overcome several barriers: 1) there is an increasing demand for data storage on cloud servers, where the analysis of medical big data becomes increasingly complex; 2) the data, when communicated, are vulnerable to security and privacy issues; 3) communicating the continuously collected data is not only costly but also energy hungry; 4) operating and maintaining the sensors directly from the cloud servers is a non-trivial task. This book chapter defines Fog Computing in the context of medical IoT. Conceptually, Fog Computing is a service-oriented intermediate layer in IoT, providing the interfaces between the sensors and cloud servers to facilitate connectivity, data transfer, and a queryable local database. The centerpiece of Fog Computing is a low-power, intelligent, wireless, embedded computing node that carries out signal conditioning and data analytics on raw data collected from wearables or other medical sensors, and offers an efficient means to serve telehealth interventions. We implemented and tested a fog computing system using the Intel Edison and Raspberry Pi that allows acquisition, computing, storage, and communication of various medical data, such as pathological speech data of individuals with speech disorders, phonocardiogram (PCG) signals for heart rate estimation, and electrocardiogram (ECG)-based Q, R, S detection.
    Comment: 29 pages, 30 figures, 5 tables. Keywords: Big Data, Body Area Network, Body Sensor Network, Edge Computing, Fog Computing, Medical Cyberphysical Systems, Medical Internet-of-Things, Telecare, Tele-treatment, Wearable Devices. Chapter in Handbook of Large-Scale Distributed Computing in Smart Healthcare (2017), Springer
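
    As a concrete illustration of the kind of on-node analytics the chapter describes, the sketch below runs a Pan-Tompkins-style R-peak detector over a single-lead ECG trace, the sort of signal conditioning a fog node would perform locally before shipping summaries to the cloud. The sampling rate, filter band, threshold factor, and synthetic input are illustrative assumptions, not the chapter's implementation.

```python
# Minimal sketch of on-node ECG R-peak detection in the spirit of
# Pan-Tompkins. fs, the filter band, the 0.35 threshold factor, and the
# synthetic input are assumptions for illustration only.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs=250.0):
    """Return sample indices of R peaks in a single-lead ECG trace."""
    # 1) Band-pass 5-15 Hz to emphasize the QRS complex.
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2) Differentiate, square, and integrate over a ~150 ms window.
    squared = np.diff(filtered) ** 2
    win = int(0.15 * fs)
    energy = np.convolve(squared, np.ones(win) / win, mode="same")
    # 3) Keep peaks above a fixed fraction of the maximum, >= 200 ms apart.
    peaks, _ = find_peaks(energy, height=0.35 * energy.max(),
                          distance=int(0.2 * fs))
    return peaks

if __name__ == "__main__":
    fs = 250.0
    t = np.arange(0, 10, 1 / fs)
    # Synthetic "ECG": smoothed spikes at 1 Hz (60 bpm) plus noise.
    ecg = np.zeros_like(t)
    ecg[(np.arange(10) * fs).astype(int)] = 1.0
    ecg = np.convolve(ecg, np.hanning(9), mode="same")
    ecg += 0.02 * np.random.default_rng(0).standard_normal(t.size)
    peaks = detect_r_peaks(ecg, fs)
    print("Estimated heart rate: %.0f bpm" % (60 * fs / np.median(np.diff(peaks))))
```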

    preliminary clinical evaluation of the ASTRA4D algorithm

    Objectives. To propose and evaluate a four-dimensional (4D) algorithm for joint motion elimination and spatiotemporal noise reduction in low-dose dynamic myocardial computed tomography perfusion (CTP). Methods. Thirty patients with suspected or confirmed coronary artery disease were prospectively included and underwent dynamic contrast-enhanced 320-row CTP. The presented deformable image registration method, ASTRA4D, identifies a low-dimensional linear model of contrast propagation (by principal component analysis, PCA) from time-intensity curves that are first temporally smoothed (by local polynomial regression). Quantitative (standard deviation, signal-to-noise ratio (SNR), temporal variation, volumetric deformation) and qualitative (motion, contrast, contour sharpness; 1, poor; 5, excellent) measures of CTP quality were assessed for the original and motion-compensated volumes (without and with temporal filtering, PCA/ASTRA4D). Following visual myocardial perfusion deficit detection by two readers, diagnostic accuracy was evaluated using 1.5T magnetic resonance (MR) myocardial perfusion imaging as the reference standard in 15 patients. Results. Registration using ASTRA4D was successful in all 30 patients and, in comparison with the benchmark PCA, resulted in significantly (p<0.001) reduced noise over time (-83%, 178.5 vs 29.9) and spatially (-34%, 21.4 vs 14.1) as well as improved SNR (+47%, 3.6 vs 5.3) and subjective image quality (motion, contrast, contour sharpness: +1.0, +1.0, +0.5). ASTRA4D resulted in significantly improved per-segment sensitivity of 91% (58/64) and similar specificity of 96% (429/446) compared with PCA (52%, 33/64; 98%, 435/446; p=0.011) and the original sequence (45%, 29/64; 98%, 438/446; p=0.003) in the visual detection of perfusion deficits. Conclusions. The proposed functional approach to temporal denoising and morphologic alignment was shown to improve quality metrics and sensitivity of 4D CTP in the detection of myocardial ischemia.
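
    The numerical core described here, temporal smoothing of voxelwise time-intensity curves followed by a low-dimensional PCA model of contrast propagation, can be sketched in a few lines. The snippet below uses a Savitzky-Golay filter (one form of local polynomial regression) and a truncated SVD on a toy 4D series; the array shape, window length, and component count are assumptions, and the deformable registration step itself is not shown.

```python
# Rough sketch of the temporal-smoothing + PCA step described for ASTRA4D.
# Shapes, the smoothing window, and the number of components are assumed
# for illustration; the deformable registration itself is not shown.
import numpy as np
from scipy.signal import savgol_filter

def low_rank_contrast_model(volumes, n_components=3, window=5, order=2):
    """volumes: (T, Z, Y, X) dynamic CT series.
    Returns a denoised series built from a low-rank temporal model."""
    T = volumes.shape[0]
    curves = volumes.reshape(T, -1)          # one time-intensity curve per voxel
    # Local polynomial regression in time (Savitzky-Golay filter).
    smoothed = savgol_filter(curves, window_length=window, polyorder=order, axis=0)
    # PCA over time: keep the leading modes of contrast propagation.
    mean = smoothed.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(smoothed - mean, full_matrices=False)
    k = n_components
    low_rank = (U[:, :k] * S[:k]) @ Vt[:k] + mean
    return low_rank.reshape(volumes.shape)

# Toy usage: 12 time points of a 16^3 volume with a noisy bolus curve.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 12)[:, None, None, None]
series = np.exp(-((t - 0.4) / 0.2) ** 2) + 0.1 * rng.standard_normal((12, 16, 16, 16))
denoised = low_rank_contrast_model(series)
print(series.std(), denoised.std())  # spread drops after denoising
```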

    Video enhancement: content classification and model selection

    The purpose of video enhancement is to improve the subjective picture quality. The field of video enhancement includes a broad category of research topics, such as removing noise in the video, highlighting specified features, and improving the appearance or visibility of the video content. The common difficulty in this field is how to make images or videos subjectively better. Traditional approaches involve many iterations between subjective assessment experiments and redesigns of algorithm improvements, which is very time consuming. Researchers have attempted to design a video quality metric to replace subjective assessment, but so far without success. As a way to avoid heuristics in enhancement algorithm design, least mean square methods have received considerable attention. They can optimize filter coefficients automatically by minimizing the difference between processed videos and desired versions through training. However, these methods are optimal only on average, not locally. To solve this problem, one can apply the least mean square optimization to individual categories that are classified by local image content. The most interesting example is Kondo's concept of local content adaptivity for image interpolation, which we found could be generalized into an ideal framework for content adaptive video processing. We identify two parts in the concept, content classification and adaptive processing, and by exploring new classifiers for the content classification and new models for the adaptive processing, we have generalized the framework to more enhancement applications.

    For the content classification part, new classifiers have been proposed to classify different image degradations such as coding artifacts and focal blur. For coding artifacts, a novel classifier has been proposed based on the combination of local structure and contrast, which does not require detection of the coding block grid. For focal blur, we have proposed a novel edge-based local blur estimation method, which does not require edge orientation detection and gives more robust blur estimates. With these classifiers, the proposed framework has been extended to coding-artifact-robust enhancement and blur-dependent enhancement. With content adaptivity to more image features, the number of content classes can increase significantly; we show that it is possible to reduce the number of classes without sacrificing much performance.

    For the model selection part, we have introduced several nonlinear filters into the proposed framework. We have also proposed a new type of nonlinear filter, the trained bilateral filter, which combines the advantages of the original bilateral filter with those of the least mean square optimization. With these nonlinear filters, the proposed framework shows better performance than with linear filters. Furthermore, we have shown a proof of concept for a trained approach to contrast enhancement by supervised learning: transfer curves are optimized based on the classification of global or local image content, showing that the desired effect can be obtained by learning from other, computationally expensive enhancement algorithms or from expert-tuned examples. Looking back, the thesis reveals a single versatile framework for video enhancement applications. It widens the application scope by including new content classifiers and new processing models, and offers scalability through solutions that reduce the number of classes, which can greatly accelerate algorithm design.
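
    The training-based core of this framework, classify each pixel by local content and then fit one least-squares-optimal filter per class on degraded/desired image pairs, can be sketched compactly. In the snippet below the classifier is a crude 4-way gradient-orientation bin standing in for Kondo-style content classes, and the training data are synthetic; both are assumptions for illustration.

```python
# Compact sketch of per-class least-squares filter training: classify each
# pixel by local content, then fit one linear 3x3 filter per class. The
# gradient-orientation classifier and the synthetic data are placeholders.
import numpy as np

PATCH = 3  # 3x3 filter aperture

def patches_and_classes(img):
    """Extract 3x3 patches and a crude 4-way gradient-orientation class."""
    H, W = img.shape
    ps, cls = [], []
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            ps.append(img[y-1:y+2, x-1:x+2].ravel())
            gy = img[y+1, x] - img[y-1, x]
            gx = img[y, x+1] - img[y, x-1]
            cls.append(int(np.floor((np.arctan2(gy, gx) + np.pi) / (np.pi / 2))) % 4)
    return np.array(ps), np.array(cls)

def train_per_class_filters(degraded, desired, n_classes=4):
    """Least-squares optimal 3x3 filters, one per content class."""
    X, c = patches_and_classes(degraded)
    y = desired[1:-1, 1:-1].ravel()  # targets aligned with patch centers
    filters = {}
    for k in range(n_classes):
        m = c == k
        if m.sum() >= PATCH * PATCH:  # enough samples to fit 9 coefficients
            filters[k], *_ = np.linalg.lstsq(X[m], y[m], rcond=None)
    return filters

# Toy usage: learn to undo a simple blur on a random "image".
rng = np.random.default_rng(1)
clean = rng.random((64, 64))
blurred = (clean + np.roll(clean, 1, 0) + np.roll(clean, 1, 1)) / 3.0
flt = train_per_class_filters(blurred, clean)
print({k: v.round(2) for k, v in flt.items()})
```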

    Properties of Resolved Star Clusters in M51

    We present a study of compact star clusters in the nearby pair of interacting galaxies NGC 5194/95 (M51), based on multifilter Hubble Space Telescope WFPC2 archival images. We have detected ~400 isolated clusters. Our requirement that clusters be detected based only on their morphology results in the selection of relatively isolated objects, and we estimate that we are missing the majority (by a factor of 4-6) of <10 Myr clusters due to crowding. Hence we focus on the cluster population older than 10 Myr. An age distribution shows a broad peak between 100-500 Myr, which is consistent with the crossing times of NGC 5195 through the NGC 5194 disk estimated in both single and multiple-passage dynamical models. We estimate that the peak contains approximately 2.2-2.5 times more clusters than expected from a constant rate of cluster formation over this time interval. We estimate the effective radii of our sample clusters and find a median value of 3-4 pc. Additionally, we see correlations of increasing cluster size with cluster mass (with a best-fit slope of 0.14 ± 0.03) at the 4σ level, and with cluster age (0.06 ± 0.02) at the 3σ level. Finally, we report for the first time the discovery of faint, extended star clusters in the companion, NGC 5195, an SB0 galaxy. These have red [(V-I)>1.0] colors, effective radii >7 pc, and are scattered over the disk of NGC 5195. Our results indicate that NGC 5195 is therefore currently the third known barred lenticular galaxy to have formed so-called "faint fuzzy" star clusters. (abridged)
    Comment: 15 pages, 12 figures, 1 table; to appear in A
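
    Slopes like 0.14 ± 0.03 typically come from a straight-line fit in log-log space, r_eff ∝ M^α, so that α is the slope of log r_eff against log M. The sketch below shows the generic procedure on synthetic points; the data are fabricated purely to demonstrate the fit, not the paper's measurements.

```python
# Generic sketch of fitting a size-mass power-law slope in log-log space,
# r_eff ∝ M^alpha. The points below are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(2)
mass = 10 ** rng.uniform(3, 6, 200)                         # fake cluster masses
r_eff = 3.0 * (mass / 1e4) ** 0.14 * 10 ** (0.1 * rng.standard_normal(200))

logm, logr = np.log10(mass), np.log10(r_eff)
A = np.vstack([logm, np.ones_like(logm)]).T                 # slope + intercept
coef, res, *_ = np.linalg.lstsq(A, logr, rcond=None)
# 1-sigma slope uncertainty from the residual variance (ordinary least squares).
sigma2 = res[0] / (logm.size - 2)
cov = sigma2 * np.linalg.inv(A.T @ A)
print(f"slope = {coef[0]:.3f} +/- {np.sqrt(cov[0, 0]):.3f}")
```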

    Evaluating spatial and frequency domain enhancement techniques on dental images to assist dental implant therapy

    Dental imaging provides the patient's anatomical details for the dental implant, based on the maxillofacial structure and a two-dimensional geometric projection, helping clinical experts decide whether implant surgery is suitable for a particular patient. Dental images often suffer from random noise and low contrast, which call for effective preprocessing operations. However, each enhancement technique comes with its own advantages and limitations, so choosing a suitable image enhancement method is always a difficult task. In this paper, a universal framework is proposed that integrates the functionality of various enhancement mechanisms so that dentists can select a suitable method of their choice to improve the quality of the dental image for the implant procedure. The proposed framework evaluates the effectiveness of both frequency domain and spatial domain enhancement techniques on dental images. The selection of the best enhancement method further depends on the output image perceptibility responses, peak signal-to-noise ratio (PSNR), and sharpness. The proposed framework offers a flexible and scalable approach that lets the dental expert enhance a dental image according to visual image features and different enhancement requirements.
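
    A skeletal version of the "apply several enhancements, keep the best scorer" selection loop is sketched below: one spatial-domain method (global histogram equalization) and one frequency-domain method (homomorphic filtering) are applied, and the result scoring higher on a simple no-reference sharpness proxy (gradient energy) is kept. The two methods and the scoring rule are stand-ins assumed for illustration, not the paper's actual method set or metrics.

```python
# Skeletal sketch of method selection between spatial- and frequency-domain
# enhancement. Histogram equalization, homomorphic filtering, and the
# gradient-energy score are assumed placeholders.
import numpy as np

def hist_equalize(img):
    """Spatial domain: global histogram equalization on a [0,1] image."""
    hist, bins = np.histogram(img.ravel(), 256, range=(0, 1))
    cdf = hist.cumsum() / hist.sum()
    return np.interp(img.ravel(), bins[:-1], cdf).reshape(img.shape)

def homomorphic(img, gamma_l=0.5, gamma_h=1.5, d0=0.1):
    """Frequency domain: boost high frequencies of the log-intensity image."""
    F = np.fft.fftshift(np.fft.fft2(np.log1p(img)))
    Y, X = np.indices(img.shape)
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    d2 = ((Y - cy) / img.shape[0]) ** 2 + ((X - cx) / img.shape[1]) ** 2
    H = gamma_l + (gamma_h - gamma_l) * (1 - np.exp(-d2 / (2 * d0 ** 2)))
    out = np.expm1(np.real(np.fft.ifft2(np.fft.ifftshift(F * H))))
    return np.clip(out, 0, 1)

def sharpness(img):
    """No-reference score: mean gradient energy."""
    gy, gx = np.gradient(img)
    return float(np.mean(gy ** 2 + gx ** 2))

img = np.random.default_rng(3).random((128, 128)) * 0.4 + 0.3  # low-contrast toy
candidates = {"spatial": hist_equalize(img), "frequency": homomorphic(img)}
best = max(candidates, key=lambda k: sharpness(candidates[k]))
print("selected:", best)
```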

    Instrumental measures for perceived contrast, noise and sharpness in fluoroscopic sequences: status report


    Evaluation of the color image and video processing chain and visual quality management for consumer systems

    With the advent of novel digital display technologies, color processing is increasingly becoming a key aspect of consumer video applications. Today's state-of-the-art displays require sophisticated color and image reproduction techniques in order to achieve larger screen sizes, higher luminance, and higher resolution than ever before. However, from a color science perspective, there are clear opportunities for improvement in the color reproduction capabilities of various emerging and conventional display technologies. This research seeks to identify potential areas for improvement in color processing in a video processing chain. As part of this research, the various processes involved in a typical video processing chain for consumer video applications were reviewed. Several published color and contrast enhancement algorithms were evaluated, and a novel algorithm was developed to enhance color and contrast in images and videos in an effective and coordinated manner. Further, a psychophysical technique was developed and implemented for the visual evaluation of color image and consumer video quality. Based on the performance analysis and visual experiments involving the various algorithms, guidelines were proposed for the development of an effective color and contrast enhancement method for image and video applications. It is hoped that the knowledge gained from this research will help build a better understanding of color processing and color quality management methods in consumer video.
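
    The coordination between contrast and color that the abstract alludes to can be caricatured in a few lines: stretch contrast on a luminance channel, then scale the chroma by the same local gain so saturation keeps pace with the added contrast. The sketch below, in simple luma/opponent-chroma coordinates, is a guessed illustration of that general shape, not the algorithm developed in this research.

```python
# Toy sketch of "coordinated" color and contrast enhancement: an S-curve
# contrast stretch on luma, with chroma scaled by the same local gain.
# A guessed illustration only, not the evaluated method.
import numpy as np

def enhance(rgb, strength=0.5):
    """rgb: float array in [0,1], shape (H, W, 3)."""
    # Rec. 601 luma and simple opponent chroma.
    y = rgb @ np.array([0.299, 0.587, 0.114])
    chroma = rgb - y[..., None]
    # S-curve contrast stretch on luma around its mean.
    mu = y.mean()
    a = 1 + 2 * strength                      # slope factor at mid-gray
    y_new = mu + np.tanh(a * (y - mu)) / np.tanh(a * 0.5) * 0.5
    # Scale chroma by the local contrast gain so saturation keeps pace.
    delta = y - mu
    gain = np.divide(y_new - mu, delta,
                     out=np.ones_like(delta), where=np.abs(delta) > 1e-6)
    out = y_new[..., None] + chroma * np.clip(gain, 0.5, 2.0)[..., None]
    return np.clip(out, 0, 1)

demo = np.random.default_rng(4).random((32, 32, 3)) * 0.5 + 0.25  # flat toy image
print(enhance(demo).shape)
```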

    Novel X-ray imaging technology enables significant patient dose reduction in interventional cardiology while maintaining diagnostic image quality

    Objectives: The purpose of this study was to quantify the reduction in patient radiation dose during coronary angiography (CA) achieved by a new X-ray technology, and to assess its impact on diagnostic image quality. Background: Recently, a novel X-ray imaging technology has become available for interventional cardiology, using advanced image processing and an optimized acquisition chain for radiation dose reduction. Methods: 70 adult patients were randomly assigned to a reference X-ray system or the novel X-ray system. Patient demographics were registered and exposure parameters were recorded for each radiation event. Clinical image quality was assessed for both patient groups. Results: With the same angiographic technique and a comparable patient population, the new imaging technology was associated with a 75% reduction in total kerma-area product (KAP), a decrease from 47 Gy·cm² to 12 Gy·cm² (P<0.001). Clinical image quality showed equivalent detail and contrast for both imaging systems. On the other hand, noise was subjectively more apparent in images from the new image processing system, acquired at lower doses, than in those from the reference system. However, the higher noise content did not affect the overall image quality score, which was adequate for diagnosis on both systems. Conclusions: For the first time, we present a new X-ray imaging technology, combining advanced noise reduction algorithms and an optimized acquisition chain, which drastically reduces patient radiation dose in CA (by 75%) while maintaining diagnostic image quality. Use of this technology may further improve the radiation safety of cardiac angiography and interventions.
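
    The headline 75% follows directly from the reported kerma-area products:

```latex
% Dose reduction implied by the reported KAP values
\[
  \frac{\mathrm{KAP}_{\mathrm{ref}} - \mathrm{KAP}_{\mathrm{new}}}{\mathrm{KAP}_{\mathrm{ref}}}
  = \frac{47\,\mathrm{Gy\,cm^2} - 12\,\mathrm{Gy\,cm^2}}{47\,\mathrm{Gy\,cm^2}}
  \approx 0.745 \approx 75\%
\]
```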