479 research outputs found

    Role of Four-Chamber Heart Ultrasound Images in Automatic Assessment of Fetal Heart: A Systematic Understanding

    The fetal echocardiogram is useful for monitoring and diagnosing cardiovascular disease in the fetus in utero. Importantly, it can be used to assess prenatal congenital heart disease, for which timely intervention can improve the unborn child's outcomes. In this regard, artificial intelligence (AI) can be used for the automatic analysis of fetal heart ultrasound images. This study reviews non-deep and deep learning approaches for assessing the fetal heart using standard four-chamber ultrasound images. The state-of-the-art techniques in the field are described and discussed. The compendium demonstrates the capability of automatic assessment of the fetal heart using AI technology. This work can serve as a resource for research in the field.

    Quantitative planar and volumetric cardiac measurements using 64-MDCT and 3T MRI vs. standard 2D and M-mode echocardiography: does anesthetic protocol matter?

    Cross-sectional imaging of the heart using computed tomography (CT) and magnetic resonance imaging (MRI) has been shown to be superior to echocardiography for the evaluation of cardiac morphology and systolic function in humans. The purpose of this prospective study was to test the effects of two different anesthetic protocols on cardiac measurements in 10 healthy beagle dogs using 64-multidetector row computed tomographic angiography (64-MDCTA), 3T MRI, and standard awake echocardiography. Both anesthetic protocols used propofol for induction and isoflurane for anesthetic maintenance. In addition, protocol A used midazolam/fentanyl and protocol B used dexmedetomidine as premedication and constant rate infusion during the procedure. Significant elevations in systolic and mean blood pressure were present when using protocol B. There was overall good agreement between the variables of cardiac size and systolic function generated from the MDCTA and MRI exams, and no significant difference was found when comparing the variables acquired using either anesthetic protocol within each modality. Systolic function variables generated using 64-MDCTA and 3T MRI were only able to predict the left ventricular end-diastolic volume as measured during awake echocardiography when using protocol B and 64-MDCTA. For all other systolic function variables, prediction of awake echocardiographic results was not possible (P = 1). Planar variables acquired using MDCTA or MRI did not allow prediction of the corresponding measurements generated using echocardiography in awake patients (P = 1). Future studies are needed to validate this approach in a more varied population and in clinically affected dogs.

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply to medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables

    Label-free segmentation from cardiac ultrasound using self-supervised learning

    Segmentation and measurement of cardiac chambers is critical in cardiac ultrasound but is laborious and poorly reproducible. Neural networks can assist, but supervised approaches require the same laborious manual annotations. We built a pipeline for self-supervised (no manual labels) segmentation combining computer vision, clinical domain knowledge, and deep learning. We trained on 450 echocardiograms (93,000 images) and tested on 8,393 echocardiograms (4,476,266 images; mean age 61 years, 51% female), using the resulting segmentations to calculate biometrics. We also tested against external images from an additional 10,030 patients with available manual tracings of the left ventricle. r2 values between clinically measured and pipeline-predicted measurements were similar to reported inter-clinician variation and comparable to supervised learning across several different measurements (r2 0.56-0.84). Average accuracy for detecting abnormal chamber size and function was 0.85 (range 0.71-0.97) compared to clinical measurements. A subset of test echocardiograms (n=553) had corresponding cardiac MRIs, where MRI is the gold standard. Correlation between pipeline and MRI measurements was similar to that between clinical echocardiogram and MRI. Finally, the pipeline accurately segments the left ventricle with an average Dice score of 0.89 (95% CI [0.89]) in the external, manually labeled dataset. Our results demonstrate a manual-label-free, clinically valid, and highly scalable method for segmentation from ultrasound, a noisy but globally important imaging modality. Comment: 37 pages, 3 tables, 7 figures
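    The Dice score used to evaluate the segmentations above has a simple closed form over binary masks. The following is a minimal, self-contained sketch; the `dice_score` helper and the toy masks are illustrative, not the authors' pipeline:

    ```python
    import numpy as np

    def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
        """Dice similarity coefficient between two binary masks:
        2*|A intersect B| / (|A| + |B|)."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        denom = pred.sum() + truth.sum()
        if denom == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(pred, truth).sum() / denom

    # Toy example: two overlapping 6x6 squares on a 10x10 grid.
    a = np.zeros((10, 10)); a[2:8, 2:8] = 1
    b = np.zeros((10, 10)); b[3:9, 3:9] = 1
    print(round(dice_score(a, b), 3))  # 2*25 / (36+36) -> 0.694
    ```

    A perfect segmentation scores 1.0; the reported average of 0.89 corresponds to predicted and manual left-ventricle masks overlapping almost entirely.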

    An improved classification approach for echocardiograms embedding temporal information

    Cardiovascular disease is an umbrella term for all diseases of the heart. At present, computer-aided echocardiogram diagnosis is becoming increasingly beneficial. In echocardiography, different cardiac views can be acquired depending on the location and angulation of the ultrasound transducer. Hence, automatic echocardiogram view classification is the first step for echocardiogram diagnosis, especially for a computer-aided system and even for automatic diagnosis in the future. In addition, heart view classification makes it possible to label images, especially for large-scale echo videos, and facilitates database management and collection. This thesis presents a framework for automatic cardiac viewpoint classification of echocardiogram video data. In this research, we aim to overcome the challenges facing this investigation while analyzing, recognizing, and classifying echocardiogram videos in 3D (2D spatial and 1D temporal) space. Specifically, we extend the 2D KAZE approach into 3D space for feature detection and propose a histogram of acceleration as the feature descriptor. Feature encoding follows, before an SVM is applied to classify echo videos. The approach is also compared with state-of-the-art methodologies, including 2D SIFT, 3D SIFT, and optical flow techniques that extract the temporal information sustained in the video images. As a result, 2D KAZE, 2D KAZE with optical flow, 3D KAZE, optical flow, 2D SIFT, and 3D SIFT deliver accuracy rates of 89.4%, 84.3%, 87.9%, 79.4%, 83.8%, and 73.8%, respectively, for the eight view classes of echo videos.
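    The detect-encode-classify pipeline summarized above can be sketched end to end. In this sketch the random 16-D vectors merely stand in for 3D KAZE descriptors, and the bag-of-visual-words codebook is one common encoding choice rather than the thesis's exact method:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Stand-ins for local spatio-temporal descriptors: each echo video yields a
    # variable number of 16-D feature vectors (real ones would come from 3D KAZE).
    videos = [rng.normal(size=(int(n), 16)) for n in rng.integers(20, 40, size=12)]
    labels = [i % 2 for i in range(12)]  # two of the eight view classes, for brevity

    # Feature encoding: quantize descriptors against a learned visual-word
    # codebook and represent each video as a normalized word histogram.
    codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack(videos))

    def encode(descriptors: np.ndarray) -> np.ndarray:
        words = codebook.predict(descriptors)
        hist = np.bincount(words, minlength=8).astype(float)
        return hist / hist.sum()

    X = np.array([encode(v) for v in videos])

    # Classification: a linear SVM over the encoded per-video histograms.
    clf = SVC(kernel="linear").fit(X, labels)
    print(clf.score(X, labels))  # training accuracy in [0, 1]
    ```

    Swapping the random descriptors for real 3D KAZE (or 3D SIFT, or optical-flow) features leaves the encoding and SVM stages unchanged, which is what makes the reported method comparison straightforward.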

    ์ž„์ƒ์˜์‚ฌ ๊ฒฐ์ • ์ง€์› ์‹œ์Šคํ…œ์„ ์œ„ํ•œ ์‹ฌ์ธต ์‹ ๊ฒฝ๋ง ๊ธฐ๋ฐ˜์˜ ์‹ฌ์ดˆ์ŒํŒŒ ์ž๋™ํ•ด์„์— ๊ด€ํ•œ ์—ฐ๊ตฌ

    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Interdisciplinary Program in Bioengineering, College of Engineering, August 2022. Advisor: Hee Chan Kim.
Echocardiography is an indispensable tool for cardiologists in the diagnosis of heart diseases, providing images of the heart in the systolic and diastolic phases. With echocardiography, various structural abnormalities of the atria and ventricles, as well as valvular disorders, can be quantitatively or qualitatively diagnosed. Due to its non-invasiveness, the usage of echocardiography in the diagnosis of heart disease has continuously increased. Despite this increasing role in cardiology practice, echocardiography requires experience in capturing and knowledge in interpreting images.
Moreover, in contrast to CT or MRI images, important information can be missed if it is not obtained at the time of examination. Therefore, obtaining and interpreting images should be done simultaneously, or, at least, all obtained images should be audited by an experienced cardiologist before releasing the patient from the examination booth. Because of these peculiar characteristics of echocardiography compared to CT or MRI, there have been incessant demands for a clinical decision support system (CDSS) for echocardiography. With the advance of artificial intelligence (AI), there have been several studies on decision support systems for echocardiography. These studies divide into two approaches: one is the quantitative approach, which segments the images and detects abnormalities in size and function; the other is the qualitative approach, which detects abnormalities in morphology. Unfortunately, these two lines of work have mostly been conducted separately. However, since cardiologists perform quantitative and qualitative analysis simultaneously when reading echocardiograms, an optimal CDSS needs to combine the two approaches. From this point of view, this study aims to develop and validate an AI-based CDSS for echocardiography through a large-scale retrospective cohort. Echocardiographic data of 2,600 patients who visited Seoul National University Hospital (1,300 cardiac patients and 1,300 non-cardiac patients with normal echocardiograms) between 2016 and 2021 were used. Two networks were developed, one each for the quantitative and qualitative analysis, and their usefulness was verified with the patient data. First, a U-net based deep learning network was developed for segmentation in the quantitative analysis. Images annotated by an experienced cardiologist with the left ventricle, interventricular septum, left ventricular posterior wall, right ventricle, aorta, and left atrium were used for training.
The diameters and areas of the six structures were obtained and vectorized from the segmentation images, and the frame indices of the end-systolic and end-diastolic phases were extracted from the vector. The second network, for the qualitative diagnosis, was a convolutional neural network (CNN) based on ResNet-152. Its input data were 10 frames per patient, extracted using the end-diastolic and end-systolic phase information from the quantitative network. The network not only classified the input data as normal or abnormal but also visualized the location of the abnormality on the image through Gradient-weighted Class Activation Mapping (Grad-CAM) at the last layer. The performance of the quantitative network in chamber size and function measurements was assessed in 1,300 patients. Sensitivity and specificity were both over 90%, except for pathologies related to the left ventricular posterior wall, interventricular septum, and aorta. End-systolic and end-diastolic phase detection was also accurate, differing from the cardiologist-selected frames by an average of 0.52 frames for the end-systolic and 0.9 frames for the end-diastolic phase. For the qualitative network, 10 input frames selected using the phase information determined by the first network were compared against 10 randomly selected frames. The resulting accuracies were 90.3% and 81.2%, respectively, showing that the phase information extracted by the first network improved the classification performance. The Grad-CAM results also confirmed that the network trained with the 10 phase-informed frames localized lesions more accurately than the network trained with 10 randomly selected frames.
In conclusion, this study proposed and validated an AI-based CDSS for echocardiography covering both quantitative and qualitative analysis.
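The end-systolic/end-diastolic phase detection described above reduces, in its simplest form, to locating the extrema of the per-frame left-ventricular area curve produced by the segmentation network. A minimal sketch, with a synthetic area curve standing in for real segmentation output (names are illustrative, not the thesis code):

```python
import numpy as np

def detect_phases(lv_area: np.ndarray) -> tuple[int, int]:
    """Return (end-systolic, end-diastolic) frame indices: the left
    ventricle is smallest at end-systole, largest at end-diastole."""
    return int(np.argmin(lv_area)), int(np.argmax(lv_area))

# Synthetic one-beat LV area curve (cm^2): largest at frame 0, smallest mid-cycle.
t = np.linspace(0, 2 * np.pi, 30)
lv_area = 40 + 15 * np.cos(t)
es_frame, ed_frame = detect_phases(lv_area)
print(es_frame, ed_frame)
```

On real data the area curve is noisy and spans several beats, so smoothing and per-beat peak picking would precede the argmin/argmax step; the reported 0.52/0.9-frame average errors suggest exactly this kind of area-extremum criterion tracks the cardiologist's frame choice closely.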

    Virtual Reality applied to biomedical engineering

    Virtual reality is currently trending and expanding into the medical field, enabling numerous applications designed to train physicians and treat patients more efficiently, as well as to optimize surgical planning processes. The medical need addressed by this project, and its objective, is to optimize the surgical planning process for congenital heart disease, which comprises the 3D reconstruction of the patient's heart and its integration into a virtual reality application. Along this line, a 3D modeling process for heart images, obtained thanks to Hospital Sant Joan de Déu, was combined with the design of the application in the Unity 3D software, in collaboration with the company VISYON. Improvements were achieved in the software used for segmentation and reconstruction, and basic functionalities were implemented in the application, such as importing, moving, rotating, and taking 3D screenshots of the cardiac organ, in order to better understand the heart disease to be treated. The result is an optimized process in which the 3D reconstruction is fast and accurate, the import method into the designed app is very simple, and the application allows attractive, intuitive interaction through an immersive and realistic experience that meets the efficiency and precision requirements demanded in the medical field.