21 research outputs found

    Automated Performance Assessment in Transoesophageal Echocardiography with Convolutional Neural Networks

    Get PDF
    Transoesophageal echocardiography (TEE) is a valuable diagnostic and monitoring imaging modality. Proper image acquisition is essential for diagnosis, yet current assessment techniques are based solely on manual expert review. This paper presents a supervised deep learning framework for automatically evaluating and grading the quality of TEE images. To obtain the necessary dataset, 38 participants of varied experience performed TEE exams on a high-fidelity virtual reality (VR) platform. Two Convolutional Neural Network (CNN) architectures, AlexNet and VGG, structured to perform regression, were fine-tuned and validated on images manually graded by three evaluators. Two different scoring strategies were used: a criteria-based percentage and an overall general impression. The developed CNN models estimate the average score with a root mean square accuracy ranging between 84% and 93%, indicating the ability to replicate expert evaluation. The proposed strategies for automated TEE assessment can have a significant impact on the training of new TEE operators, providing direct feedback and facilitating the development of the necessary dexterous skills.
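
    As a rough illustration of the regression set-up this abstract describes, the sketch below fine-tunes a VGG backbone to predict a single quality score and reports the root-mean-square error; the framework (PyTorch), score range and hyperparameters are assumptions for the example, not the authors' configuration.

```python
# Hedged sketch, not the authors' code: fine-tuning a VGG backbone to regress a
# single image-quality score from a TEE frame, with RMSE as the reported metric.
import torch
import torch.nn as nn
from torchvision import models

def build_quality_regressor() -> nn.Module:
    # ImageNet-pretrained weights would normally be loaded here; omitted to keep
    # the sketch self-contained and download-free.
    vgg = models.vgg16(weights=None)
    vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, 1)  # one score output
    return vgg

model = build_quality_regressor()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on placeholder data standing in for graded frames.
frames = torch.randn(8, 3, 224, 224)   # batch of TEE frames
scores = torch.rand(8, 1)              # expert grades scaled to [0, 1]
pred = model(frames)
loss = criterion(pred, scores)
optimizer.zero_grad()
loss.backward()
optimizer.step()

rmse = torch.sqrt(torch.mean((pred.detach() - scores) ** 2))  # root-mean-square error
print(f"RMSE on this batch: {rmse.item():.3f}")
```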

    Computer Vision in the Surgical Operating Room

    Get PDF
    Background: Multiple types of surgical cameras are used in modern surgical practice and provide a rich visual signal that surgeons use to visualize the clinical site and make clinical decisions. This signal can also be used by artificial intelligence (AI) methods to support the identification of instruments, structures, or activities, both in real time during procedures and postoperatively for analytics and understanding of surgical processes. Summary: In this paper, we provide a succinct perspective on the use of AI, and especially computer vision, to power solutions for the surgical operating room (OR). The synergy between data availability and technical advances in computational power and AI methodology has led to rapid developments in the field and promising advances. Key Messages: With the increasing availability of surgical video sources and the convergence of technologies around video storage, processing, and understanding, we believe clinical solutions and products leveraging vision are going to become an important component of modern surgical capabilities. However, both technical and clinical challenges remain to be overcome before vision-based approaches can be used efficiently in the clinic.

    ์ž„์ƒ์˜์‚ฌ ๊ฒฐ์ • ์ง€์› ์‹œ์Šคํ…œ์„ ์œ„ํ•œ ์‹ฌ์ธต ์‹ ๊ฒฝ๋ง ๊ธฐ๋ฐ˜์˜ ์‹ฌ์ดˆ์ŒํŒŒ ์ž๋™ํ•ด์„์— ๊ด€ํ•œ ์—ฐ๊ตฌ

    Get PDF
    Doctoral thesis -- Seoul National University Graduate School: Interdisciplinary Program in Bioengineering, College of Engineering, August 2022. Advisor: Hee Chan Kim. Echocardiography is an indispensable tool for cardiologists in the diagnosis of heart diseases. With echocardiography, various structural abnormalities of the heart can be diagnosed quantitatively or qualitatively. Owing to its non-invasiveness, the use of echocardiography in the diagnosis of heart disease has continuously increased. Despite its growing role in cardiology practice, echocardiography requires experience in capturing images and knowledge in interpreting them. Moreover, in contrast to CT or MRI, important information can be missed if it is not obtained at the time of examination. Therefore, obtaining and interpreting images should be done simultaneously, or at least all obtained images should be audited by an experienced cardiologist before the patient is released from the examination booth. Because of these characteristics of echocardiography compared with CT or MRI, there has been incessant demand for a clinical decision support system (CDSS) for echocardiography, and with the advance of artificial intelligence (AI) several studies on such systems have been conducted. These studies follow two approaches: a quantitative approach that segments the images and detects abnormalities in size and function, and a qualitative approach that detects abnormalities in morphology. Unfortunately, these two lines of work have mostly been conducted separately. However, since cardiologists perform quantitative and qualitative analysis simultaneously when reading echocardiograms, an optimal CDSS needs to combine the two approaches. From this point of view, this study aims to develop and validate an AI-based CDSS for echocardiography through a large-scale retrospective cohort. Echocardiographic data of 2,600 patients who visited Seoul National University Hospital between 2016 and 2021 (1,300 cardiac patients and 1,300 non-cardiac patients with normal echocardiograms) were used. Two networks were developed, one for quantitative and one for qualitative analysis, and their usefulness was verified with the patient data. First, a U-net-based deep learning network was developed for segmentation in the quantitative analysis. Images annotated by an experienced cardiologist with the left ventricle, interventricular septum, left ventricular posterior wall, right ventricle, aorta, and left atrium were used for training. The diameters and areas of the six structures were obtained and vectorized from the segmentation masks, and the end-systolic and end-diastolic frames were extracted from these vectors. The second network, for qualitative diagnosis, was a convolutional neural network (CNN) based on ResNet-152. Its input consisted of 10 frames per patient, selected using the end-diastolic and end-systolic phase information extracted by the quantitative network. The network not only classified the input as normal or abnormal but also visualized the location of the abnormality on the image using Gradient-weighted Class Activation Mapping (Grad-CAM) at the last layer. The performance of the quantitative network in chamber size and function measurements was assessed in 1,300 patients. Sensitivity and specificity were both over 90%, except for pathologies related to the left ventricular posterior wall, interventricular septum, and aorta. End-systolic and end-diastolic phase detection was also accurate, with an average difference from the cardiologist-selected frames of 0.52 frames for end-systole and 0.9 frames for end-diastole. For the qualitative network, 10 input frames were selected based on the phase information from the first network, and the results were compared with those obtained from 10 randomly selected frames; the accuracies were 90.3% and 81.2%, respectively, showing that the phase information extracted by the quantitative network improved the classification performance. The Grad-CAM results also confirmed that the network trained on the 10 phase-selected frames localized lesions more accurately than the network trained on 10 randomly selected frames. In conclusion, this study developed an AI-based CDSS for echocardiography covering both quantitative and qualitative analysis and verified its feasibility.
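
    The thesis couples U-net segmentation with phase selection before classification. As a rough, assumption-laden illustration of that phase-selection step (names, data and the area-based rule are illustrative, not the thesis code), one might proceed as follows:

```python
# Hedged sketch: derive a per-frame left-ventricular area from segmentation masks,
# then take the frame with the largest area as end-diastole and the smallest as
# end-systole. Synthetic data stands in for real segmentation output.
import numpy as np

def lv_area_per_frame(masks: np.ndarray, pixel_area_mm2: float = 1.0) -> np.ndarray:
    """masks: (num_frames, H, W) binary LV masks -> area per frame in mm^2."""
    return masks.reshape(masks.shape[0], -1).sum(axis=1) * pixel_area_mm2

def detect_ed_es(areas: np.ndarray) -> tuple:
    ed_frame = int(np.argmax(areas))  # end-diastole: ventricle at its largest
    es_frame = int(np.argmin(areas))  # end-systole: ventricle at its smallest
    return ed_frame, es_frame

# In practice the area curve would come from lv_area_per_frame applied to the
# U-net masks; here a sinusoid over 30 frames mimics one cardiac cycle.
frames = np.arange(30)
areas = 2000 + 800 * np.cos(2 * np.pi * frames / 30)  # placeholder mm^2 curve
ed, es = detect_ed_es(areas)
print(f"end-diastolic frame: {ed}, end-systolic frame: {es}")
```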

    Artificial intelligence and echocardiography

    Get PDF
    Echocardiography plays a crucial role in the diagnosis and management of cardiovascular disease. However, interpretation remains largely reliant on the subjective expertise of the operator, and inter-operator variability and differences in experience can lead to incorrect diagnoses. Artificial intelligence (AI) technologies provide new possibilities for echocardiography to generate accurate, consistent and automated interpretation of echocardiograms, thus potentially reducing the risk of human error. In this review, we discuss a subfield of AI relevant to image interpretation, called machine learning, and its potential to enhance the diagnostic performance of echocardiography. We discuss recent applications of these methods and future directions for AI-assisted interpretation of echocardiograms. The research suggests it is feasible to apply machine learning models to provide rapid, highly accurate and consistent assessment of echocardiograms, comparable to clinicians. These algorithms can accurately quantify a wide range of features, such as the severity of valvular heart disease or the ischaemic burden in patients with coronary artery disease. However, such applications are still in their infancy within the field of echocardiography, and research to refine the methods and validate their use for automation, quantification and diagnosis is in progress. Widespread adoption of robust AI tools in clinical echocardiography practice should follow and has the potential to deliver significant benefits for patient outcomes.

    Automated assessment of echocardiographic image quality using deep convolutional neural networks

    Get PDF
    Myocardial ischemia tops the list of causes of death around the globe, and its diagnosis and early detection rely on clinical echocardiography. Although echocardiography offers the huge advantage of non-intrusive, low-cost point-of-care diagnosis, its image quality is inherently subjective, with a strong dependence on the operator's experience level and acquisition skill. In some countries, echo specialists are required to complete supplementary years of training to achieve 'gold standard' free-hand acquisition skill, without which the reliability of the echocardiogram suffers and the possibility of misdiagnosis increases. These drawbacks pose significant challenges to adopting echocardiography as an authoritative modality for cardiac diagnosis. The prevailing and currently adopted solution is manual quality evaluation, in which an echocardiography specialist visually inspects several acquired images to make clinical decisions about their perceived quality and prognostic value. This is a lengthy process, subject to variability of opinion, which consequently affects diagnostic responses. The goal of this research is to provide a multi-disciplinary, state-of-the-art solution that allows objective quality assessment of echocardiograms and guarantees the reliability of clinical quantification processes. Computer graphics processing unit simulations, medical image analysis and deep convolutional neural network models were employed to achieve this goal. From a finite pool of echocardiographic patient datasets, 1,650 random samples of echocardiogram cine-loops from different patients aged between 17 and 85 years, who had undergone echocardiography between 2010 and 2020, were evaluated. We defined a set of pathological and anatomical criteria of image quality by which apical four-chamber and parasternal long-axis frames can be evaluated, with feasibility for real-time optimization. The selected samples were annotated for multivariate model development and validation of the predicted quality score per frame. The outcome is a robust artificial intelligence algorithm that indicates each frame's quality rating, visualises the elements of quality in real time, and updates quality optimization in real time. Prediction errors of 0.052, 0.062, 0.069 and 0.056 were achieved for the visibility, clarity, depth-gain and foreshortening attributes, respectively. The model achieved a combined error rate of 3.6% with an average prediction speed of 4.24 ms per frame. The novel method establishes a superior approach to two-dimensional image quality estimation, assessment, and clinical adequacy at acquisition of the echocardiogram, prior to quantification and diagnosis of myocardial infarction.
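
    To make the multi-attribute scoring concrete, here is a hedged sketch of a CNN with one regression output per quality attribute and a per-attribute prediction error on placeholder data; the architecture and score range are assumptions, not the model developed in this work.

```python
# Illustrative sketch only: a small CNN with a four-output regression head for
# the quality attributes named above, plus the per-attribute prediction error.
import torch
import torch.nn as nn

ATTRIBUTES = ["visibility", "clarity", "depth-gain", "foreshortening"]

class QualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, len(ATTRIBUTES))

    def forward(self, x):
        # One score per attribute, squashed to [0, 1].
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

model = QualityNet()
frames = torch.randn(4, 1, 224, 224)       # placeholder echo frames
targets = torch.rand(4, len(ATTRIBUTES))   # placeholder annotated scores
pred = model(frames)
per_attribute_error = (pred - targets).abs().mean(dim=0)  # mean absolute error
for name, err in zip(ATTRIBUTES, per_attribute_error.tolist()):
    print(f"{name}: {err:.3f}")
```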

    Medical Image Analysis on Left Atrial LGE MRI for Atrial Fibrillation Studies: A Review

    Full text link
    Late gadolinium enhancement magnetic resonance imaging (LGE MRI) is commonly used to visualize and quantify left atrial (LA) scars. The position and extent of scars provide important information on the pathophysiology and progression of atrial fibrillation (AF). Hence, LA scar segmentation and quantification from LGE MRI can be useful in computer-assisted diagnosis and treatment stratification of AF patients. Since manual delineation can be time-consuming and subject to intra- and inter-expert variability, automating this computation is highly desired, yet it remains challenging and under-researched. This paper aims to provide a systematic review of computing methods for LA cavity, wall, scar and ablation gap segmentation and quantification from LGE MRI, and of the related literature for AF studies. Specifically, we first summarize AF-related imaging techniques, particularly LGE MRI. Then, we review the methodologies of the four computing tasks in detail and summarize the validation strategies applied in each task. Finally, possible future developments are outlined, with a brief survey of the potential clinical applications of the aforementioned methods. The review shows that research into this topic is still in its early stages. Although several methods have been proposed, especially for LA segmentation, there is still large scope for further algorithmic development due to performance issues related to the high variability of enhancement appearance and differences in image acquisition.
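
    Many of the validation strategies surveyed for these segmentation tasks rest on overlap measures such as the Dice score; a minimal, generic sketch (not tied to any specific method in the review) is:

```python
# Minimal sketch of the Dice overlap score typically used to validate LA cavity,
# wall and scar segmentations against expert delineations.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy example: two overlapping square "scar" masks on a 100x100 grid.
a = np.zeros((100, 100), dtype=np.uint8); a[20:60, 20:60] = 1
b = np.zeros((100, 100), dtype=np.uint8); b[30:70, 30:70] = 1
print(f"Dice: {dice_score(a, b):.2f}")  # about 0.56 for this overlap
```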

    A computer vision pipeline for fully automated echocardiogram interpretation

    Get PDF
    Cardiovascular disease is the leading cause of global mortality and continues to place a significant burden, in economic and resource terms, upon health services. A two-dimensional transthoracic echocardiogram captures images and videos of the heart with high spatial and temporal resolution and is the modality of choice for the rapid assessment of heart function and structure due to its non-invasive nature and lack of ionising radiation. The challenging process of analysing echocardiographic images is currently performed manually by trained experts, yet this process is vulnerable to intra- and inter-observer variability and is highly time-consuming. Additionally, echocardiographic images suffer from varying degrees of noise and vary drastically in image quality. Exponential advances in the fields of artificial intelligence, deep learning and computer vision have enabled the rapid development of automated systems capable of high-precision tasks, often out-performing human experts. This thesis investigates the applicability of deep learning methods to automate key processes in the modern echocardiographic laboratory: view classification, quality assessment, cardiac phase detection, segmentation of the left ventricle, and keypoint detection on tissue Doppler imaging strips. State-of-the-art deep learning architectures were applied to each task and evaluated against ground-truth annotations provided by trained experts. The datasets used throughout each chapter are diverse and, in some cases, have been made public for the benefit of the research community. To encourage transparency and openness, all code and model weights have been published. Should automated deep learning systems, both online (providing real-time feedback) and offline (behind the scenes), become integrated within clinical practice, there is great potential for improved accuracy and efficiency, and thus improved patient outcomes. Furthermore, health services could save valuable resources such as time and money.
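
    As a hedged illustration of one of the listed tasks, the sketch below shows a generic view-classification inference step; the backbone, label set and preprocessing are assumptions, not the configuration published with the thesis.

```python
# Hedged sketch of an echocardiographic view-classification stage: a CNN maps a
# pre-processed frame to a probability over candidate views.
import torch
import torch.nn as nn
from torchvision import models

VIEWS = ["A2C", "A3C", "A4C", "A5C", "PLAX", "PSAX"]  # example view labels

model = models.resnet18(weights=None)  # pretrained weights would be used in practice
model.fc = nn.Linear(model.fc.in_features, len(VIEWS))
model.eval()

frame = torch.randn(1, 3, 224, 224)    # placeholder pre-processed echo frame
with torch.no_grad():
    probs = torch.softmax(model(frame), dim=1)
predicted_view = VIEWS[int(probs.argmax(dim=1))]
print(predicted_view, probs.max().item())
```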

    Multi-modality cardiac image computing: a survey

    Get PDF
    Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases. It allows a combination of complementary anatomical, morphological and functional information, increases diagnostic accuracy, and improves the efficacy of cardiovascular interventions and clinical outcomes. Fully automated processing and quantitative analysis of multi-modality cardiac images could have a direct impact on clinical research and evidence-based patient management. However, these require overcoming significant challenges, including inter-modality misalignment and finding optimal methods to integrate information from different modalities. This paper aims to provide a comprehensive review of multi-modality imaging in cardiology, the computing methods, the validation strategies, the related clinical workflows and future perspectives. For the computing methodologies, we focus on three tasks, i.e., registration, fusion and segmentation, which generally involve multi-modality imaging data, either combining information from different modalities or transferring information across modalities. The review highlights that multi-modality cardiac imaging data have the potential for wide applicability in the clinic, such as trans-aortic valve implantation guidance, myocardial viability assessment, and catheter ablation therapy and its patient selection. Nevertheless, many challenges remain unsolved, such as missing modalities, modality selection, combination of imaging and non-imaging data, and uniform analysis and representation of different modalities. There is also work to do in defining how well-developed techniques fit into clinical workflows and how much additional and relevant information they introduce. These problems are likely to remain an active field of research and among the questions to be answered in the future.
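
    To make the registration task more concrete, the sketch below computes mutual information, the similarity measure underlying many multi-modality registration methods; it is a numpy-only toy under stated assumptions, not a pipeline from the survey.

```python
# Illustrative sketch: mutual information between two images, the criterion a
# multi-modality registration loop would maximize over candidate transforms.
import numpy as np

def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image B
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy data loosely standing in for an aligned MRI/CT slice pair.
rng = np.random.default_rng(0)
mri = rng.random((128, 128))
ct = 0.7 * mri + 0.3 * rng.random((128, 128))
print(f"MI: {mutual_information(mri, ct):.3f}")
```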