5 research outputs found

    Artificial Intelligence and Echocardiography

    Artificial intelligence (AI) is evolving in the field of diagnostic medical imaging, including echocardiography. Although the dynamic nature of echocardiography presents challenges beyond those of static images from X-ray, computed tomography, magnetic resonance, and radioisotope imaging, AI has influenced all steps of echocardiography, from image acquisition to automatic measurement and interpretation. Considering that echocardiography is often affected by inter-observer variability and depends strongly on the operator's level of experience, AI could be extremely advantageous in minimizing observer variation and providing reproducible measures, enabling accurate diagnosis. Currently, most reported AI applications in echocardiographic measurement have focused on improved image acquisition and automation of repetitive and tedious tasks; however, the role of AI applications should not be limited to conventional processes. Rather, AI could provide clinically important insights from subtle and non-specific data, such as changes in myocardial texture in patients with myocardial disease. Recent initiatives to develop large echocardiographic databases can facilitate the development of AI applications. The ultimate goal of applying AI to echocardiography is automation of the entire process of echocardiogram analysis. Once automatic analysis becomes reliable, clinical echocardiography workflows will change radically. The human expert will not be replaced by AI but will remain in control of the overall diagnostic process, with significant support from AI systems to guide acquisition, perform measurements, and integrate and compare data on request.

    Automated multi-beat tissue Doppler echocardiography analysis using deep neural networks

    Tissue Doppler imaging is an essential echocardiographic technique for the non-invasive assessment of myocardial velocities. Image acquisition and interpretation are performed by trained operators who visually localise landmarks representing Doppler peak velocities. Current clinical guidelines recommend averaging measurements over several heartbeats. However, this manual process is both time-consuming and disruptive to workflow. An automated system for accurate beat isolation and landmark identification would therefore be highly desirable. A dataset of tissue Doppler images was annotated by three expert cardiologists, providing a gold standard and allowing for observer variability comparisons. Deep neural networks were trained for fully automated predictions on multiple heartbeats and tested on tissue Doppler strips of arbitrary length. Automated measurements of peak Doppler velocities show good Bland–Altman agreement with consensus expert values (average standard deviation of 0.40 cm/s), lower than the inter-observer variability (0.65 cm/s) and comparable to individual experts (standard deviations of 0.40 to 0.75 cm/s). Our approach allows over 26 times as many heartbeats to be analysed compared with a manual approach. The proposed automated models can accurately and reliably make measurements on tissue Doppler images spanning several heartbeats, with performance indistinguishable from that of human experts but with significantly shorter processing time.
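The Bland–Altman comparison described above can be sketched in a few lines; the velocity arrays below are hypothetical placeholders, not data from the study:

```python
import numpy as np

def bland_altman(a, b):
    """Return the bias (mean difference) and 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical peak velocities (cm/s): automated model vs. expert consensus
auto_vals   = [7.1, 8.4, 6.9, 9.2, 7.8]
expert_vals = [7.0, 8.6, 7.1, 9.0, 7.7]
bias, (lo, hi) = bland_altman(auto_vals, expert_vals)
```

A small bias with limits of agreement spanning zero indicates no systematic difference between the automated and expert measurements.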

    Multibeat echocardiographic phase detection using deep neural networks

    Background: Accurate identification of end-diastolic and end-systolic frames in echocardiographic cine loops is important, yet challenging, for human experts. Manual frame selection is subject to uncertainty, affecting crucial clinical measurements such as myocardial strain. Therefore, the ability to automatically detect frames of interest is highly desirable. Methods: We have developed deep neural networks, trained and tested on multi-centre patient data, for the accurate identification of end-diastolic and end-systolic frames in apical four-chamber 2D multibeat cine loop recordings of arbitrary length. Seven experienced cardiologist experts independently labelled the frames of interest, providing a reliable gold standard and allowing for observer variability measurements. Results: Compared with the ground truth, our model shows an average frame difference of −0.09 ± 1.10 and 0.11 ± 1.29 frames for end-diastolic and end-systolic frames, respectively. When applied to patient datasets from a different clinical site, to which the model was blind during its development, average frame differences of −1.34 ± 3.27 and −0.31 ± 3.37 frames were obtained for the two frames of interest. All detection errors fall within the range of inter-observer variability: [−0.87, −5.51] ± [2.29, 4.26] and [−0.97, −3.46] ± [3.67, 4.68] frames for end-diastolic and end-systolic events, respectively. Conclusions: The proposed automated model can identify multiple end-systolic and end-diastolic frames in echocardiographic videos of arbitrary length with performance indistinguishable from that of human experts, but with significantly shorter processing time.
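The frame-difference metric reported above (mean ± standard deviation of signed differences between predicted and ground-truth frame indices, where the sign exposes systematic early or late detection) can be sketched as follows; the index arrays are hypothetical, not data from the study:

```python
import numpy as np

def frame_difference_stats(predicted, ground_truth):
    """Signed frame differences (predicted - ground truth):
    the mean captures systematic early/late bias, the SD captures spread."""
    diff = np.asarray(predicted) - np.asarray(ground_truth)
    return diff.mean(), diff.std(ddof=1)

# Hypothetical end-diastolic frame indices for five cine loops
pred_ed = [12, 45, 78, 110, 143]
gt_ed   = [12, 46, 78, 111, 143]
mean_d, sd_d = frame_difference_stats(pred_ed, gt_ed)  # slight early bias
```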

    μž„μƒμ˜μ‚¬ κ²°μ • 지원 μ‹œμŠ€ν…œμ„ μœ„ν•œ 심측 신경망 기반의 μ‹¬μ΄ˆμŒνŒŒ μžλ™ν•΄μ„μ— κ΄€ν•œ 연ꡬ

    Doctoral dissertation, Seoul National University Graduate School, Interdisciplinary Program in Bioengineering, August 2022. Advisor: Hee Chan Kim. Echocardiography is an indispensable tool for cardiologists in the diagnosis of heart disease. With echocardiography, various structural abnormalities of the heart can be diagnosed quantitatively or qualitatively. Owing to its non-invasiveness, the use of echocardiography in the diagnosis of heart disease has continuously increased. Despite its growing role in cardiology practice, echocardiography requires experience in capturing images and knowledge in interpreting them. Moreover, in contrast to CT or MRI, important information can be missed if it is not obtained at the time of examination. Therefore, images should be obtained and interpreted simultaneously, or at least all obtained images should be audited by an experienced cardiologist before the patient leaves the examination booth. Because of these peculiar characteristics of echocardiography compared with CT or MRI, there has been persistent demand for a clinical decision support system (CDSS) for echocardiography. With the advance of artificial intelligence (AI), several studies on decision support systems for echocardiography have been conducted. These studies divide into two approaches: one is the quantitative approach, which segments the images and detects abnormalities in size and function.
The other is the qualitative approach, which detects abnormalities in morphology. Unfortunately, these two lines of research have mostly been conducted separately. However, since cardiologists perform quantitative and qualitative analysis simultaneously when reading echocardiograms, an optimal CDSS needs to combine the two approaches. From this point of view, this study aims to develop and validate an AI-based CDSS for echocardiography through a large-scale retrospective cohort. Echocardiographic data of 2,600 patients who visited Seoul National University Hospital between 2016 and 2021 (1,300 cardiac patients and 1,300 non-cardiac patients with normal echocardiograms) were used. Two networks were developed for the quantitative and qualitative analyses, and their usefulness was verified with the patient data. First, a U-net-based deep learning network was developed for segmentation in the quantitative analysis. Images annotated by an experienced cardiologist with the left ventricle, interventricular septum, left ventricular posterior wall, right ventricle, aorta, and left atrium were used for training. The diameters and areas of the six structures were obtained from the segmentation masks and vectorized, and the end-systolic and end-diastolic frame indices were extracted from these vectors. The second network, for the qualitative diagnosis, was a convolutional neural network (CNN) based on ResNet-152. Its input consisted of 10 frames per patient, selected using the end-diastolic and end-systolic phase information from the quantitative network. The network not only classified the input as normal or abnormal but also visualized the location of the abnormality on the image through Gradient-weighted Class Activation Mapping (Grad-CAM) at the last layer. The performance of the quantitative network in chamber size and function measurements was assessed in 1,300 patients.
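The Grad-CAM step can be sketched with its core formula alone (channel weights from globally averaged gradients, then a ReLU of the weighted feature-map sum); the feature maps and gradients below are random placeholders standing in for a real CNN's activations, not the thesis's trained network:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from conv feature maps and class-score gradients.
    feature_maps, gradients: arrays of shape (channels, H, W)."""
    weights = gradients.mean(axis=(1, 2))          # alpha_c: global-average-pooled grads
    cam = np.tensordot(weights, feature_maps, 1)   # weighted sum over channels
    cam = np.maximum(cam, 0)                       # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam /= cam.max()                           # normalize to [0, 1] for overlay
    return cam

rng = np.random.default_rng(0)
A = rng.random((8, 7, 7))     # placeholder conv activations
dA = rng.random((8, 7, 7))    # placeholder gradients of the class score
heatmap = grad_cam(A, dA)     # upsample and overlay on the echo frame in practice
```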
Sensitivity and specificity were both over 90%, except for pathologies related to the left ventricular posterior wall, interventricular septum, and aorta. End-systolic and end-diastolic phase detection was also accurate, with an average difference of 0.52 frames for the end-systolic and 0.9 frames for the end-diastolic phases. For the qualitative network, 10 input frames were selected based on the phase information from the first network, and the results were compared with those of 10 randomly selected frames. The accuracies were 90.3% and 81.2%, respectively, showing that the phase information extracted by the first network improved the classifier's performance. The Grad-CAM results also confirmed that the network trained on the 10 phase-informed frames localized lesions more accurately than the network trained on 10 randomly selected frames. In conclusion, this study proposed and validated an AI-based CDSS for echocardiography covering both quantitative and qualitative analysis.
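The quantitative network's phase extraction, picking end-diastolic and end-systolic frames from the per-frame chamber-area vector, can be sketched minimally as finding the extrema of the left-ventricular area curve; the synthetic cosine curve below is an illustration, not the thesis's actual vectorization:

```python
import numpy as np

def detect_phases(lv_area):
    """Pick ED/ES frame indices from a per-frame LV area curve:
    end-diastole = maximal area, end-systole = minimal area."""
    lv_area = np.asarray(lv_area, float)
    return int(lv_area.argmax()), int(lv_area.argmin())

# Synthetic single-beat area curve (cm^2): largest at ED, smallest at ES
t = np.linspace(0, 2 * np.pi, 31)
area = 35 + 15 * np.cos(t)          # maximum at frame 0, minimum mid-cycle
ed, es = detect_phases(area)
```

On real multibeat recordings this would be applied per beat, since each cardiac cycle contributes its own area maximum and minimum.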