27 research outputs found

    An Iterative Spanning Forest Framework for Superpixel Segmentation

    Full text link
    Superpixel segmentation has become an important research problem in image processing. In this paper, we propose an Iterative Spanning Forest (ISF) framework, based on sequences of Image Foresting Transforms, in which one can choose i) a seed sampling strategy, ii) a connectivity function, iii) an adjacency relation, and iv) a seed pixel recomputation procedure to generate improved sets of connected superpixels (supervoxels in 3D) per iteration. The superpixels in ISF structurally correspond to spanning trees rooted at those seeds. We present five ISF methods to illustrate different choices of its components. These methods are compared with state-of-the-art approaches in terms of effectiveness and efficiency. The experiments involve 2D and 3D datasets with distinct characteristics, as well as a high-level application, sky image segmentation. The theoretical properties of ISF are demonstrated in the supplementary material, and the results show that some of its methods are competitive with or superior to the best baselines in both effectiveness and efficiency.
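
    The ISF components are pluggable, which is easy to see in code. Below is a minimal Python sketch (illustrative only, not the authors' implementation) that fixes one concrete choice per component: grid seed sampling, an additive intensity-difference connectivity function, 4-adjacency, and centroid-based seed recomputation. All helper names are hypothetical.

```python
# Minimal ISF sketch on a 2D grayscale image; not the authors' code.
import heapq
import numpy as np

def ift_forest(img, seeds):
    """Image Foresting Transform: assign each pixel to the seed whose
    minimum-cost path reaches it first (Dijkstra on the pixel grid)."""
    h, w = img.shape
    cost = np.full((h, w), np.inf)
    label = np.full((h, w), -1, dtype=int)
    pq = []
    for i, (r, c) in enumerate(seeds):
        cost[r, c] = 0.0
        label[r, c] = i
        heapq.heappush(pq, (0.0, r, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > cost[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-adjacency
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                # additive intensity-difference connectivity function
                nd = d + abs(float(img[nr, nc]) - float(img[r, c]))
                if nd < cost[nr, nc]:
                    cost[nr, nc] = nd
                    label[nr, nc] = label[r, c]
                    heapq.heappush(pq, (nd, nr, nc))
    return label

def recompute_seeds(label, n_seeds):
    """Move each seed to the centroid of its superpixel (one possible
    recomputation procedure; the centroid of a non-convex region may
    fall outside it, which a sketch can tolerate)."""
    seeds = []
    for i in range(n_seeds):
        rs, cs = np.nonzero(label == i)
        if rs.size:  # a superpixel may vanish between iterations
            seeds.append((int(rs.mean()), int(cs.mean())))
    return seeds

def isf(img, n_iters=10, step=16):
    h, w = img.shape
    seeds = [(r, c) for r in range(step // 2, h, step)
                    for c in range(step // 2, w, step)]  # grid sampling
    for _ in range(n_iters):
        label = ift_forest(img, seeds)
        seeds = recompute_seeds(label, len(seeds))
    return label
```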

    RA-UNet: A hybrid deep attention-aware network to extract liver and tumor in CT scans

    Full text link
    Automatic extraction of the liver and tumors from CT volumes is a challenging task due to their heterogeneous and diffusive shapes. Recently, 2D and 3D deep convolutional neural networks have become popular in medical image segmentation tasks because large labeled datasets enable them to learn hierarchical features. However, 3D networks have drawbacks due to their high computational cost. In this paper, we propose a 3D hybrid residual attention-aware segmentation method, named RA-UNet, to precisely extract the liver volume of interest (VOI) and segment tumors from the liver VOI. The proposed network uses a 3D U-Net as its basic architecture, extracting contextual information by combining low-level feature maps with high-level ones. Attention modules are stacked so that the attention-aware features change adaptively as the network goes "very deep"; this is made possible by residual learning. This is the first work in which an attention residual mechanism is used to process medical volumetric images. We evaluated our framework on the public MICCAI 2017 Liver Tumor Segmentation dataset and the 3DIRCADb dataset. The results show that our architecture outperforms other state-of-the-art methods. We also extended RA-UNet to brain tumor segmentation on the BraTS2018 and BraTS2017 datasets, and the results indicate that RA-UNet achieves good performance on that task as well.
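
    A minimal PyTorch sketch of the residual-attention idea described above (illustrative only, not the authors' architecture): a trunk branch learns features, a soft mask branch learns attention weights in [0, 1], and the residual form (1 + mask) * trunk lets attention-aware features change adaptively while keeping gradients flowing in very deep stacks. Module names and shapes are assumptions.

```python
import torch
import torch.nn as nn

class ResidualAttention3D(nn.Module):
    """Toy 3D residual attention block in the spirit of RA-UNet."""
    def __init__(self, ch):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.BatchNorm3d(ch), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.BatchNorm3d(ch), nn.ReLU(inplace=True),
        )
        # Soft mask: downsample -> conv -> upsample -> sigmoid, so the
        # attention map matches the trunk feature shape.
        self.mask = nn.Sequential(
            nn.MaxPool3d(2),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False),
            nn.Conv3d(ch, ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        t = self.trunk(x)
        m = self.mask(x)
        return (1 + m) * t  # residual attention: identity-preserving modulation

x = torch.randn(1, 16, 32, 32, 32)  # (batch, channels, D, H, W)
print(ResidualAttention3D(16)(x).shape)  # torch.Size([1, 16, 32, 32, 32])
```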

    Liver segmentation using 3D CT scans.

    Get PDF
    Master of Science in Computer Science, University of KwaZulu-Natal, Durban, 2018. Abstract available in PDF file.

    Efficient extraction of semantic information from medical images in large datasets using random forests

    No full text
    Large datasets of unlabelled medical images are increasingly becoming available; however, only a small subset tends to be manually semantically labelled, since doing so for large datasets is tedious and extremely time-consuming. This thesis aims to tackle the problem of efficiently extracting semantic information, in the form of image segmentations and organ localisations, from large datasets of unlabelled medical images. To do so, we investigate the suitability of supervoxels and random classification forests for the task. The first contribution of this thesis is a novel method for efficiently estimating coarse correspondences between pairs of images that can handle difficult cases exhibiting large variations in fields of view. The proposed method adapts the random forest framework, a supervised learning algorithm, to work in an unsupervised manner by automatically generating labels for training via the use of supervoxels. The second contribution extends the first so that it can be applied efficiently to a large dataset of images. The resulting method is efficient and can be used to obtain correspondences between a large number of object-like supervoxels that are representative of organ structures in the images. The method is evaluated for the applications of organ-based image retrieval and weakly-supervised image segmentation using minimal user input. While the method does not match the segmentation accuracy of current fully-supervised state-of-the-art methods for all organs in an abdominal CT dataset, it provides a promising way to efficiently extract and parse a large dataset of medical images for further processing.
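
    The core trick, adapting a supervised forest to unlabelled data, can be sketched compactly. The toy example below assumes scikit-image's slic (channel_axis=None marks the volume as single-channel, scikit-image >= 0.19) and scikit-learn's RandomForestClassifier; the features and sizes are illustrative, not the thesis pipeline.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
volume = rng.random((32, 32, 32))  # stand-in for a CT volume

# 1) Supervoxels provide automatically generated labels (no manual input).
sv = slic(volume, n_segments=50, compactness=0.1, channel_axis=None)

# 2) Simple per-voxel features; a real system would use richer descriptors.
zz, yy, xx = np.meshgrid(*(np.arange(s) for s in volume.shape), indexing='ij')
X = np.stack([volume.ravel(), zz.ravel(), yy.ravel(), xx.ravel()], axis=1)
y = sv.ravel()

# 3) Train a forest to predict the supervoxel a voxel belongs to; voxels
# from different images that land in the same leaves become candidate
# correspondences.
forest = RandomForestClassifier(n_estimators=20, max_depth=8).fit(X, y)
```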

    GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications.

    Get PDF
    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate the biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization of organ motion, which, though computationally efficient, rules out application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated, discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
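
    The key substitution can be sketched compactly. Below is a minimal Python implementation of the guided filter of He et al., applied per displacement-field component with the fixed image as guidance, so smoothing stops at structural boundaries. This is an illustrative sketch, not the paper's code; in particular, the paper derives its guidance from supervoxels, which the sketch omits.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=7, eps=1e-3):
    """Edge-preserving guided filter: output is locally linear in the guide."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    a = (corr_gs - mean_g * mean_s) / (var_g + eps)  # local slope
    b = mean_s - a * mean_g                          # local offset
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def regularize_displacement(u, v, fixed_image):
    """Replace Gaussian smoothing of the Demons update (u, v) by guided
    filtering, with the (normalized) fixed image as guidance."""
    g = (fixed_image - fixed_image.min()) / (np.ptp(fixed_image) + 1e-9)
    return guided_filter(g, u), guided_filter(g, v)
```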

    The Liver Tumor Segmentation Benchmark (LiTS)

    Full text link
    In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors of varied sizes and appearances, with various lesion-to-background contrast levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and tested on 70 unseen volumes acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors across the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas for tumor segmentation the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis of liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection: the best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both the data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
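
    For reference, the benchmark's headline metric is the Dice score, 2|A ∩ B| / (|A| + |B|). A generic Python sketch on binary masks follows; this is not the LiTS evaluation code, which additionally performs per-lesion matching for the detection analysis.

```python
import numpy as np

def dice(pred, truth):
    """Dice score between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: conventionally a perfect match
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```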

    Combining Shape and Learning for Medical Image Analysis

    Get PDF
    Automatic methods with the ability to make accurate, fast, and robust assessments of medical images are in high demand in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to that of an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods succeed in meeting these requirements. The focus of this thesis is on two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is used as an important sub-routine in many image analysis tools, as well as in image fusion, disease progress tracking, and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities, namely pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound, and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.
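
    One of the shape-modelling techniques mentioned above, multi-atlas segmentation, reduces in its simplest form to majority-vote label fusion over atlas label maps already registered to the target image. A minimal illustrative sketch on toy data (not from the thesis):

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse registered atlas label maps: each voxel takes the most
    frequent label across atlases."""
    stacked = np.stack(atlas_labels)               # (n_atlases, *vol_shape)
    n_classes = stacked.max() + 1
    votes = np.zeros((n_classes,) + stacked.shape[1:], dtype=int)
    for lab in range(n_classes):
        votes[lab] = (stacked == lab).sum(axis=0)  # count votes per class
    return votes.argmax(axis=0)

# Example: three toy 2x2 "registered" atlas segmentations.
fused = majority_vote([np.array([[0, 1], [1, 1]]),
                       np.array([[0, 1], [0, 1]]),
                       np.array([[0, 0], [1, 1]])])
print(fused)  # [[0 1]
              #  [1 1]]
```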

    Fully Automatic Intervertebral Disc Segmentation Using Multimodal 3D U-Net

    Full text link
    Intervertebral discs (IVDs), small joints lying between adjacent vertebrae, play an important role in pressure buffering and tissue protection. The fully automatic localization and segmentation of IVDs have been discussed in the literature for many years, since they are crucial to spine disease diagnosis and provide quantitative parameters for treatment. Traditionally, hand-crafted features derived from image intensities and shape priors have been used to localize and segment IVDs. With the advance of deep learning, various neural network models have achieved great success in image analysis, including the recognition of intervertebral discs. In particular, U-Net stands out among other approaches due to its outstanding performance on biomedical images with relatively small training sets. This paper proposes a novel convolutional framework based on 3D U-Net to segment IVDs from multi-modality MR images. We first localize the centers of the intervertebral discs in each spine sample and then train the network on small cropped volumes centered at the localized discs. A comprehensive analysis of the results using various combinations of modalities is presented. Furthermore, experiments conducted with 2D and 3D U-Nets on augmented and non-augmented datasets are presented and compared in terms of Dice coefficient and Hausdorff distance. Our method proves effective, with a mean segmentation Dice coefficient of 89.0% and a standard deviation of 1.4%.
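
    The crop-around-centers preprocessing step lends itself to a compact sketch. The Python below (hypothetical shapes and names, not the paper's code) extracts a fixed-size patch around each localized disc centre, zero-padding where the patch extends past the volume border.

```python
import numpy as np

def crop_around(volume, center, size=(32, 64, 64)):
    """Extract a size-shaped patch centred at `center`, zero-padding
    where the patch extends past the volume border."""
    patch = np.zeros(size, dtype=volume.dtype)
    src, dst = [], []
    for c, s, dim in zip(center, size, volume.shape):
        lo = c - s // 2
        a, b = max(lo, 0), min(lo + s, dim)          # clipped source range
        src.append(slice(a, b))
        dst.append(slice(a - lo, a - lo + (b - a)))  # matching target range
    patch[tuple(dst)] = volume[tuple(src)]
    return patch

vol = np.random.rand(40, 256, 256).astype(np.float32)  # toy spine volume
print(crop_around(vol, center=(10, 120, 128)).shape)   # (32, 64, 64)
```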

    Automatic determination of liver density from computed tomography and ultra-low-dose computed tomography data

    Get PDF
    Objective. To evaluate the developed method for automatic liver density measurement from native ultra-low-dose and standard chest computed tomograms in which the upper abdomen falls within the scanned zone. Methods. A retrospective analysis of data from 10,000 patients who underwent ultra-low-dose computed tomography was performed. Of these, 100 patients who had additionally undergone standard computed tomography were selected. The average age of the patients was 62.5±12 years (M±sigma). Manual measurement of liver density was carried out in segments II, IV, and VII-VIII. In addition, splenic density was measured. Hepatic steatosis was considered reliable at a liver density of <40 HU, a liver-to-spleen density ratio (L/S) of <0.8-1.0, and a liver-spleen density difference of <10 HU. For the automatic procedure, a program for measuring liver density was developed, comprising segmentation followed by density measurement over the segmented region. Results. A small difference was revealed between the automatic and manual liver density measurements for standard computed tomography (51.43 vs. 50.37 HU, p=0.0192). For ultra-low-dose computed tomography the difference was slightly larger (54.90 vs. 55.60 HU, p=0.310). When assessing the difference between the compared methods for standard and ultra-low-dose computed tomography, no substantial difference was found (p=0.0035). Relative to the manual method, the automatic method detected more cases of reduced liver density, both for standard (10 vs. 6 cases, P(McNemar)=0.125) and ultra-low-dose tomograms (11 vs. 5 cases, P(McNemar)=0.0313). Agreement between the two methods was satisfactory for both scanning protocols (kappa 0.726 and 0.593). Conclusion. The good correlation between the manual and automatic methods for standard and ultra-low-dose computed tomography allows the automatic method to be used to analyze large amounts of data and detect hepatic steatosis.
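
    The automatic pipeline lends itself to a compact sketch: given a liver mask from the segmentation step (assumed available) and optionally a spleen mask, compute the mean density in HU and apply the study's steatosis criteria. The names and structure below are illustrative, not the published program.

```python
import numpy as np

def mean_hu(ct_volume, mask):
    """Mean density (HU) over a segmented region."""
    return float(ct_volume[mask].mean())

def steatosis_flags(ct_volume, liver_mask, spleen_mask=None):
    """Apply the study's criteria: liver <40 HU, liver-to-spleen ratio
    <0.8-1.0, liver-spleen difference <10 HU."""
    liver = mean_hu(ct_volume, liver_mask)
    flags = {"liver_hu": liver, "low_density": liver < 40.0}
    if spleen_mask is not None:
        spleen = mean_hu(ct_volume, spleen_mask)
        flags["ratio_low"] = liver / spleen < 0.8  # stricter end of 0.8-1.0
        flags["diff_low"] = (liver - spleen) < 10.0
    return flags
```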