Automatic Emphysema Detection using Weakly Labeled HRCT Lung Images
A method for automatically quantifying emphysema regions using
High-Resolution Computed Tomography (HRCT) scans of patients with chronic
obstructive pulmonary disease (COPD) that does not require manually annotated
scans for training is presented. HRCT scans of controls and of COPD patients
with diverse disease severity are acquired at two different centers. Textural
features from co-occurrence matrices and Gaussian filter banks are used to
characterize the lung parenchyma in the scans. Two robust versions of multiple
instance learning (MIL) classifiers, miSVM and MILES, are investigated. The
classifiers are trained with the weak labels extracted from the forced
expiratory volume in one second (FEV1) and diffusing capacity of the lungs
for carbon monoxide (DLCO). At test time, the classifiers output a patient
label indicating overall COPD diagnosis and local labels indicating the
presence of emphysema. The classifier performance is compared with manual
annotations by two radiologists, a classical density-based method, and
pulmonary function tests (PFTs). The miSVM classifier performed better than
MILES on both patient and emphysema classification. Its output correlates
more strongly with the PFTs than the density-based measure does, than the
percentage of emphysema in the intersection of both radiologists' annotations,
and than the percentage annotated by one of the radiologists; only the second
radiologist's annotations correlate better with the PFTs. The method is
therefore promising for facilitating assessment of emphysema and reducing
inter-observer variability.
Comment: Accepted at PLoS ONE
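The weak-labeling step described above can be sketched as follows. The FEV1/FVC < 0.70 cutoff is the standard GOLD airflow-obstruction criterion; the 80%-of-predicted DLCO cutoff is illustrative and not necessarily the thresholds the paper used.

```python
# Hedged sketch: derive weak patient-level labels from pulmonary function
# values instead of manual annotations. Thresholds are assumptions.
def weak_labels(fev1_fvc_ratio, dlco_percent_predicted):
    """Return (copd, emphysema) weak labels as 0/1 integers."""
    copd = fev1_fvc_ratio < 0.70                        # airflow obstruction
    emphysema = copd and dlco_percent_predicted < 80.0  # impaired diffusion
    return int(copd), int(emphysema)

# Each patient's bag of texture-feature instances inherits these labels;
# which lung regions actually show emphysema stays latent, which is
# exactly what the MIL classifiers (miSVM, MILES) resolve at training time.
severe_patient = weak_labels(0.55, 62.0)
control = weak_labels(0.82, 95.0)
```

In the MIL setting each patient is a bag of region-level feature vectors, so only these bag labels are needed for training.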
Optimization with artificial intelligence in additive manufacturing: a systematic review
In situations requiring high levels of customization and limited production volumes, additive manufacturing (AM) is a frequently used technique with several benefits. Producing final parts of the highest quality, however, requires qualified designers and experienced operators to configure all of the process parameters correctly. This research demonstrates how, in this scenario, artificial intelligence (AI) could significantly help designers and operators enhance additive manufacturing. To that end, 48 papers were selected from the research literature through a systematic review to assess what AI may bring to AM. This review aims to better understand the current state of AI methodologies for optimizing AM technologies, as well as potential future developments and applications of AI algorithms in AM. A detailed discussion shows that AI could increase the efficiency of the procedures associated with AM, from simulation optimization to in-process monitoring.
Image analysis for classification of dysplasia in Barrett’s esophagus using endoscopic optical coherence tomography
Barrett’s esophagus (BE) and associated adenocarcinoma have emerged as a major health care problem. Endoscopic optical coherence tomography (EOCT) is a microscopic sub-surface imaging technology that has been shown to differentiate tissue layers of the gastrointestinal wall and identify dysplasia in the mucosa, and is proposed as a surveillance tool to aid in management of BE. In this work a computer-aided diagnosis (CAD) system is demonstrated for classification of dysplasia in Barrett’s esophagus using EOCT. The system is composed of four modules: region of interest segmentation, dysplasia-related image feature extraction, feature selection, and site classification and validation. Multiple feature extraction and classification methods were evaluated, and the process of developing the CAD system is described in detail. The use of multiple EOCT images to classify a single site was also investigated. A total of 96 EOCT image-biopsy pairs (63 non-dysplastic, 26 low-grade and 7 high-grade dysplastic biopsy sites) from a previously described clinical study were analyzed using the CAD system, yielding an accuracy of 84% for classification of non-dysplastic vs. dysplastic BE tissue. The results motivate continued development of CAD to potentially enable EOCT surveillance of large surface areas of Barrett’s mucosa to identify dysplasia.
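The multi-image site classification mentioned above can be illustrated with a simple fusion rule. Majority voting over per-frame predictions is an assumption for illustration; the abstract does not state which fusion strategy the CAD system uses.

```python
# Hypothetical sketch: fuse per-frame EOCT predictions for one biopsy site.
from collections import Counter

def classify_site(frame_predictions):
    """Return the majority label among per-frame predictions for a site."""
    counts = Counter(frame_predictions)
    label, _ = counts.most_common(1)[0]
    return label

# A site imaged in three frames; two frames look dysplastic.
site_label = classify_site(["non-dysplastic", "dysplastic", "dysplastic"])
```

Fusing several frames per site trades a little extra imaging time for robustness against a single noisy frame.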
Computer aided detection of oral lesions on CT images
Oral lesions are important findings on computed tomography (CT) images. They are difficult to detect on CT because of low contrast, arbitrary orientation of objects, complicated topology, and the lack of clear lines delineating lesions. In this thesis, a fully automatic method to detect oral lesions in dental CT images is proposed, identifying (1) closed-boundary lesions and (2) bone-deformation lesions. Two algorithms were developed to recognize these two types, which cover most of the lesion types found on CT images. The results were validated on a dataset of 52 patients. On the non-training dataset, the closed-boundary lesion detection algorithm yielded 71% sensitivity with 0.31 false positives per patient, and the bone-deformation lesion detection algorithm achieved 100% sensitivity with 0.13 false positives per patient. The results suggest that the proposed framework has the potential to be used in a clinical context and to assist radiologists in diagnosis. --Abstract, page iv
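The two reported figures of merit, sensitivity and false positives per patient, can be computed as below. The per-patient counts in the example are invented for illustration.

```python
# Minimal sketch of lesion-detection evaluation: pooled sensitivity over
# all true lesions, and false positives averaged per patient.
def detection_metrics(per_patient):
    """per_patient: list of (true_pos, false_neg, false_pos) per patient."""
    tp = sum(p[0] for p in per_patient)
    fn = sum(p[1] for p in per_patient)
    fp = sum(p[2] for p in per_patient)
    sensitivity = tp / (tp + fn)          # fraction of true lesions found
    fp_per_patient = fp / len(per_patient)
    return sensitivity, fp_per_patient

# Two invented patients: (detected lesions, missed lesions, false alarms).
sens, fppp = detection_metrics([(2, 0, 1), (1, 1, 0)])
```

Reporting false positives per patient rather than per image matches how a radiologist would experience the system's alarm rate.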
Liver Segmentation and Liver Cancer Detection Based on Deep Convolutional Neural Network: A Brief Bibliometric Survey
Background: This study analyzes work on liver segmentation and liver cancer detection from 2012 to 2020, from the perspectives of machine learning, deep learning, and different image processing techniques, using several bibliometric analysis methods.
Methods: The articles on the topic were obtained from Scopus, one of the most popular databases, for the years 2012 to 2020. The Scopus analyzer facilitates analysis of the database by categories such as documents by source, year, and country. Analysis was also performed over units of analysis such as co-authorship, co-occurrence, and citation analysis, using VOSviewer version 1.6.15.
Results: A total of 518 articles on liver segmentation and liver cancer were obtained for the years 2012 to 2020. The statistical and network analyses show that the most articles were published in 2020, with China the largest contributor, followed by the United States and India.
Conclusions: The Scopus query yielded 518 articles, with English accounting for the largest number. Statistical analysis was performed over parameters such as authors, documents, country, and affiliation, and clearly indicates the potential of the topic; network analysis of these parameters was also performed. The findings also indicate substantial scope for further research on advanced algorithms in computer vision, deep learning, and machine learning.
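The co-occurrence analysis that tools such as VOSviewer perform can be sketched as counting how often keyword pairs appear together in the same article. The sample records below are invented.

```python
# Illustrative sketch of keyword co-occurrence counting for a corpus.
from itertools import combinations
from collections import Counter

def keyword_cooccurrence(articles):
    """articles: list of keyword lists; returns Counter of sorted pairs."""
    pairs = Counter()
    for keywords in articles:
        # Sort so each unordered pair is counted under one canonical key.
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

corpus = [
    ["liver segmentation", "deep learning", "CNN"],
    ["liver segmentation", "deep learning"],
]
cooc = keyword_cooccurrence(corpus)
```

VOSviewer builds its network maps from exactly this kind of pair-count matrix, with edge weights given by the counts.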
Partially Supervised Multi-Task Network for Single-View Dietary Assessment
Food volume estimation is an essential step in the pipeline of dietary
assessment and demands the precise depth estimation of the food surface and
table plane. Existing methods based on computer vision require either
multi-image input or additional depth maps, reducing convenience of
implementation and practical significance. Despite the recent advances in
unsupervised depth estimation from a single image, the achieved performance in
the case of large texture-less areas needs to be improved. In this paper, we
propose a network architecture that jointly performs geometric understanding
(i.e., depth prediction and 3D plane estimation) and semantic prediction on a
single food image, enabling a robust and accurate food volume estimation
regardless of the texture characteristics of the target plane. For the training
of the network, only monocular videos with semantic ground truth are required,
while the depth map and 3D plane ground truth are no longer needed.
Experimental results on two separate food image databases demonstrate that our
method performs robustly on texture-less scenarios and is superior to
unsupervised networks and structure-from-motion based approaches, while
achieving performance comparable to fully supervised methods.
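The geometric sub-task mentioned above, 3D plane estimation for the table, can be sketched as a least-squares fit of z = a*x + b*y + c to predicted 3D points. The solver below is a plain normal-equations implementation for illustration; the network itself would predict depth and semantics jointly, which is not shown.

```python
# Sketch: fit the table plane z = a*x + b*y + c to 3D points by least squares.
def fit_plane(points):
    """points: list of (x, y, z); returns (a, b, c) minimizing squared error."""
    # Accumulate the 3x3 normal equations M @ [a, b, c] = v.
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            v[i] += row[i] * z
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        v[col], v[pivot] = v[pivot], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for j in range(col, 3):
                M[r][j] -= f * M[col][j]
            v[r] -= f * v[col]
    # Back substitution.
    coeffs = [0.0] * 3
    for r in (2, 1, 0):
        s = v[r] - sum(M[r][j] * coeffs[j] for j in range(r + 1, 3))
        coeffs[r] = s / M[r][r]
    return tuple(coeffs)

# Points sampled exactly from z = 0.5*x - 0.25*y + 2.0.
pts = [(0, 0, 2.0), (1, 0, 2.5), (0, 1, 1.75), (1, 1, 2.25), (2, 1, 2.75)]
plane = fit_plane(pts)
```

A robust version would weight points by the semantic table mask before fitting, which is why joint semantic prediction helps on texture-less planes.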