Possible strategies for use of artificial intelligence in screen-reading of mammograms, based on retrospective data from 122,969 screening examinations
Objectives: Artificial intelligence (AI) has shown promising results when used on retrospective data from mammographic screening. However, few studies have explored the possible consequences of different strategies for combining AI and radiologists in screen-reading.
Methods: A total of 122,969 digital screening examinations performed between 2009 and 2018 in BreastScreen Norway were retrospectively processed by an AI system, which scored the examinations from 1 to 10; 1 indicated low suspicion of malignancy and 10 high suspicion. Results were merged with information about screening outcome and used to explore consensus, recall, and cancer detection for 11 different scenarios of combining AI and radiologists.
Results: Recall was 3.2%, screen-detected cancer 0.61%, and interval cancer 0.17% after independent double reading; these served as reference values. In a scenario where examinations with AI scores 1–5 were considered negative and scores 6–10 resulted in standard independent double reading, the estimated recall was 2.6% and screen-detected cancer 0.60%. When scores 1–9 were considered negative and score 10 was double read, recall was 1.2% and screen-detected cancer 0.53%. In these two scenarios, the potential rates of screen-detected cancer could be up to 0.63% and 0.56%, respectively, if the interval cancers selected for consensus were detected at screening. The former scenario would reduce the screen-reading volume by 50%, while the latter would reduce it by 90%.
Conclusion: Several theoretical scenarios combining AI and radiologists have the potential to reduce the screen-reading volume without substantially affecting cancer detection. Possible influence on recall and interval cancers must be evaluated in prospective studies.
Key Points: Different scenarios using artificial intelligence in combination with radiologists could reduce the screen-reading volume by 50% and result in a rate of screen-detected cancer ranging from 0.59% to 0.60%, compared to 0.61% after standard independent double reading. The use of artificial intelligence in combination with radiologists has the potential to identify negative screening examinations with high precision in mammographic screening and to reduce the rate of interval cancer.
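The triage rule behind these scenarios is a simple score threshold: exams at or below the cutoff are treated as negative, the rest go to standard double reading. A minimal sketch, assuming a uniform score distribution (one decile per score bin; this hypothetical cohort is illustrative, not the study's data):

```python
# Sketch of AI-score triage in screen-reading (illustrative, not the study's data).
# Examinations score 1..10; scores <= threshold are considered negative,
# scores above it go to standard independent double reading.

def triage(scores, threshold):
    """Split exams into AI-negative and double-read groups by score threshold."""
    negative = [s for s in scores if s <= threshold]
    double_read = [s for s in scores if s > threshold]
    return negative, double_read

def reading_volume_reduction(scores, threshold):
    """Fraction of screen-reading volume saved versus double reading everything."""
    negative, _ = triage(scores, threshold)
    return len(negative) / len(scores)

# Hypothetical cohort: 100 exams, 10 per score bin.
scores = list(range(1, 11)) * 10
print(reading_volume_reduction(scores, 5))  # scores 1-5 negative -> 0.5
print(reading_volume_reduction(scores, 9))  # scores 1-9 negative -> 0.9
```

Under this uniform-decile assumption the two thresholds reproduce the 50% and 90% volume reductions quoted in the abstract.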
Synthetic geomechanical logs and distributions for Marcellus Shale
The intent of this study is to generate synthetic geomechanical logs for a specific Marcellus Shale asset using Artificial Intelligence and Data Mining technology. Geomechanical distributions (map and volume) for the entire Marcellus Shale asset were completed. To accomplish these objectives, conventional well logs such as Gamma Ray and Bulk Density are used to build data-driven models. The data-driven technique used in this study is applicable to other shale reservoirs.
Successful recovery of hydrocarbons from reservoirs, notably shale, is attributed to understanding the key fundamentals of reservoir rock properties. Having adequate information regarding the variable lithology and mineralogy is crucial in order to identify the right pay-zone intervals for shale gas production. In addition, the contribution of the mechanical properties (principal stress profiles) of shale to hydraulic fracturing strategies is a well-understood concept; these properties may also contribute to better, more accurate simulation models of production from shale gas reservoirs.
In this study, synthetic geomechanical logs (including properties such as Poisson's Ratio, Total Minimum Horizontal Stress, and Bulk and Shear Modulus) are developed for more than 50 Marcellus Shale wells. Using Artificial Intelligence and Data Mining (AI&DM), data-driven models are developed that are capable of generating synthetic geomechanical logs from conventional logs such as Gamma Ray and Density Porosity. The data-driven models are validated using wells with actual geomechanical logs that were removed from the database to serve as blind validation wells.
In addition, having access to the data needed to build geomechanical distribution (map and volume) models can assist in understanding rock mechanical behavior and, consequently, in creating effective hydraulic fractures, which is an essential step in the economical development of shale assets. Moreover, running geomechanical logs on only a subset of wells while being able to generate logs of similar quality for all the existing wells in a shale asset can prove to be a sound reservoir management tool for better reservoir characterization, modeling, and efficient production of the Marcellus Shale reservoir.
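The core idea above (fit a model on wells with measured geomechanical logs, then predict a synthetic log for wells that have only conventional logs) can be sketched with a simple single-feature linear fit. This is a toy stand-in for the study's AI&DM models, and the gamma-ray and Poisson's-ratio values are made up for illustration:

```python
# Minimal sketch of generating a synthetic log from a conventional log.
# A real workflow would use richer data-driven models (the study's AI&DM
# approach); this toy uses one input feature and a least-squares line.

def fit_line(x, y):
    """Least-squares fit of y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# "Training" wells: gamma-ray readings vs. measured Poisson's ratio (hypothetical).
gr = [60.0, 80.0, 100.0, 120.0, 140.0]
pr = [0.20, 0.22, 0.24, 0.26, 0.28]
a, b = fit_line(gr, pr)

# Blind-validation step: predict where no geomechanical log was run,
# then compare against the held-out measured log.
synthetic = [a * g + b for g in [70.0, 110.0]]
print(synthetic)  # [0.21, 0.25]
```

Holding some logged wells out of the fit, as the study does, is what turns this from curve-fitting into a validated prediction tool.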
Automated CT and MRI Liver Segmentation and Biometry Using a Generalized Convolutional Neural Network.
Purpose: To assess the feasibility of training a convolutional neural network (CNN) to automate liver segmentation across different imaging modalities and techniques used in clinical practice, and to apply this to enable automation of liver biometry.
Methods: We trained a 2D U-Net CNN for liver segmentation in two stages using 330 abdominal MRI and CT exams acquired at our institution. First, we trained the neural network with non-contrast multi-echo spoiled gradient-echo (SPGR) images from 300 MRI exams to provide multiple signal weightings. Then, we used transfer learning to generalize the CNN with additional images from 30 contrast-enhanced MRI and CT exams. We assessed the performance of the CNN using a distinct multi-institutional data set curated from multiple sources (n = 498 subjects). Segmentation accuracy was evaluated by computing Dice scores. Utilizing these segmentations, we computed liver volume from CT and T1-weighted (T1w) MRI exams, and estimated hepatic proton-density fat fraction (PDFF) from multi-echo T2*w MRI exams. We compared quantitative volumetry and PDFF estimates between automated and manual segmentation using Pearson correlation and Bland-Altman statistics.
Results: Dice scores were 0.94 ± 0.06 for CT (n = 230), 0.95 ± 0.03 for T1w MR (n = 100), and 0.92 ± 0.05 for T2*w MR (n = 169). Liver volume measured by manual and automated segmentation agreed closely for CT (95% limits of agreement (LoA) = [-298 mL, 180 mL]) and T1w MR (LoA = [-358 mL, 180 mL]). Hepatic PDFF measured by the two segmentations also agreed closely (LoA = [-0.62%, 0.80%]).
Conclusions: Utilizing a transfer-learning strategy, we have demonstrated the feasibility of generalizing a CNN to perform liver segmentation across different imaging techniques and modalities. With further refinement and validation, CNNs may have broad applicability for multimodal liver volumetry and hepatic tissue characterization.
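The Dice score reported above measures the overlap between a predicted and a reference segmentation. A minimal sketch on toy binary masks (the masks below are made up; the study computes this over 3D liver segmentations):

```python
# Dice similarity coefficient between two binary segmentation masks,
# the accuracy metric used in the abstract (toy 1-D masks for illustration).

def dice(pred, ref):
    """Dice = 2*|A ∩ B| / (|A| + |B|) over flattened binary masks."""
    inter = sum(1 for p, r in zip(pred, ref) if p and r)
    total = sum(pred) + sum(ref)
    return 2 * inter / total if total else 1.0  # both empty -> perfect agreement

pred = [1, 1, 1, 0, 0, 1, 0, 0]
ref  = [1, 1, 0, 0, 1, 1, 0, 0]
print(dice(pred, ref))  # 0.75
```

A Dice of 0.94-0.95, as reported for CT and T1w MR, indicates near-complete voxel-level agreement with the manual reference.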
3D medical volume segmentation using hybrid multiresolution statistical approaches
This article is available through the Brunel Open Access Publishing Fund. Copyright © 2010 S. AlZu'bi and A. Amira. 3D volume segmentation is the process of partitioning voxels into 3D regions (sub-volumes) that represent meaningful physical entities, making them easier to analyze and usable in future applications. Multiresolution Analysis (MRA) enables the representation of an image at different levels of resolution or blurring; because of this multiresolution quality, wavelets have been deployed in image compression, denoising, and classification. This paper focuses on the implementation of efficient medical volume segmentation techniques. Multiresolution analysis, including the 3D wavelet and ridgelet transforms, has been used for feature extraction, which can be modeled using Hidden Markov Models (HMMs) to segment the volume slices. A comparison study evaluating 2D and 3D techniques reveals that 3D methodologies can accurately detect the Region Of Interest (ROI). Automatic segmentation has been achieved using HMMs, where the ROI is detected accurately but suffers from long computation times.
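The multiresolution decomposition underlying the wavelet features mentioned above can be illustrated with a single Haar step, which splits a signal into a coarse approximation and the detail lost by coarsening. A minimal 1-D sketch (the paper works with 3D wavelet and ridgelet transforms, not this toy):

```python
# One level of a Haar wavelet decomposition: the simplest multiresolution
# analysis step. Approximation = pairwise averages (low-pass),
# detail = pairwise half-differences (high-pass).

def haar_step(signal):
    """Decompose an even-length signal into approximation and detail halves."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

a, d = haar_step([4.0, 2.0, 5.0, 5.0])
print(a, d)  # [3.0, 5.0] [1.0, 0.0]
```

Applying `haar_step` recursively to the approximation yields the coarser resolution levels that MRA preserves; the resulting coefficients are the kind of features the paper feeds to its HMMs.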
Multi-layer Architecture For Storing Visual Data Based on WCF and Microsoft SQL Server Database
In this paper we present a novel architecture for storing visual data.
Effective storing, browsing and searching collections of images is one of the
most important challenges of computer science. The design of architecture for
storing such data requires a set of tools and frameworks such as SQL database
management systems and service-oriented frameworks. The proposed solution is
based on a multi-layer architecture, which allows any component to be replaced
without recompiling the others. The approach contains five
components, i.e. Model, Base Engine, Concrete Engine, CBIR service and
Presentation. They were based on two well-known design patterns: Dependency
Injection and Inversion of Control. For experimental purposes we implemented the
SURF local interest point detector as a feature extractor and k-means
clustering as an indexer. The presented architecture is intended for content-based
retrieval systems simulation purposes as well as for real-world CBIR tasks.
Comment: Accepted for the 14th International Conference on Artificial Intelligence and Soft Computing, ICAISC, June 14-18, 2015, Zakopane, Poland.
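The k-means indexing step mentioned above clusters local feature descriptors into "visual words" for retrieval. A minimal sketch of plain Lloyd's k-means on toy 2-D descriptors (a real CBIR pipeline would cluster high-dimensional SURF descriptors instead):

```python
# Sketch of the k-means indexer: group feature descriptors into clusters
# ("visual words"). Toy 2-D points stand in for SURF descriptors.

def kmeans(points, centers, iters=10):
    """Plain Lloyd's k-means; returns final centers and point -> cluster labels."""
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        labels = [min(range(len(centers)),
                      key=lambda c: sum((p - q) ** 2
                                        for p, q in zip(pt, centers[c])))
                  for pt in points]
        # Update step: move each center to the mean of its members.
        for c in range(len(centers)):
            members = [pt for pt, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return centers, labels

points = [[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.8]]
centers, labels = kmeans(points, [[0.0, 0.0], [5.0, 5.0]])
print(labels)  # [0, 0, 1, 1]
```

Once the cluster centers are fixed, each image can be indexed by the histogram of which clusters its descriptors fall into, which is what makes content-based retrieval a fast nearest-neighbor lookup.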