Advancing Artificial Intelligence in Sensors, Signals, and Imaging Informatics.
Objective: To identify research works that exemplify recent developments in the field of sensors, signals, and imaging informatics.

Method: A broad literature search was conducted using PubMed and Web of Science, supplemented with individual papers nominated by section editors. A predefined query combining Medical Subject Heading (MeSH) terms and keywords was used to search both sources. Section editors then filtered the entire set of retrieved papers, with each paper reviewed by two section editors, who rated it on a three-point Likert scale from 0 (do not include) to 2 (should be included). Only papers with a combined score of 2 or above were considered.

Results: The search was executed at the start of January 2019, yielding a combined set of 1,459 records published in 2018 across 119 unique journals. Section editors jointly filtered the list of candidates down to 14 nominations. The 14 candidate best papers were then ranked by a group of eight external reviewers. Four papers, representing different international groups and journals, were selected as the best papers by consensus of the International Medical Informatics Association (IMIA) Yearbook editorial board.

Conclusions: The fields of sensors, signals, and imaging informatics have evolved rapidly with the application of novel artificial intelligence/machine learning techniques. Studies have discovered hidden patterns and integrated different types of data to improve diagnostic accuracy and patient outcomes. However, the quality of papers varied widely, without clear reporting standards for these types of models. Nevertheless, a number of papers demonstrated useful techniques to improve the generalizability, interpretability, and reproducibility of increasingly sophisticated models.
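The screening rule described above (two editors rating each paper 0-2, keeping only papers with a combined score of 2 or above) can be sketched as follows; the paper identifiers and scores are illustrative placeholders, not taken from the review.

```python
# Sketch of the two-editor screening rule: each paper receives two
# independent ratings on a 0-2 Likert scale, and only papers whose
# combined score is 2 or higher advance to the candidate list.
# Identifiers and scores are illustrative, not from the review.

def shortlist(ratings):
    """ratings: dict mapping paper id -> (score_editor_1, score_editor_2)."""
    return [paper for paper, (s1, s2) in ratings.items() if s1 + s2 >= 2]

ratings = {
    "paper-A": (2, 2),  # clear include
    "paper-B": (1, 1),  # borderline: combined score 2, so kept
    "paper-C": (0, 1),  # combined score 1, dropped
}
print(shortlist(ratings))  # -> ['paper-A', 'paper-B']
```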
High concordance between trained nurses and gastroenterologists in evaluating recordings of small bowel video capsule endoscopy (VCE)
Background & Aims: Video capsule endoscopy (VCE) is an accurate and validated tool for investigating the entire small bowel mucosa, but interpretation of VCE recordings by the gastroenterologist is time-consuming. Pre-reading of VCE recordings by an expert nurse could be accurate and cost-saving. We assessed the concordance between nurses and gastroenterologists in detecting lesions on VCE examinations. Methods: This was a prospective study enrolling consecutive patients who had undergone VCE in clinical practice. Two trained nurses and two expert gastroenterologists participated in the study. During VCE pre-reading, the nurses selected any abnormalities, saved them as "thumbnails", and classified each detected lesion as a vascular abnormality, ulcerative lesion, polyp, tumor mass, or unclassified lesion. The gastroenterologist then evaluated and interpreted the selected lesions and subsequently reviewed the entire video for potentially missed lesions. The time for VCE evaluation was recorded. Results: A total of 95 VCE procedures performed on consecutive patients (M/F: 47/48; mean age: 63 ± 12 years, range: 27−86 years) were evaluated. Overall, the nurses detected at least one lesion in 54 (56.8%) patients. There was total agreement between nurses and gastroenterologists, with no missed lesions discovered at the physician's second look through the entire VCE recording. Pre-reading by the nurses reduced the time of medical evaluation from 49 (33-69) to 10 (8-16) minutes, a 79.6% reduction. Conclusions: Our data suggest that trained nurses can accurately identify and select relevant lesions as thumbnails, which the gastroenterologist can then review faster for a final diagnosis. This could significantly reduce the cost of the VCE procedure.
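The reported 79.6% time saving follows directly from the median reading times quoted above: 49 minutes for a full read versus 10 minutes when the gastroenterologist reviews only the nurse-selected thumbnails.

```python
# Relative time saving from nurse pre-reading, using the median
# evaluation times reported in the abstract (49 min vs. 10 min).
full_read_min = 49
pre_read_min = 10
reduction = (full_read_min - pre_read_min) / full_read_min
print(f"{reduction:.1%}")  # -> 79.6%
```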
Intelligent Hemorrhage Identification in Wireless Capsule Endoscopy Pictures Using AI Techniques.
Image segmentation in medical images extracts valuable information by concentrating on the region of interest. The number of medical images generated during diagnosis is typically large and not well suited to traditional machine-learning approaches to segmentation, owing to their numerous and complex features. To obtain crucial features from such a large set of images, deep learning is a better choice than traditional machine learning algorithms. Wireless capsule endoscopy (WCE) images comprise normal and sick frames and often suffer from a severe class imbalance, sometimes 1000:1 between the normal and sick classes. They are also a special type of confounding image, owing to movement of the capsule camera and organs and to variations in luminance as the texture of sites inside the body is captured. We therefore propose an automatic deep learning model to detect bleeding frames in WCE images. The proposed model is based on a Convolutional Neural Network (CNN), and its performance is compared with state-of-the-art methods including Logistic Regression, Support Vector Machine, Artificial Neural Network, and Random Forest. The proposed model reduces the computational burden by offering automatic feature extraction, and it achieves promising accuracy with an F1 score of 0.76.
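Under the heavy class imbalance described above, plain accuracy is misleading: a classifier that labels every frame "normal" scores near 100% while detecting no bleeding at all, which is why an F1 score is the more informative metric here. A minimal sketch with illustrative counts, not taken from the paper:

```python
# Why F1 rather than accuracy under a ~1000:1 class imbalance.
# Confusion counts below are illustrative placeholders, not the
# paper's data: a degenerate classifier predicting "normal" always.
tp, fp, fn, tn = 0, 0, 10, 10_000

# Accuracy looks excellent despite the classifier being useless.
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(f"accuracy = {accuracy:.3f}")  # -> accuracy = 0.999

# F1 exposes the failure: with zero true positives it is 0.
precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
print(f"F1 = {f1:.2f}")  # -> F1 = 0.00
```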