Conceptual Framework of Information Retrieval System in the Field of Gastroenterology and Hepatology
Acromegaly: pathogenesis & treatment
Acromegaly is a multi-system disorder whose etiology is most often traced to a growth hormone (GH)-secreting pituitary adenoma (PA). GH secretion promotes insulin-like growth factor 1 (IGF-1) release from peripheral tissues, leading to the clinical manifestations of acromegaly. Current treatments for acromegaly include surgery, medical therapy, and radiation therapy. The goals of treatment are to reduce GH and IGF-1 to age- and sex-normalized levels, relieve comorbidities, normalize mortality, and remove the pituitary mass responsible for hormone hypersecretion. This study aims to provide a comprehensive review of current treatment methods and an analysis of novel therapies for acromegaly.
The primary treatment for acromegaly is surgery, owing to its limited complications, relatively low cost, and remission in the majority of cases. However, surgery is not effective for invasive macroadenomas extending into the intracranial space. Medical therapies such as dopamine agonists (DAs) and somatostatin receptor ligands (SRLs) are effective at reducing GH and IGF-1 levels and may have anti-tumor effects. However, DAs are effective only against minor elevations in GH and IGF-1, and SRLs may cause hyperglycemia after prolonged treatment. In contrast to DAs and SRLs, pegvisomant has no anti-tumor effect but is more effective at reducing GH and IGF-1 levels; its disadvantages are the possibility of irreversible liver damage and the high cost of treatment. Stereotactic radiosurgery (SRS) is another mode of treatment for acromegaly, but it carries many disadvantages, including a prolonged latency period, hypopituitarism, radionecrosis of normal brain tissue, and secondary tumor formation. Novel therapies for acromegaly include antisense drugs and modified botulinum neurotoxins. Despite their success in animal models, further research is required before application in human clinical trials. Gene therapy is an emerging treatment method for acromegaly, and proper manipulation of viral immunogenic effects could prove a successful treatment for large macroadenomas, invasive PAs, and recurrent PAs.
Despite the success of surgery in treating microadenomas and noninvasive macroadenomas, therapeutic alternatives must be explored for invasive PAs, macroadenomas, and recurrent PAs. Future research into immunotherapies and gene therapies may provide greater insight into the development of more effective and less invasive treatments for acromegaly.
Machine learning based small bowel video capsule endoscopy analysis: Challenges and opportunities
Video capsule endoscopy (VCE) is a revolutionary technology for the early diagnosis of gastrointestinal disorders. However, owing to the high redundancy and subtle manifestation of anomalies among thousands of frames, the manual review of VCE videos requires considerable patience, focus, and time. Automatic analysis of these videos with computational methods is challenging because the capsule's motion is uncontrolled and frames are captured at unpredictable positions and orientations. Several machine learning (ML) methods, including recent deep convolutional neural network approaches, have been adopted after evaluating their potential to improve VCE analysis. However, the clinical impact of these methods is yet to be investigated. This survey aimed to highlight the gaps between existing ML-based research methodologies and clinically significant rules recently established by gastroenterologists for VCE. A framework was formulated for interpreting raw frames into contextually relevant frame-level findings and subsequently merging these findings with metadata to obtain a disease-level diagnosis. Frame-level findings can be more intelligible for discriminative learning when organized in a taxonomical hierarchy. The proposed taxonomical hierarchy, formulated from pathological and visual similarities, may yield better classification metrics by setting inference classes at a higher level than training classes. Mapping from the frame level to the disease level was structured as a graph based on clinical relevance, inspired by the recent international consensus developed by domain experts. Furthermore, existing methods for VCE summarization, classification, segmentation, detection, and localization were critically evaluated and compared on aspects deemed significant by clinicians. Numerous studies address single-anomaly detection rather than a pragmatic approach suited to a clinical setting. The challenges and opportunities associated with VCE analysis were delineated.
A focus on maximizing the discriminative power of features corresponding to various subtle lesions and anomalies may help cope with the diverse and mutually mimicking appearance of VCE frames. Large multicenter datasets must be created to cope with data sparsity, bias, and class imbalance. Explainability, reliability, traceability, and transparency are important for an ML-based diagnostic system in VCE. Existing ethical and legal bindings narrow the scope of possibilities where ML can potentially be leveraged in healthcare. Despite these limitations, ML-based video capsule endoscopy will revolutionize clinical practice, aiding clinicians in rapid and accurate diagnosis.
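The idea of training on fine-grained frame-level findings while reporting at a coarser inference level can be sketched as follows. This is a minimal illustration only: the two-level hierarchy, class names, and probabilities are invented for the example, not taken from the survey.

```python
# Sketch: a classifier is trained on fine-grained frame-level findings,
# but predictions are reported at a coarser (parent) level of an assumed
# two-level taxonomy by summing class probabilities.

# Hypothetical taxonomy: fine finding -> coarse category.
TAXONOMY = {
    "angiectasia": "vascular",
    "blood_fresh": "vascular",
    "erosion": "inflammatory",
    "ulcer": "inflammatory",
    "polyp": "protruding",
    "normal": "normal",
}

def to_coarse(fine_probs):
    """Aggregate fine-class probabilities into their parent classes."""
    coarse = {}
    for fine, p in fine_probs.items():
        parent = TAXONOMY[fine]
        coarse[parent] = coarse.get(parent, 0.0) + p
    return coarse

# A frame where the model hesitates between erosion (0.40) and ulcer
# (0.35) still yields a confident "inflammatory" call (0.75) one level up.
probs = {"angiectasia": 0.05, "blood_fresh": 0.05, "erosion": 0.40,
         "ulcer": 0.35, "polyp": 0.05, "normal": 0.10}
print(to_coarse(probs))
```

This illustrates why inference at a higher taxonomic level can improve classification metrics: probability mass split across visually similar sibling classes is recombined under their common parent.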
Detection of Intestinal Bleeding in Wireless Capsule Endoscopy using Machine Learning Techniques
Gastrointestinal (GI) bleeding is very common in humans and may lead to fatal consequences. GI bleeding can usually be identified using a flexible wired endoscope. In 2001, a newer diagnostic tool, wireless capsule endoscopy (WCE), was introduced. It is a swallowable capsule-shaped device with a camera that captures thousands of color images and wirelessly sends them back to a data recorder. Physicians then analyze those images to identify any GI abnormalities. However, this requires a long screening time, which may endanger patients in emergency cases. A real-time detection tool is therefore necessary to identify bleeding in the GI tract.
Each material has its own spectral ‘signature’, showing distinct characteristics at specific wavelengths of light [33]. Therefore, by evaluating these optical characteristics, the presence of blood can be detected. In this study, three main hardware designs were presented: one using a two-wavelength optical sensor and two others using six-wavelength spectral sensors based on the AS7262 and AS7263 chips, respectively, to determine the optical characteristics of blood and non-blood samples.
The goal of the research is to develop a machine learning model that differentiates blood samples (BS) from non-blood samples (NBS) by exploring their optical properties. In this experiment, 10 levels of crystallized bovine hemoglobin solution were used as BS, and 5 food colors (red, yellow, orange, tan and pink) at different concentrations, totaling 25 samples, were used as NBS. These blood and non-blood samples were also combined with pig’s intestine to mimic an in-vivo experimental environment. The collected samples were strictly separated into training and testing data.
Different spectral features were analyzed to obtain optical information about the samples. Based on performance on the most significant spectral-wavelength features, the k-nearest neighbors (k-NN) algorithm was finally chosen for automated bleeding detection. The proposed k-NN classifier distinguished BS from NBS with an accuracy of 91.54% using two-wavelength features and around 89% using three combined-wavelength features in the visible and near-infrared spectral regions. The research also indicates that tiny optical detectors could be deployed to detect GI bleeding in a WCE system, which could eliminate the need for time-consuming image post-processing steps.
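The classification step can be sketched with a tiny hand-rolled k-NN over two spectral features. The reflectance-like values below are synthetic placeholders chosen only to separate the two classes; they are not the study's measurements, and the real system would use the sensor readings described above.

```python
import math

# Tiny k-NN (k=3) over two spectral features. Training points are
# synthetic placeholders, not the study's measurements.
TRAIN = [
    ((0.20, 0.60), "BS"), ((0.22, 0.58), "BS"),
    ((0.18, 0.63), "BS"), ((0.21, 0.61), "BS"),
    ((0.60, 0.30), "NBS"), ((0.58, 0.32), "NBS"),
    ((0.62, 0.28), "NBS"), ((0.61, 0.31), "NBS"),
]

def knn_predict(x, k=3):
    """Classify a 2-feature sample by majority vote among the k
    training samples nearest in Euclidean distance."""
    nearest = sorted(TRAIN, key=lambda t: math.dist(x, t[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(knn_predict((0.21, 0.59)))  # -> "BS"
print(knn_predict((0.59, 0.33)))  # -> "NBS"
```

Because k-NN needs only distance computations at prediction time, it is a plausible fit for the resource-constrained, real-time setting the paper targets.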
NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this application requires substantial implementation effort. Thus, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. NiftyNet provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D and 3D images and computational graphs by default.
We present three illustrative medical image analysis applications built using NiftyNet: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications.
Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; update includes additional applications, updated author list and formatting for journal submission.
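The modular-pipeline idea described above can be illustrated with a generic sketch. To be clear, this is not NiftyNet's actual API: the registry, component names, and toy functions are invented here purely to show how swappable, config-selected pipeline components fit together.

```python
# Illustrative sketch of a modular deep-learning pipeline (NOT NiftyNet's
# actual API): each stage -- augmentation, network, loss -- is a swappable
# component looked up by name from a configuration.

def flip(x):            # toy augmentation: reverse the "image"
    return x[::-1]

def identity_net(x):    # stand-in for a network forward pass
    return x

def l1_loss(pred, target):
    return sum(abs(p - t) for p, t in zip(pred, target))

REGISTRY = {
    "augmentation": {"flip": flip, "none": lambda x: x},
    "network": {"identity": identity_net},
    "loss": {"l1": l1_loss},
}

def run_pipeline(config, image, target):
    """Assemble a pipeline from a config dict and run one step."""
    aug = REGISTRY["augmentation"][config["augmentation"]]
    net = REGISTRY["network"][config["network"]]
    loss = REGISTRY["loss"][config["loss"]]
    pred = net(aug(image))
    return loss(pred, target)

config = {"augmentation": "none", "network": "identity", "loss": "l1"}
print(run_pipeline(config, [1, 2, 3], [1, 2, 4]))  # -> 1
```

The design point is that applications differ only in configuration: swapping a loss or augmentation strategy means changing a name, not rewriting the pipeline.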
Deep learning to find colorectal polyps in colonoscopy: A systematic literature review
Colorectal cancer has a high incidence worldwide, but early detection significantly increases the survival rate. Colonoscopy is the gold-standard procedure for the diagnosis and removal of colorectal lesions with the potential to evolve into cancer, and computer-aided detection systems can help gastroenterologists increase the adenoma detection rate, one of the main indicators of colonoscopy quality and a predictor of colorectal cancer prevention. The recent success of deep learning approaches in computer vision has also reached this field and has boosted the number of proposed methods for polyp detection, localization and segmentation. Through a systematic search, 35 works were retrieved. The current systematic review provides an analysis of these methods, stating advantages and disadvantages for the different categories used; comments on seven publicly available datasets of colonoscopy images; analyses the metrics used for reporting; and identifies future challenges and recommendations. Convolutional neural networks are the most used architecture, together with an important presence of data augmentation strategies, mainly based on image transformations and the use of patches. End-to-end methods are preferred over hybrid methods, with a rising tendency. As for detection and localization tasks, the most used metric for reporting is recall, while Intersection over Union is widely used in segmentation. One of the major concerns is the difficulty of fair comparison and reproducibility of methods. Even despite the organization of challenges, there is still a need for a common validation framework based on a large, annotated and publicly available database, which also includes the most convenient metrics to report results.
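Intersection over Union, the segmentation metric most reported in the reviewed works, reduces to a short computation. The tiny flattened masks below are illustrative only:

```python
# Intersection over Union (IoU) between a predicted and a ground-truth
# binary segmentation mask, each given as a flat sequence of 0/1 labels.
def iou(pred, truth):
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # both masks empty: define IoU = 1

pred  = [0, 1, 1, 1, 0, 0]   # 3 pixels predicted as polyp
truth = [0, 0, 1, 1, 1, 0]   # 3 pixels annotated as polyp
print(iou(pred, truth))  # 2 overlapping / 4 in union -> 0.5
```

Unlike recall, IoU also penalizes over-segmentation, which is one reason the two tasks report different metrics.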
Finally, it is also important to highlight that future efforts should focus on proving the clinical value of deep-learning-based methods by increasing the adenoma detection rate.
This work was partially supported by the PICCOLO project, which has received funding from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 732111. The sole responsibility of this publication lies with the author; the European Union is not responsible for any use that may be made of the information contained therein. The authors would also like to thank Dr. Federico Soria for his support on this manuscript, and Dr. José Carlos Marín, from Hospital 12 de Octubre, and Dr. Ángel Calderón and Dr. Francisco Polo, from Hospital de Basurto, for the images in Fig. 4.
Making sense out of massive data by going beyond differential expression
With the rapid growth of publicly available high-throughput transcriptomic data, there is increasing recognition that large sets of such data can be mined to better understand disease states and mechanisms. Prior gene expression analyses, both large and small, have been dichotomous in nature, in which phenotypes are compared using clearly defined controls. Such approaches may require arbitrary decisions about what are considered “normal” phenotypes, and what each phenotype should be compared to. Instead, we adopt a holistic approach in which we characterize phenotypes in the context of a myriad of tissues and diseases. We introduce scalable methods that associate expression patterns to phenotypes in order both to assign phenotype labels to new expression samples and to select phenotypically meaningful gene signatures. By using a nonparametric statistical approach, we identify signatures that are more precise than those from existing approaches and accurately reveal biological processes that are hidden in case vs. control studies. Employing a comprehensive perspective on expression, we show how metastasized tumor samples localize in the vicinity of the primary site counterparts and are overenriched for those phenotype labels. We find that our approach provides insights into the biological processes that underlie differences between tissues and diseases beyond those identified by traditional differential expression analyses. Finally, we provide an online resource (http://concordia.csail.mit.edu) for mapping users’ gene expression samples onto the expression landscape of tissue and disease
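The nonparametric, rank-based flavor of signature selection can be illustrated per gene with a Mann-Whitney U comparison of phenotype samples against background samples. This is a generic sketch: the gene names and expression values are invented, and the paper's actual statistic and scale may differ.

```python
# Generic sketch of nonparametric signature scoring: rank each gene by a
# Mann-Whitney U comparison of its expression in phenotype-labeled samples
# versus background samples. Values are invented for illustration.
def mann_whitney_u(a, b):
    """U statistic for group a vs. group b (no tie correction):
    the number of (a, b) pairs where a's value exceeds b's."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1
            elif x == y:
                u += 0.5
    return u

EXPRESSION = {  # gene -> (phenotype samples, background samples)
    "GENE_A": ([9.1, 8.7, 9.4], [5.2, 5.8, 6.1]),  # clearly up in phenotype
    "GENE_B": ([6.0, 5.9, 6.2], [6.1, 5.8, 6.0]),  # indistinguishable
}

for gene, (pheno, background) in EXPRESSION.items():
    u = mann_whitney_u(pheno, background)
    n_pairs = len(pheno) * len(background)
    # Fraction of pairs where phenotype > background: 1.0 is a perfect
    # separator, 0.5 is uninformative.
    print(gene, u / n_pairs)
```

Because the statistic depends only on ranks, it needs no normality assumption about expression values, which is what makes such approaches robust across heterogeneous public datasets.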