Towards 3D modeling of brain tumors using endoneurosonography and neural networks
Minimally invasive surgeries have become popular because they reduce the typical risks of traditional interventions. In neurosurgery, recent trends suggest the combined use of endoscopy and ultrasound (endoneurosonography or ENS) for 3D virtualization of brain structures in real time. The ENS information can be used to generate 3D models of brain tumors during surgery. This paper introduces a methodology for 3D modeling of brain tumors using ENS and unsupervised neural networks. The use of self-organizing maps (SOM) and neural gas networks (NGN) is particularly studied. Compared to other techniques, 3D modeling using neural networks offers advantages: tumor morphology is encoded directly in the synaptic weights of the network, no a priori knowledge is required, and the representation can be developed in two stages: off-line training and on-line adaptation. Experimental tests were performed using medical phantoms of brain tumors. At the end of the paper, the results of 3D modeling from an ENS database are presented.
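As a toy illustration of the SOM idea above (a minimal sketch, not the authors' pipeline), the weights of a small self-organizing map can be trained on a synthetic spherical "tumor surface" so that the trained weights themselves encode the morphology. The point cloud, grid size, and learning schedules below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "tumor surface": points on a unit sphere (stand-in for ENS data).
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# 8x8 SOM; each node carries a 3D weight vector that will encode the surface.
rows, cols = 8, 8
weights = rng.uniform(-0.1, 0.1, size=(rows * cols, 3))
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

epochs = 10
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)            # decaying learning rate
    sigma = 3.0 * (1 - epoch / epochs) + 0.5   # decaying neighborhood radius
    for x in pts:
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)         # grid distance to BMU
        h = np.exp(-d2 / (2 * sigma ** 2))                 # neighborhood function
        weights += lr * h[:, None] * (x - weights)

# After training, the weight vectors approximate the sampled surface,
# so every node should sit close to the unit sphere.
radii = np.linalg.norm(weights, axis=1)
print(radii.min(), radii.max())
```

This two-stage split (train on representative data, then keep adapting online as new samples arrive) is exactly what makes the network-weight representation attractive for intraoperative use.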
Explainable cardiac pathology classification on cine MRI with motion characterization by semi-supervised learning of apparent flow
We propose a method to classify cardiac pathology based on a novel approach
to extract image derived features to characterize the shape and motion of the
heart. An original semi-supervised learning procedure, which makes efficient
use of a large amount of non-segmented images and a small amount of images
segmented manually by experts, is developed to generate pixel-wise apparent
flow between two time points of a 2D+t cine MRI image sequence. Combining the
apparent flow maps and cardiac segmentation masks, we obtain a local apparent
flow corresponding to the 2D motion of myocardium and ventricular cavities.
This leads to the generation of time series of the radius and thickness of
myocardial segments to represent cardiac motion. These time series of motion
features are reliable and explainable characteristics of pathological cardiac
motion. Furthermore, they are combined with shape-related features to classify
cardiac pathologies. Using only nine feature values as input, we propose an
explainable, simple and flexible model for pathology classification. On ACDC
training set and testing set, the model achieves 95% and 94% respectively as
classification accuracy. Its performance is hence comparable to that of the
state-of-the-art. Comparison with various other models is performed to outline
some advantages of our model
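The appeal of a nine-feature classifier is that every input stays nameable and inspectable. A hedged sketch (synthetic data, not ACDC, and not necessarily the authors' exact model family): a plain logistic regression over nine scalar features, where each learned weight maps back to one feature.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for nine shape/motion features and binary labels.
n, d = 200, 9
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)  # synthetic "pathology" labels

# Full-batch gradient descent on the logistic (cross-entropy) loss.
w = np.zeros(d)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    w -= lr * (X.T @ (p - y) / n)           # gradient w.r.t. weights
    b -= lr * (p - y).mean()                # gradient w.r.t. bias

acc = (((X @ w + b) > 0) == (y > 0.5)).mean()
print(acc)
```

With only nine coefficients, the fitted `w` itself serves as the explanation: the sign and magnitude of each entry say how the corresponding radius or thickness feature pushes the decision.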
Learning Algorithms for Fat Quantification and Tumor Characterization
Obesity is one of the most prevalent health conditions. About 30% of the world's and over 70% of the United States' adult populations are either overweight or obese, causing an increased risk for cardiovascular diseases, diabetes, and certain types of cancer. Among all cancers, lung cancer is the leading cause of death, whereas pancreatic cancer has the poorest prognosis among all major cancers. Early diagnosis of these cancers can save lives. This dissertation contributes towards the development of computer-aided diagnosis tools in order to aid clinicians in establishing the quantitative relationship between obesity and cancers. With respect to obesity and metabolism, the first part of the dissertation focuses on the segmentation and quantification of white and brown adipose tissue. For cancer diagnosis, we perform analysis on two important cases: lung cancer and Intraductal Papillary Mucinous Neoplasm (IPMN), a precursor to pancreatic cancer. This dissertation proposes an automatic body region detection method trained with only a single example. A new fat quantification approach is then proposed, based on geometric and appearance characteristics. For the segmentation of brown fat, a PET-guided CT co-segmentation method is presented. With different variants of Convolutional Neural Networks (CNN), supervised learning strategies are proposed for the automatic diagnosis of lung nodules and IPMN. To address the unavailability of the large number of labeled examples required for training, unsupervised learning approaches for cancer diagnosis without explicit labeling are proposed. We evaluate the proposed approaches (both supervised and unsupervised) on two different tumor diagnosis challenges: lung and pancreas, with 1018 CT and 171 MRI scans, respectively.
The proposed segmentation, quantification and diagnosis approaches explore the important adiposity-cancer association and help pave the way towards improved diagnostic decision making in routine clinical practice.
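A first-pass version of CT fat quantification can be sketched with a plain Hounsfield-unit threshold: adipose tissue typically falls in roughly the -190 to -30 HU range. The slice, pixel spacing, and tissue values below are synthetic assumptions; the dissertation's actual method goes further, adding geometric and appearance characteristics on top of intensity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic axial "CT slice": muscle (~50 HU), a fat depot (~-100 HU),
# and air outside the body (~-1000 HU), plus mild scanner noise.
slice_hu = np.full((64, 64), 50.0)
slice_hu[10:30, 10:30] = -100.0          # fat depot (20 x 20 pixels)
slice_hu[:5, :] = -1000.0                # air
slice_hu += rng.normal(0, 10, slice_hu.shape)

# Threshold on the typical adipose HU window.
fat_mask = (slice_hu > -190) & (slice_hu < -30)

pixel_area_mm2 = 0.8 * 0.8               # assumed in-plane pixel spacing
fat_area_mm2 = fat_mask.sum() * pixel_area_mm2
print(fat_mask.sum(), fat_area_mm2)
```

Intensity alone cannot separate white from brown fat or handle partial-volume effects, which is precisely why the dissertation pairs CT intensity with PET guidance and geometric cues.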
Recommended from our members
Deep learning for cardiac image segmentation: A review
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, covering common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US), and major anatomical structures of interest (ventricles, atria and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.
Brain Tumor Detection and Segmentation in Multisequence MRI
This work deals with brain tumor detection and segmentation in multisequence MR images, with particular focus on high- and low-grade gliomas. Three methods are proposed for this purpose. The first method detects the presence of brain tumor structures in axial and coronal slices. It is based on multi-resolution symmetry analysis and was tested on T1, T2, T1C and FLAIR images. The second method extracts the whole brain tumor region, including the tumor core and edema, in FLAIR and T2 images, and is able to extract the tumor from both 2D and 3D data. It also uses the symmetry analysis approach, followed by automatic determination of an intensity threshold from the most asymmetric parts. The third method is based on local structure prediction and is able to segment the whole tumor region as well as the tumor core and the active tumor. It takes advantage of the fact that most medical images feature high similarity in the intensities of nearby pixels and a strong correlation of intensity profiles across different image modalities. One way of dealing with, and even exploiting, this correlation is the use of local image patches. In the same way, there is a high correlation between nearby labels in image annotation, a feature used in the "local structure prediction" of local label patches. A convolutional neural network is chosen as the learning algorithm, as it is known to be suited to dealing with correlation between features. All three methods were evaluated on a public data set of 254 multisequence MR volumes, reaching results comparable to state-of-the-art methods in much shorter computing time (on the order of seconds on a CPU), which provides means, for example, for online updates in interactive segmentation.
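The symmetry-analysis step lends itself to a small sketch: reflect a (pre-aligned) axial slice across the midline and threshold the left-right intensity difference, so a unilateral lesion shows up as a strongly asymmetric region. The slice below is synthetic, the midline is assumed to be the image center, and this shows only the core idea, not the multi-resolution method of the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Roughly symmetric "brain" slice with a bright lesion in one hemisphere.
slice_img = rng.normal(100, 4, size=(64, 64))
slice_img[20:30, 40:52] += 60            # lesion: rows 20-29, cols 40-51

mirrored = slice_img[:, ::-1]            # reflect across the vertical midline
asym = np.abs(slice_img - mirrored)      # asymmetry map

# Keep only the most asymmetric pixels as the lesion candidate. Note the
# map is mirror-symmetric, so the lesion AND its reflection are flagged.
candidate = asym > 0.5 * asym.max()
ys, xs = np.nonzero(candidate)
print(ys.min(), ys.max())
```

The thesis's second method follows the same logic but derives the intensity threshold automatically from the most asymmetric parts, rather than from a fixed fraction of the maximum as assumed here.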
Deep Learning in Cardiology
The medical field is creating large amounts of data that physicians are unable
to decipher and use efficiently. Moreover, rule-based expert systems are
inefficient at solving complicated medical tasks or creating insights from
big data. Deep learning has emerged as a more accurate and effective technology
in a wide range of medical problems such as diagnosis, prediction and
intervention. Deep learning is a representation learning method that consists
of layers that transform the data non-linearly, thus revealing hierarchical
relationships and structures. In this review we survey deep learning
application papers that use structured data, signal and imaging modalities from
cardiology. We discuss the advantages and limitations of applying deep learning
in cardiology that also apply in medicine in general, while proposing certain
directions as the most viable for clinical use.
Comment: 27 pages, 2 figures, 10 tables
Going Deep in Medical Image Analysis: Concepts, Methods, Challenges and Future Directions
Medical Image Analysis is currently experiencing a paradigm shift due to Deep
Learning. This technology has recently attracted so much interest from the
Medical Imaging community that it led to a specialized conference, 'Medical
Imaging with Deep Learning', in the year 2018. This article surveys the recent
developments in this direction, and provides a critical review of the related
major aspects. We organize the reviewed literature according to the underlying
Pattern Recognition tasks, and further sub-categorize it following a taxonomy
based on human anatomy. This article does not assume prior knowledge of Deep
Learning and makes a significant contribution in explaining the core Deep
Learning concepts to the non-experts in the Medical community. Unique to this
study is the Computer Vision/Machine Learning perspective taken on the advances
of Deep Learning in Medical Imaging. This enables us to single out 'lack of
appropriately annotated large-scale datasets' as the core challenge (among
other challenges) in this research direction. We draw on the insights from the
sister research fields of Computer Vision, Pattern Recognition and Machine
Learning, where the techniques for dealing with such challenges have
already matured, to provide promising directions for the Medical Imaging
community to fully harness Deep Learning in the future.