
    CAD system for lung nodule analysis

    Lung cancer is the deadliest cancer in the United States, claiming hundreds of thousands of lives each year. However, despite the high mortality rate, the 5-year survival rate after resection of Stage 1A non–small cell lung cancer is currently in the range of 62%–82%, and in recent studies as high as 90%. Patient survival is highly correlated with early detection. Computed Tomography (CT) greatly aids the early detection of lung cancer by offering a minimally invasive diagnostic tool. Some early lung cancers begin as a small mass of tissue within the lung, less than 3 cm in diameter, called a nodule. Most nodules found in a lung are benign, but a small fraction of them become malignant over time. Expert analysis of CT scans is the first step in determining whether a nodule presents a possibility of malignancy, but, because nodules have such small spatial support in a scan, many potentially harmful nodules go undetected until other symptoms motivate a more thorough search. Computer Vision and Pattern Recognition techniques can play a significant role in aiding the detection and diagnosis of lung nodules. This thesis outlines the development of a CAD system which, given an input CT scan, provides physicians with a fast, functional second-opinion diagnosis. The entire process of lung nodule screening has been cast as a system that can be enhanced by modern computing technology, with the hope of providing a feasible diagnostic tool for clinical use. It should be noted that the proposed CAD system is presented as a tool for experts, not a replacement for them. The primary motivation of this thesis is the design of a system that could act as a catalyst for reducing the mortality rate associated with lung cancer.

    CellCognition: time-resolved phenotype annotation in high-throughput live cell imaging

    Author Posting. © The Authors, 2010. This is the author's version of the work. It is posted here by permission of Nature Publishing Group for personal use, not for redistribution. The definitive version was published in Nature Methods 7 (2010): 747-754, doi:10.1038/nmeth.1486.

    Fluorescence time-lapse imaging has become a powerful tool to investigate complex dynamic processes such as cell division or intracellular trafficking. Automated microscopes generate time-resolved imaging data at high throughput, yet tools for quantification of large-scale movie data are largely missing. Here, we present CellCognition, a computational framework to annotate complex cellular dynamics. We developed a machine learning method that combines state-of-the-art classification with hidden Markov modeling for annotation of the progression through morphologically distinct biological states. The incorporation of time information into the annotation scheme was essential to suppress classification noise at state transitions and confusion between different functional states with similar morphology. We demonstrate generic applicability in a set of different assays and perturbation conditions, including a candidate-based RNAi screen for mitotic exit regulators in human cells. CellCognition is published as open source software, enabling live imaging-based screening with assays that directly score cellular dynamics.

    Work in the Gerlich laboratory is supported by Swiss National Science Foundation (SNF) research grant 3100A0-114120, SNF ProDoc grant PDFMP3_124904, a European Young Investigator (EURYI) award of the European Science Foundation, an EMBO YIP fellowship, and a MBL Summer Research Fellowship to D.W.G., an ETH TH grant, a grant by the UBS foundation, a Roche Ph.D. fellowship to M.H.A.S., and a Mueller fellowship of the Molecular Life Sciences Ph.D. program Zurich to M.H. M.H. and M.H.A.S. are fellows of the Zurich Ph.D. Program in Molecular Life Sciences. B.F. was supported by the European Commission's seventh framework program project Cancer Pathways. Work in the Ellenberg laboratory is supported by a European Commission grant within the Mitocheck consortium (LSHG-CT-2004-503464). Work in the Peter laboratory is supported by the ETHZ, Oncosuisse, SystemsX.ch (LiverX) and the SNF.
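    To make the time-resolved annotation idea concrete, the following is a minimal sketch (not the CellCognition API): Viterbi decoding of a hidden Markov model is applied to per-frame class probabilities produced by any classifier, which suppresses spurious single-frame state flips at transitions. The transition matrix, initial probabilities and the three-state toy sequence are illustrative assumptions.

```python
import numpy as np

def viterbi_smooth(frame_probs, trans, init):
    """Smooth per-frame class probabilities with an HMM (Viterbi decoding).

    frame_probs : (T, K) per-frame class probabilities from any classifier
    trans       : (K, K) state transition probabilities
    init        : (K,)   initial state probabilities
    Returns the most likely state sequence of length T.
    """
    T, K = frame_probs.shape
    log_e = np.log(frame_probs + 1e-12)      # emission log-likelihoods
    log_t = np.log(trans + 1e-12)
    delta = np.log(init + 1e-12) + log_e[0]  # best log-prob ending in each state
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_t               # scores[prev, next]
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(K)] + log_e[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy sequence of per-frame class probabilities for three states (assumed values).
probs = np.array([[0.90, 0.08, 0.02],
                  [0.80, 0.15, 0.05],
                  [0.30, 0.60, 0.10],   # isolated misclassification
                  [0.85, 0.10, 0.05],
                  [0.20, 0.70, 0.10],   # genuine transition to state 1
                  [0.10, 0.80, 0.10],
                  [0.05, 0.25, 0.70],   # genuine transition to state 2
                  [0.05, 0.15, 0.80]])
trans = np.array([[0.90, 0.09, 0.01],   # sticky, forward-biased transitions
                  [0.01, 0.90, 0.09],
                  [0.01, 0.09, 0.90]])
init = np.array([0.80, 0.10, 0.10])
print(viterbi_smooth(probs, trans, init))   # [0 0 0 0 1 1 2 2]: the frame-2 flip is removed
```

    In the framework described above, such probabilities would come from the per-object morphology classifier; here they are hard-coded only to show how the temporal model removes an isolated misclassification while keeping genuine state transitions.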

    Classification of geometric forms in mosaics using deep neural network

    The paper addresses an image processing problem in the field of fine arts. In particular, a deep learning-based technique to classify geometric forms in artworks, such as paintings and mosaics, is presented. We propose and test a convolutional neural network (CNN)-based framework that autonomously extracts feature maps and classifies them. Convolutional, pooling and dense layers are the three categories of layers that extract features from the dataset images by applying specific filters. As a case study, a Roman mosaic is considered, which is digitally reconstructed by close-range photogrammetry based on standard photos. During the digital transformation from a 2D perspective view of the mosaic into an orthophoto, each photo is rectified (i.e., orthogonally projected onto the plane of the mosaic). Image samples of the geometric forms, e.g., triangles, squares, circles, octagons and leaves, even if partially deformed, were extracted from both the original and the rectified photos and form the dataset used to test the CNN-based approach. The proposed method has proved robust enough to analyze the mosaic geometric forms, with an accuracy higher than 97%. Furthermore, the performance of the proposed method was compared with standard deep learning frameworks. Given the promising results, this method can be applied to many other pattern identification problems related to artworks.
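    As an illustration of the three layer categories mentioned above, the sketch below defines a small CNN (in PyTorch) with convolutional, pooling and dense layers for classifying image patches of geometric forms. It is not the architecture evaluated in the paper; the input size, layer widths and class names are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative class labels and 64x64 RGB patch size (assumptions, not the paper's setup).
CLASSES = ["triangle", "square", "circle", "octagon", "leaf"]

class GeometricFormCNN(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Shape check on a dummy batch of patches.
model = GeometricFormCNN()
logits = model(torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 5])
```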

    Tissue thickness measurement tool for craniofacial reconstruction

    Craniofacial reconstruction is a method of recreating the appearance of the face on the skull of a deceased individual for identification purposes. Older clay methods of reconstruction are inaccurate, time-consuming and inflexible. The tremendous increase in the processing power of computers and rapid strides in visualization can be used to perform the reconstruction, saving time and providing greater accuracy and flexibility, without the need for a skilled modeler.

    This thesis introduces our approach to computerized 3D craniofacial reconstruction. Three phases have been identified. The first phase of the project is to generate a facial tissue thickness database. In the second phase, this database, along with a 3D facial components database, is to be used to generate a generic facial mask which is draped over the skull to recreate the facial appearance. This face is to be identified from a database of images in the third phase.

    Tissue thickness measurements are necessary to generate the facial model over the skull. The emphasis of this thesis is on the first phase of the project. An automated facial tissue thickness measurement tool (TTMT) has been developed to populate this database.
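    One plausible way to automate such measurements is sketched below; it is a hedged illustration, not the TTMT algorithm, and it assumes the measurements come from a CT volume. Bone and skin are segmented by thresholding, and the thickness at each bone-surface voxel is taken as the distance to the nearest skin-surface voxel. The Hounsfield-unit thresholds, voxel spacing and toy volume are assumptions.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

# Assumed Hounsfield-unit thresholds for bone and for the air/skin boundary.
BONE_HU, SKIN_HU = 300, -400

def surface_voxels(mask):
    """Voxels of `mask` that touch the background (a one-voxel-thick surface)."""
    eroded = ndimage.binary_erosion(mask)
    return np.argwhere(mask & ~eroded)

def tissue_thickness(ct_hu, spacing=(1.0, 1.0, 1.0)):
    """Per-bone-surface-voxel thickness: distance to the nearest skin-surface voxel."""
    bone = ct_hu > BONE_HU
    body = ct_hu > SKIN_HU                      # everything denser than air
    skin_pts = surface_voxels(body) * np.asarray(spacing)
    bone_pts = surface_voxels(bone) * np.asarray(spacing)
    tree = cKDTree(skin_pts)
    dist, _ = tree.query(bone_pts)              # nearest skin point per bone point
    return dist

# Toy volume: a dense "bone" cube wrapped in 10 voxels of soft tissue on every side.
vol = np.full((40, 40, 40), -1000.0)            # air
vol[5:35, 5:35, 5:35] = 40.0                    # soft tissue
vol[15:25, 15:25, 15:25] = 900.0                # bone
print(tissue_thickness(vol).mean())             # 10.0 with unit spacing
```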

    Enhanced perception in volume visualization

    Due to the nature of scientific data sets, the generation of suitable visualizations may be a difficult task, but it is crucial to correctly convey the relevant information in the data. When working with complex volume models, such as anatomical ones, it is important to provide accurate representations, since a misinterpretation can lead to serious mistakes while diagnosing a disease or planning surgery. In these cases, enhancing the perception of the features of interest usually helps to properly understand the data. Over the years, researchers have focused on different methods to improve the visualization of volume data sets. For instance, the definition of good transfer functions is a key issue in Volume Visualization, since transfer functions determine how materials are classified. Other approaches are based on simulating realistic illumination models to enhance spatial perception, or on using illustrative effects to provide the level of abstraction needed to correctly interpret the data.

    This thesis contributes new approaches to enhance visual and spatial perception in Volume Visualization. Thanks to the computing capabilities of modern graphics hardware, the proposed algorithms are capable of modifying the illumination model and simulating illustrative effects in real time. In order to enhance local details, which help to better perceive the shape and the surfaces of the volume, our first contribution is an algorithm that employs a common sharpening operator to modify the applied lighting. As a result, the overall contrast of the visualization is enhanced by brightening the salient features and darkening the deeper regions of the volume model. The enhancement of depth perception in Direct Volume Rendering is also covered in the thesis. To this end, we propose two algorithms to simulate ambient occlusion: a screen-space technique that uses depth information to estimate the amount of light occluded, and a view-independent method that uses the density values of the data set to estimate the occlusion. Additionally, depth perception is further enhanced by adding halos around the structures of interest.

    Maximum Intensity Projection images provide a good understanding of the high-intensity features of the data, but lack any contextual information. In order to enhance depth perception in this case, we present a novel technique based on changing how intensity is accumulated. Furthermore, the perception of the spatial arrangement of the displayed structures is enhanced by adding colour cues. The last contribution is a new manipulation tool designed to add contextual information when cutting the volume. Based on traditional illustrative effects, this method allows the user to directly extrude structures from the cross-section of the cut. As a result, the clipped structures are displayed at different heights, preserving the information needed to correctly perceive them.
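    The sharpening-based lighting enhancement can be illustrated with a short CPU sketch; the thesis applies the operator to the lighting on the GPU in real time, whereas the code below simply applies unsharp masking to a rendered luminance image. The kernel width and gain are illustrative parameters, not values from the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_enhance(luminance, sigma=3.0, gain=0.6):
    """Boost local contrast: L' = L + gain * (L - blur(L)), clipped to [0, 1]."""
    blurred = gaussian_filter(luminance, sigma)
    detail = luminance - blurred          # salient features minus the smooth base
    return np.clip(luminance + gain * detail, 0.0, 1.0)

# Toy "rendered" luminance image: a soft gradient with a small bright feature.
img = np.tile(np.linspace(0.2, 0.8, 256), (256, 1))
img[120:136, 120:136] += 0.1
enhanced = unsharp_enhance(img)
print(float(img.std()), float(enhanced.std()))   # local contrast is boosted
```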

    Advancing Molecular Simulations of Crystal Nucleation: Applications to Clathrate Hydrates

    Crystallization is a fundamental physical phenomenon with broad impacts in science and engineering. Nonetheless, the mechanisms of crystallization in many systems remain incompletely understood. Molecular dynamics (MD) simulations are, in principle, well suited to offer insight into the mechanisms of crystallization. Unfortunately, the waiting time required to observe crystal nucleation in simulated systems often falls far beyond the limits of modern MD simulations. This rare-event problem is the primary barrier to simulation studies of crystallization in complex systems. This dissertation takes a combined approach to advance simulation studies of nucleation in complex systems. First, we apply existing tools to a challenging problem: clathrate hydrate nucleation. We then use methods development, software development, and machine learning to address the specific challenges that the rare-event problem poses for simulation studies of crystallization.

    Clathrate hydrate formation is an exemplar of crystallization in complex systems. Nucleation of clathrate hydrates generally occurs in systems with interfaces, and even homogeneous hydrate nucleation is inherently a multicomponent process. We address two aspects of clathrate hydrate nucleation that are not well studied. The first is the effect of interfaces on clathrate hydrate nucleation. Interfaces are common in hydrate systems, yet there are few studies probing their effects on clathrate hydrate nucleation. We find that nucleation occurs through a homogeneous mechanism near model hydrophobic and hydrophilic surfaces. The only effect of the surfaces is a partitioning of guest molecules, which results in aggregation of guest molecules at the hydrophobic surface. The second aspect is the effect of guest solubility in water on the homogeneous nucleation mechanism. Experiments show that soluble guests act as strong promoters of hydrate formation, but the molecular mechanisms of this effect are unclear. We apply forward flux sampling (FFS) and a committor analysis to identify good approximations of the reaction coordinate for homogeneous nucleation of hydrates formed from a water-soluble guest molecule. Our results suggest that the nucleation mechanism for hydrates formed from water-soluble guest molecules may differ from that for hydrates formed from sparingly soluble guest molecules.

    FFS studies of crystal nucleation can require hundreds of thousands of individual MD simulations. For complex systems, these simulations easily generate terabytes of intermediate data. Furthermore, each simulation must be completed, analyzed, and individually processed based upon the behavior of the system. The scale of these calculations thus quickly exceeds the practical limits of traditional scripting tools (e.g., bash). In order to apply FFS to study clathrate hydrate nucleation, we developed a software package, SAFFIRE. SAFFIRE automates and manages FFS with a user-friendly interface. It is compatible with any simulation software and/or analysis codes. Since SAFFIRE is built on the Hadoop framework, it easily scales to tens or hundreds of nodes. SAFFIRE can be deployed on commodity computing clusters such as the Palmetto cluster at Clemson University or XSEDE resources.

    Studying crystal nucleation in simulations generally requires selecting an order parameter for advanced sampling a priori. This is particularly challenging since one of the very goals of the study may be to elucidate the nucleation mechanism, and thus to identify order parameters that provide a good description of the nucleation process. Furthermore, despite its many strengths, FFS is somewhat more sensitive to the choice of order parameter than some other advanced sampling methods. To address these challenges, we develop a new method, contour forward flux sampling (cFFS), to perform FFS with multiple order parameters simultaneously. cFFS places nonlinear interfaces on the fly from the collective progress of the simulations, without any prior knowledge of the energy landscape or of an appropriate combination of order parameters. cFFS thus allows testing multiple prospective order parameters on the fly.

    Order parameters clearly play a key role in simulation studies of crystal nucleation. However, developing new order parameters is difficult and time-consuming. Using ideas from computer vision, we adapt a specific type of neural network, called a PointNet, to identify local structural environments (e.g., crystalline environments) in molecular simulations. Our approach requires no system-specific feature engineering and operates on the raw output of the simulations, i.e., atomic positions. We demonstrate the method on crystal structure identification in Lennard-Jones, water, and mesophase systems. The method can even predict the crystal phases of atoms near external interfaces. We demonstrate the versatility of our approach by using it to identify surface hydrophobicity based solely upon the positions and orientations of nearby water molecules. Our results suggest the approach will be broadly applicable to many types of local structure in simulations.

    We address several interdependent challenges to studying crystallization in molecular simulations by combining software development, method development, and machine learning. While motivated by specific challenges identified during studies of clathrate hydrate nucleation, these contributions help extend the applicability of molecular simulations to crystal nucleation in a broad variety of systems. The next step of the development cycle is to apply these methods to complex systems to motivate further improvements. We believe that continued integration of software, methods, and machine learning will prove a fruitful framework for improving molecular simulations of crystal nucleation.
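    The basic forward flux sampling loop that SAFFIRE automates at scale can be sketched on a toy problem. The code below (not SAFFIRE or cFFS) runs direct FFS for an overdamped Langevin particle in a 1-D double well, using the particle position as the order parameter: it estimates the flux through the first interface and the conditional crossing probabilities between successive interfaces, whose product gives the transition rate. The potential, temperature and interface positions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def force(x):
    """Force for the double-well potential V(x) = (x^2 - 1)^2 (basins at x = -1, +1)."""
    return -4.0 * x * (x * x - 1.0)

def step(x, dt=1e-3, kT=0.4):
    """One overdamped Langevin (Brownian dynamics) step."""
    return x + force(x) * dt + np.sqrt(2.0 * kT * dt) * rng.normal()

LAMBDA_A = -0.8                              # boundary of basin A
INTERFACES = [-0.6, -0.3, 0.0, 0.3, 0.6]     # lambda_0 ... lambda_n toward the product basin

def sample_lambda0(n_cross=50, dt=1e-3):
    """Run in basin A; store configurations at first crossings of lambda_0 and the flux."""
    x, t, configs, armed = -1.0, 0.0, [], True
    while len(configs) < n_cross:
        x = step(x, dt)
        t += dt
        if armed and x > INTERFACES[0]:
            configs.append(x)
            armed = False                    # re-arm only after returning into basin A
        elif not armed and x < LAMBDA_A:
            armed = True
    return configs, len(configs) / t         # effective positive flux through lambda_0

def cross_prob(configs, lam_next, trials=100):
    """P(lambda_{i+1} | lambda_i): fraction of trials reaching lam_next before basin A."""
    successes, new_configs = 0, []
    for _ in range(trials):
        x = rng.choice(configs)
        while LAMBDA_A < x < lam_next:
            x = step(x)
        if x >= lam_next:
            successes += 1
            new_configs.append(x)
    return successes / trials, new_configs

configs, flux0 = sample_lambda0()
rate = flux0
for lam in INTERFACES[1:]:
    p, configs = cross_prob(configs, lam)
    rate *= p
    print(f"P(reach lambda = {lam:+.1f}) = {p:.2f}")
    if p == 0.0:                             # no successes: cannot seed the next interface
        break
print("estimated A->B rate per unit time:", rate)
```

    In a real application each "step" would be an MD segment launched and post-processed by the workflow manager, which is exactly the bookkeeping that quickly outgrows shell scripting.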
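    The PointNet-based identification of local structural environments can likewise be sketched. The model below (a minimal PointNet-style network, not the dissertation's trained model) applies a shared per-point MLP to the raw relative positions of an atom's neighbors and pools with a symmetric max, so the prediction is invariant to the ordering of the neighbors. Layer widths and the class labels are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative structure labels (assumed; not the dissertation's label set).
CLASSES = ["liquid", "fcc", "hcp", "bcc"]

class LocalEnvPointNet(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        # Shared per-point MLP, applied identically to every neighbor position.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # Classifier on the pooled (permutation-invariant) feature vector.
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, neighbors):
        # neighbors: (batch, n_points, 3) positions relative to the central atom
        per_point = self.point_mlp(neighbors)        # (batch, n_points, 128)
        pooled, _ = per_point.max(dim=1)             # symmetric max pooling
        return self.head(pooled)                     # (batch, num_classes) logits

# Dummy batch: 8 environments with 16 neighbors each; permuting the neighbors
# leaves the logits unchanged (order invariance).
model = LocalEnvPointNet()
env = torch.randn(8, 16, 3)
perm = env[:, torch.randperm(16), :]
print(torch.allclose(model(env), model(perm), atol=1e-6))  # True
```

    The printed check confirms the order invariance that makes raw atomic positions usable without hand-crafted, system-specific features.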