
    Simulating Microdosimetry in a Virtual Hepatic Lobule

    The liver plays a key role in removing harmful chemicals from the body and is therefore often the first tissue to suffer potentially adverse consequences. To protect public health, it is necessary to quantitatively estimate the risk of long-term, low-dose exposure to environmental pollutants. Animal testing is the primary tool for extrapolating human risk, but it is fraught with uncertainty, necessitating novel alternative approaches. Our goal is to integrate in vitro liver experiments with agent-based cellular models to simulate a spatially extended hepatic lobule. Here we describe a graphical model of the sinusoidal network that efficiently simulates portal-to-centrilobular mass transfer in the hepatic lobule. We analyzed the effects of vascular topology and metabolism on the cell-level distribution following oral exposure to chemicals. The spatial distribution of metabolically inactive chemicals was similar across different vascular networks and a baseline well-mixed compartment. When chemicals were rapidly metabolized, concentration heterogeneity of the parent compound increased across the vascular network. As a result, our spatially extended lobule generated greater variability in dose-dependent cellular responses, in this case apoptosis, than was observed in the classical well-mixed liver or in a parallel-tubes model. The mass-balanced graphical approach to modeling the hepatic lobule is computationally efficient for simulating long-term exposure, modular for incorporating complex cellular interactions, and flexible for dealing with evolving tissues.
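The core idea of a mass-balanced graphical model can be illustrated with a minimal sketch: sinusoid segments become nodes on a directed graph, blood flow drives convective transfer along the edges, and first-order metabolism removes parent compound at each node. All graph sizes, flows, volumes, and rate constants below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Minimal sketch of portal-to-centrilobular mass transfer on a directed
# sinusoid graph. Nodes are sinusoid segments from the portal inlet
# (node 0) to the central vein (last node). All rates are hypothetical.

n_nodes = 5
F = np.zeros((n_nodes, n_nodes))   # F[i, j]: flow from node i to j (mL/min)
for i in range(n_nodes - 1):
    F[i, i + 1] = 1.0              # simple chain: portal -> ... -> central vein

V = np.full(n_nodes, 0.1)          # segment volumes (mL), assumed uniform
k_met = 0.5                        # first-order metabolic rate constant (1/min)
C = np.zeros(n_nodes)              # parent-compound concentration per node
C_in = 1.0                         # portal inlet concentration

dt = 0.01
for _ in range(2000):              # forward-Euler integration of the mass balance
    dC = np.zeros(n_nodes)
    dC[0] += F[0, 1] * (C_in - C[0]) / V[0]      # inflow at the portal node
    for i in range(n_nodes - 1):
        j = i + 1
        dC[j] += F[i, j] * (C[i] - C[j]) / V[j]  # convective transfer downstream
    dC -= k_met * C                # metabolism removes parent compound everywhere
    C += dt * dC

# With rapid metabolism, a portal-to-centrilobular concentration gradient emerges.
print(np.round(C, 3))
```

Because metabolism consumes parent compound at every segment, downstream (centrilobular) nodes see progressively lower concentrations, which is the heterogeneity the abstract contrasts with a well-mixed compartment.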

    Automatic image acquisition, calibration and montage assembly for biological X-ray microscopy

    Summary: We describe a system for the automatic acquisition and processing of digital images in a high-resolution X-ray microscope, including the formation of large-field, high-resolution image montages. A computer-controlled sample positioning stage provides approximate coordinates for each high-resolution subimage. Individual subimages are corrected to compensate for time-varying, non-uniform illumination and CCD-related artefacts. They are then automatically assembled into a montage. The montage assembly algorithm is designed to use the overlap between each subimage and multiple neighbours to improve the performance of the registration step and the fidelity of the result. This is accomplished by explicit use of recorded stage positions, optimized ordering of subimage insertion, and registration of subimages to the developing montage. Using this procedure, registration errors are below the resolution limit of the microscope (43 nm). The image produced is a seamless, large-field montage at full resolution, assembled automatically without human intervention. Beyond this, it is also an accurate X-ray transmission map that allows the quantitative measurement of anatomical and chemical features of the sample. Applying these tools to a biological problem, we have conducted the largest X-ray microscopical study to date.
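The registration step described above, placing each subimage at its recorded stage position and then refining the offset against the developing montage, can be sketched as a small exhaustive cross-correlation search. The window sizes, search radius, and data here are hypothetical; the paper's actual algorithm also exploits overlap with multiple neighbours and optimized insertion order.

```python
import numpy as np

# Hypothetical sketch: a subimage is placed at its recorded stage position,
# then the offset is refined by maximizing zero-mean cross-correlation
# against the montage built so far. Sizes and search radius are illustrative.

def refine_offset(montage, sub, y0, x0, search=3):
    """Refine the stage estimate (y0, x0) over a small search window."""
    best_score, best_pos = -np.inf, (y0, x0)
    h, w = sub.shape
    sub_z = sub - sub.mean()
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            patch = montage[y:y + h, x:x + w]
            score = np.sum((patch - patch.mean()) * sub_z)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

rng = np.random.default_rng(0)
scene = rng.random((40, 40))         # stand-in for the developing montage
sub = scene[10:26, 12:28]            # a 16x16 subimage whose true origin is (10, 12)
print(refine_offset(scene, sub, 11, 13))   # stage position off by (1, 1)
```

Registering each subimage to the growing montage, rather than only to a single neighbour, is what lets small per-pair errors average out instead of accumulating across the field.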

    The development of generative Bayesian models for classification of cell images

    A generative model for shape recognition of biological cells in images is developed. The model is designed for analysing high-throughput screens, and is tested on a genome-wide morphology screen. The genome-wide morphology screen contains on the order of 10^4 images of fluorescently stained cells, with on the order of 10^2 cells per image. It was generated using automated techniques through knockdown of almost all putative genes in Drosophila melanogaster. A major step in the analysis of such a dataset is to classify cells into distinct classes: both phenotypic classes and cell-cycle classes. However, the quantity of data produced presents a major time bottleneck for human analysis. Human analysis is also known to be subjective and variable. The development of a generalisable computational analysis tool is an important challenge for the field. Previously, cell morphology has been characterized by automated measurement of user-defined biological features, often specific to one dataset. These methods are surveyed and discussed. Here a more ambitious approach is pursued. A novel, generalisable classification method, applicable to our images, is developed and implemented. The algorithm decomposes training images into constituent patches to build Bayesian models of cell classes. The model contains probability distributions which are learnt via the Expectation Maximization algorithm. This provides a mechanism for comparing the similarity of the appearance of cell phenotypes. The method is evaluated by comparison with results of Support Vector Machines at the task of performing binary classification. This work provides the basis for clustering large sets of cell images into biologically meaningful classes.
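The Expectation Maximization step at the heart of such a generative model can be illustrated on a toy problem: fitting a two-component Gaussian mixture to a scalar "patch feature" (e.g. mean intensity). The real model operates on image patches with richer distributions; this scalar version, with synthetic data and invented parameters, only shows the alternating E and M steps.

```python
import numpy as np

# Toy sketch of EM for a generative mixture model. Two hypothetical cell
# classes produce a scalar patch feature; EM recovers the class parameters.

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 300),   # class A patches
                    rng.normal(4.0, 1.0, 300)])  # class B patches

mu = np.array([-1.0, 1.0])    # deliberately poor initial means
sig = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])     # mixing proportions

for _ in range(50):
    # E-step: posterior responsibility of each component for each data point
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from responsibility-weighted data
    n_k = resp.sum(axis=0)
    pi = n_k / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

print(np.round(np.sort(mu), 2))   # recovered means land near the true 0 and 4
```

Once the class-conditional distributions are learnt, a new cell (or patch) can be assigned to whichever class gives it the highest posterior probability, which is the generative counterpart of the discriminative SVM baseline mentioned in the abstract.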