29 research outputs found

    Tissue-Engineered Arterial Tunica Media with Multi-layered, Circumferentially Aligned Smooth Muscle Architecture

    Thesis (Master's)--University of Washington, 2018. Blood vessels play an important role in drug screening through their permeability and their control of blood flow via cellular responses. Three distinct functional layers make up the architecture of a blood vessel: the tunica intima, the tunica media, and the tunica externa. Among these, the tunica media regulates vascular tone, and the circumferential alignment of smooth muscle cells in the tunica media is crucial to the constrictive performance of vessels. Although much research has studied the anisotropic alignment of smooth muscle cells, there is not yet a method to fabricate anisotropically aligned smooth muscle cells in a three-dimensional hydrogel that mimics the native tunica media. This project addresses the need for a robust and reproducible in vitro tissue-engineered tunica media model that replicates the in vivo architecture of circumferentially aligned smooth muscle cells in the tunica media. The project is divided into two phases: (1) a robust method to fabricate a three-dimensional tunica media with circumferentially aligned smooth muscle cells, and (2) the characterization and assessment of the functional properties of the tunica media model. Ultimately, the success of this project allows the formation of a tunica media with native functionality, through cellular remodeling and mechanical properties, to serve as a model of tunica media tissue in blood vessels.

    A deep generative model of 3D single-cell organization

    We introduce a framework for end-to-end integrative modeling of 3D single-cell multi-channel fluorescent image data of diverse subcellular structures. We employ stacked conditional β-variational autoencoders to first learn a latent representation of cell morphology, and then learn a latent representation of subcellular structure localization which is conditioned on the learned cell morphology. Our model is flexible and can be trained on images of arbitrary subcellular structures and at varying degrees of sparsity and reconstruction fidelity. We train our full model on 3D cell image data and explore design trade-offs in the 2D setting. Once trained, our model can be used to predict plausible locations of structures in cells where these structures were not imaged. The trained model can also be used to quantify the variation in the location of subcellular structures by generating plausible instantiations of each structure in arbitrary cell geometries. We apply our trained model to a small drug perturbation screen to demonstrate its applicability to new data. We show how the latent representations of drugged cells differ from those of unperturbed cells, as expected from the on-target effects of the drugs.

    A deep generative model of 3D single-cell organization

    Abstract
    We introduce a framework for end-to-end integrative modeling of 3D single-cell multi-channel fluorescent image data of diverse subcellular structures. We employ stacked conditional β-variational autoencoders to first learn a latent representation of cell morphology, and then learn a latent representation of subcellular structure localization which is conditioned on the learned cell morphology. Our model is flexible and can be trained on images of arbitrary subcellular structures and at varying degrees of sparsity and reconstruction fidelity. We train our full model on 3D cell image data and explore design trade-offs in the 2D setting. Once trained, our model can be used to impute structures in cells where they were not imaged and to quantify the variation in the location of all subcellular structures by generating plausible instantiations of each structure in arbitrary cell geometries. We apply our trained model to a small drug perturbation screen to demonstrate its applicability to new data. We show how the latent representations of drugged cells differ from those of unperturbed cells, as expected from the on-target effects of the drugs.

    Author summary
    It's impossible to acquire all the information we want about every cell we're interested in, in a single experiment. Even just limiting ourselves to imaging, we can only image a small set of subcellular structures in each cell. If we are interested in integrating those images into a holistic picture of cellular organization directly from data, there are a number of approaches one might take. Here, we leverage the fact that of the three channels we image in each cell, two stay the same across the data set; these two channels assess the cell's shape and nuclear morphology. Given these two reference channels, we learn a model of cell and nuclear morphology, and then use this as a reference frame in which to learn a representation of the localization of each subcellular structure as measured by the third channel.
    We use β-variational autoencoders to learn representations of both the reference channels and of each subcellular structure (conditioned on the reference channels of the cell in which it was imaged). Since these models are both probabilistic and generative, we can use them to understand the variation in the data from which they were trained, to generate instantiations of new cell morphologies, and to generate imputations of structures in real cell images to create an integrated model of subcellular organization.
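The β-VAE objective underlying the model above weights the KL-divergence term of the standard VAE loss by a factor β. A minimal numpy sketch of the per-sample objective for a diagonal-Gaussian posterior (function names are ours, not from the authors' code) might look like:

```python
import numpy as np

def kl_diag_gaussian(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ) in closed form."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def beta_vae_loss(x, x_recon, mu, log_var, beta=1.0):
    """Squared-error reconstruction term plus beta-weighted KL term."""
    recon = np.sum((x - x_recon) ** 2)
    return recon + beta * kl_diag_gaussian(mu, log_var)

# A posterior that already matches the standard-normal prior pays no KL penalty:
mu, log_var = np.zeros(8), np.zeros(8)  # sigma = 1
print(kl_diag_gaussian(mu, log_var))
```

Raising β trades reconstruction fidelity for a more disentangled, prior-like latent space, which is what allows the latent dimensions to be inspected individually (as in the S3 Fig analysis below).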

    Residual block used in this model.

    Each layer of our model is a modified residual layer. In the encoder, the layer input x is passed through a 4×4 convolution kernel with a stride of 2, then either a 3×3 convolution kernel with a stride of 1 or a 1×1 convolution kernel with a subsequent average-pooling step. The outputs of the two branches are summed and passed to the next layer. In the decoder, the 4×4 convolution is replaced with a transposed convolution, and pooling is replaced with linear upsampling. In the case of the conditional model MT (components with dotted lines), the reference input xr is linearly interpolated to the same size as the output and passed through a 1×1 kernel. The target label is passed through a 1×1 kernel and added to each pixel of the output. Spectral weight normalization [36] is applied at every convolutional and fully-connected operation. The 3D model uses three-dimensional convolutions, and the 2D model uses two-dimensional convolutions.
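For the two encoder branches to be summed, they must produce matching spatial sizes. A small sketch using the standard convolution output-size formula checks this for the kernel/stride choices above (the padding values are our assumption; the caption does not state them):

```python
def conv_out(n, k, s, p):
    """Spatial output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

n = 64  # input spatial size (assumed even)

# Main branch: 4x4 conv, stride 2 (padding 1 assumed), then 3x3 conv, stride 1 (padding 1 assumed).
main = conv_out(conv_out(n, k=4, s=2, p=1), k=3, s=1, p=1)

# Shortcut branch: 1x1 conv, stride 1, then 2x2 average pool, stride 2.
shortcut = conv_out(conv_out(n, k=1, s=1, p=0), k=2, s=2, p=0)

print(main, shortcut)  # equal sizes, so the two branches can be summed elementwise
```

Both branches halve the spatial size, so the elementwise sum in the residual layer is well defined; the decoder's transposed convolution and linear upsampling invert this halving.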

    Quantification of the coupling of cellular morphology and subcellular structure.

    a) shows the relative coupling strength of three structures to the nuclear shape (y-axis) and cell membrane (x-axis) of the cells in which they reside, according to Eq 4. Each point represents a cell; brown points are cells in interphase, blue points are cells undergoing mitosis. b) shows the relative degree of coupling of each structure to the cell membrane or nuclear channel, and how this changes during mitosis.

    S3 Fig.

    a) Heatmap of Spearman correlations of reference latent space dimensions with single-cell features on all cells in the test set. Cell features are hierarchically clustered. Latent space dimensions are sorted in descending rank by mean absolute deviation from 0, and for clarity only the top 32 dimensions are shown. Dimensions below 32 displayed significantly more noise and less correlation with cell features. b) Mean absolute deviation from 0 of all reference latent space dimensions, sorted by value. Values are computed by averaging over all cells in the test set. c) Explained variance of principal components of the z-scored cell features on all cells in the test set. d) Pearson correlation of the top 32 dimensions of the latent space, computed on all cells in the test set as ranked by mean absolute deviation from 0. We note that these dimensions display little to no correlation structure, empirically verifying the ability of the β-VAE to produce a disentangled latent space.
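The two quantities used to rank and correlate the latent dimensions here are straightforward to compute. A numpy sketch (function names are ours; Spearman correlation is computed as the Pearson correlation of ranks, ignoring tie handling, which is adequate for continuous latent values):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

def top_dims_by_mad(z, k=32):
    """Indices of the k latent dimensions with the largest mean |deviation from 0|.

    z has shape (n_cells, n_dims); averaging |z| over cells gives one score per dimension.
    """
    mad = np.abs(z).mean(axis=0)
    return np.argsort(mad)[::-1][:k]

rng = np.random.default_rng(0)
z = rng.normal(scale=np.linspace(2.0, 0.1, 64), size=(100, 64))  # synthetic latent codes
print(top_dims_by_mad(z, k=4))  # dimensions with the largest scale rank first
```

Ranking by mean absolute deviation from 0 surfaces the dimensions the β-VAE actually uses, since unused dimensions collapse toward the standard-normal prior mean of 0.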