15 research outputs found

    Prediction of localization and interactions of apoptotic proteins

    Get PDF
    During apoptosis, several mitochondrial proteins are released. Some of them participate in caspase-independent nuclear DNA degradation, most notably apoptosis-inducing factor (AIF) and endonuclease G (endoG). Another interesting protein, which was expected to act similarly to AIF owing to its high sequence homology with AIF, is the AIF-homologous mitochondrion-associated inducer of death (AMID). We studied the structure, cellular localization, and interactions of several proteins in silico and also in cells using fluorescence microscopy. We found the AMID protein to be cytoplasmic, most probably incorporated into the cytoplasmic side of lipid membranes. Bioinformatic predictions were conducted to analyze the interactions of the studied proteins with each other and with other possible partners. We performed molecular modeling of the proteins with unknown 3D structures. These models were then refined using the MolProbity server and employed in molecular docking simulations of interactions. Our results show data acquired using a combination of modern in silico methods and image analysis to understand the localization, interactions, and functions of the proteins AMID, AIF, endonuclease G, and other apoptosis-related proteins.

    Generative modeling of living cells with SO(3)-equivariant implicit neural representations

    Full text link
    Data-driven cell tracking and segmentation methods in biomedical imaging require diverse and information-rich training data. In cases where the number of training samples is limited, synthetic computer-generated data sets can be used to improve these methods. This requires the synthesis of cell shapes as well as corresponding microscopy images using generative models. To synthesize realistic living cell shapes, the shape representation used by the generative model should be able to accurately represent fine details and changes in topology, which are common in cells. These requirements are not met by 3D voxel masks, which are restricted in resolution, and polygon meshes, which do not easily model processes like cell growth and mitosis. In this work, we propose to represent living cell shapes as level sets of signed distance functions (SDFs) which are estimated by neural networks. We optimize a fully-connected neural network to provide an implicit representation of the SDF value at any point in a 3D+time domain, conditioned on a learned latent code that is disentangled from the rotation of the cell shape. We demonstrate the effectiveness of this approach on cells that exhibit rapid deformations (Platynereis dumerilii), cells that grow and divide (C. elegans), and cells that have growing and branching filopodial protrusions (A549 human lung carcinoma cells). A quantitative evaluation using shape features, Hausdorff distance, and Dice similarity coefficients of real and synthetic cell shapes shows that our model can generate topologically plausible complex cell shapes in 3D+time with high similarity to real living cell shapes. Finally, we show how microscopy images of living cells that correspond to our generated cell shapes can be synthesized using an image-to-image model. Comment: Medical Image Analysis 2023 (submitted).
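    The level-set idea at the core of the abstract above can be illustrated with a minimal sketch: a shape is encoded as the zero level set of a signed distance function (SDF), negative inside and positive outside the shape. Here an analytic sphere SDF stands in for the learned neural network, so all names and values are illustrative, not the authors' implementation.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Analytic signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Sample the SDF on a coarse 3D grid; the shape is recovered as the set of
# points where the SDF is negative (its zero level set is the surface).
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 32)] * 3, indexing="ij"), axis=-1)
sdf = sphere_sdf(grid.reshape(-1, 3), center=np.zeros(3), radius=0.5)

inside = sdf < 0          # voxels inside the shape
print(inside.mean())      # fraction of the sampled domain occupied by the shape
```

    In the paper's setting, a trained network replaces `sphere_sdf` and an additional time coordinate plus a latent code are fed to the network, which is what allows topology changes such as mitosis to be represented without remeshing.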

    Virtual microscope interface to high resolution histological images

    Get PDF
    The Hypertext Atlas of Dermatopathology, the Atlas of Fetal and Neonatal Pathology, and the Hypertext Atlas of Pathology (the last available in Czech only) are available at . These atlases offer many clinical, macroscopic, and microscopic images, together with short introductory texts. Most of the images are annotated, and arrows pointing to the important parts of the image can be activated.

    Model-Based Generation of Synthetic 3D Time-Lapse Sequences of Motile Cells with Growing Filopodia

    Get PDF
    The existence of benchmark datasets is essential to objectively evaluate various image analysis methods. Nevertheless, manual annotation of fluorescence microscopy image data is very laborious and often impracticable, especially in the case of 3D+t experiments. In this work, we propose a simulation system capable of generating 3D time-lapse sequences of single motile cells with filopodial protrusions, accompanied by inherently generated ground truth. The system consists of three globally synchronized modules, each responsible for a separate task: the evolution of filopodia on a molecular level, linear elastic deformation of the entire cell with filopodia, and the generation of realistic, time-coherent cell texture. The capability of our system is demonstrated by generating a synthetic 3D time-lapse sequence of a single lung cancer cell with two growing filopodia, visually resembling its real counterpart acquired using a confocal fluorescence microscope.

    MIFA: Metadata, Incentives, Formats, and Accessibility guidelines to improve the reuse of AI datasets for bioimage analysis

    Full text link
    Artificial Intelligence methods are powerful tools for biological image analysis and processing. High-quality annotated images are key to training and developing new methods, but access to such data is often hindered by the lack of standards for sharing datasets. We brought together community experts in a workshop to develop guidelines to improve the reuse of bioimages and annotations for AI applications. These include standards on data formats, metadata, data presentation and sharing, and incentives to generate new datasets. We are confident that the MIFA (Metadata, Incentives, Formats, and Accessibility) recommendations will accelerate the development of AI tools for bioimage analysis by facilitating access to high-quality training data. Comment: 16 pages, 3 figures.

    The Cell Tracking Challenge: 10 years of objective benchmarking

    Get PDF
    The Cell Tracking Challenge is an ongoing benchmarking initiative that has become a reference in cell segmentation and tracking algorithm development. Here, we present a significant number of improvements introduced in the challenge since our 2017 report. These include the creation of a new segmentation-only benchmark, the enrichment of the dataset repository with new datasets that increase its diversity and complexity, and the creation of a silver standard reference corpus based on the most competitive results, which will be of particular interest for data-hungry deep learning-based strategies. Furthermore, we present the up-to-date cell segmentation and tracking leaderboards, an in-depth analysis of the relationship between the performance of the state-of-the-art methods and the properties of the datasets and annotations, and two novel, insightful studies about the generalizability and the reusability of top-performing methods. These studies provide critical practical conclusions for both developers and users of traditional and machine learning-based cell segmentation and tracking algorithms.
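    The shape-similarity measures used for this kind of benchmarking, the Dice similarity coefficient and the Hausdorff distance, can be sketched directly for binary masks. This is a plain reference implementation for illustration, not the challenge's official evaluation code.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks (1 = identical)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets of shape (N, D)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two shifted 4x4 squares: high overlap, small boundary mismatch.
a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
print(dice(a, b))                                  # overlap-based similarity
print(hausdorff(np.argwhere(a), np.argwhere(b)))   # worst-case point mismatch
```

    Dice rewards overlapping area, while the Hausdorff distance penalizes the single worst-matched point, which is why benchmarks typically report both.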

    Estimating Large Local Motion in Live-Cell Imaging Using Variational Optical Flow: Towards Motion Tracking in Live Cell Imaging Using Optical Flow

    No full text
    The paper studies the application of state-of-the-art variational optical flow methods for motion tracking of fluorescently labeled targets in living cells. Four variants of variational optical flow methods suitable for this task are briefly described and evaluated in terms of the average angular error. Artificial ground-truth image sequences were generated for the purpose of this evaluation. The aim was to compare the ability of these methods to estimate local divergent motion and their suitability for data with combined global and local motion. Parametric studies were performed in order to find the most suitable parameter settings. It is shown that a selected, optimally tuned method tested on real 3D input data produced satisfactory results. Finally, it is shown that by using an appropriate numerical solution, reasonable computational times can be achieved even for 3D image sequences.
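    The average angular error used for evaluation above can be written down in a few lines. The standard formulation lifts each 2D flow vector (u, v) to the 3D vector (u, v, 1) before measuring the angle, so errors on near-zero flow are also penalized. The code below is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def average_angular_error(flow_est, flow_gt):
    """Average angular error in degrees between flow fields of shape (H, W, 2).

    Each 2D vector (u, v) is extended to (u, v, 1), and the angle between the
    estimated and ground-truth 3D vectors is averaged over all pixels."""
    ones = np.ones(flow_est.shape[:-1] + (1,))
    e = np.concatenate([flow_est, ones], axis=-1)
    g = np.concatenate([flow_gt, ones], axis=-1)
    cos = (e * g).sum(-1) / (np.linalg.norm(e, axis=-1) * np.linalg.norm(g, axis=-1))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()

gt = np.zeros((4, 4, 2)); gt[..., 0] = 1.0   # uniform rightward motion
est = gt.copy(); est[..., 1] = 1.0           # estimate adds a spurious v-component
print(average_angular_error(est, gt))
```

    A perfect estimate yields an error of zero degrees; the spurious v-component in this toy example produces a constant angular error at every pixel.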

    Contents

    No full text
    1.1 Live cell studies
        1.1.1 Time-lapse observation and its analysis
        1.1.2 The acquisition and parameters of input data

    FiloGen: A Model-Based Generator of Synthetic 3D Time-Lapse Sequences of Single Motile Cells with Growing and Branching Filopodia

    Get PDF
    The existence of diverse image datasets accompanied by reference annotations is a crucial prerequisite for the objective benchmarking of bioimage analysis methods. Nevertheless, such a prerequisite is arduous to satisfy for time-lapse, multidimensional fluorescence microscopy image data, manual annotation of which is laborious and often impracticable. In this paper, we present a simulation system capable of generating 3D time-lapse sequences of single motile cells with filopodial protrusions of user-controlled structural and temporal attributes, such as the number, thickness, length, level of branching, and lifetime of filopodia, accompanied by inherently generated reference annotations. The proposed simulation system involves three globally synchronized modules, each being responsible for a separate task: the evolution of filopodia on a molecular level, linear elastic deformation of the entire cell with filopodia, and the synthesis of realistic, time-coherent cell texture. Its flexibility is demonstrated by generating multiple synthetic 3D time-lapse sequences of single lung cancer cells of two different phenotypes, qualitatively and quantitatively resembling their real counterparts acquired using a confocal fluorescence microscope.