7 research outputs found

    Autocrine inhibition of cell motility can drive epithelial branching morphogenesis in the absence of growth

    Epithelial branching morphogenesis drives the development of organs such as the lung, salivary gland, kidney and the mammary gland. It involves cell proliferation, cell differentiation and cell migration. An elaborate network of chemical and mechanical signals between the epithelium and the surrounding mesenchymal tissues regulates the formation and growth of branching organs. Surprisingly, when cultured in isolation from mesenchymal tissues, many epithelial tissues retain the ability to exhibit branching morphogenesis even in the absence of proliferation. In this work, we propose a simple, experimentally plausible mechanism that can drive branching morphogenesis in the absence of proliferation and cross-talk with the surrounding mesenchymal tissue. The assumptions of our mathematical model derive from in vitro observations of the behaviour of mammary epithelial cells. These data show that autocrine secretion of the growth factor TGFβ1 inhibits the formation of cell protrusions, leading to curvature-dependent inhibition of sprouting. Our hybrid cellular Potts and partial-differential equation model correctly reproduces the experimentally observed tissue-geometry-dependent determination of the sites of branching, and it suffices for the formation of self-avoiding branching structures in the absence and also in the presence of cell proliferation. This article is part of the theme issue ‘Multi-scale analysis and modelling of collective migration in biological systems’.
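    As a rough illustration of how such a hybrid model can be assembled, the sketch below couples a toy cellular Potts lattice to a finite-difference PDE for a secreted, diffusing inhibitor (standing in for TGFβ1), and penalises outward copy attempts in proportion to the local inhibitor concentration. All parameter names and values, and the specific form of the penalty, are assumptions for illustration only and are not taken from the paper's model.

```python
# Minimal sketch (not the authors' implementation) of a hybrid cellular Potts /
# PDE model: tissue sites secrete a diffusing, decaying inhibitor, and copy
# attempts that extend the tissue into the medium ("protrusions") are penalised
# by the local inhibitor concentration at the target site.
import numpy as np

rng = np.random.default_rng(0)

N = 60                          # lattice size (assumed)
sigma = np.zeros((N, N), int)   # 0 = medium, 1 = tissue
sigma[N//2-5:N//2+5, N//2-5:N//2+5] = 1   # initial cell cluster
c = np.zeros((N, N))            # inhibitor concentration field

D, secretion, decay, dt = 0.5, 1.0, 0.1, 0.1   # assumed PDE parameters
J_cm, temperature, chi = 1.0, 2.0, 5.0         # assumed Potts parameters

def laplacian(f):
    # 5-point Laplacian with periodic boundaries
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)

def boundary_energy(grid):
    # contact energy between tissue and medium over the whole lattice
    mismatches = sum((grid != np.roll(grid, s, a)).sum()
                     for s in (1, -1) for a in (0, 1))
    return J_cm * mismatches

for step in range(2000):
    # --- PDE part: secretion by tissue, diffusion and decay of the inhibitor ---
    c += dt * (D * laplacian(c) + secretion * (sigma == 1) - decay * c)

    # --- Potts part: one Metropolis copy attempt per step ---
    x, y = rng.integers(1, N - 1, size=2)
    nx, ny = x + rng.choice([-1, 0, 1]), y + rng.choice([-1, 0, 1])
    if sigma[x, y] == sigma[nx, ny]:
        continue
    trial = sigma.copy()
    trial[nx, ny] = sigma[x, y]
    dE = boundary_energy(trial) - boundary_energy(sigma)
    if sigma[x, y] == 1:
        # protrusion into the medium: suppressed where the secreted
        # inhibitor has accumulated (e.g. in concave regions)
        dE += chi * c[nx, ny]
    if dE < 0 or rng.random() < np.exp(-dE / temperature):
        sigma = trial
```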

    Task-driven learned hyperspectral data reduction using end-to-end supervised deep learning

    An important challenge in hyperspectral imaging tasks is to cope with the large number of spectral bins. Common spectral data reduction methods do not take prior knowledge about the task into account. Consequently, sparsely occurring features that may be essential for the imaging task may not be preserved in the data reduction step. Convolutional neural network (CNN) approaches are capable of learning the specific features relevant to the particular imaging task, but applying them directly to the spectral input data is constrained by the high computational cost. We propose a novel supervised deep learning approach for combining data reduction and image analysis in an end-to-end architecture. In our approach, the neural network component that performs the reduction is trained such that image features most relevant for the task are preserved in the reduction step. Results for two convolutional neural network architectures and two types of generated datasets show that the proposed Data Reduction CNN (DRCNN) approach can produce more accurate results than existing popular data reduction methods, and can be used in a wide range of problem settings. The integration of knowledge about the task allows for greater compression and higher accuracies compared to standard data reduction methods.
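    A minimal PyTorch sketch of the end-to-end idea reads as follows: a learned 1×1 convolution mixes the spectral bins down to a few channels, and a small task CNN is trained jointly on top of it, so that gradients from the task loss shape the reduction. The layer sizes, the choice of a 1×1 convolution for the reduction step, and the segmentation-style task head are illustrative assumptions and need not match the DRCNN architectures evaluated in the paper.

```python
# Sketch of joint training of a spectral reduction layer and a task network.
import torch
import torch.nn as nn

class ReductionPlusTaskCNN(nn.Module):
    def __init__(self, n_bins=200, n_reduced=2, n_classes=2):
        super().__init__()
        # data reduction: learned linear mixing of spectral bins per pixel
        self.reduce = nn.Conv2d(n_bins, n_reduced, kernel_size=1)
        # task network: small segmentation CNN operating on the reduced image
        self.task = nn.Sequential(
            nn.Conv2d(n_reduced, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):                 # x: (batch, n_bins, H, W)
        return self.task(self.reduce(x))

model = ReductionPlusTaskCNN()
x = torch.randn(4, 200, 64, 64)           # synthetic hyperspectral batch
target = torch.randint(0, 2, (4, 64, 64)) # synthetic per-pixel labels
loss = nn.CrossEntropyLoss()(model(x), target)
loss.backward()                           # gradients also reach the reduction layer
```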

    A tomographic workflow to enable deep learning for X-ray based foreign object detection

    Detection of unwanted (‘foreign’) objects within products is a common procedure in many branches of industry for maintaining production quality. X-ray imaging is a fast, non-invasive and widely applicable method for foreign object detection. Deep learning has recently emerged as a powerful approach for recognizing patterns in radiographs (i.e., X-ray images), enabling automated X-ray based foreign object detection. However, these methods require a large number of training examples, and manual annotation of these examples is a subjective and laborious task. In this work, we propose a Computed Tomography (CT) based method for producing training data for supervised learning of foreign object detection, with minimal labor requirements. In our approach, a few representative objects are CT scanned and reconstructed in 3D. The radiographs that are acquired as part of the CT-scan data serve as input for the machine learning method. High-quality ground truth locations of the foreign objects are obtained through accurate 3D reconstructions and segmentations. Using these segmented volumes, corresponding 2D segmentations are obtained by creating virtual projections. We outline the benefits of objectively and reproducibly generating training data in this way. In addition, we show how the accuracy depends on the number of objects used for the CT reconstructions. The results show that in this workflow generally only a relatively small number of representative objects (i.e., fewer than 10) are needed to achieve adequate detection performance in an industrial setting.
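    The ground-truth generation step can be sketched as follows: the segmented foreign-object volume is virtually projected at each scan angle, yielding a 2D annotation mask that is aligned with the corresponding radiograph. The parallel-beam sum projection and the scipy-based volume rotation below are simplifying assumptions; an actual workflow would use the real geometry of the CT acquisition.

```python
# Sketch of obtaining 2D ground-truth masks from a segmented 3D volume
# by virtual projection.
import numpy as np
from scipy.ndimage import rotate

def virtual_projection_masks(foreign_volume, angles_deg):
    """foreign_volume: 3D boolean array marking segmented foreign-object voxels.
    Returns one 2D binary mask per projection angle."""
    masks = []
    for angle in angles_deg:
        # rotate the volume around the vertical axis to mimic the scan rotation
        rotated = rotate(foreign_volume.astype(float), angle,
                         axes=(1, 2), reshape=False, order=1)
        # parallel-beam "radiograph": line integrals along one horizontal axis
        projection = rotated.sum(axis=1)
        # a pixel is labelled foreign if any foreign voxel lies on its ray
        masks.append(projection > 0.5)
    return masks

# toy example: a small cube of "foreign" voxels inside an otherwise empty volume
volume = np.zeros((64, 64, 64), dtype=bool)
volume[28:36, 20:28, 30:38] = True
ground_truth = virtual_projection_masks(volume, angles_deg=range(0, 180, 30))
print([m.sum() for m in ground_truth])   # number of annotated pixels per angle
```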

    A collection of 131 CT datasets of pieces of modeling clay containing stones - Part 1 of 5

    This submission contains a collection of 131 CT scans of pieces of modeling clay (Play-Doh) with various numbers of stones inserted. The submission is intended as raw supplementary material to reproduce the CT reconstructions and subsequent results in the paper titled "A tomographic workflow to enable deep learning for X-ray based foreign object detection" [Zeegers 2022]. This submission consists of three parts in total.

    A collection of X-ray projections of 131 pieces of modeling clay containing stones for machine learning-driven object detection

    This submission contains a collection of 235800 X-ray projections of 131 pieces of modeling clay (Play-Doh) with various numbers of stones inserted. The submission is intended as an extensive and easy-to-use training dataset for supervised machine learning driven object detection. The ground truth locations of the stones are included. The data is supplementary material to the paper titled "A tomographic workflow to enable deep learning for X-ray based foreign object detection" [Zeegers 2022].

    mzeegers/ADJUST: ADJUST v1.0.2

    Minor release for Zenodo archiving. Code for ADJUST: A Dictionary-Based Joint Reconstruction and Unmixing Method for Spectral Tomography. For a full description, please visit https://github.com/mzeegers/ADJUST.