
    Cell Image Segmentation with Kernel-Based Dynamic Clustering and an Ellipsoidal Cell Shape Model

    In this paper, we propose a novel approach to cell image segmentation under severe noise conditions by combining kernel-based dynamic clustering and a genetic algorithm. Our method incorporates a priori knowledge about cell shape: an elliptical cell contour model is introduced to describe the boundary of the cell. The method consists of the following steps: (1) obtain the gradient image; (2) use the gradient image to obtain points that possibly belong to cell boundaries; (3) adjust the parameters of the elliptical cell boundary model to match the cell contour using a genetic algorithm. The method is tested on noisy images of human thyroid and small intestine cells.
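
    A minimal sketch of the ellipse-fitting step described above, assuming candidate boundary points have already been extracted from the gradient image; the parameterisation, fitness function, and GA operators here are illustrative placeholders, not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ellipse_fitness(params, points):
    """Negative mean algebraic distance of candidate edge points to the ellipse."""
    cx, cy, a, b, theta = params
    c, s = np.cos(theta), np.sin(theta)
    # Rotate points into the ellipse's axis-aligned frame.
    x = (points[:, 0] - cx) * c + (points[:, 1] - cy) * s
    y = -(points[:, 0] - cx) * s + (points[:, 1] - cy) * c
    # The implicit ellipse equation equals 1 on the boundary.
    d = np.abs(x**2 / a**2 + y**2 / b**2 - 1.0)
    return -d.mean()

def fit_ellipse_ga(points, pop_size=60, generations=200, sigma=2.0):
    """Evolve (cx, cy, a, b, theta) so the ellipse matches the boundary points."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    pop = np.column_stack([
        rng.uniform(lo[0], hi[0], pop_size),          # cx
        rng.uniform(lo[1], hi[1], pop_size),          # cy
        rng.uniform(5, hi[0] - lo[0] + 5, pop_size),  # semi-axis a
        rng.uniform(5, hi[1] - lo[1] + 5, pop_size),  # semi-axis b
        rng.uniform(0, np.pi, pop_size),              # orientation theta
    ])
    for _ in range(generations):
        scores = np.array([ellipse_fitness(p, points) for p in pop])
        elite = pop[np.argsort(scores)[-pop_size // 4:]]        # selection
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        children = parents.mean(axis=1)                          # crossover
        children += rng.normal(0, sigma, children.shape)         # mutation
        pop = children
        pop[0] = elite[-1]                                       # keep the best individual
    scores = np.array([ellipse_fitness(p, points) for p in pop])
    return pop[np.argmax(scores)]
```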

    Extended Object Tracking: Introduction, Overview and Applications

    This article provides an elaborate overview of current research in extended object tracking. We provide a clear definition of the extended object tracking problem and discuss how it is delimited from other types of object tracking. Next, different aspects of extended object modelling are extensively discussed. Subsequently, we give a tutorial introduction to two basic and widely used extended object tracking approaches: the random matrix approach and the Kalman filter-based approach for star-convex shapes. The next part treats the tracking of multiple extended objects and elaborates on how the large number of feasible association hypotheses can be tackled using both Random Finite Set (RFS) and non-RFS multi-object trackers. The article concludes with a summary of current applications, in which four example applications involving camera, X-band radar, light detection and ranging (lidar), and red-green-blue-depth (RGB-D) sensors are highlighted.
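
    For orientation, a deliberately simplified measurement update in the spirit of the random matrix approach mentioned above (kinematic state estimated with a Kalman-style update from the measurement centroid, elliptical extent estimated as an SPD matrix blended with the measurement scatter); the exact update equations in the literature differ, and the symbols and the z_scale spread factor are assumptions of this sketch:

```python
import numpy as np

def random_matrix_update(x, P, X, alpha, Z, H, R, z_scale=0.25):
    """Simplified single-object update inspired by the random matrix approach.

    x, P    : kinematic state mean and covariance
    X       : extent estimate (SPD matrix modelling an ellipse)
    alpha   : scalar pseudo-count controlling confidence in the extent
    Z       : (n, 2) array of measurements from the same extended object
    H       : measurement matrix selecting the position from x
    R       : sensor noise covariance
    z_scale : spread factor (e.g. 1/4 for reflections spread over the extent)
    """
    n = Z.shape[0]
    z_bar = Z.mean(axis=0)                        # measurement centroid
    Z_spread = (Z - z_bar).T @ (Z - z_bar)        # measurement scatter matrix

    # Kalman-style update of the kinematic state using the centroid.
    Y = z_scale * X + R                           # per-measurement covariance
    S = H @ P @ H.T + Y / n                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z_bar - H @ x)
    P_new = P - K @ S @ K.T

    # Blend the prior extent with the observed scatter and innovation.
    N = np.outer(z_bar - H @ x, z_bar - H @ x)
    X_new = (alpha * X + Z_spread + N) / (alpha + n)
    return x_new, P_new, X_new, alpha + n
```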

    Three-dimensional tumour microenvironment reconstruction and tumour-immune interactions' analysis

    Tumours arise within complex 3D microenvironments, but routine 2D analysis of tumours often underestimates this spatial heterogeneity. In this paper, we present a methodology to reconstruct and analyse 3D tumour models from routine clinical samples, allowing 3D interactions to be analysed at cellular resolution. Our workflow involves cutting thin serial sections of tumours followed by labelling of cells using markers of interest. Serial sections are then scanned, and digital multiplexed data are created for computational reconstruction. Following spectral unmixing, a registration method for the consecutive images is applied, consisting of a pre-alignment step, a parametric step, and a non-parametric image registration step. For the segmentation of the cells, an ellipsoidal model is proposed, and for the 3D reconstruction, a cubic interpolation method is used. The proposed 3D models allow us to identify specific interaction patterns that emerge as tumours develop, adapt and evolve within their host microenvironment. We applied our technique to map tumour-immune interactions in colorectal cancer, and preliminary results suggest that 3D models better represent tumour-immune cell interactions, revealing mechanisms within the tumour microenvironment and its heterogeneity.
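
    As an illustration of the final reconstruction step, a hedged sketch of cubic interpolation of registered, unmixed serial sections along the cutting axis using SciPy; array shapes, spacings, and function names are assumptions of this example rather than the authors' pipeline:

```python
import numpy as np
from scipy.interpolate import interp1d

def reconstruct_volume(sections, section_z, target_z):
    """Cubic interpolation of registered serial sections along z.

    sections : (n_sections, H, W) registered, spectrally unmixed marker images
    section_z: physical z position of each section (e.g. in micrometres)
    target_z : z positions of the reconstructed, finer-spaced slices
    """
    f = interp1d(section_z, sections, kind="cubic", axis=0,
                 bounds_error=False, fill_value="extrapolate")
    return f(target_z)   # (len(target_z), H, W) interpolated volume

# Example: 10 sections cut every 4 µm, resampled to 1 µm slice spacing.
sections = np.random.rand(10, 256, 256)
volume = reconstruct_volume(sections, np.arange(10) * 4.0, np.arange(0.0, 36.5, 1.0))
```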

    Assessing the Mass Transfer Coefficient in Jet Bioreactors with Classical Computer Vision Methods and Neural Networks Algorithms

    Development of energy-efficient and high-performance bioreactors requires progress in methods for assessing the key parameters of the biosynthesis process. Despite the wide variety of approaches and methods for determining the phase contact area in gas–liquid flows, the question of obtaining an accurate quantitative estimate of it remains open. Particularly challenging are obtaining information about the mass transfer coefficients instantly and developing predictive capabilities for effective flow control in continuous fermentation at both laboratory and industrial scales. Motivated by the opportunity to apply classical and non-classical computer vision methods to high-precision video recordings of bubble flows obtained during experiments in the bioreactor vessel, we obtained the results presented in this paper. Characteristics of the bioreactor's bubble flow were first estimated by classical computer vision (CCV) methods, including an elliptic regression approach for selecting and clustering single-bubble boundaries, image transformation through a set of filters, and an algorithm for separating overlapping bubbles. Applying the developed method to the entire video recording makes it possible to obtain parameter distributions and set dropout thresholds in order to obtain better estimates through averaging. The developed CCV methodology was also tested and verified on a manually collected and labelled dataset. A deep neural network (NN) approach was then applied, for instance to the segmentation task, and demonstrated certain advantages in terms of segmentation resolution, while the classical approach tends to be faster. Thus, in the current manuscript both advantages and disadvantages of the classical computer vision (CCV) and neural network (NN) approaches are discussed based on an evaluation of the number of bubbles and their areas. An approach to estimating the mass transfer coefficient from the obtained results is also presented.
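
    A hedged sketch of the classical (CCV) route for a single video frame, fitting ellipses to candidate bubble contours with OpenCV; the preprocessing chain and thresholds below are placeholders, not the paper's exact filter set or overlap-separation algorithm:

```python
import cv2
import numpy as np

def bubble_ellipses(frame, min_area=30.0):
    """Fit ellipses to bubble contours in one video frame.

    Returns a list of (center, axes, angle, area) for sufficiently large
    contours; areas can be aggregated over frames into size distributions.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    results = []
    for c in contours:
        if len(c) < 5 or cv2.contourArea(c) < min_area:
            continue                      # cv2.fitEllipse needs >= 5 points
        (cx, cy), (major, minor), angle = cv2.fitEllipse(c)
        area = np.pi * major * minor / 4.0
        results.append(((cx, cy), (major, minor), angle, area))
    return results
```

    Per-frame ellipse areas collected this way can then be filtered against dropout thresholds and averaged over the recording, as described above, before feeding the phase contact area into the mass transfer coefficient estimate.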

    Meta-KANSEI modeling with Valence-Arousal fMRI dataset of brain

    Background: Traditional KANSEI methodology is an important tool in the field of psychology for comprehending concepts and meanings; it mainly focuses on semantic differential methods. Valence-Arousal is regarded as a reflection of the KANSEI adjectives, which is the core concept in the theory of affective dimensions for brain recognition. Previous studies have found that brain fMRI datasets can contain significant information related to Valence and Arousal. Methods: In the current work, a Valence-Arousal based meta-KANSEI modeling method is proposed to improve the traditional KANSEI representation. Functional Magnetic Resonance Imaging (fMRI) was used to acquire the Valence-Arousal response dataset of the brain in the amygdala and orbital frontal cortex, respectively. In order to validate the feasibility of the proposed modeling method, the dataset was processed by dimension reduction using Kernel Density Estimation (KDE) based segmentation and Mean Shift (MS) clustering. Furthermore, Affective Norms for English Words (ANEW) from the International Affective Picture System (IAPS) were used for comparison and analysis. The datasets from fMRI and ANEW under four KANSEI adjectives (angry, happy, sad and pleasant) were processed by the Fuzzy C-Means (FCM) algorithm. Finally, a defined distance based on similarity computing was adopted for these two datasets. Results: The results illustrate that the proposed model is feasible and has better stability according to the normal-distribution plot of the distances. The effectiveness of the experimental methods proposed in the current work was higher than that reported in the literature. Conclusions: Mean Shift can be used for clustering, and a central-points-based meta-KANSEI model, combined with the advantages of a variety of existing intelligent processing methods, is expected to shift KANSEI Engineering (KE) research into the medical imaging field.
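
    A rough sketch of the KDE-based segmentation and Mean Shift clustering steps applied to a Valence-Arousal point cloud, followed by a simple symmetric centre-to-centre distance; the quantile thresholds and the distance definition here are assumptions, not the paper's exact similarity measure:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.cluster import MeanShift, estimate_bandwidth

def kansei_cluster_centres(valence_arousal, density_quantile=0.25):
    """Reduce a Valence-Arousal point cloud to Mean Shift cluster centres.

    valence_arousal : (N, 2) array of per-stimulus (valence, arousal) scores
    density_quantile: fraction of lowest-density points discarded by the
                      KDE-based segmentation step
    """
    # KDE-based segmentation: keep only points in high-density regions.
    kde = gaussian_kde(valence_arousal.T)
    density = kde(valence_arousal.T)
    keep = valence_arousal[density >= np.quantile(density, density_quantile)]

    # Mean Shift clustering of the retained points.
    bw = estimate_bandwidth(keep, quantile=0.3)
    return MeanShift(bandwidth=bw).fit(keep).cluster_centers_

def model_distance(centres_a, centres_b):
    """Symmetric mean nearest-centre distance between two KANSEI models."""
    d = np.linalg.norm(centres_a[:, None] - centres_b[None], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```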

    Image Segmentation of Bacterial Cells in Biofilms

    Bacterial biofilms are three-dimensional cell communities that live embedded in a self-produced extracellular matrix. Due to the protective properties of the dense coexistence of microorganisms, single bacteria inside the communities are hard to eradicate by antibacterial agents and bacteriophages. This increased resilience gives rise to severe problems in medical and technological settings. To fight the bacterial cells, a detailed understanding of the underlying mechanisms of biofilm formation and development is required. Due to spatio-temporal variations in environmental conditions inside a single biofilm, the mechanisms can only be investigated by probing single cells at different locations over time. Currently, the mechanistic information is primarily encoded in volumetric image data gathered with confocal fluorescence microscopy. To quantify features of single-cell behaviour, single objects need to be detected. This identification of objects inside biofilm image data is called segmentation and is a key step for understanding the biological processes inside biofilms. In the first part of this work, a user-friendly computer program is presented which simplifies the analysis of bacterial biofilms. It provides a comprehensive set of tools to segment, analyse, and visualize fluorescence microscopy data without writing a single line of analysis code. This allows for faster feedback loops between experiment and analysis, and enables fast insights into the gathered data. The single-cell segmentation accuracy of a recent segmentation algorithm is discussed in detail. In this discussion, points for improvement are identified and a new, optimized segmentation approach is presented. The improved algorithm achieves superior segmentation accuracy on bacterial biofilms when compared to the current state-of-the-art algorithms. Finally, the possibility of deep learning-based end-to-end segmentation of biofilm data is investigated. A method for the quick generation of training data is presented, and two single-cell segmentation approaches originally developed for eukaryotic cells are adapted for the segmentation of bacterial biofilms.
    (German abstract, translated:) Bacterial biofilms are three-dimensional cell clusters that produce their own matrix. The self-produced matrix provides the cells with communal protection against external stress factors. These stress factors can be abiotic in nature, e.g. temperature and nutrient fluctuations, or biotic, e.g. antibiotic treatment or bacteriophage infections. As a consequence, individual cells within these microbial communities exhibit increased resilience and pose a great challenge for medicine and technological applications. To fight biofilms effectively, the mechanisms underlying their growth and development must be deciphered. Because of the high cell density within the communities, these mechanisms are not spatially and temporally invariant but depend, for example, on metabolite, nutrient, and oxygen gradients. Observations at the single-cell level are therefore indispensable for their description. The non-invasive investigation of individual cells within a biofilm relies on confocal fluorescence microscopy. To extract cell properties from the collected three-dimensional image data, the individual cells must be detected. The digital reconstruction of cell morphology plays a particularly important role here. It is obtained by segmenting the image data, assigning individual image elements to the imaged objects, which makes it possible to distinguish the individual objects from each other and to extract their properties. In the first part of this work, a user-friendly computer program is presented that considerably simplifies the segmentation and analysis of fluorescence microscopy data. It provides an extensive selection of traditional segmentation algorithms, parameter calculations, and visualization options. All functions are accessible without programming knowledge, making them available to a large group of users. The implemented functions make it possible to significantly shorten the time between a completed experiment and the finished data analysis, so that a rapid succession of continuously adapted experiments yields scientific insights into biofilms in a short time. As a complement to the existing methods for single-cell segmentation in biofilms, an improvement is presented that exceeds the accuracy of previous filter-based algorithms and represents a further step towards temporally and spatially resolved single-cell tracking within bacterial biofilms. Finally, the possibility of applying deep learning algorithms for segmentation in biofilms is evaluated. For this purpose, a method is presented that drastically reduces the annotation effort for training data compared to fully manual annotation. The generated data are used for training the algorithms, and the segmentation accuracy is examined on experimental data.
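
    For orientation, a minimal classical single-cell segmentation sketch for a 3D confocal stack (seeded watershed with scikit-image); this is a generic baseline for the segmentation task described above, not the program or the improved algorithm presented in the thesis:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_cells_3d(volume, smoothing=1.0, min_seed_distance=3):
    """Classical seeded-watershed single-cell segmentation of a 3D stack.

    volume: (Z, Y, X) fluorescence intensities from confocal microscopy.
    Returns an integer label image, one label per detected cell candidate.
    """
    smoothed = gaussian(volume, sigma=smoothing)
    mask = smoothed > threshold_otsu(smoothed)            # foreground voxels
    distance = ndi.distance_transform_edt(mask)           # peaks at cell interiors
    seeds = peak_local_max(distance, min_distance=min_seed_distance,
                           labels=ndi.label(mask)[0])
    markers = np.zeros(volume.shape, dtype=int)
    markers[tuple(seeds.T)] = np.arange(1, len(seeds) + 1)
    return watershed(-distance, markers, mask=mask)
```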

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for assessing treatment effectiveness. Medical images provide a very large amount of valuable information, too much to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA, with approximately 224,210 new cases and 159,260 related deaths in 2014. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases can estimate elasticity, ventilation, and texture features that provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies will lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from the radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heart beats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately.
    The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with a feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov Gibbs random field (MGRF) model that accurately models the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality feature calculations are based on the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage, enabling earlier intervention.
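
    A hedged sketch of how ventilation and strain descriptors of the kind described above can be derived from a registration-derived deformation field; unit voxel spacing and the Green-Lagrange strain definition are assumptions of this illustration, not necessarily the dissertation's exact formulation:

```python
import numpy as np

def ventilation_and_strain(deformation):
    """Voxel-wise ventilation and strain surrogates from a 3D deformation field.

    deformation: (3, Z, Y, X) displacements (uz, uy, ux) mapping one
    respiratory phase onto the next, e.g. from 4D-CT registration.
    Returns the Jacobian determinant (local volume change, a ventilation
    surrogate) and the maximal principal strain per voxel.
    """
    # Spatial gradients of each displacement component: grads[i, j] = d u_i / d x_j.
    grads = np.stack([np.stack(np.gradient(deformation[i]), axis=0)
                      for i in range(3)], axis=0)      # (3, 3, Z, Y, X)
    F = grads + np.eye(3)[:, :, None, None, None]      # deformation gradient F = I + grad(u)
    F = np.moveaxis(F, (0, 1), (-2, -1))               # (Z, Y, X, 3, 3)
    jac_det = np.linalg.det(F)                         # local volume-change ratio

    # Green-Lagrange strain tensor E = (F^T F - I) / 2, per voxel.
    E = 0.5 * (np.swapaxes(F, -1, -2) @ F - np.eye(3))
    principal_strain = np.linalg.eigvalsh(E)[..., -1]  # largest eigenvalue
    return jac_det, principal_strain
```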

    Augmented breast tumor classification by perfusion analysis

    Magnetic resonance and computed tomography imaging aid in the diagnosis and analysis of pathologic conditions. Blood flow, or perfusion, through a region of tissue can be computed from a time series of contrast-enhanced images. Perfusion is an important set of physiological parameters that reflect angiogenesis. In cancer, heightened angiogenesis is a key process in the growth and spread of tumorous masses. An automatic classification technique using recovered perfusion may prove to be a highly accurate diagnostic tool. Such a classification system would supplement existing histopathological tests and help physicians choose the optimal treatment protocol. Perfusion is obtained through deconvolution of signal intensity series with a pharmacokinetic model. However, many computational problems complicate the accurate and consistent recovery of perfusion. The high time-resolution acquisition of images decreases the signal-to-noise ratio, producing poor deconvolution solutions. The delivery of contrast agent as a function of time must also be determined or sampled before deconvolution can proceed. Some regions of the body, such as the brain, provide a nearby artery to serve as this arterial input function; poor estimates of it can lead to over- or underestimation of perfusion. Breast tissue is an example of a tissue region where a clearly defined artery is not present. This work proposes a new method of using recovered perfusion and spatial information in an automated classifier that grades suspected lesions as benign or malignant. The method can be integrated into a computer-aided diagnostic system to enhance the value of medical imagery.
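
    To make the deconvolution step concrete, a hedged sketch of truncated-SVD deconvolution of a tissue enhancement curve against an arterial input function; the Toeplitz formulation and truncation threshold are standard textbook choices, not necessarily the method developed in this work:

```python
import numpy as np

def deconvolve_perfusion(tissue_curve, aif, dt, svd_threshold=0.2):
    """Truncated-SVD deconvolution of a tissue enhancement curve.

    tissue_curve: contrast concentration over time in one voxel or region
    aif         : arterial input function sampled at the same time points
    dt          : sampling interval (s)
    Returns the flow-scaled residue function; its peak is an estimate of
    perfusion (blood flow) for that region.
    """
    n = len(aif)
    # Lower-triangular Toeplitz convolution matrix built from the AIF.
    A = dt * np.array([[aif[i - j] if i >= j else 0.0
                        for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    # Discard small singular values to suppress noise amplification.
    s_inv = np.where(s > svd_threshold * s.max(), 1.0 / s, 0.0)
    residue = Vt.T @ (s_inv * (U.T @ tissue_curve))
    return residue.max(), residue   # (perfusion estimate, residue function)
```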

    Computing Interpretable Representations of Cell Morphodynamics

    Shape changes (morphodynamics) are one of the principal ways cells interact with their environments and perform key intrinsic behaviours like division. These dynamics arise from a myriad of complex signalling pathways that often organise with emergent simplicity to carry out critical functions including predation, collaboration and migration. A powerful method for analysis can therefore be to quantify this emergent structure, bypassing the low-level complexity. Enormous image datasets are now available to mine. However, it can be difficult to uncover interpretable representations of the global organisation of these heterogeneous dynamic processes. Here, such representations were developed for interpreting morphodynamics in two key areas: mode of action (MoA) comparison for drug discovery (developed using the economically devastating Asian soybean rust crop pathogen) and 3D migration of immune system T cells through extracellular matrices (ECMs). For MoA comparison, population development over a 2D space of shapes (morphospace) was described using two models with condition-dependent parameters: a top-down model of diffusive development over Waddington-type landscapes, and a bottom-up model of tip growth. A variety of landscapes were discovered, describing phenotype transitions during growth, and possible perturbations in the tip-growth machinery that cause this variation were identified. For interpreting T cell migration, a new 3D shape descriptor that incorporates key polarisation information was developed, revealing the low dimensionality of shape, and the distinct morphodynamics of run-and-stop modes that emerge at minute timescales were mapped. Periodically oscillating morphodynamics that include retrograde deformation flows were found to underlie active translocation (the run mode). Overall, it was found that highly interpretable representations could be uncovered while still leveraging the enormous discovery power of deep learning algorithms. The results show that whole-cell morphodynamics can be a convenient and powerful place to search for structure, with potentially life-saving applications in medicine and biocide discovery as well as immunotherapeutics.
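
    As a toy illustration of the morphospace idea, a sketch that projects segmented, aligned cell outlines into a low-dimensional shape space using plain PCA; this stands in for, and is much simpler than, the deep-learning representations used in the thesis, and the shapes, normalisation, and component count are assumptions:

```python
import numpy as np

def build_morphospace(outlines, n_components=2):
    """Project resampled cell outlines into a low-dimensional morphospace.

    outlines: (n_cells, n_points, 2) boundary coordinates, resampled to the
    same number of points per cell. Returns per-cell coordinates along the
    first principal components plus the components themselves.
    """
    # Centre each outline and scale out size so only shape remains.
    centred = outlines - outlines.mean(axis=1, keepdims=True)
    scaled = centred / np.linalg.norm(centred, axis=(1, 2), keepdims=True)
    X = scaled.reshape(len(outlines), -1)

    # PCA via SVD of the mean-centred shape matrix.
    X = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    coords = U[:, :n_components] * s[:n_components]   # morphospace positions
    return coords, Vt[:n_components]
```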