
    Joint co-clustering: co-clustering of genomic and clinical bioimaging data

    For a better understanding of the genetic mechanisms underlying clinical observations, and a better definition of the group of potential candidates for protein-family-inhibiting therapy, it is interesting to determine the correlations between genomic data, clinical data, and data coming from high-resolution and fluorescence microscopy. We introduce a computational method, called joint co-clustering, that can find co-clusters or groups of genes, bioimaging parameters and clinical traits that are believed to be closely related to each other based on the given empirical information. As bioimaging parameters, we quantify the expression of the growth factor receptor EGFR/erb-B family in non-small cell lung carcinoma (NSCLC) through a fully-automated computer-aided analysis approach. This immunohistochemical analysis is usually performed by pathologists via visual inspection of tissue sample images. Our fully-automated technique streamlines this error-prone and time-consuming process, thereby facilitating analysis and diagnosis. Experimental results for several real-life datasets demonstrate the high quantitative precision of our approach. The joint co-clustering method was tested with the receptor EGFR/erb-B family data on non-small cell lung carcinoma (NSCLC) tissue and identified statistically significant co-clusters of genes, receptor protein expression and clinical traits. The validation of our results against the literature suggests that the proposed method can provide biologically meaningful co-clusters of genes and traits and that it is a very promising approach to analyse large-scale biological data and to study multi-factorial genetic pathologies through their genetic alterations.
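    The joint co-clustering algorithm itself is not reproduced here, but the general idea of co-clustering a joint samples-by-features matrix can be sketched with scikit-learn's generic SpectralCoclustering as a stand-in; the matrix, the feature split and the cluster count below are purely hypothetical.

        # Minimal co-clustering sketch with a generic algorithm; this is NOT
        # the paper's joint co-clustering method. Columns mix hypothetical
        # gene expression, bioimaging parameters and clinical traits.
        import numpy as np
        from sklearn.cluster import SpectralCoclustering

        rng = np.random.default_rng(0)
        X = rng.random((60, 25))   # 60 tissue samples x 25 joint features
        # columns 0-14: genes, 15-19: bioimaging parameters, 20-24: clinical traits

        model = SpectralCoclustering(n_clusters=4, random_state=0)
        model.fit(X)

        for k in range(4):
            rows = np.where(model.rows_[k])[0]      # samples in co-cluster k
            cols = np.where(model.columns_[k])[0]   # features in co-cluster k
            print(f"co-cluster {k}: {len(rows)} samples, features {list(cols)}")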

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are proven with a variety of experimental results.
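    Because PyNN is simulator-independent, the same network description can in principle target a software simulator or the neuromorphic hardware by swapping the imported backend module. A minimal sketch under that assumption follows; pyNN.nest is a real software backend, the hardware backend module is left as a placeholder, and all network parameters are illustrative.

        # Minimal PyNN network; only the import line would change to retarget
        # the script at a hardware backend (module name not shown here).
        import pyNN.nest as sim   # swap for the neuromorphic hardware backend

        sim.setup(timestep=0.1)   # ms

        stim = sim.Population(10, sim.SpikeSourcePoisson(rate=20.0))
        cells = sim.Population(100, sim.IF_cond_exp())
        sim.Projection(stim, cells, sim.FixedProbabilityConnector(0.1),
                       synapse_type=sim.StaticSynapse(weight=0.005))

        cells.record("spikes")
        sim.run(1000.0)           # ms

        spiketrains = cells.get_data().segments[0].spiketrains
        sim.end()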

    A modelling approach towards Epidermal homoeostasis control

    In order to grasp the features arising from cellular discreteness and individuality, agent-based models are favoured in large parts of cell tissue modelling. The subclass of off-lattice models allows for a physical motivation of the intercellular interaction rules. We apply an improved version of a previously introduced off-lattice agent-based model to the steady-state flow equilibrium of skin. The dynamics of cells is determined by conservative and drag forces, supplemented with delta-correlated random forces. Cellular adjacency is detected by a weighted Delaunay triangulation. The cell cycle time of keratinocytes is controlled by a diffusible substance provided by the dermis. Its concentration is calculated from a diffusion equation with time-dependent boundary conditions and varying diffusion coefficients. The dynamics of a nutrient is also taken into account by a reaction-diffusion equation. It turns out that the analysed control mechanism suffices to explain several characteristics of epidermal homoeostasis formation. In addition, we examine the question of how in silico melanoma with decreased basal adhesion manage to persist within the steady-state flow equilibrium of the skin. Interestingly, even for melanocyte cell cycle times being substantially shorter than for keratinocytes, tiny stochastic effects can lead to completely different outcomes. The results demonstrate that the understanding of initial states of tumour growth can profit significantly from the application of off-lattice agent-based models in computer simulations. Comment: 23 pages, 7 figures, 1 table; version that is to appear in Journal of Theoretical Biology
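    A rough, minimal sketch of this kind of off-lattice dynamics (overdamped motion under conservative pair forces, drag and delta-correlated random forces, with adjacency taken from a Delaunay triangulation) is given below; the linear spring force law and all parameters are illustrative assumptions rather than the paper's model, and scipy's unweighted Delaunay stands in for the weighted triangulation.

        # Off-lattice agent sketch: overdamped Langevin dynamics with
        # conservative pair forces between Delaunay-adjacent cells plus
        # delta-correlated random forces. Parameters are illustrative.
        import numpy as np
        from scipy.spatial import Delaunay   # stand-in for weighted Delaunay

        rng = np.random.default_rng(1)
        n, dt, gamma, noise, r0 = 50, 0.01, 1.0, 0.1, 1.0
        pos = rng.random((n, 2)) * 10.0      # 2D cell centre positions

        def pair_force(x):
            """Linear spring toward rest distance r0 between adjacent cells."""
            tri = Delaunay(x)
            edges = {tuple(sorted(e)) for s in tri.simplices
                     for e in zip(s, np.roll(s, 1))}
            f = np.zeros_like(x)
            for i, j in edges:
                d = x[j] - x[i]
                r = np.linalg.norm(d)
                k = (r - r0) * (d / r)       # attraction beyond r0, repulsion below
                f[i] += k
                f[j] -= k
            return f

        for step in range(500):              # Euler-Maruyama integration
            xi = rng.normal(0.0, 1.0, pos.shape)
            pos += dt * pair_force(pos) / gamma + np.sqrt(2 * noise * dt) * xi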

    Review of the Synergies Between Computational Modeling and Experimental Characterization of Materials Across Length Scales

    With the increasing interplay between experimental and computational approaches at multiple length scales, new research directions are emerging in materials science and computational mechanics. Such cooperative interactions find many applications in the development, characterization and design of complex material systems. This manuscript provides a broad and comprehensive overview of recent trends where predictive modeling capabilities are developed in conjunction with experiments and advanced characterization to gain a greater insight into structure-property relationships and study various physical phenomena and mechanisms. The focus of this review is on the intersections of multiscale materials experiments and modeling relevant to the materials mechanics community. After a general discussion on the perspective from various communities, the article focuses on the latest experimental and theoretical opportunities. Emphasis is given to the role of experiments in multiscale models, including insights into how computations can be used as discovery tools for materials engineering, rather than to "simply" support experimental work. This is illustrated by examples from several application areas on structural materials. This manuscript ends with a discussion on some problems and open scientific questions that are being explored in order to advance this relatively new field of research. Comment: 25 pages, 11 figures, review article accepted for publication in J. Mater. Sci.

    Efficient Bayesian-based Multi-View Deconvolution

    Light sheet fluorescence microscopy is able to image large specimens with high resolution by imaging the samples from multiple angles. Multi-view deconvolution can significantly improve the resolution and contrast of the images, but its application has been limited due to the large size of the datasets. Here we present a Bayesian-based derivation of multi-view deconvolution that drastically improves the convergence time, and we provide a fast implementation utilizing graphics hardware. Comment: 48 pages, 20 figures, 1 table, under review at Nature Methods
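    The Bayesian derivation and GPU implementation are not reproduced here; as a reference point, the classical sequential multi-view Richardson-Lucy update that such derivations accelerate can be sketched in a few lines for 2D, pre-registered views:

        # Classical sequential multi-view Richardson-Lucy deconvolution (2D).
        # This is the baseline scheme, not the paper's accelerated Bayesian
        # variant; views are assumed registered to a common coordinate frame.
        import numpy as np
        from scipy.signal import fftconvolve

        def multiview_rl(views, psfs, n_iter=50, eps=1e-12):
            estimate = np.full_like(views[0], views[0].mean())
            for _ in range(n_iter):
                for view, psf in zip(views, psfs):
                    blurred = fftconvolve(estimate, psf, mode="same")
                    ratio = view / (blurred + eps)
                    # correct with the mirrored PSF (the adjoint operator)
                    estimate *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
            return estimate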

    An Affordable Portable Obstetric Ultrasound Simulator for Synchronous and Asynchronous Scan Training

    The increasing use of Point of Care (POC) ultrasound presents a challenge in providing efficient training to new POC ultrasound users. In response to this need, we have developed an affordable, compact, laptop-based obstetric ultrasound training simulator. It offers freehand ultrasound scanning on an abdomen-sized scan surface with a five-degrees-of-freedom sham transducer and utilizes 3D ultrasound image volumes as training material. The simulator user interface renders a virtual torso whose body surface models the abdomen of a particular pregnant scan subject. A virtual transducer scans the virtual torso by following the sham transducer movements on the scan surface. The obstetric ultrasound training is self-paced and guided by the simulator using a set of tasks, which are focused on three broad areas, referred to as modules: 1) medical ultrasound basics, 2) orientation to obstetric space, and 3) fetal biometry. A learner completes the scan training through the following three steps: (i) watching demonstration videos, (ii) practicing scan skills by sequentially completing the tasks in Modules 2 and 3, with scan evaluation feedback and help functions available, and (iii) a final scan exercise on new image volumes for assessing the acquired competency. After each training task has been completed, the simulator evaluates whether the task has been carried out correctly by comparing anatomical landmarks identified and/or measured by the learner to reference landmark bounds created by algorithms or pre-inserted by experienced sonographers.

    Based on the simulator, an ultrasound E-training system has been developed for medical practitioners for whom ultrasound training is not accessible at the local level. The system, composed of a dedicated server and multiple networked simulators, provides synchronous and asynchronous training modes and is able to operate at a very low bit rate. The synchronous (or group-learning) mode allows all training participants to observe the same 2D image in real time, such as a demonstration by an instructor or the scan ability of a chosen learner. The synchronization of 2D images on the different simulators is achieved by directly transmitting the position and orientation of the sham transducer, rather than the ultrasound image, which makes system performance independent of network bandwidth. The asynchronous (or self-learning) mode is the self-paced training described in the previous paragraph; in this mode, the E-training system still allows all training participants to stay networked and communicate with each other via a text channel.

    To verify the simulator performance and training efficacy, we conducted several performance experiments and clinical evaluations. The performance experiment results indicated that the simulator was able to generate more than 30 2D ultrasound images per second with acceptable image quality on medium-priced computers. In our initial experiment investigating the simulator's training capability and feasibility, three experienced sonographers individually scanned two image volumes on the simulator. They agreed that the simulated images and the scan experience were adequately realistic for ultrasound training and that the training procedure followed standard obstetric ultrasound protocol. They further noted that the simulator had the potential to become a good supplemental training tool for medical students and resident doctors.

    A clinical study investigating the simulator training efficacy was integrated into the clerkship program of the Department of Obstetrics and Gynecology, University of Massachusetts Memorial Medical Center. A total of 24 third-year medical students were recruited, and each of them was directed to scan six image volumes on the simulator in two 2.5-hour sessions. The study results showed that the successful scan times for the training tasks decreased significantly as the training progressed. In a post-training survey, the students reported that they considered the simulator-based training useful and suitable for medical students and resident doctors. The experiment to validate the performance of the E-training system showed that the average transmission bit rate was approximately 3-4 kB/s; the data loss was less than 1%, and no loss of 2D images was visually detected. The results also showed that the 2D images on all networked simulators could be considered synchronous even when inter-continental communication was involved.
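    The bandwidth claim is easy to make concrete: a pose packet carrying the transducer position and orientation fits in a few dozen bytes, so even at 30 updates per second the stream stays far below the reported 3-4 kB/s. The sketch below illustrates the idea; the field layout, transport and addresses are hypothetical, not the system's actual wire format.

        # Hypothetical pose packet: 3 floats of position + 4 floats of
        # quaternion orientation = 28 bytes, sent over UDP. Illustrative
        # only; not the E-training system's real protocol.
        import socket
        import struct

        POSE_FMT = "<3f4f"                    # x, y, z, qx, qy, qz, qw
        assert struct.calcsize(POSE_FMT) == 28

        def send_pose(sock, addr, position, quaternion):
            sock.sendto(struct.pack(POSE_FMT, *position, *quaternion), addr)

        def recv_pose(sock):
            data, _ = sock.recvfrom(64)
            vals = struct.unpack(POSE_FMT, data)
            return vals[:3], vals[3:]         # (position, quaternion)

        # usage sketch:
        # sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # send_pose(sock, ("127.0.0.1", 9999), (0.1, 0.2, 0.0), (0.0, 0.0, 0.0, 1.0))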

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments (including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers) is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. Comment: Major revision, to appear in SIAM Review

    Towards an automated virtual slide screening: theoretical considerations and practical experiences of automated tissue-based virtual diagnosis to be implemented in the Internet

    AIMS: To develop and implement an automated virtual slide screening system that distinguishes normal histological findings and several tissue-based crude (texture-based) diagnoses. THEORETICAL CONSIDERATIONS: Virtual slide technology has to handle and transfer images gigabytes in size. The performance of tissue-based diagnosis can be separated into a) a sampling procedure to allocate the slide area containing the most significant diagnostic information, and b) the evaluation of the diagnosis obtained from the information present in the selected area. Nyquist's theorem, which is broadly applied in acoustics, can also serve for quality assurance in image information analysis, especially to preset the accuracy of sampling. Texture-based diagnosis can be performed with recursive formulas that do not require a detailed segmentation procedure. The obtained results are then transferred into a "self-learning" discrimination system that adjusts itself to changes of image parameters such as brightness, shading, or contrast. METHODS: Non-overlapping compartments of the original virtual slide (image) are chosen at random and according to Nyquist's theorem (predefined error rate). The compartments are standardized by local filter operations and are subjected to texture analysis. The texture analysis is performed on the basis of a recursive formula that computes the median gray value and the local noise distribution. The computations are performed at different magnifications that are adjusted to the most frequently used objectives (×2, ×4.5, ×10, ×20, ×40). The obtained data are statistically analyzed in a hierarchical sequence and in relation to the clinical significance of the diagnosis. RESULTS: The system has been tested with a total of 896 lung cancer cases that include the following diagnosis groups: cohort (1): normal lung vs. cancer; cancer subdivided in cohort (2): small cell lung cancer vs. non-small cell lung cancer; non-small cell lung cancer subdivided in cohort (3): squamous cell carcinoma vs. adenocarcinoma vs. large cell carcinoma. The system classifies all diagnoses in cohorts (1) and (2) correctly in 100% of cases, and those in cohort (3) in more than 95% of cases. The selected area can be limited to only 10% of the original image without any increase in error rate. CONCLUSION: The developed system is a fast and reliable procedure that fulfills all requirements for an automated "pre-screening" of virtual slides in lung pathology.
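    As a minimal illustration of the sampling and feature steps described above (random selection of non-overlapping compartments covering a preset fraction of the slide, then a median gray value and a robust local noise estimate per compartment), consider the sketch below; the paper's recursive formula and self-learning discrimination system are not reproduced, and the tile size and feature definitions are illustrative assumptions.

        # Sample non-overlapping tiles at random and compute simple texture
        # features (median gray value, robust noise estimate). Illustrative
        # stand-in for the paper's recursive texture formula.
        import numpy as np

        def sample_tiles(image, tile=256, fraction=0.10, rng=None):
            if rng is None:
                rng = np.random.default_rng()
            h, w = image.shape
            grid = [(r, c) for r in range(0, h - tile + 1, tile)
                           for c in range(0, w - tile + 1, tile)]
            n = max(1, int(fraction * len(grid)))   # e.g. 10% of the slide
            for idx in rng.choice(len(grid), size=n, replace=False):
                r, c = grid[idx]
                yield image[r:r + tile, c:c + tile]

        def texture_features(tile):
            median = np.median(tile)                     # median gray value
            noise = np.median(np.abs(tile - median))     # local noise (MAD)
            return median, noise

        # features = [texture_features(t) for t in sample_tiles(slide)]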