    Leukemia Gene Atlas – A Public Platform for Integrative Exploration of Genome-Wide Molecular Data

    Leukemias are exceptionally well studied at the molecular level, and a wealth of high-throughput data has been published. However, further use of these data by researchers is severely hampered by the lack of accessible, integrative tools for viewing and analysis. We developed the Leukemia Gene Atlas (LGA) as a public platform designed to support research and analysis of diverse genomic data published in the field of leukemia. The LGA is a unique resource for leukemia research, with comprehensive search and browse functions and extensive analysis and visualization tools for various types of molecular data. Currently, its database contains data from more than 5,800 leukemia and hematopoiesis samples generated by microarray gene expression, DNA methylation, SNP and next-generation sequencing analyses. The LGA allows easy retrieval of large published data sets and thus helps to avoid redundant investigations. It is accessible at www.leukemia-gene-atlas.org.

    Comparative study of unsupervised dimension reduction techniques for the visualization of microarray gene expression data

    Background: Visualization of DNA microarray data in two- or three-dimensional spaces is an important exploratory analysis step for detecting quality issues or generating new hypotheses. Principal Component Analysis (PCA) is a widely used linear method to define the mapping between the high-dimensional data and its low-dimensional representation. During the last decade, many new nonlinear methods for dimension reduction have been proposed, but it is still unclear how well these methods capture the underlying structure of microarray gene expression data. In this study, we assessed the performance of the PCA approach and of six nonlinear dimension reduction methods, namely Kernel PCA, Locally Linear Embedding, Isomap, Diffusion Maps, Laplacian Eigenmaps and Maximum Variance Unfolding, in terms of visualization of microarray data.
    Results: A systematic benchmark, consisting of Support Vector Machine classification, cluster validation and noise evaluations, was applied to ten microarray and several simulated datasets. Significant differences between PCA and most of the nonlinear methods were observed in two- and three-dimensional target spaces. With an increasing number of dimensions and an increasing number of differentially expressed genes, all methods showed similar performance. PCA and Diffusion Maps were less sensitive to noise than the other nonlinear methods.
    Conclusions: Locally Linear Embedding and Isomap showed superior performance on all datasets. In very low-dimensional representations and with few differentially expressed genes, these two methods preserve more of the underlying structure of the data than PCA and are thus favorable alternatives for the visualization of microarray data.
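
    As a rough illustration of the benchmark's core idea (scoring low-dimensional embeddings by how well an SVM separates known classes in the reduced space), the following scikit-learn sketch compares PCA with Isomap and Locally Linear Embedding. The synthetic data set, neighborhood sizes and scoring setup are illustrative assumptions, not the paper's protocol.

```python
# Minimal sketch (not the paper's benchmark): compare PCA with two of the
# nonlinear methods assessed in the study on synthetic high-dimensional data.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for microarray data: 100 samples x 2000 "genes".
X, y = make_classification(n_samples=100, n_features=2000, n_informative=50,
                           n_classes=2, random_state=0)

methods = {
    "PCA": PCA(n_components=2),
    "Isomap": Isomap(n_components=2, n_neighbors=10),
    "LLE": LocallyLinearEmbedding(n_components=2, n_neighbors=10),
}

# Score each 2-D embedding by how well a linear SVM separates the classes.
for name, reducer in methods.items():
    Z = reducer.fit_transform(X)
    acc = cross_val_score(SVC(kernel="linear"), Z, y, cv=5).mean()
    print(f"{name}: mean 5-fold SVM accuracy = {acc:.2f}")
```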

    Tolerance of Radial Basis Functions against Stuck-At-Faults

    Neural networks are intended to be used in future nanoelectronic systems, since neural architectures seem to be robust against malfunctioning elements and noise in their weights. In this paper we analyze the fault tolerance of Radial Basis Function networks to Stuck-At-Faults at the trained weights and at the outputs of neurons. Moreover, we determine upper bounds on the mean square error arising from these faults.
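
    A minimal numpy sketch of the fault model discussed here, assuming a small 1-D RBF regression task: inject stuck-at-0 faults into the trained output weights and observe the resulting mean square error. The network size, centers and width are illustrative, and the paper's analytical upper bounds are not reproduced.

```python
# Hedged sketch of the fault model, not the paper's analysis: inject
# stuck-at-0 faults into the trained output weights of a small RBF network
# and measure the resulting mean square error on a test function.
import numpy as np

# Train an RBF network on y = sin(x) with fixed centers and width (assumptions).
x_train = np.linspace(-np.pi, np.pi, 200)
y_train = np.sin(x_train)
centers = np.linspace(-np.pi, np.pi, 20)
sigma = 0.5

def design_matrix(x):
    # One Gaussian basis function per center.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))

Phi = design_matrix(x_train)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)  # trained output weights

def mse(weights):
    return np.mean((Phi @ weights - y_train) ** 2)

print(f"fault-free MSE: {mse(w):.2e}")

# Stuck-at-0 fault on each weight in turn; the worst case bounds the error.
faulty_mses = []
for i in range(len(w)):
    w_f = w.copy()
    w_f[i] = 0.0            # this weight is stuck at 0
    faulty_mses.append(mse(w_f))
print(f"worst single stuck-at-0 fault MSE: {max(faulty_mses):.2e}")
```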

    Design-space exploration of ultra-low power CMOS logic gates in a 28 nm FD-SOI technology

    Vohrmann M, Geisler P, Jungeblut T, Rückert U. Design-space exploration of ultra-low power CMOS logic gates in a 28 nm FD-SOI technology. In: 2017 European Conference on Circuit Theory and Design (ECCTD). IEEE; 2017.

    Evaluation of interconnect fabrics for an embedded MPSoC in 28 nm FD-SOI

    Embedded many-core architectures contain dozens to hundreds of CPU cores connected via a highly scalable NoC interconnect. Our multiprocessor system-on-chip, the CoreVA-MPSoC, combines the advantages of tightly coupled bus-based communication with the scalability of NoC approaches by adding a CPU cluster as an additional level of hierarchy. In this work, we analyze different cluster interconnect implementations with 8 to 32 CPUs and compare them in terms of resource requirements and performance to hierarchical NoC approaches. In 28 nm FD-SOI technology, the area requirement for 32 CPUs and an AXI crossbar is 5.59 mm², of which 23.61% is interconnect, at a clock frequency of 830 MHz. In comparison, a hierarchical MPSoC with four clusters of 8 CPUs each requires only 4.83 mm², of which 11.61% is interconnect. To evaluate performance, we use a compiler for streaming applications to map programs to the different MPSoC configurations, and apply this approach in a design-space exploration to find the most efficient architecture and partitioning for an application.
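
    A quick back-of-the-envelope computation, using only the figures quoted in the abstract, makes the interconnect savings explicit; the absolute interconnect areas are derived here and are not separately reported in the abstract.

```python
# Area figures from the abstract; the CPU/interconnect split is derived.
flat_total, flat_ic_share = 5.59, 0.2361      # 32 CPUs on one AXI crossbar
hier_total, hier_ic_share = 4.83, 0.1161      # 4 clusters x 8 CPUs

flat_ic = flat_total * flat_ic_share          # ~1.32 mm^2 of interconnect
hier_ic = hier_total * hier_ic_share          # ~0.56 mm^2 of interconnect
print(f"flat crossbar interconnect:  {flat_ic:.2f} mm^2")
print(f"hierarchical interconnect:   {hier_ic:.2f} mm^2")
print(f"total area saved:            {flat_total - hier_total:.2f} mm^2")
```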

    Jointly Trained Variational Autoencoder for Multi-Modal Sensor Fusion

    This work presents the novel multi-modal Variational Autoencoder approach M2VAE, which is derived from the complete marginal joint log-likelihood. This allows end-to-end training of Bayesian information fusion on raw data for all subsets of a sensor setup. Furthermore, we introduce the concept of in-place fusion, applicable to distributed sensing, where latent embeddings of observations need to be fused with new data. To facilitate in-place fusion even on raw data, we introduce the concept of a re-encoding loss that stabilizes the decoding and makes visualization of latent statistics possible. We also show that the M2VAE finds a coherent latent embedding, such that a single naïve Bayes classifier performs equally well on all permutations of a bi-modal Mixture-of-Gaussians signal. Finally, we show that our approach outperforms current VAE approaches on a bi-modal MNIST/Fashion-MNIST data set and works sufficiently well as a preprocessing step on a tri-modal simulated camera/LiDAR data set from the Gazebo simulator.
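
    To make the idea of a joint latent embedding over multiple modalities concrete, here is a minimal bi-modal VAE sketch in PyTorch. It is not the authors' M2VAE (it has no training over all modality subsets and no re-encoding loss); the product-of-experts fusion rule and all layer sizes are assumptions for illustration.

```python
# Minimal bi-modal VAE: two encoders, a fused Gaussian latent, two decoders.
import torch
import torch.nn as nn

class BiModalVAE(nn.Module):
    def __init__(self, dim_a=784, dim_b=784, dim_z=20):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 256), nn.ReLU(), nn.Linear(256, 2 * dim_z))
        self.enc_b = nn.Sequential(nn.Linear(dim_b, 256), nn.ReLU(), nn.Linear(256, 2 * dim_z))
        self.dec_a = nn.Sequential(nn.Linear(dim_z, 256), nn.ReLU(), nn.Linear(256, dim_a))
        self.dec_b = nn.Sequential(nn.Linear(dim_z, 256), nn.ReLU(), nn.Linear(256, dim_b))

    @staticmethod
    def poe(mu1, logvar1, mu2, logvar2):
        # Product of two Gaussian experts plus a standard-normal prior expert
        # (precision 1, mean 0), an assumed fusion rule for this sketch.
        prec = 1.0 + (-logvar1).exp() + (-logvar2).exp()
        mu = (mu1 * (-logvar1).exp() + mu2 * (-logvar2).exp()) / prec
        return mu, -prec.log()

    def forward(self, x_a, x_b):
        mu_a, lv_a = self.enc_a(x_a).chunk(2, dim=-1)
        mu_b, lv_b = self.enc_b(x_b).chunk(2, dim=-1)
        mu, lv = self.poe(mu_a, lv_a, mu_b, lv_b)
        z = mu + (0.5 * lv).exp() * torch.randn_like(mu)   # reparameterization
        return self.dec_a(z), self.dec_b(z), mu, lv

def neg_elbo(model, x_a, x_b):
    # Negative evidence lower bound: reconstruction of both modalities + KL.
    rec_a, rec_b, mu, lv = model(x_a, x_b)
    rec = nn.functional.mse_loss(rec_a, x_a, reduction="sum") \
        + nn.functional.mse_loss(rec_b, x_b, reduction="sum")
    kl = -0.5 * torch.sum(1 + lv - mu.pow(2) - lv.exp())
    return rec + kl

model = BiModalVAE()
x_a, x_b = torch.rand(8, 784), torch.rand(8, 784)   # dummy bi-modal batch
print(neg_elbo(model, x_a, x_b).item())
```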

    Multi-modal generative models for learning epistemic active sensing

    We present a novel approach to multi-modal deep generative models and apply it to coordinated heterogeneous multi-agent active sensing. A central step towards this objective is to train a multi-modal Variational Autoencoder (M2VAE) that integrates the information of different sensor modalities into a joint latent representation. Furthermore, we derive an objective from the M2VAE that enables maximization of the evidence lower bound via the selection of sensor modalities. Using this objective as a direct reward signal in a multi-modal, multi-agent deep reinforcement learning setup leads to an epistemic active-sensing behavior that resolves the ambiguity of observations in a coordinated way.
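
    The selection principle can be illustrated with a small numpy sketch, assuming Gaussian latent beliefs: choose the modality whose observation most reduces the entropy of the fused posterior. The noise levels and the entropy-based reward are stand-ins for the paper's derived ELBO objective, not a reproduction of it.

```python
# Hedged sketch of epistemic modality selection with diagonal Gaussians.
import numpy as np

def gaussian_entropy(var):
    # Differential entropy of a diagonal Gaussian, summed over dimensions.
    return 0.5 * np.sum(np.log(2 * np.pi * np.e * var))

def fuse(prior_var, obs_var):
    # Precision-weighted fusion of the prior belief and one observation.
    return 1.0 / (1.0 / prior_var + 1.0 / obs_var)

prior_var = np.full(4, 1.0)                    # current latent belief
modalities = {"camera": np.full(4, 0.2),       # each modality's (assumed)
              "lidar": np.full(4, 0.05),       # observation noise variance
              "radar": np.full(4, 0.5)}

h0 = gaussian_entropy(prior_var)
gains = {name: h0 - gaussian_entropy(fuse(prior_var, v))
         for name, v in modalities.items()}
best = max(gains, key=gains.get)
print(f"information gain per modality: {gains}")
print(f"epistemic choice: sense with '{best}'")
```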