
    Tensor approximation in visualization and graphics

    In this course, we will introduce the basic concepts of tensor approximation (TA), a higher-order generalization of the SVD and PCA methods, as well as its applications to visual data representation, analysis and visualization, and bring the TA framework closer to visualization and computer graphics researchers and practitioners. The course will cover the theoretical background of TA methods, their properties and how to compute them, as well as practical applications of TA methods in visualization and computer graphics contexts. In the first, theoretical part, attendees will be instructed in the necessary mathematical background of TA methods and will learn the basic skills for using and applying these new tools to the representation of large multidimensional visual data. Specific and very noteworthy features of the TA framework are highlighted which can be effectively exploited for spatio-temporal multidimensional data representation and visualization purposes. In two application-oriented sessions, compact TA data representation in scientific visualization and computer graphics, as well as decomposition and reconstruction algorithms, will be demonstrated. At the end of the course, participants will have a good basic knowledge of TA methods along with a practical understanding of their potential application in visualization- and graphics-related projects.
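    Since TA is described here as a higher-order generalization of the SVD, a minimal sketch may help make that concrete. The Python/NumPy code below computes a truncated higher-order SVD (HOSVD), one standard way to obtain a Tucker-type tensor approximation; the volume, ranks and helper names are illustrative assumptions, not material from the course itself.

```python
# Minimal HOSVD sketch (illustrative; not the course's implementation).
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: the fibers of `mode` become the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: per-mode bases from SVDs of the unfoldings,
    then the core tensor via multilinear projection."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])              # leading r basis vectors
    core = T
    for U in factors:
        # Contract the current leading axis with U's rows; the new rank
        # axis is appended last, so axes end up back in mode order.
        core = np.tensordot(core, U, axes=(0, 0))
    return core, factors

def reconstruct(core, factors):
    """Expand the core back through the per-mode bases."""
    T = core
    for U in factors:
        T = np.tensordot(T, U, axes=(0, 1))   # rank axis vs. factor columns
    return T

vol = np.random.rand(64, 64, 64)              # stand-in for a scalar volume
core, factors = hosvd(vol, ranks=(16, 16, 16))
approx = reconstruct(core, factors)
# Random data compresses poorly; real volumes fare far better.
print("relative error:",
      np.linalg.norm(vol - approx) / np.linalg.norm(vol))
```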

    Unraveling the deep learning gearbox in optical coherence tomography image segmentation towards explainable artificial intelligence

    Machine learning has greatly facilitated the analysis of medical data, yet its internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among the graders and the machine learning algorithm, and a smart data visualization ('neural recording'). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% among the human graders. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network balanced between graders and allowed for modifiable predictions depending on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
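    The Hamming-distance component is the most mechanical part of the pipeline described above, so a small sketch may clarify it. The code below computes pairwise per-pixel disagreement between segmentation label maps (human graders and a model); the toy data, grader names and 2% jitter rate are assumptions for illustration only and bear no relation to the actual T-REX data.

```python
# Pairwise Hamming-style disagreement between label maps (illustrative).
from itertools import combinations
import numpy as np

def disagreement_pct(a, b):
    """Percentage of pixels where two label maps differ."""
    return 100.0 * np.mean(a != b)

rng = np.random.default_rng(0)
base = rng.integers(0, 4, size=(256, 256))    # toy 4-class segmentation

def jitter(m):
    """Simulate small grading differences by relabeling ~2% of pixels."""
    noise = rng.random(m.shape) < 0.02
    return np.where(noise, rng.integers(0, 4, size=m.shape), m)

maps = {"grader1": jitter(base), "grader2": jitter(base),
        "grader3": jitter(base), "cnn": jitter(base)}

for (na, a), (nb, b) in combinations(maps.items(), 2):
    print(f"{na} vs {nb}: {disagreement_pct(a, b):.2f}% disagreement")
```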

    Zeb2 is essential for Schwann cell differentiation, myelination and nerve repair

    Schwann cell development and peripheral nerve myelination require the serial expression of transcriptional activators, such as Sox10, Oct6 (also called Scip or Pou3f1) and Krox20 (also called Egr2). Here we show that transcriptional repression, mediated by the zinc-finger protein Zeb2 (also known as Sip1), is essential for differentiation and myelination. Mice lacking Zeb2 in Schwann cells develop a severe peripheral neuropathy, caused by failure of axonal sorting and a virtual absence of myelin membranes. Zeb2-deficient Schwann cells continuously express repressors of lineage progression. Moreover, genes for negative regulators of maturation such as Sox2 and Ednrb emerge as Zeb2 target genes, supporting its function as an inhibitor of inhibitors in myelination control. When Zeb2 is deleted in adult mice, Schwann cells readily dedifferentiate following peripheral nerve injury and become repair cells. However, nerve regeneration and remyelination are both perturbed, demonstrating that Zeb2, although undetectable in adult Schwann cells, has a latent function throughout life.

    26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3 - Meeting Abstracts - Antwerp, Belgium. 15–20 July 2017

    This work was produced as part of the activities of the FAPESP Research, Dissemination and Innovation Center for Neuromathematics (grant 2013/07699-0, São Paulo Research Foundation). NLK is supported by a FAPESP postdoctoral fellowship (grant 2016/03855-5). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).

    Tensor approximation properties for multiresolution and multiscale volume visualization

    Interactive visualization and analysis of large and complex volume data is still a big challenge. Compression-domain volume rendering methods have shown that mathematical tools for representing and compressing large data can be very successful. We use tensor approximation (TA), a framework widely used for data approximation, and elaborate specific properties of the TA bases in the context of multiresolution and multiscale volume visualization.
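    One TA-basis property that multiresolution schemes of this kind typically exploit is nesting: in an HOSVD-style Tucker decomposition the basis vectors are ordered by importance, so coarser approximations can be read directly out of a single stored decomposition by slicing the core and factors. The following self-contained sketch illustrates this nesting on a smooth synthetic volume; the data, ranks and variable names are assumptions, not the paper's implementation.

```python
# Multiresolution from one stored Tucker/HOSVD decomposition (sketch).
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

x = np.linspace(0.0, 1.0, 64)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
vol = np.sin(4 * X) * np.cos(3 * Y) + np.exp(-Z)  # smooth synthetic volume

# Full-rank orthonormal basis per mode, ordered by singular value.
factors = [np.linalg.svd(unfold(vol, m), full_matrices=False)[0]
           for m in range(3)]
core = vol
for U in factors:
    core = np.tensordot(core, U, axes=(0, 0))     # project onto each basis

# Coarser levels only slice the stored core and factors: no recomputation.
for r in (32, 16, 8):
    approx = core[:r, :r, :r]
    for U in factors:
        approx = np.tensordot(approx, U[:, :r], axes=(0, 1))
    err = np.linalg.norm(vol - approx) / np.linalg.norm(vol)
    print(f"per-mode rank {r}: relative error {err:.3e}")
```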

    Analysis of tensor approximation for compression-domain volume visualization

    As modern high-resolution imaging devices allow the acquisition of increasingly large and complex volume data sets, their effective and compact representation for visualization becomes a challenging task. The Tucker decomposition has already confirmed higher-order tensor approximation (TA) as a viable technique for compressed volume representation; however, alternative decomposition approaches exist. In this work, we review the main TA models proposed in the literature on multiway data analysis and study their application in a visualization context, where reconstruction performance is emphasized along with reduced data representation costs. Progressive and selective detail reconstruction is a main goal for such representations and can be achieved efficiently by truncating an existing decomposition. To this end, we explore alternative incremental variants of the CANDECOMP/PARAFAC and Tucker models. We give theoretical time and space complexity estimates for every discussed approach and variant. Additionally, their empirical decomposition and reconstruction times and approximation quality are tested in both C++ and MATLAB implementations. Several scanned real-life exemplar volumes are used, varying data sizes, initialization methods, and degrees of compression and truncation. As a result, we demonstrate the superiority of the Tucker model for most visualization purposes, while canonical-based models offer benefits only in limited situations.
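    At the level of raw coefficient counts, the space estimates mentioned above reduce to two simple formulas: a Tucker model with per-mode ranks (R1, R2, R3) stores an R1 x R2 x R3 core plus one factor matrix per mode, while a rank-R CANDECOMP/PARAFAC (CP) model stores just one R-column factor matrix per mode. The sketch below is a back-of-the-envelope illustration of that comparison only; it counts coefficients, ignores quantization and metadata that real codecs add, and does not reproduce the paper's measured costs.

```python
# Raw coefficient counts for Tucker vs. CP storage (back-of-the-envelope).
from math import prod

def tucker_coeffs(dims, ranks):
    """Core (R1*R2*...*RN) plus one I_n x R_n factor matrix per mode."""
    return prod(ranks) + sum(i * r for i, r in zip(dims, ranks))

def cp_coeffs(dims, rank):
    """Rank-R sum of outer products: one I_n x R factor matrix per mode."""
    return rank * sum(dims)

dims = (256, 256, 256)
print("original voxels :", prod(dims))
print("Tucker, R=32    :", tucker_coeffs(dims, (32, 32, 32)))
print("CP, R=32        :", cp_coeffs(dims, 32))
```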

    How can we teach EBM in clinical practice? An analysis of barriers to implementation of on-the-job EBM teaching and learning

    Introduction: Evidence-based medicine (EBM) improves the quality of health care. Courses on how to teach EBM in practice are available, but knowledge does not automatically imply its application in teaching. We aimed to identify and compare barriers and facilitators to teaching EBM in clinical practice in various European countries. Methods: A questionnaire was constructed listing potential barriers and facilitators to EBM teaching in clinical practice. Answers were reported on a 7-point Likert scale ranging from 'not at all a barrier' to 'an insurmountable barrier'. Results: The questionnaire was completed by 120 clinical EBM teachers from 11 countries. Lack of time was the strongest barrier to teaching EBM in practice (median 5). Moderate barriers were the lack of requirements for EBM skills and a pyramidal hierarchy in health care management structures (median 4). In Germany, Hungary and Poland, reading and understanding articles in English was a greater barrier than in the other countries. Conclusion: The incorporation of EBM teaching into practice faces several implementation barriers. Teaching EBM in clinical settings is most successful where EBM principles are culturally embedded and form part and parcel of everyday clinical decisions and medical practice.