
    CELL PATTERN CLASSIFICATION OF INDIRECT IMMUNOFLUORESCENCE IMAGES

    Ph.D. (Doctor of Philosophy)

    The Human Connectome Project: A retrospective

    The Human Connectome Project (HCP) was launched in 2010 as an ambitious effort to accelerate advances in human neuroimaging, particularly for measures of brain connectivity; apply these advances to study a large number of healthy young adults; and freely share the data and tools with the scientific community. NIH awarded grants to two consortia; this retrospective focuses on the “WU-Minn-Ox” HCP consortium centered at Washington University, the University of Minnesota, and the University of Oxford. In just over six years, the WU-Minn-Ox consortium succeeded in its core objectives by: 1) improving MR scanner hardware, pulse sequence design, and image reconstruction methods; 2) acquiring and analyzing multimodal MRI and MEG data of unprecedented quality, together with behavioral measures, from more than 1100 HCP participants; and 3) freely sharing the data (via the ConnectomeDB database) and associated analysis and visualization tools. To date, more than 27 petabytes of data have been shared, and 1538 papers acknowledging HCP data use have been published. The “HCP-style” neuroimaging paradigm has emerged as a set of best-practice strategies for optimizing data acquisition and analysis. This article reviews the history of the HCP, including comments on key events and decisions associated with major project components. We discuss several scientific advances using HCP data, including improved cortical parcellations, analyses of connectivity based on functional and diffusion MRI, and analyses of brain-behavior relationships. We also touch upon our efforts to develop and share a variety of associated data processing and analysis tools, along with detailed documentation, tutorials, and an educational course to train the next generation of neuroimagers. We conclude with a look forward at opportunities and challenges facing the human neuroimaging field from the perspective of the HCP consortium.

    Finding correlations and independences in omics data

    Biological studies across all omics fields generate vast amounts of data. To understand these complex data, biologically motivated data mining techniques are indispensable. Evaluation of the high-throughput measurements usually relies on the identification of underlying signals as well as shared or outstanding characteristics. To this end, methods have been developed to recover the source signals underlying a dataset, to reveal groups of objects that are more similar to each other than to the remaining objects, and to detect observations that stand out against the background dataset. Each biological problem was addressed individually with solutions from computer science suited to its needs. The study of protein-protein interactions (interactome) focuses on the identification of clusters, i.e., densely interlinked subgraphs: a parameter-free graph clustering algorithm based on the concept of graph compression was developed to find sets of highly interlinked proteins sharing similar characteristics. The study of lipids (lipidome) calls for co-regulation analyses: to reveal lipids that respond similarly to biological factors, partial correlations were computed with differential Gaussian Graphical Models, so as to account solely for disease-specific correlations (see the sketch below). The study at the single-cell level (cytomics) aims to understand cellular systems, often with the help of microscopy techniques: a novel noise-robust source separation technique allowed independent components describing protein behaviors to be reliably extracted from microscopy images. The study of peptides (peptidomics) often requires the detection of outstanding observations: by assessing regularities in the dataset, an outlier detection algorithm was implemented based on the compression efficacy of the dataset's independent components. The developed algorithms had to satisfy highly diverse constraints in each omics field, yet all were met with methods derived from standard correlation and dependency analyses.
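
    The partial-correlation step of the lipidomics analysis can be illustrated with a short sketch. This is not the thesis code (the function name and toy data are hypothetical); it only shows the standard Gaussian Graphical Model identity that partial correlations are read off the inverse covariance (precision) matrix:

        import numpy as np

        def partial_correlations(data):
            # data: (n_samples, n_features) array, e.g. lipid concentrations.
            # Entry (i, j) of the result is the correlation between features
            # i and j after conditioning on all remaining features.
            cov = np.cov(data, rowvar=False)
            precision = np.linalg.pinv(cov)      # pseudo-inverse for numerical stability
            d = np.sqrt(np.diag(precision))
            pcor = -precision / np.outer(d, d)   # GGM identity: rho_ij = -p_ij / sqrt(p_ii * p_jj)
            np.fill_diagonal(pcor, 1.0)
            return pcor

        # Toy usage: 100 samples of 5 correlated "lipid" features
        rng = np.random.default_rng(0)
        X = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 5))
        print(partial_correlations(X).round(2))

    A differential analysis along the lines described above would compare such matrices estimated separately for disease and control groups.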

    Developing retinal cell therapy: cones and cone-like cells in transplantation and development

    Retinal cell replacement therapy aims to restore vision in retinal degenerative diseases by replacing dead photoreceptors. Injecting GFP+ rod photoreceptor precursors into the subretinal space of recipient mice leads to GFP+ cells being seen in the recipient retina. In models of retinal degeneration, replacement of proteins lacking in the recipient, as well as functional improvement, has been observed. This thesis aimed to extend this work to cone photoreceptors, which are more important for human vision. The Chrnb4.eGFP model was selected as a source of cones, together with the Nrl-/- and Nr2e3rd7/rd7 models, which generate an increased number of cone-like cells. Cells were injected into the subretinal space of several different mouse recipient types, including a number of retinal degeneration models representing a range of cone-to-rod ratios and functionalities. GFP+ photoreceptors, with unambiguous morphology and immunohistochemical markers, were seen in recipient retinas after transplantation into all tested recipient types. The highest numbers occurred in Nrl-deficient and Prph2 mutant mice. The majority of GFP+ cells resembled rod photoreceptors in morphology and did not express cone-specific markers, except in Nrl-deficient recipients. Evidence from these and other experiments showed that these results were most likely due not to cell integration but instead to the transfer of material, including GFP, from injected cells to existing recipient photoreceptors. To investigate functional outcomes of transplantation, electroretinography and multi-electrode array (MEA) techniques were used. MEA data showed no clear improvement in light response in treated retinas. Time-lapse imaging of explanted early postnatal retinal tissue using multi-photon microscopy was used to investigate the migratory behaviour of developing cone photoreceptor precursors around the ages used for transplantation. This revealed a cyclical pattern of migration similar to interkinetic nuclear migration, with slow basal and rapid apical movements. Pharmacological intervention implicated the dynein/kinesin motor proteins in the apical movement seen.

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    Human-controllable and structured deep generative models

    Deep generative models are a class of probabilistic models that attempt to learn the underlying data distribution. These models are usually trained in an unsupervised way and thus do not require any labels. Generative models such as Variational Autoencoders and Generative Adversarial Networks have made astounding progress in recent years. These models have several benefits: straightforward sampling and evaluation, efficient learning of low-dimensional representations for downstream tasks, and better understanding through interpretable representations. However, even though the quality of these models has improved immensely, the ability to control their style and structure is limited. Structured and human-controllable representations of generative models are essential for human-machine interaction and other applications, including fairness, creativity, and entertainment. This thesis investigates learning human-controllable and structured representations with deep generative models. In particular, we focus on generative modelling of 2D images. In the first part, we focus on learning clustered representations. We propose semi-parametric hierarchical variational autoencoders to estimate the intensity of facial action units. The semi-parametric model forms a hybrid generative-discriminative model and leverages both a parametric Variational Autoencoder and a non-parametric Gaussian Process autoencoder. We show superior performance in comparison with existing facial action unit estimation approaches. Based on the results and analysis of the learned representation, we then focus on learning Mixture-of-Gaussians representations in an autoencoding framework. We deviate from the conventional autoencoding framework and consider a regularized objective with the Cauchy-Schwarz divergence. The Cauchy-Schwarz divergence admits a closed-form solution for Mixture-of-Gaussians distributions and thus allows the autoencoding objective to be optimized efficiently (see the sketch below). We show that our model outperforms existing Variational Autoencoders in density estimation, clustering, and semi-supervised facial action detection. In the second part, we focus on learning disentangled representations for conditional generation and fair facial attribute classification. Conditional image generation relies on access to large-scale annotated datasets. Nevertheless, the geometry of visual objects, such as faces, cannot be learned implicitly, which deteriorates image fidelity. We propose incorporating facial landmarks with a statistical shape model and a differentiable piecewise affine transformation to separate the representations for appearance and shape. The goal of incorporating facial landmarks is to control generation and to separate different appearances and geometries. In our last work, we use weak supervision to disentangle groups of variations. Work on learning disentangled representations has mostly been done in an unsupervised fashion. However, recent work has shown that learning disentangled representations is not identifiable without inductive biases. Since then, there has been a shift towards weakly-supervised disentanglement learning. We investigate using regularization based on the Kullback-Leibler divergence to disentangle groups of variations. The goal is to have consistent and separated subspaces for different groups, e.g., for content-style learning. Our evaluation shows increased disentanglement abilities and competitive performance for image clustering and fair facial attribute classification with weak supervision, compared to supervised and semi-supervised approaches.
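
    The closed-form property of the Cauchy-Schwarz divergence mentioned above can be sketched directly. This is a minimal illustration under the standard definition D_CS(p, q) = -log( ∫pq / sqrt(∫p² · ∫q²) ), not the thesis implementation (all names and the toy mixtures are hypothetical). It relies on the identity that the integral of a product of two Gaussian densities is itself a Gaussian density, N(m1; m2, C1 + C2):

        import numpy as np
        from scipy.stats import multivariate_normal

        def gauss_cross(m1, c1, m2, c2):
            # Integral of N(x; m1, c1) * N(x; m2, c2) over x equals N(m1; m2, c1 + c2).
            return multivariate_normal.pdf(m1, mean=m2, cov=c1 + c2)

        def cs_divergence(w, mu, cov, v, nu, lam):
            # p(x) = sum_i w[i] N(x; mu[i], cov[i]); q(x) = sum_j v[j] N(x; nu[j], lam[j])
            pq = sum(w[i] * v[j] * gauss_cross(mu[i], cov[i], nu[j], lam[j])
                     for i in range(len(w)) for j in range(len(v)))
            pp = sum(w[i] * w[j] * gauss_cross(mu[i], cov[i], mu[j], cov[j])
                     for i in range(len(w)) for j in range(len(w)))
            qq = sum(v[i] * v[j] * gauss_cross(nu[i], lam[i], nu[j], lam[j])
                     for i in range(len(v)) for j in range(len(v)))
            return -np.log(pq / np.sqrt(pp * qq))

        # Two one-dimensional mixtures with two components each
        w, mu, cov = [0.5, 0.5], [np.zeros(1), 3 * np.ones(1)], [np.eye(1), np.eye(1)]
        v, nu, lam = [0.7, 0.3], [0.5 * np.ones(1), 2 * np.ones(1)], [np.eye(1), np.eye(1)]
        print(cs_divergence(w, mu, cov, v, nu, lam))

    Because every term reduces to a Gaussian density evaluation, the expression is differentiable in the mixture parameters, which is what makes it usable as a training objective.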

    FIAS Scientific Report 2011

    In the year 2010 the Frankfurt Institute for Advanced Studies successfully continued to follow its agenda of pursuing theoretical research in the natural sciences. As stipulated in its charter, FIAS closely collaborates with extramural research institutions, such as the Max Planck Institute for Brain Research in Frankfurt and the GSI Helmholtz Center for Heavy Ion Research, Darmstadt, and with research groups at the science departments of Goethe University. The institute also engages in the training of young researchers and the education of doctoral students. This Annual Report documents how these goals were pursued in the year 2010. Notable events in the scientific life of the Institute are presented, e.g., teaching activities in the framework of the Frankfurt International Graduate School for Science (FIGSS), colloquium schedules, conferences organized by FIAS, and a full bibliography of publications by authors affiliated with FIAS. The main part of the Report consists of short one-page summaries describing the scientific progress made in individual research projects in the year 2010.