
    Deep Adversarial Transition Learning using Cross-Grafted Generative Stacks

    Current deep domain adaptation methods used in computer vision have mainly focused on learning discriminative and domain-invariant features across different domains. In this paper, we present a novel "deep adversarial transition learning" (DATL) framework that bridges the domain gap by projecting the source and target domains into intermediate, transitional spaces, using adjustable, cross-grafted generative network stacks and effective adversarial learning between transitions. Specifically, we construct variational auto-encoders (VAEs) for the two domains and form bidirectional transitions by cross-grafting the VAEs' decoder stacks. Generative adversarial networks (GANs) are then employed for domain adaptation, mapping the target-domain data to the known label space of the source domain. The overall adaptation process hence consists of three phases: feature representation learning by the VAEs, transition generation, and transition alignment by the GANs. Experimental results demonstrate that our method outperforms the state of the art on a number of unsupervised domain adaptation benchmarks.
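    To make the cross-grafting step concrete, here is a minimal PyTorch sketch of two domain VAEs whose decoder stacks are split and recombined. The layer sizes and the single fully-connected architecture are illustrative assumptions rather than the paper's actual networks, and the GAN alignment phase is omitted.

        import torch
        import torch.nn as nn

        class VAE(nn.Module):
            def __init__(self, dim=784, latent=32):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU())
                self.mu = nn.Linear(256, latent)
                self.logvar = nn.Linear(256, latent)
                # Decoder deliberately split into two stacks so they can be grafted.
                self.dec_lo = nn.Sequential(nn.Linear(latent, 256), nn.ReLU())
                self.dec_hi = nn.Sequential(nn.Linear(256, dim), nn.Sigmoid())

            def encode(self, x):
                h = self.enc(x)
                mu, logvar = self.mu(h), self.logvar(h)
                # Reparameterisation trick: sample z = mu + sigma * eps.
                return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

        vae_s, vae_t = VAE(), VAE()     # source-domain and target-domain VAEs
        x_s = torch.rand(8, 784)        # a batch of (flattened) source images
        z_s = vae_s.encode(x_s)
        # One of the bidirectional cross-grafted transitions: a source code is
        # decoded through the source lower stack and the *target* upper stack.
        transition_st = vae_t.dec_hi(vae_s.dec_lo(z_s))

    In the full framework, such transitions would then be aligned with the target domain by adversarial (GAN) training.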

    Data Augmentation with norm-VAE and Selective Pseudo-Labelling for Unsupervised Domain Adaptation

    We address the Unsupervised Domain Adaptation (UDA) problem in image classification from a new perspective. In contrast to most existing works, which either align the data distributions or learn domain-invariant features, we directly learn a unified classifier for both the source and target domains in the high-dimensional homogeneous feature space, without explicit domain alignment. To this end, we employ the effective Selective Pseudo-Labelling (SPL) technique to take advantage of the unlabelled samples in the target domain. Surprisingly, the data distribution discrepancy across the source and target domains can be well handled by a computationally simple classifier (e.g., a shallow Multi-Layer Perceptron) trained in the original feature space. In addition, we propose a novel generative model, norm-VAE, to generate synthetic features for the target domain as a data augmentation strategy that enhances classifier training. Experimental results on several benchmark datasets demonstrate that the pseudo-labelling strategy by itself can match many state-of-the-art methods, while using norm-VAE for feature augmentation further improves performance in most cases. As a result, our proposed methods (i.e., naive-SPL and norm-VAE-SPL) achieve performance comparable to state-of-the-art methods, with average accuracies of 93.4% and 90.4% on the Office-Caltech and ImageCLEF-DA datasets, and competitive performance on the Digits, Office31 and Office-Home datasets, with average accuracies of 97.2%, 87.6% and 68.6% respectively.
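    The selective pseudo-labelling loop itself is simple enough to sketch. The following scikit-learn fragment is a minimal illustration: the shallow MLP size, the number of rounds and the linearly growing selection fraction are placeholder assumptions, not the paper's actual settings.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def selective_pseudo_label(Xs, ys, Xt, rounds=5):
            # Start from the labelled source data only.
            X_train, y_train = Xs, ys
            for r in range(1, rounds + 1):
                clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=300)
                clf.fit(X_train, y_train)
                proba = clf.predict_proba(Xt)
                conf = proba.max(axis=1)
                pseudo = clf.classes_[proba.argmax(axis=1)]
                # Keep only the most confident target samples; the kept
                # fraction grows from 1/rounds to 1 over the rounds.
                keep = np.argsort(-conf)[: int(len(Xt) * r / rounds)]
                X_train = np.vstack([Xs, Xt[keep]])
                y_train = np.concatenate([ys, pseudo[keep]])
            return clf

    Enlarging the pseudo-labelled set progressively keeps early, noisy predictions from dominating training.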

    Data Augmentation with norm-VAE for Unsupervised Domain Adaptation

    We address the Unsupervised Domain Adaptation (UDA) problem in image classification from a new perspective. In contrast to most existing works, which either align the data distributions or learn domain-invariant features, we directly learn a unified classifier for both domains within a high-dimensional homogeneous feature space, without explicit domain alignment. To this end, we employ the effective Selective Pseudo-Labelling (SPL) technique to take advantage of the unlabelled samples in the target domain. Surprisingly, the data distribution discrepancy across the source and target domains can be well handled by a computationally simple classifier (e.g., a shallow Multi-Layer Perceptron) trained in the original feature space. In addition, we propose a novel generative model, norm-VAE, to generate synthetic features for the target domain as a data augmentation strategy that enhances classifier training. Experimental results on several benchmark datasets demonstrate that the pseudo-labelling strategy by itself can match many state-of-the-art methods, while using norm-VAE for feature augmentation further improves performance in most cases. As a result, our proposed methods (i.e., naive-SPL and norm-VAE-SPL) achieve new state-of-the-art performance, with average accuracies of 93.4% and 90.4% on the Office-Caltech and ImageCLEF-DA datasets, and comparable performance on the Digits, Office31 and Office-Home datasets, with average accuracies of 97.2%, 87.6% and 67.9% respectively.
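    The augmentation step can be pictured as sampling a trained VAE's prior and decoding. The sketch below assumes a generic, hypothetical decoder module and does not reproduce the specific norm-VAE modification or its class-conditional details.

        import torch
        import torch.nn as nn

        @torch.no_grad()
        def augment_features(decoder: nn.Module, n: int, latent_dim: int = 32):
            z = torch.randn(n, latent_dim)   # sample the VAE prior N(0, I)
            return decoder(z)                # synthetic target-domain features

        # Hypothetical usage: generate 512 synthetic 2048-d features to
        # append to the classifier's training set.
        decoder = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 2048))
        synthetic = augment_features(decoder, n=512)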

    Chronic pain detection from resting-state raw EEG signals using improved feature selection

    We present an automatic approach that works on resting-state raw EEG data for chronic pain detection. A new feature selection algorithm, modified Sequential Floating Forward Selection (mSFFS), is proposed. The improved feature selection scheme yields a rather compact feature set but displays better class separability, as indicated by Bhattacharyya distance measures, and gives better visualization results. It also outperforms selections generated by other benchmark methods, boosting the test accuracy to 97.5% and yielding a test accuracy of 81.4% on an external dataset that contains different types of chronic pain.
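    As a rough illustration of the search strategy, here is a classical sequential floating forward selection skeleton scored by the Bhattacharyya distance between two Gaussian-modelled classes. The paper's specific modifications (the "m" in mSFFS) are not reproduced, and the ridge constant and iteration cap are arbitrary choices.

        import numpy as np

        def bhattacharyya(X1, X2):
            # Bhattacharyya distance between two Gaussian-modelled classes.
            m1, m2 = X1.mean(0), X2.mean(0)
            ridge = 1e-6 * np.eye(X1.shape[1])   # for numerical stability
            C1 = np.cov(X1, rowvar=False) + ridge
            C2 = np.cov(X2, rowvar=False) + ridge
            C = (C1 + C2) / 2
            d = m1 - m2
            return (d @ np.linalg.solve(C, d) / 8
                    + 0.5 * np.log(np.linalg.det(C)
                                   / np.sqrt(np.linalg.det(C1) * np.linalg.det(C2))))

        def sffs(X1, X2, k, max_iter=200):
            # Select k feature indices maximising class separability.
            sel, rest = [], list(range(X1.shape[1]))
            score = lambda s: bhattacharyya(X1[:, s], X2[:, s])
            for _ in range(max_iter):            # cap guarantees termination
                if len(sel) >= k or not rest:
                    break
                best = max(rest, key=lambda f: score(sel + [f]))  # forward step
                sel.append(best)
                rest.remove(best)
                while len(sel) > 2:              # floating backward steps
                    cands = [f for f in sel if f != best]  # never drop the newest
                    worst = max(cands, key=lambda f: score([g for g in sel if g != f]))
                    reduced = [g for g in sel if g != worst]
                    if score(reduced) > score(sel):
                        sel.remove(worst)
                        rest.append(worst)
                    else:
                        break
            return sel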

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on the von Neumann architecture is now a mature, cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously, and this data transfer is responsible for a large part of the power consumption. Next-generation computer technology is expected to solve problems at the exascale, with 10^18 calculations per second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power, and they will not have intrinsic, physically built-in capabilities to learn or to deal with complex data the way our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and to provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology: materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community give their own views of the current state and future challenges of each research area. We hope this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside the field and for those just entering it, as well as future perspectives for those well established in the neuromorphic computing community.
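    A quick back-of-the-envelope check of the figures quoted above: at 10^18 operations per second, a 20-30 MW budget corresponds to only a few tens of picojoules per operation.

        ops_per_s = 1e18              # exascale: 10^18 calculations per second
        for power_mw in (20, 30):     # the quoted power range
            j_per_op = power_mw * 1e6 / ops_per_s
            print(f"{power_mw} MW -> {j_per_op * 1e12:.0f} pJ per operation")
        # 20 MW -> 20 pJ per operation; 30 MW -> 30 pJ per operation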

    WNT-DEPENDENT REGENERATIVE FUNCTION IS INDUCED IN LEUKEMIA-INITIATING AC133BRIGHT CELLS

    The cancer stem cell model holds that leukemia is initiated and maintained in vivo by a small fraction of leukemia-initiating cells (LICs). Previous studies have suggested the involvement of the Wnt signaling pathway in acute myeloid leukemia (AML) through its ability to sustain the development of LICs. A novel hematopoietic stem and progenitor cell marker, the monoclonal antibody AC133, recognizes the CD34bright CD38− subset of human acute myeloid leukemia cells, suggesting that it may be an early marker for LICs. During the first part of my PhD program, we evaluated the ability of the leukemic AC133+ fraction to engraft after xenotransplantation into the immunodeficient mouse model Rag2−/−γc−/−. The results showed that the surface marker AC133 enriches for the cell fraction that contains the LICs. Building on our previously reported expression profiling of normal (n=10) and leukemic (n=33) human long-term reconstituting AC133+ cells, we found that ligand-dependent Wnt signaling is induced in AML through diffuse expression and release of WNT10B, a molecule associated with hematopoietic stem cell regeneration. In situ detection performed on bone marrow biopsies of AML patients showed activation of the Wnt pathway, through the concomitant presence of the ligand WNT10B and of the active, dephosphorylated form of β-catenin, suggesting an autocrine/paracrine, ligand-dependent activation mechanism. Given the link between hematopoietic regeneration and developmental signaling, we transplanted primary AC133+ AML A46 cells into developing zebrafish. This biosensor model revealed the formation of ectopic structures through activation of dorsal organizer markers that act downstream of the Wnt pathway. These results suggest that misappropriated Wnt-associated functions can promote a pathological, stem cell-like regenerative responsiveness. The in situ analyses retained information on cellular localization, enabling determination of the activity status of individual cells and providing a view of the tumor environment. With this in mind, during the second part of my PhD program I set up a new in situ method for localized detection and genotyping of individual transcripts directly in cells and tissues. The mRNA in situ detection technique is based on padlock probe ligation and target-primed rolling circle amplification, allowing single-nucleotide resolution in heterogeneous tissues. mRNA in situ detection performed on bone marrow biopsies from AML patients showed a diffuse localization pattern of WNT10B in the tissue. Conversely, only the AC133bright cell population shows the Wnt signaling activation signature, represented by cytoplasmic accumulation and nuclear translocation of the active form of β-catenin. Although we previously showed that the regenerative function of the WNT signaling pathway is defined by up-regulation of the WNT10B, WNT10A, WNT2B and WNT6 loci, we identified WNT10B as the major locus associated with the regenerative function and over-expressed by all AML patients. Through molecular evaluation of the WNT10B transcript, we isolated an aberrant splicing variant (WNT10BIVS1) that identifies the Non-Core-Binding Factor Leukemia (NCBFL) class and whose potential role is discussed.
    Moreover, we demonstrate that the "leukemia stem cell" function, present in the cell population enriched for the AC133bright marker, is strictly related to the regenerative function associated with WNT signaling, defining the key role of the WNT10B ligand as a specific molecular marker for leukemogenesis. This thesis defines new approaches suitable for characterizing leukemia-initiating cells (LICs) and suggests WNT10B as a new suitable target for AML.

    Discovering lesser known molecular players and mechanistic patterns in Alzheimer's disease using an integrative disease modelling approach

    Convergence of exponentially advancing technologies is driving medical research toward life-changing discoveries. By contrast, repeated failures of high-profile drugs against Alzheimer's disease (AD) have made it one of the least successful therapeutic areas. This failure pattern has provoked researchers to grapple with their beliefs about Alzheimer's aetiology: the growing realisation that amyloid-β and tau are not 'the' but rather 'one of the' factors necessitates reassessment of pre-existing data to add new perspectives. To enable a holistic view of the disease, integrative modelling approaches are emerging as a powerful technique. Combining data at different scales and in different modes can considerably increase the predictive power of an integrative model by filling biological knowledge gaps. However, the reliability of the derived hypotheses largely depends on the completeness, quality, consistency and context-specificity of the data. Thus, there is a need for agile methods and approaches that efficiently interrogate and utilise existing public data. This thesis presents the development of novel approaches and methods that address intrinsic issues of data integration and analysis in AD research. It aims to prioritise lesser-known AD candidates using highly curated and precise knowledge derived from integrated data, with much of the emphasis on quality, reliability and context-specificity. The work showcases the benefit of integrating well-curated, disease-specific heterogeneous data in a semantic-web-based framework for mining actionable knowledge, and introduces the challenges encountered while harvesting information from literature and transcriptomic resources. A state-of-the-art text-mining methodology is developed to extract miRNAs and their regulatory roles in diseases and genes from the biomedical literature. To enable meta-analysis of biologically related transcriptomic data, a highly curated metadata database has been developed that explicates annotations specific to human and animal models. Finally, to corroborate common mechanistic patterns, embedded with novel candidates, across large-scale AD transcriptomic data, a new approach to generating gene regulatory networks has been developed. The work presented here has demonstrated its capability to identify testable mechanistic hypotheses containing previously unknown or emerging knowledge from public data in two major publicly funded projects on Alzheimer's disease, Parkinson's disease and epilepsy.
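    As a toy illustration of the literature-mining step, and emphatically not the thesis's actual system, a first pass for miRNA mentions can be as simple as a pattern match over abstract text; the pattern and the example sentence below are assumptions for illustration only.

        import re

        # Common surface forms such as miR-29b-3p or hsa-miR-146a.
        MIR = re.compile(r"\b(?:hsa-)?miR-\d+[a-z]?(?:-[35]p)?\b", re.IGNORECASE)

        text = ("Overexpression of hsa-miR-146a and miR-29b-3p has been "
                "reported in Alzheimer's disease brain tissue.")
        print(MIR.findall(text))    # ['hsa-miR-146a', 'miR-29b-3p']

    A real extraction pipeline would of course add normalisation to standard identifiers and relation extraction to link each miRNA to its target genes and disease context.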

    Soft matter roadmap

    Soft materials are usually defined as materials made of mesoscopic entities, often self-organised, that are sensitive to thermal fluctuations and to weak perturbations. Archetypal examples are colloids, polymers, amphiphiles, liquid crystals and foams. The importance of soft materials in everyday commodity products, as well as in technological applications, is enormous, and controlling or improving their properties is the focus of many efforts. From a fundamental perspective, the possibility of manipulating soft material properties, by tuning interactions between constituents and by applying external perturbations, gives rise to an almost unlimited variety of physical properties. Together with the relative ease of observing and characterising them, this makes soft matter systems powerful model systems for investigating statistical physics phenomena, many of which are also relevant to hard condensed matter systems. Understanding the properties that emerge from mesoscale constituents still poses enormous challenges, which have stimulated a wealth of new experimental approaches, including the synthesis of new systems with, e.g., tailored self-assembling properties, and novel experimental techniques in imaging, scattering and rheology. Theoretical and numerical methods, and coarse-grained models, have become central to predicting the physical properties of soft materials, while computational approaches that also use machine learning tools are playing an increasingly important role in many investigations. This Roadmap intends to give a broad overview of recent and possible future activities in the field of soft materials, with experts covering various developments and challenges in material synthesis and characterisation, instrumental, simulation and theoretical methods, as well as general concepts.