
    Unconstrained fuzzy feature fusion for heterogeneous unsupervised domain adaptation

    © 2018 IEEE. Domain adaptation can transfer knowledge from the source domain to improve pattern recognition accuracy in the target domain. However, the case in which the target domain is unlabeled and heterogeneous with the source domain is rarely discussed, and it remains a very challenging problem in the domain adaptation field. This paper presents a new feature reconstruction method: unconstrained fuzzy feature fusion. A geodesic flow kernel is then applied to the reconstructed features of the source and target domains to transfer knowledge between them. Furthermore, the original information of the target domain is preserved when reconstructing the features of the two domains. Compared to previous work, this work has two advantages: 1) the memberships of the original features to the fuzzy features no longer need to sum to one, and 2) the original information of the target domain is preserved. As a result of these advantages, this work delivers better performance than previous studies on two public datasets.
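    As a rough illustration of the feature-reconstruction idea (a sketch, not the paper's exact formulation), the fragment below maps each domain's features onto a shared set of fuzzy features through Gaussian memberships that are deliberately left unnormalized; the column-profile prototypes, the membership function, and all parameter values are assumptions made for illustration.

```python
import numpy as np

def unconstrained_fuzzy_fusion(X, n_fuzzy=10, gamma=1.0, seed=0):
    """Reconstruct features as unconstrained fuzzy features (illustrative).

    Each original feature column receives a Gaussian membership to each of
    `n_fuzzy` prototypes; the memberships are deliberately NOT normalized
    to sum to one (the "unconstrained" relaxation named in the abstract).
    The fused matrix has a common width for both domains.
    """
    rng = np.random.default_rng(seed)
    # Describe each feature column by simple statistics (mean, std).
    profiles = np.stack([X.mean(axis=0), X.std(axis=0)], axis=1)      # (d, 2)
    # Prototypes sampled from the profiles; k-means centers would also work.
    centers = profiles[rng.choice(len(profiles), n_fuzzy)]            # (K, 2)
    d2 = ((profiles[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (d, K)
    U = np.exp(-gamma * d2)             # memberships; rows need not sum to 1
    return X @ U                        # (n, K) fuzzy features

# Both domains end up with the same number of fuzzy features, after which a
# geodesic flow kernel (or any homogeneous DA method) can be applied.
Xs = unconstrained_fuzzy_fusion(np.random.randn(100, 50), n_fuzzy=10)
Xt = unconstrained_fuzzy_fusion(np.random.randn(80, 30), n_fuzzy=10)
```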

    Self-adjustable domain adaptation in personalized ECG monitoring integrated with IR-UWB radar

    To enhance electrocardiogram (ECG) monitoring systems for personalized detection, deep neural networks (DNNs) are applied to overcome individual differences through periodic retraining. As introduced in previous work [4], DNNs mitigate individual differences by fusing ECG with impulse radio ultra-wideband (IR-UWB) radar. However, such DNN-based ECG monitoring systems tend to overfit small personal datasets and generalize poorly to newly collected unlabeled data. This paper proposes a self-adjustable domain adaptation (SADA) strategy to prevent overfitting and exploit unlabeled data. Firstly, this paper enlarges the database of ECG and radar data with real records acquired from 28 subjects, further expanded by data augmentation. Secondly, to utilize unlabeled data, SADA combines self-organizing maps with transfer learning to predict labels. Thirdly, SADA integrates one-class classification with domain adaptation algorithms to reduce overfitting. Based on our enlarged database and standard databases, a large dataset of 73,200 records and a small one of 1,849 records are built to verify our proposal. Results show SADA's effectiveness in predicting labels and a 14.4% increase in the sensitivity of DNNs compared with existing domain adaptation algorithms.
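    SADA's self-organizing-map step for predicting labels could look roughly like the sketch below. MiniSom is one publicly available SOM implementation, not necessarily what the authors used, and the grid size, iteration count, and majority-vote rule are illustrative assumptions.

```python
import numpy as np
from minisom import MiniSom  # one common SOM implementation (pip install minisom)

def som_pseudo_labels(X_lab, y_lab, X_unlab, grid=(8, 8), iters=5000):
    """Pseudo-label unlabeled records via a self-organizing map (sketch).

    Train a SOM on labeled feature vectors, assign each map node the
    majority label of the samples it wins, then label each unlabeled
    sample by its winning node.
    """
    som = MiniSom(grid[0], grid[1], X_lab.shape[1], sigma=1.0, learning_rate=0.5)
    som.train_random(X_lab, iters)
    votes = {}
    for x, y in zip(X_lab, y_lab):
        votes.setdefault(som.winner(x), []).append(y)
    node_label = {n: max(set(v), key=v.count) for n, v in votes.items()}
    y_list = list(y_lab)
    default = max(set(y_list), key=y_list.count)  # fallback for unseen nodes
    return np.array([node_label.get(som.winner(x), default) for x in X_unlab])
```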

    A Novel Fuzzy Neural Network for Unsupervised Domain Adaptation in Heterogeneous Scenarios

    © 2019 IEEE. How to leverage knowledge from a labeled domain (source) to help classify an unlabeled domain (target) is a key problem in the machine learning field. Unsupervised domain adaptation (UDA) provides a solution to this problem and is well developed for two homogeneous domains. However, when the target domain is unlabeled and heterogeneous with the source domain, current UDA models cannot accurately transfer knowledge from the source domain to the target domain. Benefiting from developments in neural networks, this paper presents a new neural network, the shared fuzzy equivalence relations neural network (SFER-NN), to address the heterogeneous UDA (HeUDA) problem. SFER-NN transfers knowledge across two domains according to shared fuzzy equivalence relations that can simultaneously cluster the features of both domains into several categories. Based on the clustered categories, SFER-NN is constructed to minimize the discrepancy between the two domains. Compared to previous works, SFER-NN minimizes this discrepancy more effectively. As a result of this advantage, SFER-NN delivers better performance than previous studies on two public datasets.
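    The classical construction behind fuzzy equivalence relations is the max-min transitive closure of a fuzzy similarity relation followed by a λ-cut. The sketch below clusters feature columns this way; the similarity measure and cut level are illustrative, and SFER-NN's construction of relations shared across two heterogeneous domains is more involved than this single-domain version.

```python
import numpy as np

def fuzzy_equivalence_clusters(F, lam=0.8):
    """Cluster feature columns with a fuzzy equivalence relation (sketch)."""
    # Cosine-style similarity between columns, rescaled into [0, 1].
    Fn = F / (np.linalg.norm(F, axis=0, keepdims=True) + 1e-12)
    R = (Fn.T @ Fn + 1.0) / 2.0
    # Max-min transitive closure: compose R with itself to a fixed point.
    while True:
        R2 = np.maximum(R, np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1))
        if np.allclose(R2, R):
            break
        R = R2
    # Lambda-cut: the thresholded relation is now a crisp equivalence,
    # so each row directly describes one equivalence class.
    adj = R >= lam
    labels = -np.ones(len(R), dtype=int)
    for i in range(len(R)):
        if labels[i] == -1:
            labels[adj[i] & (labels == -1)] = labels.max() + 1
    return labels
```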

    Towards Realistic Transfer Learning Methods: Theory and Algorithms

    University of Technology Sydney, Faculty of Engineering and Information Technology. Transfer learning aims to leverage knowledge from domains with abundant labels (i.e., source domains) to help train a classifier or predictor for a domain with insufficient labels (i.e., the target domain). The trained classifier or predictor is expected to perform better (e.g., achieve higher accuracy) than classifiers trained only with data from the target domain. Although recent research on transfer learning has shown a decent ability to transfer knowledge from a source domain to a target domain, most methods require certain assumptions to ensure their efficacy. These assumptions are often unrealistic, which means that existing transfer learning methods still face several unsolved and challenging problems in the real world. This thesis addresses four orthogonal problems faced by existing transfer learning methods: 1) how to test whether the feature spaces of two domains are from different distributions; 2) how to transfer knowledge when labels in the source domain cannot be perfectly annotated (i.e., the source domain contains noisy labels); 3) how to transfer knowledge when the source and target domains have different dimensions (i.e., the heterogeneous scenario); and 4) how to transfer knowledge across multiple source domains and a target domain of a different dimension. To address Problem 1), this thesis presents two new two-sample tests to determine whether the feature spaces of the source and target domains are from different distributions: one suitable for low-dimensional data (Chapter 3) and another for high-dimensional data (Chapter 4). If the feature spaces of the domains are statistically different, transfer learning methods are needed on these domains. Moreover, the test statistics used in the proposed tests can measure the distributional discrepancy between two domains. To address Problem 2), this thesis presents a theoretical bound showing that existing transfer learning methods cannot work well when a source domain contains noisy labels. A novel transfer learning approach is then proposed to transfer knowledge from a source domain with noisy labels to a target domain, and a generalization bound is proved to explain why the proposed method can reliably transfer knowledge across domains in noisy scenarios (Chapter 5). To address Problem 3), the most challenging problem in the field of domain adaptation, Chapter 6 presents a theorem showing when knowledge can be reliably transferred across two different-dimension (i.e., heterogeneous) domains and proposes a solution to this problem. Since the methods in Chapter 6 assume that the two domains contain the same number of samples (i.e., two balanced domains), Chapter 7 presents a novel fuzzy-relation-based method to transfer knowledge across two imbalanced domains. To address Problem 4), Chapter 8 presents a novel fuzzy-relation neural network to transfer knowledge from multiple source domains to a target domain, where any two domains are heterogeneous (i.e., their feature spaces have different dimensions). In conclusion, this thesis not only proposes a set of effective methods for realistic transfer learning but also contributes to the theory of transfer learning.
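    Problem 1) is the classical two-sample testing question. For orientation only, here is a standard Gaussian-kernel MMD test with a permutation null; this is the textbook statistic, not the specific tests proposed in Chapters 3 and 4, and the median-heuristic bandwidth is a conventional assumption.

```python
import numpy as np

def mmd_permutation_test(X, Y, n_perm=500, seed=0):
    """Gaussian-kernel MMD two-sample test with a permutation p-value."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * np.median(d2[d2 > 0])))  # median-heuristic bandwidth
    n = len(X)

    def mmd(idx):
        a, b = idx[:n], idx[n:]
        return (K[np.ix_(a, a)].mean() + K[np.ix_(b, b)].mean()
                - 2 * K[np.ix_(a, b)].mean())

    obs = mmd(np.arange(len(Z)))
    null = [mmd(rng.permutation(len(Z))) for _ in range(n_perm)]
    return obs, float(np.mean([m >= obs for m in null]))  # statistic, p-value
```

    A small p-value indicates the two feature spaces are statistically different, i.e., that a transfer learning method is warranted, which is exactly how the thesis motivates its tests.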

    Design for novel enhanced weightless neural network and multi-classifier.

    Weightless neural systems have often struggled with speed, performance, and memory issues, and there is also a lack of sufficient interfacing between weightless neural systems and other systems. Addressing these issues motivates and forms the aims and objectives of this thesis: algorithms are formulated, classifiers and multi-classifiers are designed, and a hardware design of a classifier is also reported. Specifically, the purpose of this thesis is to report on the algorithms and designs of weightless neural systems. The background material for the research is a weightless neural network known as the Probabilistic Convergent Network (PCN). By introducing two new and different interfacing methods, PCN is extended into the Enhanced Probabilistic Convergent Network (EPCN). To solve the problems of speed and performance when large-class databases are employed in data analysis, multi-classifiers are designed whose composition varies with problem complexity; this also leads to the introduction of a novel gating function with EPCN applied as an intelligent combiner. For databases that are not very large, single classifiers suffice. Speed and ease of application under adverse conditions were targeted as improvements, leading to a hardware design of EPCN. A novel hashing function is implemented and tested on the hardware-based EPCN. The results obtained indicate the utility of employing weightless neural systems and point to significant new areas in which they may be applied.
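    PCN and EPCN belong to the RAM-based (weightless) family. For readers unfamiliar with it, here is a minimal WiSARD-style discriminator showing the underlying principle; PCN's probabilistic convergence and EPCN's new interfacing methods are not modeled in this sketch.

```python
import random

class RamDiscriminator:
    """Minimal WiSARD-style weightless discriminator (illustrative).

    The binary input is split into random tuples; each tuple indexes a
    RAM node that memorises the addresses seen during training, and
    recognition simply counts how many RAM lookups match.
    """

    def __init__(self, input_bits, tuple_size=4, seed=0):
        rnd = random.Random(seed)
        order = list(range(input_bits))
        rnd.shuffle(order)
        self.tuples = [order[i:i + tuple_size]
                       for i in range(0, input_bits, tuple_size)]
        self.rams = [set() for _ in self.tuples]

    def _addresses(self, bits):
        for t in self.tuples:
            yield tuple(bits[i] for i in t)

    def train(self, bits):
        for ram, addr in zip(self.rams, self._addresses(bits)):
            ram.add(addr)

    def score(self, bits):
        return sum(addr in ram
                   for ram, addr in zip(self.rams, self._addresses(bits)))

# One discriminator is kept per class; prediction picks the class whose
# discriminator fires on the most RAM nodes.
d = RamDiscriminator(input_bits=8)
d.train([1, 0, 1, 1, 0, 0, 1, 0])
print(d.score([1, 0, 1, 1, 0, 0, 1, 0]))  # 2: both RAM nodes fire
```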

    Multi-view Fuzzy Representation Learning with Rules based Model

    Unsupervised multi-view representation learning has been extensively studied for mining multi-view data. However, some critical challenges remain. On the one hand, existing methods cannot explore multi-view data comprehensively, since they usually learn only a common representation between views, whereas multi-view data contains both information common to all views and information specific to each view. On the other hand, kernel or neural network methods are commonly used to mine the nonlinear relationships in the data, but these methods lack interpretability. To this end, this paper proposes a new multi-view fuzzy representation learning method based on the interpretable Takagi-Sugeno-Kang (TSK) fuzzy system (MVRL_FS). The method realizes multi-view representation learning in two respects. First, multi-view data are transformed into a high-dimensional fuzzy feature space, while the common information between views and the specific information of each view are explored simultaneously. Second, a new regularization method based on L_(2,1)-norm regression is proposed to mine the consistency information between views, while the geometric structure of the data is preserved through a Laplacian graph. Finally, extensive experiments on many benchmark multi-view datasets are conducted to validate the superiority of the proposed method. Comment: This work has been accepted by IEEE Transactions on Knowledge and Data Engineering.
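    The TSK fuzzy feature space the method builds on can be sketched as follows: Gaussian antecedents give each rule a normalized firing strength, which weights an augmented copy of the input. The rule centers and width below are placeholders; the paper's antecedent construction and its L_(2,1)-regularized consequent learning are not shown.

```python
import numpy as np

def tsk_fuzzy_features(X, centers, sigma=1.0):
    """Map data into a first-order TSK fuzzy feature space (sketch).

    Rule k fires with the (normalized) product of Gaussian memberships;
    its firing strength scales the augmented input [1, x], producing the
    usual interpretable high-dimensional TSK representation.
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (n, K)
    f = np.exp(-d2 / (2 * sigma ** 2))
    f = f / (f.sum(axis=1, keepdims=True) + 1e-12)              # normalize
    Xa = np.hstack([np.ones((len(X), 1)), X])                   # [1, x]
    # One weighted copy of [1, x] per rule -> (n, K * (d + 1)).
    return (f[:, :, None] * Xa[:, None, :]).reshape(len(X), -1)
```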

    Adapting heterogeneous ensembles with particle swarm optimization for video face recognition

    In video-based face recognition applications, matching is typically performed by comparing query samples against biometric models (i.e., an individual's facial model) designed with reference samples captured during an enrollment process. Although statistical and neural pattern classifiers may represent a flexible solution to this kind of problem, their performance depends heavily on the availability of representative reference data. With operators involved in the data acquisition process, collection and analysis of reference data is often expensive and time consuming. Although a limited amount of data is initially available during enrollment, new reference data may be acquired and labeled by an operator over time. Still, due to limited control over changing operational conditions and personal physiology, classification systems used for video-based face recognition are confronted with complex and changing pattern recognition environments. This thesis concerns adaptive multiclassifier systems (AMCSs) for incremental learning of new data during enrollment and update of biometric models. To avoid corruption of knowledge (facial models) over time, the proposed AMCS uses a supervised incremental learning strategy based on dynamic particle swarm optimization (DPSO) to evolve a swarm of fuzzy ARTMAP (FAM) neural networks in response to new data. As each particle in a FAM hyperparameter search space corresponds to a FAM network, the learning strategy adapts learning dynamics by co-optimizing all their parameters – hyperparameters, weights, and architecture – in order to maximize accuracy while minimizing computational cost and memory resources. To achieve this, the relationship between the classification and optimization environments is studied and characterized, leading to the following additional contributions. An initial version of this DPSO-based incremental learning strategy was applied to an adaptive classification system (ACS), where the accuracy of a single FAM neural network is maximized. It is shown that the original definition of a classification system capable of supervised incremental learning must be reconsidered in two ways: not only must a classifier's learning dynamics be adapted to maintain a high level of performance through time, but some previously acquired validation data must also be used during adaptation. It is empirically shown that adapting a FAM during incremental learning constitutes a type III dynamic optimization problem in the search space, where the local optima values and their corresponding positions change in time. Results also illustrate the necessity of a long-term memory (LTM) to store previously acquired data for unbiased validation and performance estimation. The DPSO-based incremental learning strategy was then modified to evolve the swarm (or pool) of FAM networks within an AMCS. A key element for the success of ensembles is tackled: classifier diversity. With several correlation and diversity indicators, it is shown that genotype (i.e., hyperparameter) diversity in the optimization environment is correlated with classifier diversity in the classification environment. Following this result, properties of a DPSO algorithm that seeks to maintain genotype particle diversity to detect and follow local optima are exploited to generate and evolve diversified pools of FAM classifiers. Furthermore, a greedy search algorithm is presented to perform an efficient ensemble selection based on accuracy and genotype diversity. This search algorithm allows for diversified ensembles without evaluating costly classifier diversity indicators, and the selected ensembles also yield accuracy comparable to that of reference ensemble-based and batch learning techniques at only a fraction of the resources. Finally, after studying the relationship between the classification environment and the search space, the objective space of the optimization environment is also considered. An aggregated dynamical niching particle swarm optimization (ADNPSO) algorithm is presented to guide the FAM networks according to two objectives: FAM accuracy and computational cost. Instead of purely solving a multi-objective optimization problem to provide a Pareto-optimal front, the ADNPSO algorithm aims to generate pools of classifiers among which both genotype and phenotype (i.e., objective) diversity are maximized. ADNPSO thus uses information in the search space to guide particles towards different local Pareto-optimal fronts in the objective space. A specialized archive is then used to categorize solutions according to FAM network size and capture locally non-dominated classifiers. These two components are integrated into the AMCS through an ADNPSO-based incremental learning strategy. The AMCSs proposed in this thesis are promising since they create ensembles of classifiers designed with the ADNPSO-based incremental learning strategy and provide a high level of accuracy that is statistically comparable to that obtained through mono-objective optimization and reference batch learning techniques, yet require only a fraction of the computational cost.
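    The greedy ensemble selection described above might look roughly like this sketch, which trades a candidate's accuracy against its mean distance, in hyperparameter (genotype) space, to the ensemble chosen so far; the weighting and distance measure are assumptions rather than the thesis' exact criterion.

```python
import numpy as np

def greedy_ensemble_selection(accs, genomes, k, alpha=0.5):
    """Select k classifiers by accuracy and genotype diversity (sketch)."""
    accs, genomes = np.asarray(accs), np.asarray(genomes)
    chosen = [int(np.argmax(accs))]          # seed with the most accurate
    while len(chosen) < k:
        best, best_score = None, -np.inf
        for i in range(len(accs)):
            if i in chosen:
                continue
            # Mean hyperparameter-space distance to the current ensemble.
            div = np.mean([np.linalg.norm(genomes[i] - genomes[j])
                           for j in chosen])
            score = alpha * accs[i] + (1 - alpha) * div
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen
```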

    Development of soft computing and applications in agricultural and biological engineering

    Soft computing is a set of “inexact” computing techniques that are able to model and analyze very complex problems for which more conventional methods have not been able to produce cost-effective, analytical, or complete solutions. Soft computing has been extensively studied and applied over the last three decades in scientific research and engineering computing. In agricultural and biological engineering, researchers and engineers have developed methods of fuzzy logic, artificial neural networks, genetic algorithms, decision trees, and support vector machines to study soil and water regimes related to crop growth, analyze the operation of food processing, and support decision-making in precision farming. This paper reviews the development of soft computing techniques. With these concepts and methods, applications of soft computing in the field of agricultural and biological engineering are presented, especially in the soil and water context for crop management and decision support in precision agriculture. The future development and application of soft computing in agricultural and biological engineering are also discussed.
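    As a concrete taste of the fuzzy-logic methods covered by such reviews, here is a toy two-rule fuzzy inference for irrigation decision support; the membership functions, rule consequents, and all numbers are invented for illustration and do not come from the paper.

```python
def irrigation_minutes(soil_moisture):
    """Toy fuzzy rule base: dry soil -> irrigate long, wet -> little."""
    # Triangular membership degrees on a 0-100% volumetric moisture scale.
    dry = max(0.0, min(1.0, (40 - soil_moisture) / 30))  # fully dry below 10%
    wet = max(0.0, min(1.0, (soil_moisture - 30) / 30))  # fully wet above 60%
    long_run, short_run = 45.0, 5.0                      # rule consequents
    total = dry + wet
    # Weighted-average defuzzification.
    return (dry * long_run + wet * short_run) / total if total else 0.0

print(irrigation_minutes(15))  # mostly "dry" -> 45.0 minutes
```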

    Data-efficient methods for information extraction

    Structured knowledge representation systems such as knowledge bases or knowledge graphs provide insights about entities and the relationships among these entities in the real world. Such knowledge representation systems can be employed in various natural language processing applications such as semantic search, question answering, and text summarization. It is infeasible and inefficient to populate these knowledge representation systems manually. In this work, we develop methods to automatically extract named entities and relationships among entities from plain text; our methods can therefore be used either to complete existing incomplete knowledge representation systems or to create a new structured knowledge representation system from scratch. Unlike mainstream supervised methods for information extraction, our methods focus on the low-data scenario and do not require a large amount of annotated data. In the first part of the thesis, we focus on the problem of named entity recognition. We participated in the Bacteria Biotope 2019 shared task, which consists of recognizing and normalizing biomedical entity mentions. Our linguistically informed named entity recognition system consists of a deep-learning-based model that can extract both nested and flat entities; the model employs several linguistic features and auxiliary training objectives to enable efficient learning in data-scarce scenarios. Our entity normalization system employs string matching, fuzzy search, and semantic search to link the extracted named entities to biomedical databases. Our named entity recognition and entity normalization system achieved the lowest slot error rate of 0.715 and ranked first in the shared task. We also participated in two shared tasks, Adverse Drug Effect Span Detection (English) and Profession Span Detection (Spanish), both of which collect data from the social media platform Twitter. We developed a named entity recognition model that improves the input representation by stacking heterogeneous embeddings from diverse domains; our empirical results demonstrate complementary learning from these heterogeneous embeddings. Our submissions ranked 3rd in both shared tasks.
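    The heterogeneous-embedding stacking mentioned above could, for instance, be written with the flair library, which concatenates token embeddings from several sources; flair and the specific embedding choices below are assumptions for illustration, not necessarily what the thesis used.

```python
from flair.data import Sentence
from flair.embeddings import (StackedEmbeddings, TransformerWordEmbeddings,
                              WordEmbeddings)

# Each token representation becomes the concatenation of all stacked sources.
stacked = StackedEmbeddings([
    WordEmbeddings("glove"),                         # general-domain, static
    TransformerWordEmbeddings("bert-base-uncased"),  # contextual
])

sentence = Sentence("The drug made me dizzy .")
stacked.embed(sentence)
print(sentence[0].embedding.shape)  # concatenated vector for "The"
```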
    In the second part of the thesis, we explore synthetic data augmentation strategies to address low-resource information extraction in specialized domains. Specifically, we adapt backtranslation to the token-level task of named entity recognition and the sentence-level task of relation extraction. We demonstrate that backtranslation can generate linguistically diverse and grammatically coherent synthetic sentences and serves as a competitive augmentation strategy for both tasks. In most real-world relation extraction tasks, annotated data is not available; however, a large unannotated text corpus often is. Bootstrapping methods for relation extraction can operate on such a corpus because they only require a handful of seed instances. However, bootstrapping methods tend to accumulate noise over time (known as semantic drift), and this phenomenon has a drastic negative impact on the final precision of the extractions. We develop two methods to constrain the bootstrapping process and minimize semantic drift for relation extraction; our methods leverage graph theory and pre-trained language models to explicitly identify and remove noisy extraction patterns. We report experimental results on the TACRED dataset for four relations. In the last part of the thesis, we demonstrate the application of domain adaptation to the challenging task of multilingual acronym extraction. Our experiments show that domain adaptation can improve acronym extraction in scientific and legal domains in six languages, including low-resource languages such as Persian and Vietnamese.
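    A minimal picture of bootstrapped relation extraction with a drift-control filter is sketched below, assuming a pre-processed corpus of (entity, pattern, entity) triples; the simple confidence filter stands in for the graph-theoretic and language-model filters the thesis actually proposes.

```python
def bootstrap_relations(corpus, seeds, rounds=3, min_conf=0.5):
    """Bootstrapping with a crude filter against semantic drift (sketch).

    `corpus` is a list of (entity1, pattern, entity2) triples. Patterns
    are kept only if their extractions mostly agree with pairs already
    known, limiting the noise that accumulates over rounds.
    """
    known = set(seeds)
    for _ in range(rounds):
        hits, totals = {}, {}
        for e1, pat, e2 in corpus:
            totals[pat] = totals.get(pat, 0) + 1
            if (e1, e2) in known:
                hits[pat] = hits.get(pat, 0) + 1
        kept = {p for p in totals
                if hits.get(p, 0) > 0 and hits[p] / totals[p] >= min_conf}
        known |= {(e1, e2) for e1, pat, e2 in corpus if pat in kept}
    return known

corpus = [("Acme", "is based in", "Berlin"), ("Foo", "is based in", "Oslo"),
          ("Acme", "was sued by", "Bob")]
print(bootstrap_relations(corpus, seeds={("Acme", "Berlin")}))
# {("Acme", "Berlin"), ("Foo", "Oslo")}: the unrelated pattern never
# reaches the confidence bar, so it cannot cause drift.
```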