585 research outputs found

    Unified locally linear classifiers with diversity-promoting anchor points

    Get PDF
    National Research Foundation (NRF) Singapore under International Research Centres in Singapore Funding Initiative

    λ”₯λŸ¬λ‹ 기반 생성 λͺ¨λΈμ„ μ΄μš©ν•œ μžμ—°μ–΄μ²˜λ¦¬ 데이터 증강 기법

    Get PDF
    Doctoral dissertation -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, 2020. 2. 이상ꡬ. Recent advances in the generation capability of deep learning models have spurred interest in utilizing deep generative models for unsupervised generative data augmentation (GDA). Generative data augmentation aims to improve the performance of a downstream machine learning model by augmenting the original dataset with samples generated from a deep latent variable model. This data augmentation approach is attractive to the natural language processing community because (1) there is a shortage of text augmentation techniques that require little supervision and (2) resource scarcity is prevalent. In this dissertation, we explore the feasibility of exploiting deep latent variable models for data augmentation on three NLP tasks: sentence classification, spoken language understanding (SLU), and dialogue state tracking (DST). These tasks represent NLP problems of varying complexity and properties -- SLU requires multi-task learning of text classification and sequence tagging, while DST requires the understanding of hierarchical and recurrent data structures. For each of the three tasks, we propose a task-specific latent variable model based on conditional, hierarchical, and sequential variational autoencoders (VAE) for multi-modal joint modeling of linguistic features and the relevant annotations. We conduct extensive experiments to statistically justify our hypothesis that deep generative data augmentation is beneficial for all subject tasks. Our experiments show that deep generative data augmentation is effective for the selected tasks, supporting the idea that the technique can potentially be utilized for a wider range of NLP tasks. Ablation and qualitative studies reveal deeper insight into the underlying mechanisms of generative data augmentation.
As a secondary contribution, we also shed light on the recurring posterior collapse phenomenon in autoregressive VAEs and propose novel techniques to mitigate it, which is crucial for the proper training of complex VAE models, enabling them to synthesize better samples for data augmentation. In summary, this work demonstrates and analyzes the effectiveness of unsupervised generative data augmentation in NLP. Ultimately, our approach enables the standardized adoption of generative data augmentation, which can be applied orthogonally to existing regularization techniques.
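The augmentation loop this abstract describes -- draw latent codes from a trained VAE's prior, decode them into synthetic examples, and append those to the original training set -- can be sketched as follows. The toy linear "decoder" and randomly sampled labels are hypothetical stand-ins for the dissertation's trained conditional/hierarchical/sequential VAEs, which generate text and annotations jointly:

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim, feat_dim = 4, 8
W_dec = rng.normal(size=(latent_dim, feat_dim))

# Hypothetical toy "decoder": maps latent codes z to synthetic feature
# vectors. In the dissertation this role is played by a trained VAE decoder.
def decode(z):
    return np.tanh(z @ W_dec)

# Original (tiny) labeled dataset.
X_orig = rng.normal(size=(10, feat_dim))
y_orig = rng.integers(0, 2, size=10)

# Generative data augmentation: sample z from the prior N(0, I), decode to
# synthetic examples, and append them to the original set. A real joint
# model would also generate the labels/annotations rather than sample them.
n_aug = 5
z = rng.normal(size=(n_aug, latent_dim))
X_syn = decode(z)
y_syn = rng.integers(0, 2, size=n_aug)

X_aug = np.vstack([X_orig, X_syn])
y_aug = np.concatenate([y_orig, y_syn])
print(X_aug.shape)  # (15, 8)
```

The downstream classifier is then trained on `(X_aug, y_aug)` instead of the original set, which is what makes the technique orthogonal to model-space regularizers.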

    Doctor of Philosophy

    Get PDF
    Events are an important type of information in text. Event extraction is an information extraction (IE) task that involves identifying entities and objects (mainly noun phrases) that represent important roles in events of a particular type. However, the extraction performance of current event extraction systems is limited because they mainly consider local context (mostly isolated sentences) when making each extraction decision. My research aims to improve both the coverage and accuracy of event extraction by explicitly identifying event contexts before extracting individual facts. First, I introduce new event extraction architectures that incorporate discourse information across a document to seek out and validate pieces of event descriptions within the document. TIER is a multilayered event extraction architecture that performs text analysis at multiple granularities to progressively "zoom in" on relevant event information. LINKER is a unified discourse-guided approach that includes a structured sentence classifier to sequentially read a story and determine which sentences contain event information based on both the local and preceding contexts. Experimental results on two distinct event domains show that, compared to previous event extraction systems, TIER can find more event information while maintaining good extraction accuracy, and LINKER can further improve extraction accuracy. Finding documents that describe a specific type of event is also highly challenging because of the wide variety and ambiguity of event expressions. In this dissertation, I present a multifaceted event recognition approach that uses event-defining characteristics (facets), in addition to event expressions, to effectively resolve the complexity of event descriptions. I also present a novel bootstrapping algorithm to automatically learn event expressions as well as facets of events, which requires minimal human supervision.
    Experimental results show that the multifaceted event recognition approach can effectively identify documents that describe a particular type of event and make event extraction systems more precise.
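The idea behind LINKER's sequential reading -- a sentence's relevance decision depends on local cues plus the preceding decisions -- can be illustrated with a deliberately simplified sketch. The cue lexicons, weights, and threshold below are invented for illustration and are not from the dissertation, which uses a learned structured classifier:

```python
# Hypothetical cue lexicons with illustrative weights.
STRONG = {"bombing", "attack", "explosion"}
WEAK = {"hurt", "wounded", "damage"}

def classify(sentences):
    """Label each sentence relevant/irrelevant, letting a relevant
    preceding sentence boost the score of the next one (discourse context)."""
    labels, prev = [], False
    for s in sentences:
        toks = [w.lower().strip(".,") for w in s.split()]
        score = sum(1.0 for w in toks if w in STRONG)
        score += sum(0.5 for w in toks if w in WEAK)
        if prev:                 # preceding context bonus
            score += 0.5
        rel = score >= 1.0
        labels.append(rel)
        prev = rel
    return labels

sents = ["A bombing struck the market.",
         "Dozens were hurt.",
         "The weather was mild."]
print(classify(sents))  # [True, True, False]
```

Note how the second sentence only clears the threshold because the first one was judged relevant -- the kind of follow-on event description an isolated-sentence classifier would miss.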

    Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications

    Full text link
    There has been a growing interest in model-agnostic methods that can make deep learning models more transparent and explainable to a user. Some researchers recently argued that for a machine to achieve a certain degree of human-level explainability, it needs to provide causally understandable explanations to humans, also known as causability. A specific class of algorithms with the potential to provide causability are counterfactuals. This paper presents an in-depth systematic review of the diverse existing body of literature on counterfactuals and causability for explainable artificial intelligence. We performed an LDA topic modelling analysis under a PRISMA framework to find the most relevant literature articles. This analysis resulted in a novel taxonomy that considers the grounding theories of the surveyed algorithms, together with their underlying properties and applications to real-world data. This research suggests that current model-agnostic counterfactual algorithms for explainable AI are not grounded in a causal theoretical formalism and, consequently, cannot promote causability to a human decision-maker. Our findings suggest that the explanations derived from major algorithms in the literature provide spurious correlations rather than cause-and-effect relationships, leading to sub-optimal, erroneous or even biased explanations. This paper also advances the literature with new directions and challenges for promoting causability in model-agnostic approaches to explainable artificial intelligence.
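The core counterfactual idea the review surveys -- find the smallest perturbation of an input that flips a model's prediction -- can be sketched with a brute-force search. The linear "model", its weights, and the perturbation grid below are illustrative assumptions, not any particular algorithm from the surveyed literature:

```python
import itertools
import numpy as np

# Hypothetical linear classifier standing in for an arbitrary black box.
w, b = np.array([1.5, -2.0]), 0.25

def predict(x):
    return int(x @ w + b > 0)

def counterfactual(x, steps=np.linspace(-2, 2, 41)):
    """Grid-search the nearest (L2) perturbed point with a flipped label."""
    target = 1 - predict(x)
    best, best_dist = None, np.inf
    for dx in itertools.product(steps, repeat=len(x)):
        cand = x + np.array(dx)
        d = np.linalg.norm(dx)
        if d < best_dist and predict(cand) == target:
            best, best_dist = cand, d
    return best

x = np.array([0.0, 0.5])
print(predict(x))   # 0
cf = counterfactual(x)
print(predict(cf))  # 1
```

As the review argues, such proximity-based counterfactuals reflect the model's decision boundary, not a causal model of the domain -- which is exactly why they can surface spurious correlations.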

    Object Detection and Classification based on Hierarchical Semantic Features and Deep Neural Networks

    Get PDF
    The abilities of feature learning, semantic understanding, cognitive reasoning, and model generalization are the consistent pursuit of current deep learning-based computer vision tasks. A variety of network structures and algorithms have been proposed to learn effective features, extract contextual and semantic information, deduce the relationships between objects and scenes, and achieve robust and generalized models. Nevertheless, these challenges are still not well addressed. One issue lies in inefficient feature learning and propagation and static single-dimension semantic memorizing, which cause difficulty in handling challenging situations such as small objects, occlusion, and illumination. The other issue is robustness and generalization, especially when the data source has a diversified feature distribution. This study explores classification and detection models based on hierarchical semantic features ("transverse semantic" and "longitudinal semantic"), network architectures, and regularization algorithms, so that the above issues can be improved or solved. (1) A detector model is proposed to make full use of the "transverse semantic", the semantic information in the spatial scene, which emphasizes the effectiveness of deep features produced in high-level layers for better detection of small and occluded objects. (2) We also explore anchor-based detector algorithms and propose location-aware reasoning (LAAR), in which both the location and classification confidences are considered in the bounding-box quality criterion, so that the best-qualified boxes can be picked in non-maximum suppression (NMS). (3) A semantic clustering-based deduction learning is proposed, which explores the "longitudinal semantic", realizing high-level clustering in the semantic space and enabling the model to deduce the relations among various classes, so that better classification performance can be expected.
    (4) We propose near-orthogonality regularization by introducing an implicit self-regularization that pushes the mean and variance of filter angles in a network towards 90Β° and 0Β° simultaneously, showing that it helps stabilize the training process, speed up convergence, and improve robustness. (5) Inspired by research showing that self-attention networks possess a strong inductive bias that leads to a loss of feature expression power, a transformer architecture with a mitigatory attention mechanism is proposed and applied to state-of-the-art detectors, verifying the superiority of the detection enhancement.
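The quality criterion in point (2) -- ranking boxes in NMS by classification confidence combined with a localization score rather than classification alone -- can be sketched as below. The boxes, the two confidence values, the product combination, and the 0.5 IoU threshold are illustrative assumptions, not the dissertation's exact LAAR formulation:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def nms(dets, iou_thr=0.5):
    # dets: (box, cls_conf, loc_conf); rank by the product of both scores
    # so a tighter box can outrank one with higher class confidence alone.
    dets = sorted(dets, key=lambda d: d[1] * d[2], reverse=True)
    keep = []
    for box, c, l in dets:
        if all(iou(box, kb) < iou_thr for kb, _, _ in keep):
            keep.append((box, c, l))
    return keep

dets = [((0, 0, 10, 10), 0.9, 0.6),    # high class score, loose box
        ((1, 1, 10, 10), 0.8, 0.9),    # slightly lower class score, tight box
        ((50, 50, 60, 60), 0.7, 0.8)]  # a second, far-away object
kept = nms(dets)
print(len(kept), kept[0][0])  # 2 (1, 1, 10, 10)
```

With classification-only ranking the loose box would have been kept and the tight one suppressed; the combined score reverses that.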

    Data-Driven Representation Learning in Multimodal Feature Fusion

    Get PDF
    abstract: Modern machine learning systems leverage data and features from multiple modalities to gain more predictive power. In most scenarios, the modalities are vastly different and the acquired data are heterogeneous in nature. Consequently, building highly effective fusion algorithms is at the core of achieving improved model robustness and inference performance. This dissertation focuses on representation learning approaches as the fusion strategy. Specifically, the objective is to learn a shared latent representation that jointly exploits the structural information encoded in all modalities, such that a straightforward learning model can be adopted to obtain the prediction. We first consider sensor fusion, a typical multimodal fusion problem critical to building a pervasive computing platform. A systematic fusion technique is described to support both multiple sensors and descriptors for activity recognition. Targeted at learning the optimal combination of kernels, Multiple Kernel Learning (MKL) algorithms have been successfully applied to numerous fusion problems in computer vision and beyond. Utilizing the MKL formulation, we next describe an auto-context algorithm for learning image context via fusion with low-level descriptors. Furthermore, a principled fusion algorithm using deep learning to optimize kernel machines is developed. By bridging deep architectures with kernel optimization, this approach leverages the benefits of both paradigms and is applied to a wide variety of fusion problems. In many real-world applications, the modalities exhibit highly specific data structures, such as time sequences and graphs, and consequently, special design of the learning architecture is needed. To improve temporal modeling for multivariate sequences, we developed two architectures centered around attention models. A novel clinical time series analysis model is proposed for several critical problems in healthcare.
    Another model, coupled with a triplet ranking loss as a metric learning framework, is described to better solve speaker diarization. Compared to state-of-the-art recurrent networks, these attention-based multivariate analysis tools achieve improved performance while having lower computational complexity. Finally, in order to perform community detection on multilayer graphs, a fusion algorithm is described that derives node embeddings from word embedding techniques and also exploits the complementary relational information contained in each layer of the graph.
    Doctoral Dissertation, Electrical Engineering, 201
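The MKL fusion idea mentioned above -- one base kernel per modality, fused as a weighted combination -- can be sketched as follows. The two "modalities", the RBF kernels, their gamma values, and the fixed weights are illustrative; in actual MKL the weights are learned jointly with the classifier:

```python
import numpy as np

rng = np.random.default_rng(1)
X_audio = rng.normal(size=(6, 3))  # hypothetical modality 1 features
X_video = rng.normal(size=(6, 5))  # hypothetical modality 2 features

def rbf(X, gamma):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

K_audio = rbf(X_audio, gamma=0.5)
K_video = rbf(X_video, gamma=0.1)

# Fused kernel: convex combination of the per-modality base kernels.
beta = np.array([0.3, 0.7])  # weights sum to 1; learned in real MKL
K = beta[0] * K_audio + beta[1] * K_video

print(K.shape, np.allclose(K, K.T))  # (6, 6) True
```

Because a convex combination of positive semi-definite kernels is itself a valid kernel, `K` can be handed directly to any standard kernel machine as the single fused similarity measure.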