
    Confidence Calibration for Systems with Cascaded Predictive Modules

    Existing conformal prediction algorithms estimate prediction intervals at target confidence levels to characterize the performance of a regression model on new test samples. However, in an autonomous system consisting of multiple modules, prediction intervals constructed for individual modules fail to account for uncertainty propagation across modules and thus cannot provide reliable predictions of system behavior. We address this limitation and present novel conformal-prediction-based solutions that provide prediction intervals calibrated for a predictive system consisting of cascaded modules (e.g., an upstream feature extraction module and a downstream regression module). Our key idea is to leverage module-level validation data to characterize the system-level error distribution without direct access to end-to-end validation data. We provide theoretical justification and empirical results demonstrating the effectiveness of the proposed solutions. Compared to prediction intervals calibrated for individual modules, our solutions generate improved intervals with more accurate performance guarantees for system predictions, demonstrated on both synthetic systems and real-world systems performing overlap prediction for indoor navigation using the Matterport3D dataset.
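    For context, the module-level baseline that this work builds on is split conformal prediction for regression. Below is a minimal Python sketch of that baseline; the function and variable names (predict, X_cal, y_cal) are illustrative rather than from the paper, and the paper's cascaded calibration goes beyond this single-module procedure.

    ```python
    import numpy as np

    def split_conformal_interval(predict, X_cal, y_cal, X_test, alpha=0.1):
        """Prediction intervals with ~(1 - alpha) marginal coverage,
        assuming calibration and test points are exchangeable."""
        # Nonconformity scores: absolute residuals on held-out calibration data.
        scores = np.abs(y_cal - predict(X_cal))
        n = len(scores)
        # Finite-sample-corrected empirical quantile of the scores.
        level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        q = np.quantile(scores, level, method="higher")
        preds = predict(X_test)
        return preds - q, preds + q
    ```

    Note that naively reusing such a quantile from one module ignores errors introduced upstream; the paper's contribution is to characterize the system-level error distribution of the cascade from module-level validation data alone.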

    Diaquabis(N,N′-dibenzylethane-1,2-diamine-κ²N,N′)nickel(II) dichloride N,N-dimethylformamide solvate

    The asymmetric unit of the title complex, [Ni(C16H20N2)2(H2O)2]Cl2·C3H7NO, consists of two NiII atoms, each lying on an inversion center, two Cl anions, two N,N′-dibenzylethane-1,2-diamine ligands, two coordinated water molecules and one N,N-dimethylformamide solvent molecule. Each NiII atom is six-coordinated in a distorted octahedral coordination geometry, with the equatorial plane formed by four N atoms and the axial positions occupied by two water molecules. The complex molecules are linked into a chain along [001] by N—H⋯Cl, N—H⋯O and O—H⋯Cl hydrogen bonds. The C and H atoms of the solvent molecule are disordered over two sites in a 0.52 (2):0.48 (2) ratio.

    Adversarial Meta Sampling for Multilingual Low-Resource Speech Recognition

    Low-resource automatic speech recognition (ASR) is challenging because the limited target-language data cannot adequately train an ASR model. To address this issue, meta-learning formulates ASR for each source language as many small ASR tasks and meta-learns a model initialization over all tasks from different source languages, enabling fast adaptation to unseen target languages. However, the quantity and difficulty of tasks vary greatly across source languages owing to their different data scales and diverse phonological systems, leading to task-quantity and task-difficulty imbalance and thus to failures of multilingual meta-learning ASR (MML-ASR). In this work, we solve this problem by developing a novel adversarial meta sampling (AMS) approach to improve MML-ASR. When sampling tasks in MML-ASR, AMS adaptively determines the task sampling probability for each source language. Specifically, a large query loss for a source language indicates that its tasks have not been sampled sufficiently, in terms of their quantity and difficulty, to train the ASR model, and that the language should therefore be sampled more frequently for extra learning. Motivated by this observation, we feed the historical task query losses of all source language domains into a network that learns a task sampling policy by adversarially increasing the current query loss of MML-ASR. The learnt policy thus tracks the learning situation of each language and predicts good task sampling probabilities for more effective learning. Finally, experimental results on two multilingual datasets show significant performance improvements when applying AMS to MML-ASR, and also demonstrate the applicability of AMS to other low-resource speech tasks and transfer learning ASR approaches. Comment: accepted at AAAI 2021.
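    A hedged sketch of the adversarial task-sampling idea follows: a small policy network maps each language's recent query losses to a sampling distribution over languages, and a REINFORCE-style update pushes probability mass toward languages whose tasks currently yield high query loss. All names, shapes, and the policy-gradient formulation are illustrative assumptions; the paper's actual architecture and training procedure may differ.

    ```python
    import torch
    import torch.nn as nn

    class TaskSampler(nn.Module):
        """Maps per-language query-loss histories to sampling probabilities."""
        def __init__(self, history_len=5, hidden=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(history_len, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, loss_history):
            # loss_history: (num_languages, history_len) of past query losses.
            logits = self.net(loss_history).squeeze(-1)  # (num_languages,)
            return torch.softmax(logits, dim=0)          # sampling distribution

    def update_sampler(sampler, opt, loss_history, sampled_lang, query_loss):
        """One adversarial step: increase the probability of languages whose
        sampled tasks produce high meta query loss."""
        probs = sampler(loss_history)
        # Maximizing expected query loss <=> minimizing this REINFORCE surrogate.
        surrogate = -torch.log(probs[sampled_lang]) * query_loss.detach()
        opt.zero_grad()
        surrogate.backward()
        opt.step()

    # Illustrative usage (shapes only):
    # sampler = TaskSampler()
    # opt = torch.optim.Adam(sampler.parameters(), lr=1e-3)
    # probs = sampler(loss_history)
    # lang = torch.multinomial(probs, 1).item()
    # ... run a meta-update on tasks from `lang`, obtaining query_loss ...
    # update_sampler(sampler, opt, loss_history, lang, query_loss)
    ```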