
    Emergent localized states at the interface of a twofold PT-symmetric lattice

    We consider the role of the non-triviality arising from a non-Hermitian Hamiltonian that preserves a twofold PT-symmetry, assembled by interconnecting a PT-symmetric lattice with its time-reversal partner. The twofold PT-symmetry produces additional surface exceptional points that act as new critical points alongside the bulk exceptional point. We show that there are two distinct regimes hosting symmetry-protected localized states, whose localization lengths are robust against external gain and loss. The states are demonstrated by numerical calculations on a quasi-1D ladder lattice and a 2D bilayered square lattice.
    Comment: 10 pages, 7 figures
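    The exceptional points central to this abstract can be illustrated on the simplest possible system, a two-site PT-symmetric dimer with balanced gain and loss (this is a generic textbook sketch, not the paper's twofold ladder or bilayer lattice): eigenvalues are real below the exceptional point and form a complex-conjugate pair above it.

```python
import numpy as np

def pt_dimer_eigenvalues(gamma, kappa=1.0):
    """Eigenvalues of a minimal PT-symmetric dimer
    H = [[i*gamma, kappa], [kappa, -i*gamma]]
    with gain/loss rate gamma and coupling kappa.
    Analytically, the spectrum is +/- sqrt(kappa**2 - gamma**2):
    real for gamma < kappa, purely imaginary for gamma > kappa,
    with the exceptional point at gamma = kappa."""
    H = np.array([[1j * gamma, kappa],
                  [kappa, -1j * gamma]])
    return np.linalg.eigvals(H)

# Below the exceptional point: real spectrum (PT-unbroken phase)
print(np.allclose(pt_dimer_eigenvalues(0.5).imag, 0.0))  # True

# Above the exceptional point: complex-conjugate pair (PT-broken phase)
print(np.allclose(pt_dimer_eigenvalues(2.0).real, 0.0))  # True
```

    The paper's lattice construction adds surface exceptional points on top of this bulk transition; the dimer only shows the bulk mechanism.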

    Task complexity interacts with state-space uncertainty in the arbitration between model-based and model-free learning

    It has previously been shown that the relative reliability of model-based and model-free reinforcement-learning (RL) systems plays a role in the allocation of behavioral control between them. However, the role of task complexity in the arbitration between these two strategies remains largely unknown. Here, using a combination of novel task design, computational modelling, and model-based fMRI analysis, we examined the role of task complexity alongside state-space uncertainty in the arbitration process. Participants tended to increase model-based RL control in response to increasing task complexity. However, they resorted to model-free RL when both uncertainty and task complexity were high, suggesting that these two variables interact during the arbitration process. Computational fMRI revealed that task complexity interacts with neural representations of the reliability of the two systems in the inferior prefrontal cortex.
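    Reliability-based arbitration of the kind described here can be sketched as a logistic comparison of the two controllers' reliabilities, with the resulting weight blending their action values. This is a toy illustration under assumed functional forms, not the authors' fitted computational model; the parameter `beta` and both helper functions are hypothetical.

```python
import math

def arbitration_weight(rel_mb, rel_mf, beta=5.0):
    """Toy arbitration rule: the weight on the model-based (MB)
    controller grows with its reliability relative to the
    model-free (MF) controller, via a logistic comparison."""
    return 1.0 / (1.0 + math.exp(-beta * (rel_mb - rel_mf)))

def mixed_value(q_mb, q_mf, w_mb):
    """Blend the two systems' action values by the arbitration weight."""
    return w_mb * q_mb + (1.0 - w_mb) * q_mf

# When the MB system is much more reliable, it dominates control
w = arbitration_weight(rel_mb=0.9, rel_mf=0.3)
print(round(w, 3))  # 0.953
```

    The paper's finding that high complexity plus high uncertainty pushes control back toward model-free RL would correspond, in this sketch, to `rel_mb` collapsing faster than `rel_mf` in that regime.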

    F^2-Softmax: Diversifying Neural Text Generation via Frequency Factorized Softmax

    Despite recent advances in neural text generation, encoding the rich diversity of human language remains elusive. We argue that sub-optimal text generation is mainly attributable to the imbalanced token distribution, which particularly misdirects the learning model when trained with the maximum-likelihood objective. As a simple yet effective remedy, we propose two novel methods, F^2-Softmax and MefMax, for balanced training even under a skewed frequency distribution. MefMax assigns tokens uniquely to frequency classes, aiming to group tokens with similar frequencies and equalize the frequency mass between classes. F^2-Softmax then decomposes the probability distribution of the target token into a product of two conditional probabilities: (i) the frequency class, and (ii) the token within the target frequency class. Models learn more uniform probability distributions because they are confined to subsets of the vocabulary. Significant performance gains on seven relevant metrics suggest the superiority of our approach in improving not only the diversity but also the quality of generated texts.
    Comment: EMNLP 202
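    The factorization the abstract describes can be sketched numerically: the target-token probability is the product of a softmax over frequency classes and a softmax restricted to the tokens of that class. This is a minimal illustration of the decomposition only (the class assignment `class_of_token` stands in for MefMax's output; the logits are arbitrary), not the paper's trained model.

```python
import numpy as np

def f2_softmax(class_logits, token_logits, class_of_token):
    """Frequency-factorized softmax sketch:
    p(token) = p(class) * p(token | its frequency class).
    class_of_token[i] gives the frequency class of vocabulary item i."""
    # Softmax over frequency classes (shifted for numerical stability)
    class_p = np.exp(class_logits - class_logits.max())
    class_p /= class_p.sum()

    token_p = np.zeros(len(class_of_token))
    for c in np.unique(class_of_token):
        idx = np.where(class_of_token == c)[0]
        z = np.exp(token_logits[idx] - token_logits[idx].max())
        token_p[idx] = class_p[c] * z / z.sum()  # softmax within class c
    return token_p

# Hypothetical 5-token vocabulary split into 2 frequency classes
classes = np.array([0, 0, 1, 1, 1])
p = f2_softmax(np.array([0.2, -0.1]), np.array([1.0, 0.5, 0.0, -0.5, 0.3]), classes)
print(np.isclose(p.sum(), 1.0))  # True: the factorization is still a distribution
```

    Because each inner softmax only competes within one frequency class, rare tokens never compete directly with very frequent ones, which is the mechanism behind the claimed more uniform learned distributions.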

    Self-supervised debiasing using low rank regularization

    Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability. While most existing debiasing methods require full supervision on either spurious attributes or target labels, training a debiased model from a limited amount of both annotations remains an open question. To address this issue, we investigate an interesting phenomenon using spectral analysis of latent representations: spuriously correlated attributes make neural networks inductively biased towards encoding lower effective-rank representations. We also show that rank regularization can amplify this bias in a way that encourages highly correlated features. Leveraging these findings, we propose a self-supervised debiasing framework potentially compatible with unlabeled samples. Specifically, we first pretrain a biased encoder in a self-supervised manner with rank regularization, serving as a semantic bottleneck that forces the encoder to learn the spuriously correlated attributes. This biased encoder is then used to discover and upweight bias-conflicting samples in a downstream task, serving as a boosting step that effectively debiases the main model. Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines and, in some cases, even outperforms state-of-the-art supervised debiasing approaches.
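    The effective-rank diagnostic behind this spectral analysis has a common definition: the exponential of the Shannon entropy of the normalized singular-value spectrum. The sketch below uses that standard definition on synthetic feature matrices; it is an assumption that this matches the paper's exact measure, and the data are fabricated purely for illustration.

```python
import numpy as np

def effective_rank(features, eps=1e-12):
    """Effective rank of a feature matrix (samples x dims): exp of the
    Shannon entropy of the normalized singular values. Lower values mean
    the representation concentrates in fewer directions."""
    s = np.linalg.svd(features, compute_uv=False)
    p = s / (s.sum() + eps)
    entropy = -(p * np.log(p + eps)).sum()
    return float(np.exp(entropy))

rng = np.random.default_rng(0)
full = rng.standard_normal((200, 32))                            # roughly isotropic
low = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 32))  # rank-2 structure
print(effective_rank(full) > effective_rank(low))  # True
```

    In the paper's framing, an encoder that latches onto a spurious attribute behaves like the `low` matrix here: its representations collapse toward a few directions, which rank regularization then deliberately amplifies in the biased encoder.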

    Small intestinal model for electrically propelled capsule endoscopy

    The aim of this research is to propose a small-intestine model for electrically propelled capsule endoscopy. The electrical stimulus can cause contraction of the small intestine and propel the capsule along the lumen. The proposed model accounts for drag and friction from the small intestine using a thin-walled model and Stokes' drag equation, and the contraction force from the small intestine was modeled by regression analysis. From the proposed model, the acceleration and velocity of capsules with various exterior shapes were calculated, and two exterior shapes were proposed based on the internal volume of the capsules. The proposed capsules were fabricated and animal experiments were conducted. One of the proposed capsules showed an average (SD) velocity of 2.91 ± 0.99 mm/s in the forward direction and 2.23 ± 0.78 mm/s in the backward direction, 5.2 times faster than that obtained in previous research. The proposed model can predict locomotion of the capsule for various exterior shapes.
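    The force balance described here (contraction-driven propulsion against drag and friction) can be sketched as a one-dimensional equation of motion integrated over time. This is a generic illustration with made-up parameters, not the paper's thin-walled intestinal model or its regression-fitted contraction force; a linear Stokes-like drag term stands in for the full drag calculation.

```python
def capsule_velocity(f_prop, drag_coeff, friction, mass, dt=1e-4, steps=1000):
    """Illustrative 1-D capsule motion: constant propulsive force f_prop [N]
    against linear (Stokes-like) drag and constant friction.
    Velocity relaxes toward the terminal value
    (f_prop - friction) / drag_coeff via explicit Euler integration."""
    v = 0.0
    for _ in range(steps):
        net = f_prop - friction - drag_coeff * v  # net force [N]
        v += dt * net / mass                      # Euler step on dv/dt = net/m
    return v

# Hypothetical parameters: terminal velocity = (0.02 - 0.005) / 5.0 = 3 mm/s
v = capsule_velocity(f_prop=0.02, drag_coeff=5.0, friction=0.005, mass=0.003)
print(abs(v - 0.003) < 1e-6)  # True: velocity has reached the terminal value
```

    In the paper's model, the drag and friction terms depend on the capsule's exterior shape, which is why shape selection changes the predicted (and measured) velocity.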