
    Kondo Signatures of a Quantum Magnetic Impurity in Topological Superconductors

    We study the Kondo physics of a quantum magnetic impurity in two-dimensional topological superconductors (TSCs), either intrinsic or induced on the surface of a bulk topological insulator, using a numerical renormalization group technique. We show that, despite sharing the p+ip pairing symmetry, intrinsic and extrinsic TSCs host different physical processes that produce distinct Kondo signatures. Extrinsic TSCs harbor an unusual screening mechanism involving both electron and orbital degrees of freedom that produces rich and prominent Kondo phenomena, especially an intriguing pseudospin Kondo singlet state in the superconducting gap and a spatially anisotropic spin correlation. In sharp contrast, intrinsic TSCs support a robust impurity spin doublet ground state and an isotropic spin correlation. These findings advance fundamental knowledge of novel Kondo phenomena in TSCs and suggest experimental avenues for their detection and distinction.
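    As a rough orientation for the setup described above (a generic textbook-style sketch, not necessarily the paper's exact model), a magnetic impurity in a superconductor is commonly described by a Kondo exchange term added to the superconductor's Hamiltonian:

    H = H_{\mathrm{TSC}} + J\,\mathbf{S}\cdot\mathbf{s}(\mathbf{r}_0), \qquad \mathbf{s}(\mathbf{r}_0) = \tfrac{1}{2}\sum_{\alpha\beta}\psi^{\dagger}_{\alpha}(\mathbf{r}_0)\,\boldsymbol{\sigma}_{\alpha\beta}\,\psi_{\beta}(\mathbf{r}_0),

    where H_TSC is the Bogoliubov-de Gennes Hamiltonian of the intrinsic p+ip superconductor or of the proximitized topological-insulator surface, S is the impurity spin, s(r_0) is the conduction-electron spin density at the impurity site, and J is the exchange coupling treated nonperturbatively by the numerical renormalization group.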

    Acute effects of kinesiology tape tension on soleus muscle H-reflex modulations during lying and standing postures

    Kinesiology tape (KT) has been widely used in sports and rehabilitation. However, there is no gold standard for the tape tension used during a KT application. The purpose of this study was to examine the effects of KT applied with different tension intensities on soleus muscle Hoffmann-reflex (H-reflex) modulation during lying and standing postures. Fifteen healthy university students were tested with three tape tension intensities during separate visits in a randomized sequence: tape on with no tension (0KT), moderate tension (about 50% of the maximal tape tension; ModKT), and maximal tape tension (MaxKT). During each experimental visit, H-reflex measurements on the soleus muscle were taken before, during, and after the KT application for both lying and standing postures. The H-wave and M-wave recruitment curves were generated using surface electromyography (EMG). There was a main effect of posture (p = 0.001) for the ratio of the maximal peak-to-peak amplitudes of the H-wave and M-wave (Hmax/Mmax), with a depressed Hmax/Mmax ratio during standing compared with the lying posture. Even though the tension factor had a large effect size (ηp2 = 0.165), different tape tensions showed no significant differential effects on the Hmax/Mmax ratio. Spinal motoneuron excitability was not altered, even during the maximal-tension KT application on the soleus muscle. Thus, the tension used during a KT application should not be a concern in terms of modulating the sensorimotor activity ascribed to elastic taping during lying and standing postures.
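    To make the Hmax/Mmax outcome measure concrete, the Python sketch below (with assumed sampling rate and response-window bounds; it is not the authors' analysis code) computes peak-to-peak H-wave and M-wave amplitudes from EMG sweeps and takes the ratio of their maxima across a recruitment curve.

    import numpy as np

    def peak_to_peak(emg, fs, t_start, t_end):
        """Peak-to-peak amplitude of an EMG segment between t_start and t_end (seconds)."""
        seg = emg[int(t_start * fs):int(t_end * fs)]
        return seg.max() - seg.min()

    def hmax_mmax_ratio(sweeps, fs, m_window=(0.005, 0.020), h_window=(0.025, 0.045)):
        """sweeps: array of shape (n_stimuli, n_samples), one EMG sweep per stimulus intensity.
        Window bounds are illustrative latencies for the M-wave and H-wave responses."""
        m_amps = [peak_to_peak(s, fs, *m_window) for s in sweeps]
        h_amps = [peak_to_peak(s, fs, *h_window) for s in sweeps]
        return max(h_amps) / max(m_amps)

    # Example with synthetic data: 20 sweeps, 2 kHz sampling, 100 ms per sweep.
    rng = np.random.default_rng(0)
    sweeps = rng.normal(0, 0.01, size=(20, 200))
    print(f"Hmax/Mmax = {hmax_mmax_ratio(sweeps, fs=2000):.3f}")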

    How Does Attention Work in Vision Transformers? A Visual Analytics Attempt

    The vision transformer (ViT) expands the success of transformer models from sequential data to images. The model decomposes an image into many smaller patches and arranges them into a sequence. Multi-head self-attention is then applied to the sequence to learn the attention between patches. Despite many successful interpretations of transformers on sequential data, little effort has been devoted to the interpretation of ViTs, and many questions remain unanswered. For example, among the numerous attention heads, which are more important? How strongly do individual patches attend to their spatial neighbors in different heads? What attention patterns have individual heads learned? In this work, we answer these questions through a visual analytics approach. Specifically, we first identify which heads are more important in ViTs by introducing multiple pruning-based metrics. Then, we profile the spatial distribution of attention strengths between patches inside individual heads, as well as the trend of attention strengths across attention layers. Third, using an autoencoder-based learning solution, we summarize all possible attention patterns that individual heads could learn. By examining the attention strengths and patterns of the important heads, we explain why they are important. Through concrete case studies with experienced deep learning experts on multiple ViTs, we validate the effectiveness of our solution, which deepens the understanding of ViTs in terms of head importance, head attention strength, and head attention pattern. Comment: Accepted by PacificVis 2023 and selected to be published in TVC
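    One of the quantities profiled above, the attention strength between patches as a function of their spatial distance, can be computed directly from a ViT's attention tensors. The numpy sketch below is an illustration with random, row-normalized attention, not the paper's tool; the 14x14 patch grid is an assumption matching a 224x224 input with 16x16 patches.

    import numpy as np

    def attention_vs_distance(attn, grid=14):
        """attn: (num_heads, N, N) patch-to-patch attention (CLS token removed), N = grid*grid.
        Returns, per head, the mean attention strength grouped by the Chebyshev distance
        between the query and key patches on the patch grid."""
        n = grid * grid
        rows, cols = np.divmod(np.arange(n), grid)
        dist = np.maximum(np.abs(rows[:, None] - rows[None, :]),
                          np.abs(cols[:, None] - cols[None, :]))
        out = {}
        for d in range(grid):
            mask = dist == d
            out[d] = attn[:, mask].mean(axis=1)   # (num_heads,) mean strength at distance d
        return out

    # Toy example: 12 heads, 14x14 patch grid, row-normalized random "attention".
    rng = np.random.default_rng(0)
    attn = rng.random((12, 196, 196))
    attn /= attn.sum(axis=-1, keepdims=True)
    profile = attention_vs_distance(attn)
    print({d: round(float(v.mean()), 4) for d, v in list(profile.items())[:3]})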

    PDT: Pretrained Dual Transformers for Time-aware Bipartite Graphs

    Pre-training large models on ever-growing user-generated content has become prevalent in many machine learning applications. It has been recognized that learning contextual knowledge from datasets depicting user-content interaction plays a vital role in downstream tasks. Despite several studies attempting to learn contextual knowledge via pre-training methods, finding an optimal training objective and strategy for this type of task remains a challenging problem. In this work, we contend that there are two distinct aspects of contextual knowledge, namely the user side and the content side, for datasets where user-content interaction can be represented as a bipartite graph. To learn contextual knowledge, we propose a pre-training method that learns a bi-directional mapping between the spaces of the user side and the content side. We formulate the training goal as a contrastive learning task and propose a dual-Transformer architecture to encode the contextual knowledge. We evaluate the proposed method on the recommendation task. Empirical studies demonstrate that the proposed method outperforms all baselines with significant gains.
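    The objective described above, a bi-directional mapping between user-side and content-side spaces learned contrastively, can be sketched as two encoders trained with a symmetric InfoNCE loss. The PyTorch snippet below is a minimal illustration: the linear encoders, dimensions, and temperature are placeholders standing in for the paper's dual-Transformer architecture.

    import torch
    import torch.nn.functional as F
    from torch import nn

    class DualEncoderContrastive(nn.Module):
        """Two encoders mapping user-side and content-side features into a shared space,
        trained with a symmetric InfoNCE objective (positives on the diagonal)."""
        def __init__(self, user_dim, item_dim, hidden=128, temperature=0.07):
            super().__init__()
            # Stand-ins for the two Transformers: any sequence encoder works here.
            self.user_enc = nn.Sequential(nn.Linear(user_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
            self.item_enc = nn.Sequential(nn.Linear(item_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
            self.temperature = temperature

        def forward(self, user_feats, item_feats):
            u = F.normalize(self.user_enc(user_feats), dim=-1)   # (B, hidden)
            v = F.normalize(self.item_enc(item_feats), dim=-1)   # (B, hidden)
            logits = u @ v.t() / self.temperature                # (B, B) similarity matrix
            labels = torch.arange(u.size(0), device=u.device)    # matched pairs are positives
            return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

    # Toy batch of 32 user/content pairs drawn from a bipartite interaction graph.
    model = DualEncoderContrastive(user_dim=64, item_dim=48)
    loss = model(torch.randn(32, 64), torch.randn(32, 48))
    loss.backward()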

    Loop invariant synthesis in a combined abstract domain

    Automated verification of memory safety and functional correctness for heap-manipulating programs has been a challenging task, especially when dealing with complex data structures whose invariants involve both shape and numerical properties. Existing verification systems usually rely on users to supply annotations to guide the verification, which can be cumbersome and error-prone by hand and can significantly restrict the usability of the verification system. In this paper, we reduce the need for user annotations by automatically inferring loop invariants over an abstract domain with both shape and numerical information. Our loop invariant synthesis is conducted automatically by a fixed-point iteration process, equipped with a newly designed abstraction mechanism, together with join and widening operators over the combined domain. We have also proven the soundness and termination of our approach. Initial experiments confirm that we can synthesise loop invariants with non-trivial constraints.
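    To illustrate fixed-point iteration with join and widening on the numerical side of such a combined domain, the toy Python sketch below infers an interval invariant for a simple counting loop; the shape component and the paper's actual abstraction mechanism are not modeled here.

    # A tiny interval-domain fixed-point iteration with widening, inferring a numerical
    # loop invariant for:  i = 0; while (i < 10) i = i + 1;
    INF = float("inf")

    def join(a, b):                       # least upper bound of two intervals
        return (min(a[0], b[0]), max(a[1], b[1]))

    def widen(a, b):                      # jump unstable bounds to +/- infinity
        lo = a[0] if a[0] <= b[0] else -INF
        hi = a[1] if a[1] >= b[1] else INF
        return (lo, hi)

    def transfer(i):                      # abstract effect of one loop iteration:
        lo, hi = i                        # filter by the guard i < 10, then i := i + 1
        hi = min(hi, 9)
        return (lo + 1, hi + 1)

    inv = (0, 0)                          # abstract state at the loop head on entry
    while True:
        nxt = widen(inv, join(inv, transfer(inv)))
        if nxt == inv:                    # fixed point reached: invariant found
            break
        inv = nxt
    print("loop invariant: i in", inv)    # -> (0, inf); a narrowing pass would refine it to (0, 10)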

    FATA-Trans: Field And Time-Aware Transformer for Sequential Tabular Data

    Sequential tabular data is one of the most commonly used data types in real-world applications. Unlike conventional tabular data, where the rows in a table are independent, sequential tabular data contains rich contextual and sequential information, where some fields change dynamically over time and others are static. Existing transformer-based approaches to sequential tabular data overlook the differences between dynamic and static fields by replicating and filling static fields into each transformer input, and they ignore the temporal information between rows. This leads to three major disadvantages: (1) computational overhead, (2) artificially simplified data for the masked language modeling pre-training task, which may yield less meaningful representations, and (3) disregard for the temporal behavioral patterns implied by time intervals. In this work, we propose FATA-Trans, a model with two field transformers for modeling sequential tabular data, where static and dynamic field information are processed separately. FATA-Trans is field- and time-aware for sequential tabular data. The field-type embedding enables FATA-Trans to capture differences between static and dynamic fields. The time-aware position embedding exploits both order and time-interval information between rows, which helps the model detect underlying temporal behavior in a sequence. Our experiments on three benchmark datasets demonstrate that the learned representations from FATA-Trans consistently outperform state-of-the-art solutions in downstream tasks. We also present visualization studies to highlight the insights captured by the learned representations, enhancing our understanding of the underlying data. Our code is available at https://github.com/zdy93/FATA-Trans. Comment: This work is accepted by ACM International Conference on Information and Knowledge Management (CIKM) 202
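    The time-aware position embedding can be pictured as combining a row-order embedding with an embedding of the bucketed time gap between consecutive rows. The PyTorch sketch below is one plausible construction under assumed log-bucketing; it is not claimed to be FATA-Trans's exact formulation.

    import torch
    from torch import nn

    class TimeAwarePositionEmbedding(nn.Module):
        """Illustrative embedding that combines row order with the (log-bucketed) time
        gap to the previous row, so a sequence model sees both order and intervals."""
        def __init__(self, d_model, max_len=512, n_buckets=32):
            super().__init__()
            self.order_emb = nn.Embedding(max_len, d_model)
            self.gap_emb = nn.Embedding(n_buckets, d_model)
            self.n_buckets = n_buckets

        def forward(self, timestamps):
            # timestamps: (B, L) seconds, ascending within each sequence
            B, L = timestamps.shape
            order = torch.arange(L, device=timestamps.device).expand(B, L)
            gaps = torch.diff(timestamps, dim=1, prepend=timestamps[:, :1])
            buckets = torch.log1p(gaps.clamp(min=0)).long().clamp(max=self.n_buckets - 1)
            return self.order_emb(order) + self.gap_emb(buckets)

    # Example: 2 sequences of 5 transactions each, embedded into 16 dimensions.
    ts = torch.cumsum(torch.randint(1, 10_000, (2, 5)).float(), dim=1)
    emb = TimeAwarePositionEmbedding(d_model=16)(ts)
    print(emb.shape)  # torch.Size([2, 5, 16])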

    Multitask Learning for Time Series Data with 2D Convolution

    Multitask learning (MTL) aims to develop a unified model that can handle a set of closely related tasks simultaneously. By optimizing the model across multiple tasks, MTL generally surpasses its non-MTL counterparts in terms of generalizability. Although MTL has been extensively researched in domains such as computer vision, natural language processing, and recommendation systems, its application to time series data has received limited attention. In this paper, we investigate the application of MTL to the time series classification (TSC) problem. However, when we integrate the state-of-the-art 1D convolution-based TSC model with MTL, the performance of the TSC model actually deteriorates. Comparing the 1D convolution-based models with the Dynamic Time Warping (DTW) distance function suggests that the underwhelming results stem from the limited expressive power of the 1D convolutional layers. To overcome this challenge, we propose a novel design for a 2D convolution-based model that enhances the model's expressiveness. Leveraging this advantage, our proposed method outperforms competing approaches on both the UCR Archive and an industrial transaction TSC dataset.
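    A 2D convolution-based TSC model needs a 2D representation of the input series. The PyTorch sketch below illustrates one plausible lifting, a pairwise-difference image followed by 2D convolutions; the actual architecture proposed in the paper is not reproduced here.

    import torch
    from torch import nn

    class Conv2DTSC(nn.Module):
        """Illustrative 2D-convolution classifier for univariate time series: the series
        is lifted into a 2D pairwise-difference image x[i] - x[j] before 2D convolutions.
        This is only one plausible lifting, not the paper's design."""
        def __init__(self, n_classes):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
            )

        def forward(self, x):                      # x: (B, L) univariate series
            img = x.unsqueeze(2) - x.unsqueeze(1)  # (B, L, L) pairwise differences
            return self.net(img.unsqueeze(1))      # add channel dim -> logits (B, n_classes)

    logits = Conv2DTSC(n_classes=5)(torch.randn(8, 128))
    print(logits.shape)  # torch.Size([8, 5])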

    Toward a Foundation Model for Time Series Data

    A foundation model is a machine learning model trained on a large and diverse set of data, typically using self-supervised pre-training techniques, that can be adapted to various downstream tasks. However, current research on time series pre-training has predominantly focused on models trained exclusively on data from a single domain. As a result, these models possess domain-specific knowledge that may not be easily transferable to time series from other domains. In this paper, we aim to develop an effective time series foundation model by leveraging unlabeled samples from multiple domains. To achieve this, we repurposed the publicly available UCR Archive and evaluated four existing self-supervised learning-based pre-training methods, along with a novel method, on the datasets. We tested these methods using four popular neural network architectures for time series to understand how the pre-training methods interact with different network designs. Our experimental results show that pre-training improves downstream classification tasks by enhancing the convergence of the fine-tuning process. Furthermore, we found that the proposed pre-training method, when combined with the Transformer model, outperforms the alternatives.
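    The overall recipe, self-supervised pre-training on unlabeled series followed by fine-tuning for classification, can be sketched with a generic masked-reconstruction objective. The PyTorch snippet below only illustrates that recipe; the specific pre-training methods and backbones evaluated in the paper are not reproduced here.

    import torch
    from torch import nn

    # Generic masked-reconstruction pre-training for length-128 series, then reuse of
    # the encoder for classification.
    encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64))
    decoder = nn.Linear(64, 128)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    def pretrain_step(x, mask_ratio=0.3):
        mask = torch.rand_like(x) < mask_ratio
        x_masked = x.masked_fill(mask, 0.0)
        recon = decoder(encoder(x_masked))
        loss = ((recon - x)[mask] ** 2).mean()   # reconstruct only the masked points
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

    for _ in range(5):                           # pre-train on unlabeled series
        pretrain_step(torch.randn(32, 128))

    # Fine-tuning: attach a classification head to the pre-trained encoder.
    clf = nn.Sequential(encoder, nn.Linear(64, 10))
    print(clf(torch.randn(4, 128)).shape)        # torch.Size([4, 10])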

    An Efficient Content-based Time Series Retrieval System

    A Content-based Time Series Retrieval (CTSR) system is an information retrieval system that lets users interact with time series originating from multiple domains, such as finance, healthcare, and manufacturing. For example, users seeking to learn more about the source of a time series can submit it as a query to the CTSR system and retrieve a list of relevant time series with associated metadata. By analyzing the retrieved metadata, users can gather more information about the source of the time series. Because the CTSR system must work with time series data from diverse domains, it needs a high-capacity model to effectively measure the similarity between different time series. On top of that, the model within the CTSR system has to compute the similarity scores efficiently, as users interact with the system in real time. In this paper, we propose an effective and efficient CTSR model that outperforms alternative models while still providing reasonable inference runtimes. To demonstrate the capability of the proposed method in solving business problems, we compare it against alternative models using our in-house transaction data. Our findings reveal that the proposed model is the most suitable solution for our transaction data problem.
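    The core retrieval loop, scoring a query series against stored series and returning the metadata of the best matches, can be sketched with a simple stand-in similarity such as z-normalized Euclidean distance; the proposed model replaces this scorer with a learned, higher-capacity one.

    import numpy as np

    def znorm(x):
        return (x - x.mean()) / (x.std() + 1e-8)

    def retrieve(query, database, metadata, k=3):
        """Rank stored series by z-normalized Euclidean distance to the query and return
        the metadata of the top-k matches (a baseline scorer, not the proposed model)."""
        q = znorm(query)
        dists = np.array([np.linalg.norm(q - znorm(s)) for s in database])
        top = np.argsort(dists)[:k]
        return [(metadata[i], float(dists[i])) for i in top]

    # Toy database of three labeled series.
    rng = np.random.default_rng(0)
    db = [np.sin(np.linspace(0, 6, 100)), rng.normal(size=100), np.linspace(0, 1, 100)]
    meta = ["sine-like sensor", "noise", "ramp"]
    print(retrieve(np.sin(np.linspace(0.1, 6.1, 100)), db, meta))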