
    Topological phase transitions in multi-component superconductors

    We study the phase transition between a trivial and a time-reversal-invariant topological superconductor in a single-band system. By analyzing the interplay of symmetry, topology, and energetics, we show that for a generic normal-state band structure, the phase transition occurs via extended intermediate phases in which even- and odd-parity pairing components coexist. For inversion-symmetric systems, the coexistence phase spontaneously breaks time-reversal symmetry. For noncentrosymmetric superconductors, the low-temperature intermediate phase breaks time-reversal symmetry, while the high-temperature phase preserves it and has topologically protected line nodes. Furthermore, with approximate rotational invariance, the system has an emergent U(1) × U(1) symmetry, and novel topological defects, such as half vortex lines binding Majorana fermions, can exist. We analytically solve for the dispersion of the Majorana fermion and show that it exhibits small and large velocities at low and high energies, respectively. The relevance of our theory to the superconducting pyrochlore oxide Cd2Re2O7 and to half-Heusler materials is discussed. Comment: 14 pages, 7 figures; to appear in Phys. Rev. Lett.
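
    The emergent U(1) × U(1) symmetry can be illustrated with a schematic two-component Ginzburg-Landau free energy. The block below is illustrative only and is not taken from the paper; the coefficients and the phase-locking term gamma_2 are assumptions.

```latex
% Schematic two-component Ginzburg-Landau free energy (illustrative, not the
% paper's exact functional): \Delta_e and \Delta_o denote the even- and
% odd-parity pairing components.
F = \alpha_e |\Delta_e|^2 + \alpha_o |\Delta_o|^2
  + \beta_e |\Delta_e|^4 + \beta_o |\Delta_o|^4
  + \gamma_1 |\Delta_e|^2 |\Delta_o|^2
  + \gamma_2 \left( \Delta_e^{*\,2} \Delta_o^{2} + \mathrm{c.c.} \right)
% If the phase-locking term \gamma_2 is negligible, F is invariant under
% independent phase rotations \Delta_e \to e^{i\theta_e}\Delta_e,
% \Delta_o \to e^{i\theta_o}\Delta_o, i.e. an emergent U(1) \times U(1)
% symmetry. Otherwise its sign selects either a time-reversal-breaking state
% (relative phase \pm\pi/2) or a time-reversal-symmetric one (relative phase 0 or \pi).
```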

    Asset Pricing and Cost of Equity for US Banking Sector by CAPM and TFPM from 1987-2011

    Although the Capital Asset Pricing Model (CAPM), a one-factor model, has a strong theoretical basis and is easy to use and understand, analysts also consider alternative models such as the Three-Factor Pricing Model (TFPM) developed by Fama and French (1993), because some differences between actual and estimated returns can be explained by the effects of capital size and the book-to-market ratio. The objective of using these two similar but complementary models is to estimate the cost of equity for the US banking sector. To do so, we estimate the parameters for both individual banks and the banking sector as a whole.
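
    Both models amount to linear regressions of a bank's excess return on factor returns. The sketch below uses synthetic data and a hypothetical risk-free rate, not the paper's 1987-2011 sample, to show how the factor loadings and the implied cost of equity could be estimated.

```python
# Minimal sketch (not the paper's code): estimating cost of equity with CAPM
# and the Fama-French three-factor model via OLS on hypothetical excess returns.
import numpy as np

rng = np.random.default_rng(0)
n = 120                                   # e.g. 120 months of returns (hypothetical)
rf = 0.003                                # assumed monthly risk-free rate
mkt = rng.normal(0.006, 0.04, n)          # market excess return
smb = rng.normal(0.002, 0.02, n)          # size factor (Small Minus Big)
hml = rng.normal(0.003, 0.02, n)          # value factor (High Minus Low)
bank_excess = 1.1 * mkt + 0.2 * smb + 0.4 * hml + rng.normal(0, 0.02, n)

# CAPM: R_i - R_f = alpha + beta * (R_m - R_f)
X_capm = np.column_stack([np.ones(n), mkt])
alpha_c, beta_c = np.linalg.lstsq(X_capm, bank_excess, rcond=None)[0]

# TFPM: R_i - R_f = alpha + b*(R_m - R_f) + s*SMB + h*HML
X_tfpm = np.column_stack([np.ones(n), mkt, smb, hml])
alpha_t, b, s, h = np.linalg.lstsq(X_tfpm, bank_excess, rcond=None)[0]

# Cost of equity = risk-free rate plus the factor premia scaled by the loadings.
capm_cost = rf + beta_c * mkt.mean()
tfpm_cost = rf + b * mkt.mean() + s * smb.mean() + h * hml.mean()
print(f"CAPM beta={beta_c:.2f}, cost of equity={capm_cost:.4%}")
print(f"TFPM cost of equity={tfpm_cost:.4%}")
```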

    An improved Siamese network for face sketch recognition

    Face sketch recognition identifies the corresponding face photo from a large dataset given a face sketch. Traditional methods typically reduce the modality gap between face photos and sketches and achieve a high recognition rate by matching against a pseudo-image synthesized from the corresponding face photo. However, these methods do not achieve a high recognition rate on all face sketch datasets, because the extracted features do not eliminate the effect of the different image modalities. Feature representations from deep convolutional neural networks are a feasible alternative for identification and have wider applicability than other methods: they can extract features that suppress the difference between face photos and sketches, and networks that learn optimal local features achieve high recognition rates even when the input images show geometric distortions. However, overfitting leads to unsatisfactory performance of deep learning methods on face sketch recognition tasks, and sketch images are often too simple for effective features to be extracted. This paper aims to increase the matching rate using a Siamese convolutional network architecture. The framework extracts useful features from each image pair to reduce the modality gap, and data augmentation is used to avoid overfitting. We explore the performance of three loss functions and compare the similarity between each image pair. The experimental results show that our framework is adequate for a composite sketch dataset and reduces the influence of overfitting through data augmentation and modifications to the network structure.
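
    As an illustration of the general approach, the sketch below shows a shared-weight Siamese CNN trained with a contrastive loss on photo/sketch pairs. The architecture, image size, and margin are hypothetical and not the paper's exact configuration (which compares three loss functions).

```python
# Minimal sketch (hypothetical, not the paper's exact architecture): a Siamese
# CNN that embeds a photo and a sketch and is trained with a contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseBranch(nn.Module):
    """Shared-weight encoder applied to both the photo and the sketch."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, embed_dim)

    def forward(self, x):
        return F.normalize(self.fc(self.features(x).flatten(1)), dim=1)

def contrastive_loss(z1, z2, same_identity, margin: float = 1.0):
    """Pull matching photo/sketch pairs together, push non-matching pairs apart."""
    d = F.pairwise_distance(z1, z2)
    return (same_identity * d.pow(2) +
            (1 - same_identity) * F.relu(margin - d).pow(2)).mean()

# Hypothetical usage on a batch of grayscale 64x64 photo/sketch pairs.
net = SiameseBranch()
photos, sketches = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,)).float()    # 1 = same identity, 0 = different
loss = contrastive_loss(net(photos), net(sketches), labels)
loss.backward()
```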

    DualFormer: Local-Global Stratified Transformer for Efficient Video Recognition

    While transformers have shown great potential for video recognition thanks to their strong capability of capturing long-range dependencies, they often suffer from high computational costs induced by self-attention over the huge number of 3D tokens. In this paper, we present a new transformer architecture, termed DualFormer, which can efficiently perform space-time attention for video recognition. Concretely, DualFormer stratifies the full space-time attention into dual cascaded levels: it first learns fine-grained local interactions among nearby 3D tokens, and then captures coarse-grained global dependencies between the query token and global pyramid contexts. Different from existing methods that apply space-time factorization or restrict attention computations to local windows to improve efficiency, our local-global stratification strategy captures both short- and long-range spatiotemporal dependencies while greatly reducing the number of keys and values in the attention computation. Experimental results verify the superiority of DualFormer over existing methods on five video benchmarks. In particular, DualFormer achieves 82.9%/85.2% top-1 accuracy on Kinetics-400/600 with ~1000G inference FLOPs, at least 3.2x fewer than existing methods with similar performance. We have released the source code at https://github.com/sail-sg/dualformer. Comment: Accepted by ECCV 2022
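
    A rough sketch of the local-global stratification idea follows. The window size, the number of pooled contexts, and the pooling scheme are assumptions for illustration and are much simpler than the released DualFormer implementation.

```python
# Minimal sketch (a simplification, not the released DualFormer code): local
# window attention over nearby 3D tokens followed by cross-attention from every
# token to a small set of pooled "pyramid" contexts.
import torch
import torch.nn as nn

class LocalGlobalBlock(nn.Module):
    def __init__(self, dim: int = 96, heads: int = 4, window: int = 8, ctx_tokens: int = 16):
        super().__init__()
        self.window = window
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pool = nn.AdaptiveAvgPool1d(ctx_tokens)   # coarse global contexts

    def forward(self, x):                   # x: (batch, num_3d_tokens, dim)
        b, n, d = x.shape
        # 1) fine-grained local interactions inside non-overlapping windows
        w = x.view(b * n // self.window, self.window, d)
        w = w + self.local_attn(w, w, w, need_weights=False)[0]
        x = w.view(b, n, d)
        # 2) coarse-grained global dependencies: attend to pooled contexts only,
        #    so the keys/values shrink from n tokens to ctx_tokens.
        ctx = self.pool(x.transpose(1, 2)).transpose(1, 2)
        return x + self.global_attn(x, ctx, ctx, need_weights=False)[0]

# Hypothetical usage: 4 x 14 x 14 = 784 space-time tokens of width 96.
tokens = torch.randn(2, 784, 96)
out = LocalGlobalBlock()(tokens)            # same shape, cheaper than full attention
```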

    UrbanFM: Inferring Fine-Grained Urban Flows

    Urban flow monitoring systems play important roles in smart-city efforts around the world. However, ubiquitous deployment of monitoring devices such as CCTVs incurs a long-lasting and enormous cost for maintenance and operation, which suggests the need for a technology that can reduce the number of deployed devices while preventing degradation of data accuracy and granularity. In this paper, we aim to infer the real-time, fine-grained crowd flows throughout a city from coarse-grained observations. This task is challenging for two reasons: the spatial correlations between coarse- and fine-grained urban flows, and the complexity of external impacts. To tackle these issues, we develop a method called UrbanFM based on deep neural networks. Our model consists of two major parts: 1) an inference network that generates fine-grained flow distributions from coarse-grained inputs using a feature extraction module and a novel distributional upsampling module; and 2) a general fusion subnet that further boosts performance by considering the influence of different external factors. Extensive experiments on two real-world datasets, TaxiBJ and HappyValley, validate the effectiveness and efficiency of our method against seven baselines, demonstrating state-of-the-art performance on the fine-grained urban flow inference problem.
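
    The distributional upsampling idea can be sketched as spreading each coarse cell over a block of fine cells according to learned weights that form a distribution, so the fine flows sum back to the coarse observation. The snippet below is a simplified illustration with hypothetical shapes, not the released UrbanFM code.

```python
# Minimal sketch (not the released UrbanFM code): a "distributional upsampling"
# step that turns one coarse cell into a scale x scale block of fine-grained
# flows whose sum is constrained to equal the coarse observation.
import torch
import torch.nn.functional as F

def distributional_upsample(coarse_flow, logits, scale: int = 4):
    """coarse_flow: (B, 1, H, W) observed flows; logits: (B, 1, H*scale, W*scale)
    unnormalized scores from an inference network (random here for illustration)."""
    b, _, h, w = coarse_flow.shape
    # Softmax over each scale x scale block so every block forms a distribution.
    blocks = F.unfold(logits, kernel_size=scale, stride=scale)        # (B, s*s, H*W)
    weights = torch.softmax(blocks, dim=1)
    fine = F.fold(weights, (h * scale, w * scale), kernel_size=scale, stride=scale)
    # Spread each coarse value over its block according to the learned weights.
    return fine * F.interpolate(coarse_flow, scale_factor=scale, mode="nearest")

coarse = torch.rand(1, 1, 8, 8) * 100                  # hypothetical 8x8 coarse flows
fine = distributional_upsample(coarse, torch.randn(1, 1, 32, 32))
# Each 4x4 fine block sums back to its coarse cell (up to float error).
print(torch.allclose(F.avg_pool2d(fine, 4) * 16, coarse, atol=1e-4))
```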

    MSGNet: Learning Multi-Scale Inter-Series Correlations for Multivariate Time Series Forecasting

    Multivariate time series forecasting poses an ongoing challenge across various disciplines. Time series data often exhibit diverse intra-series and inter-series correlations, contributing to intricate and interwoven dependencies that have been the focus of numerous studies. Nevertheless, a significant research gap remains in understanding how inter-series correlations vary across different time scales among multiple time series, an area that has received limited attention in the literature. To bridge this gap, this paper introduces MSGNet, an advanced deep learning model designed to capture varying inter-series correlations across multiple time scales using frequency-domain analysis and adaptive graph convolution. By leveraging frequency-domain analysis, MSGNet extracts salient periodic patterns and decomposes the time series into distinct time scales. The model incorporates a self-attention mechanism to capture intra-series dependencies and introduces an adaptive mixhop graph convolution layer to autonomously learn diverse inter-series correlations within each time scale. Extensive experiments on several real-world datasets showcase the effectiveness of MSGNet. Furthermore, MSGNet can automatically learn explainable multi-scale inter-series correlations and exhibits strong generalization even on out-of-distribution samples. Comment: 13 pages, 12 figures
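
    The frequency-domain step can be illustrated with a small sketch that picks dominant periods from the FFT amplitude spectrum, which then define the time scales to operate on. The function name and the synthetic two-series example below are assumptions; the released MSGNet pipeline does considerably more (adaptive graph convolution, mixhop aggregation, attention).

```python
# Minimal sketch (not the released MSGNet code): using the FFT to pick dominant
# periods, which define the multiple time scales the model then operates on.
import math
import torch

def dominant_periods(x, k: int = 3):
    """x: (batch, length, num_series). Returns the k most salient periods."""
    amp = torch.fft.rfft(x, dim=1).abs().mean(dim=(0, 2))   # amplitude per frequency
    amp[0] = 0                                              # drop the DC component
    top_freqs = torch.topk(amp, k).indices
    return [x.shape[1] // int(f) for f in top_freqs]        # frequency -> period

# Hypothetical multivariate series with cycles of length 24 and 96.
t = torch.arange(384).float()
x = torch.stack([torch.sin(2 * math.pi * t / 24),
                 torch.sin(2 * math.pi * t / 96)], dim=-1).unsqueeze(0)
print(dominant_periods(x, k=2))   # expected to recover periods 24 and 96
```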

    Anomaly Detection by Adapting a pre-trained Vision Language Model

    Recently, large vision-language models have shown success when adapted to many downstream tasks. In this paper, we present a unified framework named CLIP-ADA for Anomaly Detection by Adapting a pre-trained CLIP model. To this end, we make two important improvements: 1) to achieve unified anomaly detection across industrial images of multiple categories, we introduce a learnable prompt and propose to associate it with abnormal patterns through self-supervised learning; 2) to fully exploit the representation power of CLIP, we introduce an anomaly-region refinement strategy to improve localization quality. During testing, anomalies are localized by directly computing the similarity between the representation of the learnable prompt and the image. Comprehensive experiments demonstrate the superiority of our framework; e.g., we achieve state-of-the-art results of 97.5/55.6 and 89.3/33.1 on MVTec-AD and VisA for anomaly detection and localization, respectively. In addition, the proposed method achieves encouraging performance with marginal training data, which is even more challenging.
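
    The localization step can be sketched as a cosine similarity between a learnable prompt embedding and CLIP-style patch features. The tensor shapes and the random stand-ins for CLIP outputs below are assumptions for illustration, not the CLIP-ADA code.

```python
# Minimal sketch (hypothetical shapes, not the CLIP-ADA code): localizing
# anomalies by comparing a learnable prompt embedding with CLIP-style patch
# features via cosine similarity.
import torch
import torch.nn.functional as F

embed_dim, grid = 512, 14                                  # assumed CLIP-like sizes
prompt = torch.nn.Parameter(torch.randn(embed_dim))        # learnable "abnormal" prompt
patch_feats = torch.randn(1, grid * grid, embed_dim)       # stand-in for CLIP patch tokens

# Cosine similarity of every patch to the prompt gives a coarse anomaly map ...
sim = F.cosine_similarity(patch_feats, prompt.view(1, 1, -1), dim=-1)
anomaly_map = sim.view(1, 1, grid, grid)
# ... which is upsampled to image resolution for pixel-level localization.
anomaly_map = F.interpolate(anomaly_map, size=(224, 224), mode="bilinear",
                            align_corners=False)
image_score = anomaly_map.amax()        # image-level anomaly score from the map
```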