
    Explore the Power of Dropout on Few-shot Learning

    The generalization power of the pre-trained model is key for few-shot deep learning. Dropout is a regularization technique used in traditional deep learning methods. In this paper, we explore the power of dropout on few-shot learning and provide insights into how to use it. Extensive experiments on few-shot object detection and few-shot image classification datasets, i.e., Pascal VOC, MS COCO, CUB, and mini-ImageNet, validate the effectiveness of our method. Comment: arXiv admin note: substantial text overlap with arXiv:2210.0640
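    As a rough illustration of the setting, the sketch below applies dropout to a small classification head on top of a frozen pre-trained backbone. The backbone choice, layer sizes, and dropout rate are assumptions for illustration only, not the configuration studied in the paper.

```python
# Minimal sketch (PyTorch): dropout when adapting a pre-trained backbone to a
# few-shot classification task. ResNet-18, the 0.5 dropout rate, and the 5-way
# head are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()          # use the backbone as a feature extractor
for p in backbone.parameters():      # freeze the pre-trained weights
    p.requires_grad = False

head = nn.Sequential(
    nn.Dropout(p=0.5),               # dropout regularizes the small few-shot head
    nn.Linear(512, 5),               # e.g. a 5-way episode over novel classes
)

def forward(images: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        feats = backbone(images)     # (batch, 512) frozen features
    return head(feats)               # logits for the novel classes
```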

    Isospin-dependent pairing interaction from nuclear matter calculations

    The isospin dependence of the effective pairing interaction is discussed on the basis of the Bardeen, Cooper, and Schrieffer theory of superfluid asymmetric nuclear matter. It is shown that the energy gap, calculated within the mean-field approximation in the range from symmetric nuclear matter to pure neutron matter, is not linearly dependent on the symmetry parameter owing to the nonlinear structure of the gap equation. Moreover, the construction of a zero-range effective pairing interaction compatible with the neutron and proton gaps in homogeneous matter is investigated, along with some recent proposals of isospin dependence tested on the nuclear data table.
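    The nonlinearity referred to here is that of the standard BCS gap equation, written schematically below in generic momentum-space notation (not the paper's own): the gap enters the quasiparticle energy on the right-hand side, so the solution does not scale linearly with the interaction or with the asymmetry.

```latex
% Schematic BCS gap equation in generic notation.
% \Delta appears on both sides through E(k'), which makes the equation nonlinear.
\Delta(k) = -\sum_{k'} V(k,k')\,\frac{\Delta(k')}{2E(k')},
\qquad
E(k') = \sqrt{\bigl(\varepsilon(k') - \mu\bigr)^{2} + \Delta(k')^{2}}
```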

    Towards Zero-Shot Personalized Table-to-Text Generation with Contrastive Persona Distillation

    Existing neural methods have shown great potential for generating informative text from structured tabular data while maintaining high content fidelity. However, few of them shed light on generating personalized expressions, which often requires well-aligned persona-table-text datasets that are difficult to obtain. To overcome these obstacles, we explore personalized table-to-text generation in a zero-shot setting, assuming that no well-aligned persona-table-text triples are required during training. To this end, we first collect a set of unpaired persona information and then propose a semi-supervised approach with contrastive persona distillation (S2P-CPD) to generate personalized context. Specifically, tabular data and persona information are first represented as separate latent variables. Then, we devise a latent space fusion technique to distill persona information into the table representation. In addition, a contrastive discriminator is employed to guarantee style consistency between the generated context and its corresponding persona. Experimental results on two benchmarks demonstrate S2P-CPD's ability to preserve both content fidelity and personalized expression. Comment: Accepted by ICASSP 202
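    For intuition, the sketch below shows one common form a contrastive style-consistency objective of this kind can take: generated-text embeddings are pulled toward the embedding of their own persona and pushed away from other personas in the batch. The InfoNCE-style formulation, embedding dimensions, and temperature are assumptions for illustration, not S2P-CPD's actual discriminator.

```python
# Minimal sketch of a contrastive style-consistency loss between generated-text
# embeddings and persona embeddings (illustrative, not the paper's objective).
import torch
import torch.nn.functional as F

def contrastive_style_loss(text_emb: torch.Tensor,
                           persona_emb: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """text_emb and persona_emb are (batch, dim); row i of each belongs to the
    same persona, so the diagonal entries are the positive pairs."""
    text_emb = F.normalize(text_emb, dim=-1)
    persona_emb = F.normalize(persona_emb, dim=-1)
    logits = text_emb @ persona_emb.t() / temperature   # (batch, batch) similarities
    targets = torch.arange(text_emb.size(0), device=text_emb.device)
    return F.cross_entropy(logits, targets)
```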

    Deep Autoencoder Neural Networks for Short-Term Traffic Congestion Prediction of Transportation Networks

    Traffic congestion prediction is critical for implementing intelligent transportation systems that improve the efficiency and capacity of transportation networks. Despite its importance, however, traffic congestion prediction is far less investigated than traffic flow prediction, partly due to the lack of large-scale, high-quality traffic congestion data and advanced algorithms. This paper proposes an accessible and general workflow to acquire large-scale traffic congestion data and to create traffic congestion datasets based on image analysis. With this workflow, we create a dataset named Seattle Area Traffic Congestion Status (SATCS) based on traffic congestion map snapshots from a publicly available online traffic service provider, the Washington State Department of Transportation. We then propose a deep autoencoder-based neural network model with symmetrical encoder and decoder layers to learn temporal correlations of a transportation network and to predict traffic congestion. Our experimental results on the SATCS dataset show that the proposed DCPN model can efficiently and effectively learn temporal relationships among congestion levels of the transportation network for traffic congestion forecasting. Our method outperforms two other state-of-the-art neural network models in prediction performance, generalization capability, and computation efficiency.
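    The sketch below shows the general shape of a deep autoencoder with mirrored encoder and decoder layers, as the abstract describes. The layer widths, input dimension (e.g. road segments times time steps), and the MSE objective are illustrative assumptions rather than the paper's DCPN configuration.

```python
# Minimal sketch (PyTorch) of a symmetric deep autoencoder for congestion data.
import torch.nn as nn

class CongestionAutoencoder(nn.Module):
    def __init__(self, input_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),          # compressed temporal representation
        )
        self.decoder = nn.Sequential(               # mirror image of the encoder
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, input_dim),              # predicted congestion levels
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training would typically minimize an MSE loss between the predicted and the
# observed congestion vector for the next time step.
```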

    Transient analysis of arm locking controller

    Arm locking is one of the key technologies for suppressing laser phase noise in space-based gravitational wave observatories. Since arm locking was proposed, the phase margin criterion has been used as the fundamental design strategy for controller development. In this paper, we find that this empirical engineering method cannot actually guarantee arm locking stability. Therefore, most of the advanced arm locking controllers reported so far may suffer from instability problems. After a comprehensive analysis of the transient responses of single arm locking, strict analytical stability criteria are summarized for the first time. These criteria are then generalized to dual arm locking, modified-dual arm locking, and common arm locking, and special considerations for the design of arm locking controllers in different architectures are also discussed. It is found that PI controllers can easily meet our stability criteria in most arm locking systems. Using a simple high-gain PI controller, it is possible to suppress the laser phase noise by 5 orders of magnitude within the science band. Our stability criteria can also be used in other feedback systems where several modules with different delays are connected in parallel. Comment: 24 pages, 24 figure
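    For readers unfamiliar with the controller form mentioned here, the sketch below evaluates the frequency response of a generic high-gain PI controller, C(s) = Kp + Ki/s. The gain values and frequency band are illustrative assumptions, not the controller designed in the paper.

```python
# Minimal sketch: open-loop gain of a simple PI controller over a
# low-frequency band (values are assumptions for illustration only).
import numpy as np

Kp, Ki = 1.0e4, 1.0e6                      # assumed proportional and integral gains
f = np.logspace(-4, 1, 500)                # 0.1 mHz .. 10 Hz, roughly a science-band range
s = 1j * 2 * np.pi * f
C = Kp + Ki / s                            # PI controller transfer function C(s)

gain_db = 20 * np.log10(np.abs(C))
idx_1mHz = np.argmin(np.abs(f - 1e-3))
print(f"controller gain at 1 mHz: {gain_db[idx_1mHz]:.1f} dB")
```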

    Bipartite Graph Pre-training for Unsupervised Extractive Summarization with Graph Convolutional Auto-Encoders

    Pre-trained sentence representations are crucial for identifying significant sentences in unsupervised document extractive summarization. However, the traditional two-step paradigm of pre-training and sentence ranking creates a gap due to differing optimization objectives. To address this issue, we argue that utilizing pre-trained embeddings derived from a process specifically designed to optimize cohesive and distinctive sentence representations helps rank significant sentences. To do so, we propose a novel graph pre-training auto-encoder that obtains sentence embeddings by explicitly modelling intra-sentential distinctive features and inter-sentential cohesive features through sentence-word bipartite graphs. These pre-trained sentence representations are then utilized in a graph-based ranking algorithm for unsupervised summarization. Our method achieves leading performance among unsupervised summarization frameworks by providing summary-worthy sentence representations, and it surpasses heavy BERT- or RoBERTa-based sentence representations in downstream tasks. Comment: Accepted by the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023
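    To make the sentence-word bipartite graph concrete, the sketch below builds one from a toy document and ranks sentences by a crude word-overlap score. The whitespace tokenization and degree-based scoring are illustrative stand-ins for the paper's graph convolutional auto-encoder and ranking algorithm, not a reproduction of them.

```python
# Minimal sketch: a sentence-word bipartite graph and a naive cohesion-based
# ranking (illustrative only; the paper uses learned embeddings instead).
from collections import defaultdict

sentences = [
    "the model learns sentence representations",
    "representations are ranked for the summary",
    "the summary keeps the most central sentences",
]

edges = defaultdict(set)                    # word -> set of sentence indices
for i, sent in enumerate(sentences):
    for word in sent.split():
        edges[word].add(i)

# Score a sentence by how many other sentences share its words, a crude proxy
# for the inter-sentential cohesion the pre-training is designed to capture.
scores = [
    sum(len(edges[w]) - 1 for w in set(sent.split()))
    for sent in sentences
]
ranking = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
print("extractive order:", ranking)
```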