357 research outputs found

    Client: Cross-variable Linear Integrated Enhanced Transformer for Multivariate Long-Term Time Series Forecasting

    Long-term time series forecasting (LTSF) is a crucial aspect of modern society, playing a pivotal role in long-term planning and early warning systems. While many Transformer-based models have recently been introduced for LTSF, doubts have been raised regarding the effectiveness of attention modules in capturing cross-time dependencies. In this study, we design a mask-series experiment to examine this question and subsequently propose the "Cross-variable Linear Integrated ENhanced Transformer for Multivariate Long-Term Time Series Forecasting" (Client), an advanced model that outperforms both traditional Transformer-based models and linear models. Client employs linear modules to learn trend information and attention modules to capture cross-variable dependencies. It also simplifies the embedding and position-encoding layers and replaces the decoder module with a projection layer. Essentially, Client incorporates both non-linearity and cross-variable dependencies, which sets it apart from conventional linear models and Transformer-based models. Extensive experiments on nine real-world datasets confirm the SOTA performance of Client, with the lowest computation time and memory consumption among the Transformer-based models compared. Our code is available at https://github.com/daxin007/Client.
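The core design of the abstract above (a linear module along the time axis for trend, attention applied across variables rather than across time, and a projection layer in place of a decoder) can be illustrated with a minimal NumPy sketch. All dimensions, weight scales, and names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_vars, seq_len, pred_len, d = 7, 96, 24, 16   # toy sizes (assumed)

# Linear trend module: one matrix maps the history window to the horizon.
W_trend = rng.normal(size=(seq_len, pred_len)) * 0.01
# Cross-variable attention: each variable's whole series is one token.
W_embed = rng.normal(size=(seq_len, d)) * 0.01
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
# Projection layer stands in for a decoder.
W_proj = rng.normal(size=(d, pred_len)) * 0.01

x = rng.normal(size=(n_vars, seq_len))        # one sample: variables x time
trend = x @ W_trend                           # (n_vars, pred_len) linear trend
h = x @ W_embed                               # tokens = variables, not time steps
q, k, v = h @ Wq, h @ Wk, h @ Wv
mix = softmax(q @ k.T / np.sqrt(d)) @ v       # (n_vars, n_vars) variable mixing
y = trend + mix @ W_proj                      # forecast: (n_vars, pred_len)
print(y.shape)  # (7, 24)
```

The key contrast with a conventional time-series Transformer is the token axis: attention mixes variables, so cross-time structure is carried entirely by the linear branch.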

    MoE-AMC: Enhancing Automatic Modulation Classification Performance Using Mixture-of-Experts

    Automatic Modulation Classification (AMC) plays a vital role in time series analysis, such as signal classification and identification within wireless communications. Deep learning-based AMC models have demonstrated significant potential in this domain. However, current AMC models inadequately account for the disparities between handling signals under low and high Signal-to-Noise Ratio (SNR) conditions, resulting in uneven performance. In this study, we propose MoE-AMC, a novel Mixture-of-Experts (MoE) based model specifically crafted to address AMC in a well-balanced manner across varying SNR conditions. Utilizing the MoE framework, MoE-AMC seamlessly combines the strengths of LSRM (a Transformer-based model) for handling low-SNR signals and HSRM (a ResNet-based model) for high-SNR signals. This integration enables MoE-AMC to achieve leading performance in modulation classification, showcasing its efficacy in capturing distinctive signal features under diverse SNR scenarios. We conducted experiments using the RML2018.01a dataset, where MoE-AMC achieved an average classification accuracy of 71.76% across different SNR levels, surpassing previous SOTA models by nearly 10%. This study represents a pioneering application of MoE techniques to AMC, offering a promising avenue for improving signal classification accuracy in wireless communication systems.
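The MoE idea described above, routing between an expert specialized for low SNR and one for high SNR, reduces to a gated blend of expert outputs. The sketch below uses a scalar SNR score to drive the gate; the experts, the gating rule, and the class count are placeholder assumptions, not the paper's LSRM/HSRM architectures:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def moe_predict(x, snr_score, low_snr_expert, high_snr_expert):
    """Blend two experts with a soft gate; here the gate is driven by an
    SNR score (a hypothetical stand-in for a learned gating network)."""
    g = sigmoid(snr_score)                      # g -> 1 as SNR grows
    return g * high_snr_expert(x) + (1 - g) * low_snr_expert(x)

# Toy experts returning class probabilities over 4 modulation classes.
low = lambda x: np.array([0.7, 0.1, 0.1, 0.1])    # low-SNR specialist
high = lambda x: np.array([0.1, 0.1, 0.1, 0.7])   # high-SNR specialist

x = np.zeros(128)                                 # dummy I/Q feature vector
p_low = moe_predict(x, -5.0, low, high)           # low-SNR input: gate ~ 0
p_high = moe_predict(x, +5.0, low, high)          # high-SNR input: gate ~ 1
print(p_low.argmax(), p_high.argmax())  # 0 3
```

The soft gate means both experts always contribute a little, which keeps the combined model differentiable end to end; a hard top-1 router would be the sparse alternative.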

    Differentially Private Range Counting in Planar Graphs for Spatial Sensing


    Learning with Constraint Learning: New Perspective, Solution Strategy and Various Applications

    The complexity of learning problems, such as Generative Adversarial Networks (GANs) and their variants, multi-task and meta-learning, hyper-parameter learning, and a variety of real-world vision applications, demands a deeper understanding of their underlying coupling mechanisms. Existing approaches often address these problems in isolation, lacking a unified perspective that can reveal commonalities and enable effective solutions. Therefore, in this work, we propose a new framework, named Learning with Constraint Learning (LwCL), that can holistically examine these challenges and provide a unified methodology to tackle all of the above-mentioned complex learning and vision problems. Specifically, LwCL is designed as a general hierarchical optimization model that captures the essence of these diverse learning and vision problems. Furthermore, we develop a gradient-response-based fast solution strategy to overcome the optimization challenges of the LwCL framework. Our proposed framework efficiently addresses a wide range of applications in learning and vision, encompassing three categories and nine different problem types. Extensive experiments on synthetic tasks and real-world applications verify the effectiveness of our approach. The LwCL framework offers a comprehensive solution for tackling complex machine learning and computer vision problems, bridging the gap between theory and practice.
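The hierarchical (bilevel) optimization structure mentioned above can be made concrete with a tiny worked example: an upper-level loss evaluated at the response of a lower-level solver, with the upper gradient taken through that response. The specific losses, the finite-difference approximation, and all constants are illustrative assumptions, not the paper's gradient-response strategy:

```python
def inner_response(x, steps=50, lr=0.2):
    """Lower level: y*(x) = argmin_y 0.5*(y - x)^2, solved by gradient
    descent. Here y*(x) = x exactly, so the response is easy to verify."""
    y = 0.0
    for _ in range(steps):
        y -= lr * (y - x)            # gradient of 0.5*(y - x)^2 in y
    return y

def upper_grad(x, eps=1e-4):
    """Upper level: F(x) = (y*(x) - 3)^2, differentiated *through* the
    lower solver (central differences stand in for unrolled autodiff)."""
    F = lambda t: (inner_response(t) - 3.0) ** 2
    return (F(x + eps) - F(x - eps)) / (2 * eps)

x = 0.0
for _ in range(200):                 # upper-level gradient descent
    x -= 0.05 * upper_grad(x)
print(round(x, 3))  # 3.0 -- the minimizer of F(x) = (x - 3)^2
```

Because y*(x) = x here, F(x) = (x - 3)^2 and the hierarchical loop should recover x = 3; in realistic instances (GANs, meta-learning) the lower problem has no closed form and differentiating through the response is the hard part.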

    Publishing Asynchronous Event Times with Pufferfish Privacy


    Extending the unified subhalo model to warm dark matter

    Using a set of high-resolution N-body simulations, we extend the unified distribution model of cold dark matter (CDM) subhaloes to the warm dark matter (WDM) case. The same model framework, combining the unevolved mass function, the unevolved radial distribution, and tidal stripping, can predict the mass function and spatial distribution of subhaloes in both CDM and WDM simulations. The dependence of the model on the DM particle property is universally parameterized through the half-mode mass of the initial power spectrum. Compared with the CDM model, the WDM model differs most notably in two aspects. 1) In contrast to the power-law form in CDM, the unevolved subhalo mass function for WDM is scale-dependent at the low-mass end due to the cut-off in the initial power spectrum. 2) WDM subhaloes are more vulnerable to tidal stripping and disruption due to their lower concentrations at accretion time, and their survival rate is also found to depend on the infall mass. Accounting for these differences, the model predicts a final WDM subhalo mass function that is also proportional to the unevolved subhalo mass function. The radial distribution of WDM subhaloes is predicted to be mass-dependent: for low-mass subhaloes, the radial distribution is flatter in the inner halo and steeper in the outer halo than the CDM counterpart, due to the scale-dependent unevolved mass function and the enhanced tidal stripping. The code for sampling subhaloes according to our generalized model is available at https://github.com/fhtouma/subgen2. (15 pages, 14 figures.)
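The half-mode-mass parameterization and the scale-dependent low-mass cut-off described above can be illustrated numerically. The sketch below applies a generic suppression factor of the form (1 + γ m_hm/m)^(-β) to a power-law CDM mass function; the functional form and the parameter values are illustrative placeholders, not the paper's fitted model:

```python
import numpy as np

def unevolved_mf_cdm(m, norm=1e-2, alpha=0.9):
    """Illustrative power-law unevolved subhalo mass function, dN/dln m ~ m^-alpha."""
    return norm * m ** (-alpha)

def wdm_suppression(m, m_hm, beta=1.3, gamma=2.7):
    """Generic half-mode-mass suppression factor (1 + gamma*m_hm/m)^-beta;
    beta and gamma here are placeholders, not the paper's fit."""
    return (1.0 + gamma * m_hm / m) ** (-beta)

m = np.logspace(6, 11, 6)       # toy grid of subhalo masses [Msun]
m_hm = 1e8                      # half-mode mass set by the WDM particle (assumed)
ratio = wdm_suppression(m, m_hm)          # WDM/CDM mass-function ratio
mf_wdm = unevolved_mf_cdm(m) * ratio      # suppressed WDM mass function
print(np.round(ratio, 3))
```

The ratio rises monotonically toward unity well above the half-mode mass and falls steeply below it, reproducing the qualitative scale dependence at the low-mass end that the abstract describes.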