
    Robustness measures for quantifying nonlocality

    We propose the generalized robustness as a quantifier of nonlocality and investigate its properties by comparing it with the white-noise and standard robustness measures. We show that white-noise robustness does not satisfy monotonicity under local operations and shared randomness (LOSR), whereas the other two measures do. To compare the standard and generalized robustness measures, we introduce the concept of inequivalence, a reversal of the order relationship depending on the choice of monotone. From an operational perspective, the inequivalence of monotones for two resourceful objects implies the absence of free operations connecting them. Applying this concept, we find that the standard and generalized robustness measures are inequivalent between even- and odd-dimensional cases up to eight dimensions, obtained using randomly sampled CGLMP measurement settings on a maximally entangled state. This study contributes to the resource theory of nonlocality and sheds light on comparing monotones via the concept of inequivalence, which is valid for all resource theories.
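
    For orientation, a sketch of the conventional resource-theoretic definitions (the paper's exact conventions may differ): the generalized robustness of a behavior $P$ is the minimal mixing weight with an arbitrary behavior $Q$ that renders the mixture local,

    \[
      R_g(P) \;=\; \min\Bigl\{\, s \ge 0 \;:\; \tfrac{P + sQ}{1+s} \in \mathcal{L} \ \text{for some behavior } Q \,\Bigr\},
    \]

    where $\mathcal{L}$ is the set of local behaviors. The standard robustness additionally requires the noise to be local, $Q \in \mathcal{L}$, while the white-noise robustness fixes $Q$ to the uniformly random (white-noise) behavior; removing the optimization over $Q$ is, roughly, what allows the white-noise measure to fail LOSR monotonicity.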

    Feature-aligned N-BEATS with Sinkhorn divergence

    In this study, we propose Feature-aligned N-BEATS as a domain generalization model for univariate time series forecasting. The proposed model extends the doubly residual stacking architecture of N-BEATS (Oreshkin et al. [34]) into a representation learning framework. It considers the marginal feature probability measures (i.e., the pushforward measures of the multiple source domains) induced by the composition of residual operators in each N-BEATS stack, and aligns them stack-wise via an entropic regularized Wasserstein distance referred to as the Sinkhorn divergence (Genevay et al. [14]). The loss function combines a standard forecasting loss over the multiple source domains with an alignment loss computed with the Sinkhorn divergence, which lets the model learn stack-wise invariant features across the source data sequences while retaining the interpretable design of N-BEATS. A comprehensive experimental evaluation demonstrates the model's forecasting and generalization capabilities in comparison with methods based on the original N-BEATS.
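
    As a concrete illustration, the sketch below (not the authors' implementation; sinkhorn_divergence, alignment_loss, and the pairwise-sum form of the loss are assumptions) computes the Sinkhorn divergence of Genevay et al. between empirical feature measures of source domains, the quantity underlying the stack-wise alignment loss:

    # Minimal NumPy sketch of the Sinkhorn divergence and a pairwise
    # stack-wise alignment loss; function names are illustrative.
    import numpy as np
    from scipy.special import logsumexp

    def _ot_eps(x, y, eps=0.1, n_iters=200):
        """Entropic OT cost OT_eps between uniform empirical measures on x, y."""
        C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-Euclidean costs
        n, m = C.shape
        f, g = np.zeros(n), np.zeros(m)
        for _ in range(n_iters):  # log-domain Sinkhorn updates of the dual potentials
            f = -eps * (logsumexp((g[None, :] - C) / eps, axis=1) - np.log(m))
            g = -eps * (logsumexp((f[:, None] - C) / eps, axis=0) - np.log(n))
        return f.mean() + g.mean()  # dual objective value at (approximate) optimum

    def sinkhorn_divergence(x, y, eps=0.1):
        """Debiased S_eps = OT_eps(x, y) - (OT_eps(x, x) + OT_eps(y, y)) / 2."""
        return _ot_eps(x, y, eps) - 0.5 * (_ot_eps(x, x, eps) + _ot_eps(y, y, eps))

    def alignment_loss(stack_features, eps=0.1):
        """Sum of Sinkhorn divergences over all pairs of source-domain feature
        batches from the same stack; each entry is a (batch, dim) array."""
        k = len(stack_features)
        return sum(sinkhorn_divergence(stack_features[i], stack_features[j], eps)
                   for i in range(k) for j in range(i + 1, k))

    # Example: features of one stack from three (synthetic) source domains.
    rng = np.random.default_rng(0)
    feats = [rng.normal(size=(64, 16)) + shift for shift in range(3)]
    print(alignment_loss(feats))  # grows as the domain features drift apart

    The full training objective would then add such an alignment term, weighted by a hyperparameter, to the forecasting loss summed over the source domains; the two self-transport terms debias the divergence so it vanishes when the compared feature measures coincide.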