Effect of finite computational domain on turbulence scaling law in both physical and spectral spaces
The well-known translation between the power law of the energy spectrum and that of the correlation function or the second-order structure function has been widely used in analyzing random data. Here, we show that the translation is valid only in proper scaling regimes. The regimes of valid translation are different for the correlation function and the structure function; indeed, they do not overlap. Furthermore, in practice, the power laws exist only over a finite range of scales. We show that this finite range makes the translation inexact even in the proper scaling regime, with an error that depends on the scaling exponent. The current findings are applicable to data analysis in fluid turbulence and other stochastic systems.
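For context, the classical translation at issue has the following standard form (a textbook result, not quoted from this paper): if the one-dimensional energy spectrum scales as $E(k) \sim C k^{-p}$, then

    $S_2(r) = \langle [u(x+r) - u(x)]^2 \rangle \sim C'\, r^{p-1}$   for $1 < p < 3$,
    $R(r) = \langle u(x+r)\, u(x) \rangle \sim C''\, r^{p-1}$        for $0 < p < 1$,

so the structure-function and correlation-function versions of the translation hold in disjoint ranges of the exponent $p$, consistent with the abstract's remark that the two valid scaling regimes do not overlap. (Kolmogorov turbulence corresponds to $p = 5/3$, giving $S_2(r) \sim r^{2/3}$.)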
Retractions and Gorenstein homological properties
We associate to a localizable module a left retraction of algebras; it is a homological ring epimorphism that preserves singularity categories. We study the behavior of left retractions with respect to Gorenstein homological properties (for example, being Gorenstein algebras or CM-free). We apply the results to Nakayama algebras. It turns out that for a connected Nakayama algebra $A$, there exists a connected self-injective Nakayama algebra $B$ such that there is a sequence of left retractions linking $A$ to $B$; in particular, the singularity category of $A$ is triangle equivalent to the stable category of $B$. We classify connected Nakayama algebras with at most three simple modules according to their Gorenstein homological properties.
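In the notation above ($A$ and $B$ are placeholder names we have supplied for the two algebras; the source text dropped the original symbols), the final consequence can be displayed as a triangle equivalence

    $\mathbf{D}_{\mathrm{sg}}(A) \;\simeq\; \underline{\mathrm{mod}}\, B$,

where $\mathbf{D}_{\mathrm{sg}}(A)$ is the singularity category and $\underline{\mathrm{mod}}\, B$ is the stable category of finitely generated $B$-modules, which carries a triangulated structure because $B$ is self-injective.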
Demonstration of Einstein-Podolsky-Rosen Steering with Enhanced Subchannel Discrimination
Einstein-Podolsky-Rosen (EPR) steering describes a quantum nonlocal phenomenon in which one party can nonlocally affect the other's state through local measurements. It reveals an additional concept of quantum nonlocality, which stands between quantum entanglement and Bell nonlocality. Recently, a quantum information task named subchannel discrimination (SD) has been shown to provide a necessary and sufficient characterization of EPR steering: the success probability of SD using steerable states is higher than that achievable with any unsteerable states, even entangled ones. However, the detailed construction of such subchannels and the experimental realization of the corresponding task remain technologically challenging. In this work, we designed a feasible collection of subchannels for a quantum channel and experimentally demonstrated the corresponding SD task, in which the probabilities of correct discrimination are clearly enhanced by exploiting steerable states. Our results provide a concrete example to operationally demonstrate EPR steering and shed new light on its potential applications.
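For orientation, the SD task has the following schematic form (after the framework of Piani and Watrous, which established the characterization the abstract cites; the notation is ours, not the paper's). A channel $\Lambda$ is decomposed into an instrument, i.e. a collection of subchannels $\{\Lambda_a\}$ with $\Lambda = \sum_a \Lambda_a$, and the goal is to guess which subchannel acted on the input. For an input state $\rho$ and a measurement $\{M_a\}$ on the output, the success probability is

    $p_{\mathrm{succ}} = \max_{\{M_a\}} \sum_a \mathrm{Tr}\big[ M_a\, \Lambda_a(\rho) \big]$.

In the steering-assisted version, the input is one half of a bipartite state while the other half is measured locally; a state is steerable exactly when there exists an instrument for which it yields a strictly higher $p_{\mathrm{succ}}$ than any unsteerable state.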
Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks
Factorization Machines (FMs) are a supervised learning approach that enhances the linear regression model by incorporating second-order feature interactions. Despite its effectiveness, FM is hindered by modelling all feature interactions with the same weight, as not all feature interactions are equally useful and predictive. For example, interactions with useless features may even introduce noise and degrade performance. In
this work, we improve FM by discriminating the importance of different feature
interactions. We propose a novel model named Attentional Factorization Machine
(AFM), which learns the importance of each feature interaction from data via a
neural attention network. Extensive experiments on two real-world datasets
demonstrate the effectiveness of AFM. Empirically, AFM outperforms FM on the regression task and consistently outperforms the state-of-the-art deep learning methods Wide&Deep and DeepCross with a much simpler structure and fewer model parameters. Our implementation of AFM is publicly available at:
https://github.com/hexiangnan/attentional_factorization_machine
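As a minimal sketch, the attention-pooled pairwise interaction term of AFM can be written as follows in PyTorch; the class name, hyperparameters, and field handling are illustrative assumptions, not the authors' reference implementation (which is linked above).

    import torch
    import torch.nn as nn

    class AFMInteraction(nn.Module):
        # Attention-pooled pairwise interaction term of AFM:
        # p^T sum_{i<j} a_ij (v_i * v_j), with a_ij produced by a
        # one-layer attention network over the element-wise products.
        def __init__(self, embed_dim: int, attn_dim: int):
            super().__init__()
            self.attn = nn.Sequential(
                nn.Linear(embed_dim, attn_dim),      # W (v_i * v_j) + b
                nn.ReLU(),
                nn.Linear(attn_dim, 1, bias=False),  # projection h
            )
            self.p = nn.Linear(embed_dim, 1, bias=False)  # final projection p

        def forward(self, v: torch.Tensor) -> torch.Tensor:
            # v: (batch, num_fields, embed_dim), embeddings of the active features
            n = v.size(1)
            i, j = torch.triu_indices(n, n, offset=1)
            pair = v[:, i] * v[:, j]                # element-wise products v_i * v_j
            scores = self.attn(pair)                # (batch, num_pairs, 1)
            weights = torch.softmax(scores, dim=1)  # attention over interactions
            pooled = (weights * pair).sum(dim=1)    # attention-weighted pooling
            return self.p(pooled).squeeze(-1)       # scalar interaction term

A full AFM prediction adds a global bias and the first-order linear terms to this interaction term; learning the weights $a_{ij}$ from data is what distinguishes AFM from plain FM, which fixes them all to 1.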
Monomial Hopf Algebras
Let $k$ be a field of characteristic 0 containing all roots of unity. We classify all the Hopf structures on monomial $k$-coalgebras, or, in dual version, on monomial $k$-algebras.
Graph Contrastive Learning with Cohesive Subgraph Awareness
Graph contrastive learning (GCL) has emerged as a state-of-the-art strategy
for learning representations of diverse graphs including social and biomedical
networks. GCL widely uses stochastic graph topology augmentation, such as
uniform node dropping, to generate augmented graphs. However, such stochastic
augmentations may severely damage the intrinsic properties of a graph and
degrade the subsequent representation learning process. We argue that incorporating an awareness of cohesive subgraphs during the graph augmentation and learning processes can enhance GCL performance. To this end, we propose CTAug, a novel unified framework that seamlessly integrates cohesion awareness into various existing GCL mechanisms. In particular, CTAug
comprises two specialized modules: topology augmentation enhancement and graph
learning enhancement. The former module generates augmented graphs that
carefully preserve cohesion properties, while the latter module bolsters the
graph encoder's ability to discern subgraph patterns. Theoretical analysis
shows that CTAug can strictly improve existing GCL mechanisms. Empirical
experiments verify that CTAug can achieve state-of-the-art performance for
graph representation learning, especially for graphs with high degrees. The
code is available at https://doi.org/10.5281/zenodo.10594093, or
https://github.com/wuyucheng2002/CTAug
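To illustrate the idea of cohesion-aware topology augmentation, here is a minimal sketch of a node-dropping augmentation that protects a cohesive subgraph; the k-core criterion, the function name, and the parameter values are illustrative assumptions, not CTAug's exact procedure (see the linked code for that).

    import random
    import networkx as nx

    def cohesion_aware_node_drop(G: nx.Graph, drop_rate: float = 0.2, k: int = 3) -> nx.Graph:
        # Drop nodes uniformly at random, but never drop members of the
        # k-core, so the densest cohesive substructure survives augmentation.
        core = nx.core_number(G)                         # node -> core number
        protected = {n for n, c in core.items() if c >= k}
        candidates = [n for n in G if n not in protected]
        n_drop = min(int(drop_rate * G.number_of_nodes()), len(candidates))
        H = G.copy()
        H.remove_nodes_from(random.sample(candidates, n_drop))
        return H

Compared with uniform node dropping, a scheme like this keeps the cohesive subgraph intact in every augmented view, which is the property the paper argues stochastic augmentation tends to destroy.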