217 research outputs found
Understanding quantum machine learning also requires rethinking generalization
Quantum machine learning models have shown successful generalization
performance even when trained with few data. In this work, through systematic
randomization experiments, we show that traditional approaches to understanding
generalization fail to explain the behavior of such quantum models. Our
experiments reveal that state-of-the-art quantum neural networks accurately fit
random states and random labeling of training data. This ability to memorize
random data defies current notions of small generalization error,
problematizing approaches that build on complexity measures such as the VC
dimension, the Rademacher complexity, and all their uniform relatives. We
complement our empirical results with a theoretical construction showing that
quantum neural networks can fit arbitrary labels to quantum states, hinting at
their memorization ability. Our results do not preclude the possibility of good
generalization with few training data but rather rule out any possible
guarantees based only on the properties of the model family. These findings
expose a fundamental challenge in the conventional understanding of
generalization in quantum machine learning and highlight the need for a
paradigm shift in the design of quantum models for machine learning tasks.
Comment: 13+4 pages, 3 figures
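The randomization test the abstract describes can be illustrated with a classical toy model (a deliberately simplified stand-in, not the paper's quantum neural networks): any model with enough effective capacity fits purely random labels perfectly on its training set, so a generalization guarantee that depends only on the model family's capacity (VC dimension, Rademacher complexity) says nothing useful about it. A minimal sketch, using a 1-nearest-neighbour memorizer as the hypothetical high-capacity model:

```python
import random

class MemorizingModel:
    """A 1-nearest-neighbour 'model': effectively unlimited capacity."""
    def fit(self, xs, ys):
        self.data = list(zip(xs, ys))
        return self
    def predict(self, x):
        # label of the nearest stored point by squared Euclidean distance
        nearest = min(self.data,
                      key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
        return nearest[1]

def training_accuracy(model, xs, ys):
    return sum(model.predict(x) == y for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
xs = [tuple(random.random() for _ in range(4)) for _ in range(50)]
true_ys = [int(sum(x) > 2.0) for x in xs]      # structured labels
rand_ys = [random.randint(0, 1) for _ in xs]   # fully random labels

acc_true = training_accuracy(MemorizingModel().fit(xs, true_ys), xs, true_ys)
acc_rand = training_accuracy(MemorizingModel().fit(xs, rand_ys), xs, rand_ys)
```

Both accuracies come out at 1.0: the model fits random labels just as perfectly as structured ones, which is exactly the behaviour that makes uniform complexity-based bounds vacuous.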
Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip
Although Shannon theory states that it is asymptotically optimal to separate
the source and channel coding as two independent processes, in many practical
communication scenarios this decomposition is limited by the finite bit-length
and computational power for decoding. Recently, neural joint source-channel
coding (NECST) was proposed to sidestep this problem. While it leverages the
advancements of amortized inference and deep learning to improve the encoding
and decoding process, it still cannot always achieve compelling results in
terms of compression and error correction performance due to the limited
robustness of its learned coding networks. In this paper, motivated by the
inherent connections between neural joint source-channel coding and discrete
representation learning, we propose a novel regularization method called
Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of
the neural joint source-channel coding scheme. More specifically, on the
encoder side, we propose to explicitly maximize the mutual information between
the codeword and data; while on the decoder side, the amortized reconstruction
is regularized within an adversarial framework. Extensive experiments conducted
on various real-world datasets demonstrate that our IABF achieves
state-of-the-art performance on both compression and error correction
benchmarks and outperforms the baselines by a significant margin.
Comment: AAAI202
An autonomous GNSS anti-spoofing technique
In recent years, the problem of Position, Navigation and Timing (PNT) resiliency has received significant attention due to increasing awareness of threats to, and the vulnerability of, current GNSS signals. Several proposed solutions make use of cryptography to protect against spoofing. A limitation of cryptographic techniques is that they introduce communication and processing overhead and may degrade availability and continuity for GNSS users. This paper introduces autonomous, non-cryptographic anti-spoofing mechanisms that exploit semi-codeless receiver techniques to detect spoofing for signals with a component that uses spreading-code encryption.
Caparra, Gianluca; Wullems, Christian; Ioannides, Rigas T.
Investigating the impact of mindfulness meditation training on working memory: A mathematical modeling approach
We investigated whether mindfulness training (MT) influences information processing in a working memory task with complex visual stimuli. Participants were tested before (T1) and after (T2) participation in an intensive one-month MT retreat, and their performance was compared with that of an age- and education-matched control group. Accuracy did not differ across groups at either time point. Response times were faster and significantly less variable in the MT versus the control group at T2. Since these results could be due to changes in mnemonic processes, speed–accuracy trade-off, or nondecisional factors (e.g., motor execution), we used a mathematical modeling approach to disentangle these factors. The EZ-diffusion model (Wagenmakers, van der Maas, & Grasman, Psychonomic Bulletin & Review, 14(1), 3–22, 2007) suggested that MT leads to improved information quality and reduced response conservativeness, with no changes in nondecisional factors. The noisy exemplar model further suggested that the increase in information quality reflected a decrease in encoding noise and not an increase in forgetting. Thus, mathematical modeling may help clarify the mechanisms by which MT produces salutary effects on performance.
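The EZ-diffusion model cited above has a closed-form solution: from proportion correct, response-time variance, and mean response time it recovers drift rate (information quality), boundary separation (response conservativeness), and nondecision time. A sketch of the published equations (Wagenmakers et al., 2007), with the conventional scaling parameter s = 0.1:

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion estimates (Wagenmakers et al., 2007).

    pc  : proportion correct (must not be exactly 0, 0.5, or 1)
    vrt : variance of correct response times (s^2)
    mrt : mean of correct response times (s)
    Returns (v, a, ter): drift rate, boundary separation, nondecision time.
    """
    L = math.log(pc / (1 - pc))                    # logit of accuracy
    x = L * (L * pc ** 2 - L * pc + pc - 0.5) / vrt
    v = math.copysign(1, pc - 0.5) * s * x ** 0.25  # drift rate
    a = s ** 2 * L / v                              # boundary separation
    y = -v * a / s ** 2
    mdt = (a / (2 * v)) * (1 - math.exp(y)) / (1 + math.exp(y))  # mean decision time
    ter = mrt - mdt                                 # nondecision time
    return v, a, ter

# Worked example from the original paper: Pc=.802, VRT=.112, MRT=.723
v, a, ter = ez_diffusion(0.802, 0.112, 0.723)
# v ≈ 0.0999, a ≈ 0.140, ter ≈ 0.300
```

In the study's terms, a higher v after training corresponds to improved information quality, and a lower a to reduced response conservativeness, while ter captures the nondecisional component that was found unchanged.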
Detection of Human Vigilance State During Locomotion Using Wearable FNIRS
Human vigilance is a cognitive function that requires sustained attention toward changes in the environment. Vigilance detection is a widely investigated topic that can be approached in various ways. Most studies have focused on stationary vigilance detection because of strong interference from motion artifacts, which are prominent during common movements such as walking. Functional near-infrared spectroscopy (fNIRS) is a preferred modality for vigilance detection owing to its safety, low cost, and ease of implementation. However, fNIRS is not immune to motion-artifact interference, so vigilance detection performance degrades severely during locomotion. Properly treating and removing walking-induced motion artifacts from the contaminated signals is therefore crucial for accurate vigilance detection. This study compared vigilance-level detection in stationary and walking states and confirmed that detection performance during walking is significantly deteriorated.
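The abstract does not specify the study's artifact-removal pipeline, but one generic way to suppress a step-locked walking artifact, assuming the cadence frequency is known, is to regress out a sinusoid at that frequency by least squares. A minimal illustrative sketch (all names and parameter values here are hypothetical, not the study's method):

```python
import math

def remove_sinusoid(signal, fs, f_artifact):
    """Least-squares fit and removal of a sinusoid at a known frequency."""
    n = len(signal)
    w = 2 * math.pi * f_artifact / fs
    s = [math.sin(w * t) for t in range(n)]
    c = [math.cos(w * t) for t in range(n)]
    # normal equations for the 2-parameter model a*sin + b*cos
    sss = sum(x * x for x in s)
    scc = sum(x * x for x in c)
    ssc = sum(x * y for x, y in zip(s, c))
    sys_ = sum(y * x for y, x in zip(signal, s))
    syc = sum(y * x for y, x in zip(signal, c))
    det = sss * scc - ssc * ssc
    amp_s = (sys_ * scc - syc * ssc) / det   # sin-component amplitude
    amp_c = (syc * sss - sys_ * ssc) / det   # cos-component amplitude
    return [y - amp_s * si - amp_c * ci for y, si, ci in zip(signal, s, c)]

fs, f_step = 10.0, 1.5                       # 10 Hz sampling, 1.5 Hz cadence
clean = [1.0] * 200                          # flat stand-in for the true signal
artifact = [2.0 * math.sin(2 * math.pi * f_step / fs * k + 0.7)
            for k in range(200)]
contaminated = [x + y for x, y in zip(clean, artifact)]
recovered = remove_sinusoid(contaminated, fs, f_step)
```

Because the simulated artifact lies exactly in the span of the sine/cosine regressors, the recovered signal matches the clean one to floating-point precision; real walking artifacts are broader-band, which is why more elaborate treatment is needed in practice.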
GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner
Graph self-supervised learning (SSL), including contrastive and generative
approaches, offers great potential to address the fundamental challenge of
label scarcity in real-world graph data. Among these techniques, masked graph
autoencoders (e.g., GraphMAE)--one type of
generative method--have recently produced promising results. The idea behind
this is to reconstruct the node features (or structures)--that are randomly
masked from the input--with the autoencoder architecture. However, the
performance of masked feature reconstruction naturally relies on the
discriminability of the input features and is usually vulnerable to disturbance
in the features. In this paper, we present a masked self-supervised learning
framework GraphMAE2 with the goal of overcoming this issue. The idea is to
impose regularization on feature reconstruction for graph SSL. Specifically, we
design the strategies of multi-view random re-mask decoding and latent
representation prediction to regularize the feature reconstruction. The
multi-view random re-mask decoding is to introduce randomness into
reconstruction in the feature space, while the latent representation prediction
is to enforce the reconstruction in the embedding space. Extensive experiments
show that GraphMAE2 can consistently generate top results on various public
datasets, including at least 2.45% improvements over state-of-the-art baselines
on ogbn-Papers100M with 111M nodes and 1.6B edges.
Comment: Accepted to WWW'2
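The masking schedule described above can be sketched structurally (the real GraphMAE2 uses GNN encoders/decoders and a learned [MASK] embedding; the names below are illustrative): the encoder input masks a node subset, and during decoding the nodes the encoder did see are independently re-masked K times, yielding K reconstruction views whose losses would be averaged.

```python
import random

def sample_mask(n_nodes, ratio, rng):
    """Pick a random subset of node indices to mask."""
    idx = list(range(n_nodes))
    rng.shuffle(idx)
    return set(idx[: int(n_nodes * ratio)])

def apply_mask(features, masked, mask_token):
    """Replace the features of masked nodes with the mask token."""
    return [mask_token if i in masked else f for i, f in enumerate(features)]

rng = random.Random(0)
features = [[float(i), float(i) * 2] for i in range(10)]  # toy node features
MASK = [0.0, 0.0]                                         # stand-in [MASK] token

enc_masked = sample_mask(len(features), 0.5, rng)         # encoder-side mask
enc_input = apply_mask(features, enc_masked, MASK)

# multi-view random re-masking of the *visible* nodes (K = 3 views)
views = []
for _ in range(3):
    visible = [i for i in range(len(features)) if i not in enc_masked]
    remasked = {i for i in visible if rng.random() < 0.5}
    views.append(apply_mask(enc_input, enc_masked | remasked, MASK))
```

Each view hides a different random portion of what the encoder saw, which is what injects randomness into feature-space reconstruction; the latent-representation prediction branch is a separate objective not shown here.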