The hidden subgroup problem and quantum computation using group representations
The hidden subgroup problem is the foundation of many quantum algorithms. An efficient solution is known for the problem over abelian groups, employed by both Simon's algorithm and Shor's factoring and discrete log algorithms. The nonabelian case, however, remains open; an efficient solution would give rise to an efficient quantum algorithm for graph isomorphism. We fully analyze a natural generalization of the algorithm for the abelian case to the nonabelian case and show that the algorithm determines the normal core of a hidden subgroup: in particular, normal subgroups can be determined. We show, however, that this immediate generalization of the abelian algorithm does not efficiently solve graph isomorphism.
A Rare Case of Hip Pain Secondary to Pigmented Villonodular Synovitis
A 19-year-old Asian male presented to our emergency department with atraumatic right hip pain radiating to the right groin associated with pain on ambulation. Magnetic resonance imaging of the right hip with and without contrast revealed the diagnosis. Pigmented villonodular synovitis is a rare, monoarticular benign tumor originating from the synovium of the joint. The treatment is synovectomy of the pathological joint to prevent further disease progression.
Advancing Regular Language Reasoning in Linear Recurrent Neural Networks
In recent studies, linear recurrent neural networks (LRNNs) have achieved
Transformer-level performance in natural language modeling and long-range
modeling while offering rapid parallel training and constant inference costs.
With the resurged interest in LRNNs, we study whether they can learn the hidden
rules in training sequences, such as the grammatical structures of regular
language. We theoretically analyze some existing LRNNs and discover their
limitations on regular language. Motivated by the analysis, we propose a new
LRNN equipped with a block-diagonal and input-dependent transition matrix.
Experiments suggest that the proposed model is the only LRNN that can perform
length extrapolation on regular language tasks such as Sum, Even Pair, and
Modular Arithmetic.
Comment: The first two authors contributed equally to this work.
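The core idea of a block-diagonal, input-dependent transition matrix can be sketched as follows. This is a toy illustration only: the block sizes, the linear dependence of each block on the input, and all variable names are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, b, n_blocks = 4, 2, 3                 # hypothetical sizes
# Each block's (b, b) transition matrix is produced from the input x,
# here via a simple linear map (an assumed parameterization).
W = [rng.normal(size=(b, b, d_in)) * 0.1 for _ in range(n_blocks)]
U = rng.normal(size=(n_blocks * b, d_in)) * 0.1

def step(h, x):
    """One linear-RNN step: h_t = A(x_t) h_{t-1} + U x_t,
    where A(x_t) is block-diagonal with input-dependent blocks."""
    new_h = []
    for k in range(n_blocks):
        A_k = W[k] @ x                      # (b, b) block, depends on x
        new_h.append(A_k @ h[k * b:(k + 1) * b])
    return np.concatenate(new_h) + U @ x

h = np.zeros(n_blocks * b)
for x in rng.normal(size=(5, d_in)):
    h = step(h, x)
```

Because the transition is block-diagonal, each block evolves independently, which keeps the per-step cost linear in the state size while still letting the dynamics depend on the current input.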
Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation
An ideal length-extrapolatable Transformer language model can handle
sequences longer than the training length without any fine-tuning. Such
long-context utilization capability relies heavily on a flexible positional
embedding design. Upon investigating the flexibility of existing large
pre-trained Transformer language models, we find that the T5 family deserves a
closer look, as its positional embeddings capture rich and flexible attention
patterns. However, T5 suffers from the dispersed attention issue: the longer
the input sequence, the flatter the attention distribution. To alleviate the
issue, we propose two attention alignment strategies via temperature scaling.
Our findings show improvements in the long-context utilization capability of T5
on language modeling, retrieval, multi-document question answering, and code
completion tasks without any fine-tuning. This suggests that a flexible
positional embedding design and attention alignment can go a long way toward
Transformer length extrapolation.
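The temperature-scaling idea can be sketched in a few lines: dividing the attention logits by a temperature below 1 sharpens the attention distribution, counteracting the flattening that longer inputs induce. This is a generic sketch, not T5's exact attention or the paper's specific alignment strategies; all sizes and names are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, K, V, tau=1.0):
    """Scaled dot-product attention with a temperature tau.
    tau < 1 sharpens the distribution; tau > 1 flattens it."""
    logits = (K @ q) / (np.sqrt(q.shape[-1]) * tau)
    w = softmax(logits)
    return w @ V, w

rng = np.random.default_rng(0)
q = rng.normal(size=(8,))
K = rng.normal(size=(128, 8))               # a long context of 128 keys
V = rng.normal(size=(128, 8))

_, w_flat  = attention(q, K, V, tau=1.0)    # baseline attention weights
_, w_sharp = attention(q, K, V, tau=0.5)    # sharpened weights
```

With the lower temperature, more probability mass concentrates on the best-matching keys, which is the mechanism the alignment strategies exploit on long sequences.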
'The last channel': vision at the temporal margin of the field.
The human visual field, on the temporal side, extends to at least 90° from the line of sight. Using a two-alternative forced-choice procedure in which observers are asked to report the direction of motion of a Gabor patch, and taking precautions to exclude unconscious eye movements in the direction of the stimulus, we show that the limiting eccentricity of image-forming vision can be established with precision. There are large, but reliable, individual differences in the limiting eccentricity. The limiting eccentricity exhibits a dependence on log contrast; but it is not reduced when the modulation visible to the rods is attenuated, a result compatible with the histological evidence that the outermost part of the retina exhibits a high density of cones. Our working hypothesis is that only one type of neural channel is present in the far periphery of the retina, a channel that responds to temporally modulated stimuli of low spatial frequency and that is directionally selective.
A Patient With Foot Pain Found to Have Leriche Syndrome: A Case Report and Brief Review of the Literature.
Leriche syndrome, a rare and critical complication of peripheral arterial disease (PAD), affects the distal abdominal aorta (infrarenal) and, similar to PAD, is a result of plaque buildup in the arterial lumen. The Leriche syndrome triad includes claudication in the proximal lower extremity, decreased or absent femoral pulses, and, in some cases, impotence. This article presents a patient with an atypical presentation of foot pain who was subsequently found to have Leriche syndrome. The patient was a 59-year-old female, a former smoker, who presented to the emergency department (ED) with atraumatic, acute right foot pain. All right lower extremity pulses were faintly audible on bedside Doppler. Computed tomography with angiography of the abdominal aorta revealed a Leriche-type occlusion of the infrarenal abdominal aorta and left common iliac and a 10 cm right popliteal arterial occlusion. Pharmacological anticoagulation was initiated by the ED. Definitive treatment in this patient included catheter-directed tissue plasminogen activator lysis to the thrombus on the right and placement of kissing stents in the distal aorta without complication. The patient made an excellent recovery and had a complete resolution of her symptoms. PAD is an omnipresent condition and, when untreated, can result in a myriad of high mortality and morbidity conditions such as Leriche syndrome. Collateral vessel formation can make the symptoms of Leriche syndrome vague and inconsistent, often making early recognition difficult. Optimal outcomes hinge on the clinician's ability to efficiently recognize, diagnose, stabilize, and coordinate multidisciplinary involvement of vascular and interventional radiology specialties. Case reports such as this one help to illuminate some of the more infrequent presentations of Leriche syndrome.
Permutationless Many-Jet Event Reconstruction with Symmetry Preserving Attention Networks
Top quarks, produced in large numbers at the Large Hadron Collider, have a
complex detector signature and require special reconstruction techniques. The
most common decay mode, the "all-jet" channel, results in a 6-jet final state
which is particularly difficult to reconstruct in collisions due to the
large number of permutations possible. We present a novel approach to this
class of problem, based on neural networks using a generalized attention
mechanism, that we call Symmetry Preserving Attention Networks (SPA-Net). We
train one such network to identify the decay products of each top quark
unambiguously and without combinatorial explosion as an example of the power of
this technique. This approach significantly outperforms existing
state-of-the-art methods, correctly assigning all jets in of -jet, of -jet, and of -jet events, respectively.
Comment: 8 pages, submitted to PRL, revised version with updated results
Latent Positional Information is in the Self-Attention Variance of Transformer Language Models Without Positional Embeddings
The use of positional embeddings in transformer language models is widely
accepted. However, recent research has called into question the necessity of
such embeddings. We further extend this inquiry by demonstrating that a
randomly initialized and frozen transformer language model, devoid of
positional embeddings, inherently encodes strong positional information through
the shrinkage of self-attention variance. To quantify this variance, we derive
the underlying distribution of each step within a transformer layer. Through
empirical validation using a fully pretrained model, we show that the variance
shrinkage effect still persists after extensive gradient updates. Our findings
serve to justify the decision to discard positional embeddings and thus
facilitate more efficient pretraining of transformer language models.
Comment: Accepted by ACL 202
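The variance-shrinkage effect has a simple intuition that can be illustrated with a toy computation (this is an illustration of the phenomenon under assumed uniform causal attention weights, not the paper's derivation): under causal attention, position t aggregates t + 1 tokens, so if the weights are roughly uniform the output at position t is an average of t + 1 vectors and its variance shrinks roughly like 1/(t + 1), implicitly encoding position.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 64, 32
X = rng.normal(size=(T, d))          # i.i.d. token representations

# Causal attention with uniform weights: position t averages tokens 0..t.
out = np.stack([X[:t + 1].mean(axis=0) for t in range(T)])

# Per-position variance of the attention output across the d dimensions.
var = out.var(axis=1)                # shrinks roughly like 1/(t + 1)
```

The monotone decay of `var` with position is exactly the kind of latent positional signal the abstract describes: later positions are distinguishable purely by the spread of their self-attention outputs.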
Kondo Conductance in an Atomic Nanocontact from First Principles
The electrical conductance of atomic metal contacts represents a powerful
tool to detect nanomagnetism. Conductance reflects magnetism through anomalies
at zero bias -- generally with Fano lineshapes -- due to the Kondo screening of
the magnetic impurity bridging the contact. A full atomic-level understanding
of this nutshell many-body system is of the greatest importance, especially in
view of our increasing need to control nanocurrents by means of magnetism.
Disappointingly, zero bias conductance anomalies are not presently calculable
from atomistic scratch. In this Letter we demonstrate a working route
connecting approximately but quantitatively density functional theory (DFT) and
numerical renormalization group (NRG) approaches and leading to a
first-principles conductance calculation for a nanocontact, exemplified by a Ni
impurity in a Au nanowire. A Fano-like conductance lineshape is obtained
microscopically, and shown to be controlled by the impurity s-level position.
We also find a relationship between conductance anomaly and geometry, and
uncover the possibility of opposite antiferromagnetic and ferromagnetic Kondo
screening -- the latter exhibiting a totally different and unexplored zero bias
anomaly. The present matching method between DFT and NRG should permit the
quantitative understanding and exploration of this larger variety of Kondo
phenomena at more general magnetic nanocontacts.
Comment: 11 pages, 3 figures. Supplementary materials under request at [email protected]