On quantum vertex algebras and their modules
We give a survey on the developments in a certain theory of quantum vertex
algebras, including a conceptual construction of quantum vertex algebras and
their modules and a connection of double Yangians and Zamolodchikov-Faddeev
algebras with quantum vertex algebras.
Comment: 18 pages; contribution to the proceedings of the conference in honor of Professor Geoffrey Mason
Modules-at-infinity for quantum vertex algebras
This is a sequel to \cite{li-qva1} and \cite{li-qva2} in a series to study
vertex algebra-like structures arising from various algebras such as quantum
affine algebras and Yangians. In this paper, we study two versions of the
double Yangian for sl_2, denoted by DY_q(sl_2) and DY_q^∞(sl_2), with q a
nonzero complex number. For each nonzero complex number q, we construct a
quantum vertex algebra V_q and prove that every DY_q(sl_2)-module is naturally
a V_q-module. We also show that DY_q^∞(sl_2)-modules are what we call
V_q-modules-at-infinity. To achieve this goal, we study what we call
S_trig-local subsets and quasi-local subsets of \Hom(W, W((x^{-1}))) for any
vector space W, and we prove that any S_trig-local subset generates a (weak)
quantum vertex algebra and that any quasi-local subset generates a vertex
algebra with W as a (left) quasi module-at-infinity. Using this result we
associate the Lie algebra of pseudo-differential operators on the circle with
vertex algebras in terms of quasi modules-at-infinity.
Comment: LaTeX, 48 pages
A classification of emerging and traditional grid systems
The grid has evolved in numerous distinct phases. It started in the early ’90s as a model of metacomputing in which supercomputers share resources; subsequently, researchers added the ability to share data. This is usually referred to as the first-generation grid. By the late ’90s, researchers had outlined the framework for second-generation grids, characterized by their use of grid middleware systems to “glue” different grid technologies together. Third-generation grids originated in the early 2000s, when Web technology was combined with second-generation grids. As a result, the invisible grid, in which grid complexity is fully hidden through resource virtualization, started receiving attention. Subsequently, grid researchers identified the requirement for semantically rich knowledge grids, in which middleware technologies are more intelligent and autonomic. Recently, the necessity for grids to support and extend the ambient intelligence (AmI) vision has emerged. In AmI, humans are surrounded by computing technologies that are unobtrusively embedded in their surroundings.
However, third-generation grids’ current architecture doesn’t meet the requirements of next-generation grids (NGG) and service-oriented knowledge utility (SOKU) [4]. A few years ago, a group of independent experts, arranged by the European Commission, identified these shortcomings as a way to identify potential European grid research priorities for 2010 and beyond. The experts envision grid systems’ information, knowledge, and processing capabilities as a set of utility services [3]. Consequently, new grid systems are emerging to materialize these visions. Here, we review emerging grids and classify them to motivate further research and help establish a solid foundation in this rapidly evolving area.
Spontaneous and Superfluid Chiral Edge States in Exciton-Polariton Condensates
We present a scheme of interaction-induced topological bandstructures based
on the spin anisotropy of exciton-polaritons in semiconductor microcavities. We
predict theoretically that this scheme allows the engineering of topological
gaps, without requiring a magnetic field or strong spin-orbit interaction
(transverse electric-transverse magnetic splitting). Under non-resonant
pumping, we find that an initially topologically trivial system undergoes a
topological transition upon the spontaneous breaking of phase symmetry
associated with polariton condensation. Under resonant coherent pumping, we
find that it is also possible to engineer a topological dispersion that is
linear in wavevector -- a property associated with polariton superfluidity.
Comment: 6 pages, 4 figures
Spectral gene set enrichment (SGSE)
Motivation: Gene set testing is typically performed in a supervised context
to quantify the association between groups of genes and a clinical phenotype.
In many cases, however, a gene set-based interpretation of genomic data is
desired in the absence of a phenotype variable. Although methods exist for
unsupervised gene set testing, they predominantly compute enrichment relative
to clusters of the genomic variables with performance strongly dependent on the
clustering algorithm and number of clusters. Results: We propose a novel
method, spectral gene set enrichment (SGSE), for unsupervised competitive
testing of the association between gene sets and empirical data sources. SGSE
first computes the statistical association between gene sets and principal
components (PCs) using our principal component gene set enrichment (PCGSE)
method. The overall statistical association between each gene set and the
spectral structure of the data is then computed by combining the PC-level
p-values using the weighted Z-method with weights set to the PC variance scaled
by Tracy-Widom test p-values. Using simulated data, we show that the SGSE
algorithm can accurately recover spectral features from noisy data. To
illustrate the utility of our method on real data, we demonstrate the superior
performance of the SGSE method relative to standard cluster-based techniques
for testing the association between MSigDB gene sets and the variance structure
of microarray gene expression data. Availability:
http://cran.r-project.org/web/packages/PCGSE/index.html Contact:
[email protected] or [email protected]
Empirical risk minimization as parameter choice rule for general linear regularization methods
We consider the statistical inverse problem of recovering f from noisy measurements Y = Tf + σξ, where ξ is Gaussian white noise and T a compact operator between Hilbert spaces. Considering general reconstruction methods of the form f̂_α = q_α(T*T)T*Y with an ordered filter q_α, we investigate the choice of the regularization parameter α by minimizing an unbiased estimate of the predictive risk E[‖Tf − T f̂_α‖²]. The corresponding parameter α_pred and its usage are well known in the literature, but oracle inequalities and optimality results in this general setting are unknown. We prove a (generalized) oracle inequality, which relates the direct risk E[‖f − f̂_{α_pred}‖²] with the oracle prediction risk inf_{α>0} E[‖Tf − T f̂_α‖²]. From this oracle inequality we are then able to conclude that the investigated parameter choice rule is of optimal order in the minimax sense. Finally, we also present numerical simulations, which support the order optimality of the method and the quality of the parameter choice in finite sample situations.
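The parameter choice rule above can be illustrated in a discretized setting where T is diagonal in its SVD basis with singular values s_i. A minimal sketch, assuming a Tikhonov-type filter q_α(λ) = 1/(λ + α) and a Mallows-type unbiased risk estimate; the function names and the grid search are illustrative, not the authors' method in full generality:

```python
import numpy as np

def ure(alpha, s, y, sigma):
    """Unbiased estimate of the predictive risk E||Tf - T f_hat_alpha||^2
    in the SVD domain, for the Tikhonov filter q_alpha(l) = 1/(l + alpha).

    With c_i = s_i^2 q_alpha(s_i^2), the estimator of T f has coordinates
    c_i y_i, and E[(1-c_i)^2 y_i^2] = (1-c_i)^2 (s_i^2 f_i^2 + sigma^2),
    which yields the Mallows-type criterion below.
    """
    c = s**2 / (s**2 + alpha)
    return np.sum((1 - c)**2 * y**2) + sigma**2 * np.sum(2 * c - 1)

def alpha_pred(s, y, sigma, grid):
    """Empirical-risk-minimization choice of alpha over a finite grid."""
    return min(grid, key=lambda a: ure(a, s, y, sigma))
```

In the noiseless case (σ = 0) the criterion is minimized by the smallest α on the grid, i.e. essentially no regularization; as σ grows, the penalty term 2Σc_i − n pushes the rule toward larger α, trading bias for variance.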
Principal component gene set enrichment (PCGSE)
Motivation: Although principal component analysis (PCA) is widely used for
the dimensional reduction of biomedical data, interpretation of PCA results
remains daunting. Most existing methods attempt to explain each principal
component (PC) in terms of a small number of variables by generating
approximate PCs with few non-zero loadings. Although useful when just a few
variables dominate the population PCs, these methods are often inadequate for
characterizing the PCs of high-dimensional genomic data. For genomic data,
reproducible and biologically meaningful PC interpretation requires methods
based on the combined signal of functionally related sets of genes. While gene
set testing methods have been widely used in supervised settings to quantify
the association of groups of genes with clinical outcomes, these methods have
seen only limited application for testing the enrichment of gene sets relative
to sample PCs. Results: We describe a novel approach, principal component gene
set enrichment (PCGSE), for computing the statistical association between gene
sets and the PCs of genomic data. The PCGSE method performs a two-stage
competitive gene set test using the correlation between each gene and each PC
as the gene-level test statistic with flexible choice of both the gene set test
statistic and the method used to compute the null distribution of the gene set
statistic. Using simulated data with simulated gene sets and real gene
expression data with curated gene sets, we demonstrate that biologically
meaningful and computationally efficient results can be obtained from a simple
parametric version of the PCGSE method that performs a correlation-adjusted
two-sample t-test between the gene-level test statistics for gene set members
and genes not in the set. Availability:
http://cran.r-project.org/web/packages/PCGSE/index.html Contact:
[email protected] or [email protected]
k_T factorization of exclusive processes
We prove the factorization theorem in perturbative QCD (PQCD) for exclusive
processes by considering π γ* → γ(π) and B → γ(π) l ν̄. The relevant form
factors are expressed as the convolution of hard amplitudes with two-parton
meson wave functions in the impact parameter b space, b being conjugate to the
parton transverse momenta k_T. The point is that on-shell valence partons carry
longitudinal momenta initially, and acquire k_T through collinear gluon
exchanges. The b-dependent two-parton wave functions with an appropriate path
for the Wilson links are gauge-invariant. The hard amplitudes, defined as the
difference between the parton-level diagrams of on-shell external particles and
their collinear approximation, are also gauge-invariant. We compare the
predictions for two-body nonleptonic B meson decays derived from k_T
factorization (the PQCD approach) and from collinear factorization (the QCD
factorization approach).
Comment: 11 pages, REVTeX, 5 figures