On quantum vertex algebras and their modules
We give a survey of developments in a certain theory of quantum vertex
algebras, including a conceptual construction of quantum vertex algebras and
their modules and a connection of double Yangians and Zamolodchikov-Faddeev
algebras with quantum vertex algebras.
Comment: 18 pages; contribution to the proceedings of the conference in honor of Professor Geoffrey Mason
Modules-at-infinity for quantum vertex algebras
This is a sequel to \cite{li-qva1} and \cite{li-qva2} in a series to study vertex algebra-like structures arising from various algebras such as quantum affine algebras and Yangians. In this paper, we study two versions of the double Yangian $DY_{\hbar}(\mathfrak{sl}_{2})$, denoted by $DY_{q}(\mathfrak{sl}_{2})$ and $DY_{q}^{\infty}(\mathfrak{sl}_{2})$, with $q$ a nonzero complex number. For each nonzero complex number $q$, we construct a quantum vertex algebra $V_{q}$ and prove that every $DY_{q}(\mathfrak{sl}_{2})$-module is naturally a $V_{q}$-module. We also show that $DY_{q}^{\infty}(\mathfrak{sl}_{2})$-modules are what we call $V_{q}$-modules-at-infinity. To achieve this goal, we study what we call $\mathcal{S}$-local subsets and quasi-local subsets of $\Hom(W, W((x^{-1})))$ for any vector space $W$, and we prove that any $\mathcal{S}$-local subset generates a (weak) quantum vertex algebra and that any quasi-local subset generates a vertex algebra with $W$ as a (left) quasi module-at-infinity. Using this result we associate the Lie algebra of pseudo-differential operators on the circle with vertex algebras in terms of quasi modules-at-infinity.
Comment: LaTeX, 48 pages
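For orientation, here is the shape of the quasi-locality condition invoked above, stated in the standard form from Li's theory of quasi modules; this is offered as a plausible reading of the abstract's stripped notation, not a quotation from the paper. A subset $U$ of $\Hom(W, W((x^{-1})))$ is quasi-local if for any $a(x), b(x) \in U$ there exists a nonzero polynomial $p(x_1, x_2)$ such that

\[
p(x_1, x_2)\, a(x_1)\, b(x_2) \;=\; p(x_1, x_2)\, b(x_2)\, a(x_1).
\]

Ordinary locality is the special case where $p(x_1, x_2)$ can be taken to be a power of $x_1 - x_2$; the $\mathcal{S}$-local variant instead twists the right-hand side by a rational operator, in the spirit of Etingof and Kazhdan's quantum vertex operator algebras.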
Empirical risk minimization as parameter choice rule for general linear regularization methods.
We consider the statistical inverse problem of recovering $f$ from noisy measurements $Y = Tf + \sigma\xi$, where $\xi$ is Gaussian white noise and $T$ a compact operator between Hilbert spaces. Considering general reconstruction methods of the form $\hat f_\alpha = q_\alpha(T^*T)\,T^*Y$ with an ordered filter $q_\alpha$, we investigate the choice of the regularization parameter $\alpha$ by minimizing an unbiased estimate of the predictive risk $\mathbb{E}\big[\|Tf - T\hat f_\alpha\|^2\big]$. The corresponding parameter $\alpha_{\mathrm{pred}}$ and its usage are well known in the literature, but oracle inequalities and optimality results in this general setting are unknown. We prove a (generalized) oracle inequality, which relates the direct risk $\mathbb{E}\big[\|f - \hat f_{\alpha_{\mathrm{pred}}}\|^2\big]$ with the oracle prediction risk $\inf_{\alpha>0}\mathbb{E}\big[\|Tf - T\hat f_\alpha\|^2\big]$. From this oracle inequality we then conclude that the investigated parameter choice rule is of optimal order in the minimax sense. Finally, we present numerical simulations, which support the order optimality of the method and the quality of the parameter choice in finite sample situations.
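To make the parameter choice rule concrete, here is a minimal Python sketch under stated assumptions: $T$ is a matrix discretization with computable SVD, the Tikhonov filter $q_\alpha(\lambda) = 1/(\lambda + \alpha)$ stands in as one example of an ordered filter, and $\alpha$ is chosen by minimizing a Mallows-type unbiased estimate of the predictive risk over a grid. The function name and grid are illustrative, not from the paper.

import numpy as np

def choose_alpha_pred(T, Y, sigma, alphas):
    """Pick alpha by minimizing an unbiased estimate of the predictive
    risk E||T f - T f_hat_alpha||^2 (Mallows-type / SURE).

    Uses the Tikhonov filter q_alpha(lambda) = 1/(lambda + alpha) as one
    example of an ordered filter; sigma is the noise level.
    """
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    coeffs = U.T @ Y                     # data in the singular basis
    best = (None, np.inf, None)
    for alpha in alphas:
        q = 1.0 / (s**2 + alpha)         # Tikhonov filter q_alpha(s^2)
        h = s**2 * q                     # diagonal of T q_alpha(T*T) T*
        resid2 = np.sum((coeffs - h * coeffs) ** 2)
        # Unbiased risk estimate, dropping terms constant in alpha:
        ure = resid2 + 2.0 * sigma**2 * np.sum(h)
        if ure < best[1]:
            best = (alpha, ure, Vt.T @ (q * s * coeffs))  # f_hat_alpha
    alpha_pred, _, f_hat = best
    return alpha_pred, f_hat

# illustrative usage:
# alpha, f_hat = choose_alpha_pred(T, Y, sigma=0.01, alphas=np.logspace(-8, 0, 40))

Terms of the risk estimate that do not depend on alpha are dropped, since they do not change the minimizer.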
Lattice Boltzmann modeling of multiphase flows at large density ratio with an improved pseudopotential model
Owing to its conceptual simplicity and computational efficiency, the
pseudopotential multiphase lattice Boltzmann (LB) model has attracted
significant attention since its emergence. In this work, we aim to extend the
pseudopotential LB model to simulate multiphase flows at large density ratio
and relatively high Reynolds number. First, based on our recent work [Li et
al., Phys. Rev. E 86, 016709 (2012)], an improved forcing scheme is proposed
for the multiple-relaxation-time pseudopotential LB model in order to achieve
thermodynamic consistency and large density ratio in the model. Next, through
investigating the effects of the parameter a in the Carnahan-Starling equation
of state, we find that the interface thickness is approximately proportional to
1/sqrt(a). Accordingly, using a smaller a leads to a thicker interface, which
can reduce the spurious currents and enhance the numerical stability of the
pseudopotential model at large density ratio. Furthermore, it is found that a
lower liquid viscosity can be attained in the pseudopotential model by increasing
the kinematic viscosity ratio between the vapor and liquid phases. The improved
pseudopotential LB model is numerically validated via the simulations of
stationary droplet and droplet oscillation. Using the improved model as well as
the above treatments, numerical simulations of droplet splashing on a thin
liquid film are conducted at a density ratio in excess of 500 with Reynolds
numbers ranging from 40 to 1000. The dynamics of droplet splashing is correctly
reproduced and the predicted spread radius is found to obey the power law
reported in the literature.
Comment: 9 figures, 2 tables, accepted by Physical Review E (in press)
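To make concrete where the parameter a enters, here is a minimal Python sketch of the Carnahan-Starling equation of state together with the square-root pseudopotential of Yuan-Schaefer type that pseudopotential LB models commonly use to reproduce a non-ideal equation of state. The parameter values (b, R, T, G) below are illustrative placeholders, not the paper's settings.

import numpy as np

def carnahan_starling_pressure(rho, a, b=4.0, R=1.0, T=0.6):
    """Carnahan-Starling equation of state:
    p = rho*R*T * (1 + eta + eta^2 - eta^3) / (1 - eta)^3 - a*rho^2,
    with eta = b*rho/4."""
    eta = b * rho / 4.0
    return rho * R * T * (1 + eta + eta**2 - eta**3) / (1 - eta)**3 - a * rho**2

def pseudopotential(rho, a, cs2=1.0/3.0, G=-1.0):
    """Square-root pseudopotential (Yuan-Schaefer form) that ties the
    interaction to a given equation of state:
    psi = sqrt(2 * (p_EOS - rho*cs2) / (G * c^2)), lattice speed c = 1.
    G merely keeps the square-root argument positive here (G < 0 when
    p_EOS < rho*cs2); NaNs flag densities where this sign choice fails."""
    p = carnahan_starling_pressure(rho, a)
    return np.sqrt(2.0 * (p - rho * cs2) / G)

Sweeping a in such a setup while holding the other parameters fixed is a direct way to probe the 1/sqrt(a) interface-thickness scaling reported above.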
A classification of emerging and traditional grid systems
The grid has evolved in numerous distinct phases. It started in the early ’90s as a model of metacomputing in which supercomputers share resources; subsequently, researchers added the ability to share data. This is usually referred to as the first-generation grid. By the late ’90s, researchers had outlined the framework for second-generation grids, characterized by their use of grid middleware systems to “glue” different grid technologies together. Third-generation grids originated in the early millennium when Web technology was combined with second-generation grids. As a result, the invisible grid, in which grid complexity is fully hidden through resource virtualization, started receiving attention. Subsequently, grid researchers identified the requirement for semantically rich knowledge grids, in which middleware technologies are more intelligent and autonomic. Recently, the necessity for grids to support and extend the ambient intelligence (AmI) vision has emerged. In AmI, humans are surrounded by computing technologies that are unobtrusively embedded in their environment.
However, third-generation grids’ current architecture doesn’t meet the requirements of next-generation grids (NGG) and service-oriented knowledge utility (SOKU) [4]. A few years ago, a group of independent experts convened by the European Commission examined these shortcomings in order to identify potential European grid research priorities for 2010 and beyond. The experts envision grid systems’ information, knowledge, and processing capabilities as a set of utility services [3]. Consequently, new grid systems are emerging to materialize these visions. Here, we review emerging grids and classify them to motivate further research and help establish a solid foundation in this rapidly evolving area.
Incubators vs Zombies: Fault-Tolerant, Short, Thin and Lanky Spanners for Doubling Metrics
Recently, Elkin and Solomon gave a construction of spanners for doubling metrics that has constant maximum degree, hop-diameter O(log n) and lightness O(log n) (i.e., weight O(log n)·w(MST)). This resolves a long-standing conjecture proposed by Arya et al. in a seminal STOC 1995 paper.
However, Elkin and Solomon's spanner construction is extremely complicated; we offer a simple alternative construction that is very intuitive and is based on the standard technique of net trees with cross edges, sketched below. Indeed, our approach can be readily applied to our previous construction of k-fault-tolerant spanners (ICALP 2012) to achieve k-fault tolerance, maximum degree O(k^2), hop-diameter O(log n) and lightness O(k^3 log n).
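For intuition, here is a minimal Python skeleton of the net-tree-with-cross-edges technique the abstract names: greedily extract a net at each scale 2^i and add cross edges between net points within gamma * 2^i of each other. The stretch parameter gamma and helper names are illustrative, and the actual constructions add further machinery (pruning, routing trees) to obtain the stated degree, hop-diameter and lightness bounds.

import math
from itertools import combinations

def greedy_net(points, radius, dist):
    """Greedily pick a 'radius'-net: every point lies within 'radius' of
    some net point, and net points are pairwise more than 'radius' apart."""
    net = []
    for p in points:
        if all(dist(p, q) > radius for q in net):
            net.append(p)
    return net

def net_tree_spanner(points, dist, gamma=4.0):
    """Skeleton of a net-tree spanner for a finite metric (>= 2 points):
    build nets at scales 2^i and add cross edges between net points
    within gamma * 2^i of each other."""
    diam = max(dist(p, q) for p, q in combinations(points, 2))
    edges = set()
    level = list(points)                        # scale-0 "net": all points
    i = 0
    while (2 ** i) <= diam and len(level) > 1:
        level = greedy_net(level, 2 ** i, dist)     # net at scale 2^i
        for p, q in combinations(level, 2):         # cross edges at this scale
            if dist(p, q) <= gamma * (2 ** i):
                edges.add((p, q))
        i += 1
    return edges

# illustrative usage, with points as tuples in the plane:
# pts = [(0, 0), (1, 0), (0, 3), (5, 5)]
# print(net_tree_spanner(pts, lambda p, q: math.dist(p, q)))

Larger gamma lowers the stretch at the cost of more cross edges per scale; the fault-tolerant variant replicates this skeleton with k+1 suitably chosen net points per cluster.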