Smoothed Complexity Theory
Smoothed analysis is a new way of analyzing algorithms introduced by Spielman
and Teng (J. ACM, 2004). Classical methods like worst-case or average-case
analysis have accompanying complexity classes, like P and AvgP, respectively.
While worst-case and average-case analysis give us a means to talk about the
running time of a particular algorithm, complexity classes allow us to talk
about the inherent difficulty of problems.
Smoothed analysis is a hybrid of worst-case and average-case analysis and
compensates for some of their drawbacks. Despite its success in the analysis of
single algorithms and problems, there is no embedding of smoothed analysis into
computational complexity theory, which is necessary to classify problems
according to their intrinsic difficulty.
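To make the hybrid concrete, the standard Spielman-Teng formalization (the exact perturbation model varies by problem) declares an algorithm with running time $T$ smoothed polynomial if

  $\max_{x} \; \mathbb{E}_{g}\big[\, T(x + \sigma g) \,\big] \;\le\; \mathrm{poly}(n, 1/\sigma),$

where the maximum ranges over adversarial instances $x$ of size $n$, $g$ is a random perturbation (e.g., Gaussian noise), and $\sigma$ sets its magnitude; letting $\sigma \to 0$ recovers worst-case analysis, while large $\sigma$ approaches the average case.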
We propose a framework for smoothed complexity theory, define the relevant
classes, and prove first hardness results (for bounded halting and tiling)
and tractability results (binary optimization problems, graph coloring,
satisfiability). Furthermore, we discuss extensions and shortcomings of our
model and relate it to semi-random models. Comment: to be presented at MFCS 2012
Phonon-induced decoherence of the two-level quantum subsystem due to relaxation and dephasing processes
Phonon-related decoherence effects in a quantum double-well two-level
subsystem coupled to a solid are studied theoretically, using deformation
phonons as an example. Expressions for the reduced density matrix at T=0 are
derived beyond the Markovian approximation by means of an explicit solution of
the non-stationary Schrödinger equation for the interacting electron-phonon system
at the initial stage of its evolution. It is shown that as long as the
difference between the energies of the electron in the left and the right well
greatly exceeds the energy of the electron tunneling between the minima of the
double-well potential, decoherence is primarily due to dephasing processes.
This case corresponds to a strongly asymmetric potential and spatially
separated eigenfunctions localized in the vicinity of one or another potential
minimum. In the opposite case of the symmetric potential, the decoherence stems
from the relaxation processes, which may be either "resonant" (at relatively
long times) or "nonresonant" (at short times), giving rise to qualitatively
different temporal evolution of the electron state. The results obtained are
discussed in the context of quantum information processing based on the quantum
bits encoded in electron charge degrees of freedom. Comment: 20 pages, no figures
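A minimal sketch of the regime distinction described above, assuming the standard two-level parametrization (symbols ours, not the paper's): write the double-well subsystem as

  $H = \tfrac{\varepsilon}{2}\,\sigma_z + \Delta\,\sigma_x,$

with bias $\varepsilon$ (the left-right energy difference) and tunneling amplitude $\Delta$. For $\varepsilon \gg \Delta$ the eigenstates are nearly localized $\sigma_z$ eigenstates, phonons mostly randomize the relative phase between them, and pure dephasing dominates; for $\varepsilon \approx 0$ the eigenstates are delocalized superpositions, phonon emission and absorption drive transitions between them, and relaxation dominates.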
On the connection between the number of nodal domains on quantum graphs and the stability of graph partitions
Courant's theorem provides an upper bound for the number of nodal domains of
eigenfunctions of a wide class of Laplacian-type operators. In particular, it
holds for generic eigenfunctions of a quantum graph. The theorem stipulates
that, after ordering the eigenvalues as a non-decreasing sequence, the number
of nodal domains $\nu_n$ of the $n$-th eigenfunction satisfies $\nu_n \le n$.
Here, we provide a new interpretation for the Courant nodal deficiency
$n - \nu_n$ in the case of quantum graphs. It equals the Morse index, at a
critical point, of an energy functional on a suitably defined space of graph
partitions. Thus, the nodal deficiency assumes a previously unknown and
profound meaning: it is the number of unstable directions in the vicinity of
the critical point corresponding to the $n$-th eigenfunction. To demonstrate
this connection, the space of graph partitions and the energy functional are
defined and the corresponding critical partitions are studied in detail. Comment: 22 pages, 6 figures
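In symbols (ours, restating the abstract): if $\Lambda$ denotes the energy functional on the space of graph partitions and $P_n$ is the critical partition associated with the $n$-th eigenfunction, the result reads

  $n - \nu_n = \mathrm{ind}\,\big(\mathrm{Hess}\,\Lambda(P_n)\big),$

i.e., the Courant nodal deficiency equals the number of negative directions of the Hessian of $\Lambda$ at $P_n$.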
The general purpose analog computer and computable analysis are two equivalent paradigms of analog computation
In this paper we revisit one of the first models of analog
computation, Shannon's General Purpose Analog Computer (GPAC).
The GPAC has often been argued to be weaker than computable analysis.
As our main contribution, we show that if we change the notion of GPAC
computability in a natural way, we compute exactly all real computable
functions (in the sense of computable analysis). Moreover, since GPACs
are equivalent to systems of polynomial differential equations, this
shows that all real computable functions can be defined by such models.
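A sketch of the underlying objects, with notation that is ours rather than the paper's: a GPAC-generable function arises as a component of the solution of a polynomial initial-value problem

  $y'(t) = p(t, y(t)), \qquad y(t_0) = y_0,$

where $p$ is a vector of polynomials. In the limit-based notion of GPAC computability alluded to above, $f(x)$ is computed when two components of such a system satisfy $|y_1(t) - f(x)| \le y_2(t)$ with $y_2(t) \to 0$ as $t \to \infty$, so the output is approached in the limit with a verifiable error bound rather than produced in real time.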
Against all odds? Forming the planet of the HD196885 binary
HD196885Ab is the most "extreme" planet-in-a-binary discovered to date: its
orbit places it at the limit for orbital stability. The presence of a planet in
such a highly perturbed region poses a clear challenge to planet-formation
scenarios. We investigate this issue by focusing on the planet-formation stage
that is arguably the most sensitive to binary perturbations: the mutual
accretion of kilometre-sized planetesimals. To this end, we numerically
estimate the impact velocities amongst a population of circumprimary
planetesimals. We find that most of the circumprimary disc is strongly hostile
to planetesimal accretion, especially the region around 2.6 AU (the planet's
location), where binary perturbations induce planetesimal-shattering impact
velocities of more than 1 km/s. Possible solutions to the paradox of having a
planet in such
accretion-hostile regions are 1) that the initial planetesimals were very big,
at least 250 km; 2) that the binary had an initial orbit at least twice as wide
as the present one and was later compacted by early stellar encounters; 3) that
planetesimals did not grow by mutual impacts but by the sweeping up of dust (the
"snowball" growth mode identified by Xie et al., 2010b); or 4) that HD196885Ab
was formed not by core accretion but by the concurrent disc-instability
mechanism. All four of these scenarios remain, however, highly conjectural. Comment: accepted for publication by Celestial Mechanics and Dynamical
Astronomy (Special issue on EXOPLANETS)
Quantitative Treatment of Decoherence
We outline different approaches to defining and quantifying decoherence. We argue
that a measure based on a properly defined norm of deviation of the density
matrix is appropriate for quantifying decoherence in quantum registers. For a
semiconductor double quantum dot qubit, evaluation of this measure is reviewed.
For a general class of decoherence processes, including those occurring in
semiconductor qubits, we argue that this measure is additive: it scales
linearly with the number of qubits. Comment: Revised version, 26 pages, in LaTeX, 3 EPS figures
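As a rough sketch of such a measure (the paper fixes the precise norm; the notation here is illustrative): compare the actual reduced density matrix with the ideal, decoherence-free evolution,

  $D(t) = \big\| \rho(t) - \rho^{\mathrm{(ideal)}}(t) \big\|,$

for a suitably chosen operator norm. Additivity then means that for $N$ qubits subject to independent decoherence processes, $D_N(t) \approx N \, D_1(t)$ at short times, which is the linear scaling stated above.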
Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in √s = 7 TeV pp collisions with the ATLAS detector
A search for the direct production of charginos and neutralinos in final states with three electrons or muons and missing transverse momentum is presented. The analysis is based on 4.7 fb^-1 of proton-proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with Standard Model expectations in three signal regions that are either depleted or enriched in Z-boson decays. Upper limits at 95% confidence level are set in R-parity-conserving phenomenological minimal supersymmetric models and in simplified models, significantly extending previous results.
Jet size dependence of single jet suppression in lead-lead collisions at sqrt(s(NN)) = 2.76 TeV with the ATLAS detector at the LHC
Measurements of inclusive jet suppression in heavy ion collisions at the LHC
provide direct sensitivity to the physics of jet quenching. In a sample of
lead-lead collisions at sqrt(s_NN) = 2.76 TeV corresponding to an integrated
luminosity of approximately 7 inverse microbarns, ATLAS has measured jets with
a calorimeter over the pseudorapidity interval |eta| < 2.1 and over the
transverse momentum range 38 < pT < 210 GeV. Jets were reconstructed using the
anti-kt algorithm with values for the distance parameter that determines the
nominal jet radius of R = 0.2, 0.3, 0.4 and 0.5. The centrality dependence of
the jet yield is characterized by the jet "central-to-peripheral ratio," Rcp.
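By the standard heavy-ion convention (not spelled out in this abstract), Rcp is the per-event jet yield in central collisions divided by that in peripheral collisions, each normalized by the mean number of binary nucleon-nucleon collisions:

  $R_{CP} = \dfrac{\langle N_{coll}^{periph}\rangle}{\langle N_{coll}^{cent}\rangle} \times \dfrac{(1/N_{evt}^{cent})\, dN_{jet}^{cent}/dp_T}{(1/N_{evt}^{periph})\, dN_{jet}^{periph}/dp_T},$

so that $R_{CP} < 1$ indicates suppression of jet production in central events.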
Jet production is found to be suppressed by approximately a factor of two in
the 10% most central collisions relative to peripheral collisions. Rcp varies
smoothly with centrality as characterized by the number of participating
nucleons. The observed suppression is only weakly dependent on jet radius and
transverse momentum. These results provide the first direct measurement of
inclusive jet suppression in heavy ion collisions and complement previous
measurements of dijet transverse energy imbalance at the LHC. Comment: 15 pages plus author list (30 pages total), 8 figures, 2 tables,
submitted to Physics Letters B. All figures including auxiliary figures are
available at
http://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/HION-2011-02