A contextual extension of Spekkens' toy model
Quantum systems show contextuality. More precisely, it is impossible to
reproduce the quantum-mechanical predictions using a non-contextual realist
model, i.e., a model where the outcome of one measurement is independent of the
choice of compatible measurements performed in the measurement context. There
have been several attempts to quantify the amount of contextuality for specific
quantum systems, for example, by the number of rays needed in a Kochen-Specker (KS)
proof, the number of terms in certain inequalities, or the violation, noise
sensitivity, and other such measures. This paper is about another approach: to use a
simple contextual model that reproduces the quantum-mechanical contextual
behaviour, but not necessarily all quantum predictions. The amount of
contextuality can then be quantified in terms of additional resources needed as
compared with a similar model without contextuality. In this case the
contextual model needs to keep track of the context used, so the appropriate
measure would be memory. Another way to view this is as a memory requirement to
be able to reproduce quantum contextuality in a realist model. The model we
will use can be viewed as an extension of Spekkens' toy model [Phys. Rev. A 75,
032110 (2007)], and the relation is studied in some detail. To reproduce the
quantum predictions for the Peres-Mermin square, the memory requirement is more
than one bit in addition to the memory used for the individual outcomes in the
corresponding noncontextual model.
Comment: 10 pages
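The algebraic constraints of the Peres-Mermin square, which force any realist reproduction to be contextual, can be checked directly. A minimal sketch in plain Python (no libraries; the 3x3 grid of two-qubit observables is the standard one, the helper names are ours): every row of the grid multiplies to +I, and every column to +I except the third, which multiplies to -I.

```python
# Verify the Peres-Mermin square constraints: row products are +I,
# column products are +I except the third column, which gives -I.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(a, b):
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

# the Peres-Mermin square: each entry is a tensor product of Paulis
square = [
    [kron(X, I2), kron(I2, X), kron(X, X)],
    [kron(I2, Y), kron(Y, I2), kron(Y, Y)],
    [kron(X, Y),  kron(Y, X),  kron(Z, Z)],
]

def product(ops):
    out = ops[0]
    for op in ops[1:]:
        out = matmul(out, op)
    return out

I4 = kron(I2, I2)
minus_I4 = [[-x for x in row] for row in I4]

for i in range(3):
    assert product(square[i]) == I4        # each row multiplies to +I
for j in range(3):
    col = [square[i][j] for i in range(3)]
    expected = minus_I4 if j == 2 else I4  # third column multiplies to -I
    assert product(col) == expected
print("Peres-Mermin constraints verified")
```

Since no noncontextual assignment of +/-1 outcomes can satisfy all six product constraints at once, a realist model must carry extra context-dependent memory, which is exactly the resource the paper quantifies.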
Modeling the Singlet State with Local Variables
A local-variable model yielding the statistics from the singlet state is
presented for the case of inefficient detectors and/or lowered visibility. It
has independent errors; the highest detector efficiency at perfect visibility
is 77.80%, while the highest visibility at perfect detector efficiency is
63.66%. Thus, the model cannot be refuted by measurements made to date.
Comment: 15 pages, 13 figures
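The quantum statistics such a local model must approximate are easy to state. A plain-Python sketch (measurement directions restricted to the x-z plane for simplicity; helper names are ours) verifying the singlet-state prediction E(alpha, beta) = -cos(alpha - beta):

```python
import math

def spin(theta):
    # observable cos(theta)*Z + sin(theta)*X for a direction in the x-z plane
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [s, -c]]

def kron(a, b):
    return [[a[i // 2][j // 2] * b[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
psi = [0.0, 1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]

def E(alpha, beta):
    # correlation <psi| spin(alpha) (x) spin(beta) |psi>
    Mpsi = matvec(kron(spin(alpha), spin(beta)), psi)
    return sum(p * q for p, q in zip(psi, Mpsi))

for alpha, beta in [(0.0, 0.0), (0.3, 1.1), (math.pi / 2, 0.0)]:
    assert abs(E(alpha, beta) + math.cos(alpha - beta)) < 1e-12
print("singlet correlations match -cos(alpha - beta)")
```

The local-variable model of the abstract reproduces these correlations only once detector inefficiency or reduced visibility is allowed, which is why the two thresholds (77.80% and 63.66%) are the headline numbers.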
Anomalously strong pinning of the filling factor nu=2 in epitaxial graphene
We explore the robust quantization of the Hall resistance in epitaxial
graphene grown on Si-terminated SiC. Uniquely to this system, the dominance of
quantum over classical capacitance in the charge transfer between the substrate
and graphene is such that Landau levels (in particular, the one at exactly zero
energy) remain completely filled over an extraordinarily broad range of
magnetic fields. One important implication of this pinning of the filling
factor is that the system can sustain a very high nondissipative current. This
makes epitaxial graphene ideally suited for quantum resistance metrology, and
we have achieved a precision of 3 parts in 10^10 in the Hall resistance
quantization measurements.
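The metrological numbers here are easy to reproduce. A back-of-the-envelope sketch (using the exact 2019 SI values of h and e) of the quantized Hall resistance at filling factor nu = 2 and the absolute uncertainty implied by 3 parts in 10^10:

```python
# Quantized Hall resistance R = h / (nu * e^2) at nu = 2,
# plus the absolute uncertainty implied by 3 parts in 10^10.
h = 6.62607015e-34      # Planck constant, J*s (exact in the SI)
e = 1.602176634e-19     # elementary charge, C (exact in the SI)

R_K = h / e**2          # von Klitzing constant, ~25812.807 ohm
R_nu2 = R_K / 2         # Hall plateau at nu = 2, ~12906.404 ohm
delta = R_nu2 * 3e-10   # implied absolute uncertainty, a few micro-ohm

print(f"R(nu=2) = {R_nu2:.4f} ohm, uncertainty ~ {delta:.2e} ohm")
```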
Spectrum of pi electrons in bilayer graphene nanoribbons and nanotubes: an analytical approach
We present an analytical description of the pi electrons of finite-size bilayer
graphene within the framework of the tight-binding model. The bilayer
structures considered here are characterized by a rectangular geometry and have
a finite size in one or both directions with armchair- and zigzag-shaped edges.
We provide an exact analytical description of the spectrum of pi electrons in
the zigzag and armchair bilayer graphene nanoribbons and nanotubes. We analyze
the dispersion relations, the density of states, and the conductance
quantization.
Comment: 8 figures
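The flavor of such exact results can be illustrated on a much simpler system than the bilayer ribbons of the paper. A plain-Python sketch (a 1D finite chain stand-in, not the paper's model): for a tight-binding chain of N sites with hopping t and open boundaries, the spectrum is known in closed form, E_n = -2 t cos(n pi/(N+1)) with eigenvector components sin(n pi j/(N+1)), and this can be verified directly against the Hamiltonian.

```python
import math

N, t = 8, 1.0  # chain length and hopping amplitude (illustrative values)

def H_apply(v):
    # open-boundary nearest-neighbor hopping: (Hv)_j = -t (v_{j-1} + v_{j+1})
    return [-t * ((v[j - 1] if j > 0 else 0.0) +
                  (v[j + 1] if j < N - 1 else 0.0)) for j in range(N)]

for n in range(1, N + 1):
    k = n * math.pi / (N + 1)
    E = -2 * t * math.cos(k)                       # analytical eigenvalue
    v = [math.sin(k * (j + 1)) for j in range(N)]  # analytical eigenvector
    Hv = H_apply(v)
    assert all(abs(Hv[j] - E * v[j]) < 1e-12 for j in range(N))
print("analytical spectrum verified for the finite chain")
```

The paper's contribution is the analogous closed-form spectrum for the much richer bilayer ribbon and nanotube geometries, where the edge type (armchair or zigzag) enters the quantization condition.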
Shareholder Voting and the Chicago School: Now Is the Winter of Our Discontent
We have simulated the effect of different parameters in location-aware information-sharing policies for crowd-based information exchange systems. The purpose of this simulation was to find out which parameters improved the upload time, battery life, and success rate for nodes trying to upload a large file under bad conditions. To test the effect of these parameters on a larger scale, we simulated an area where a large number of nodes were moving around. Our test results showed that nodes greatly improved their battery life and upload time by limiting the number of nodes they send data to, rather than sharing data with all nodes within reach. However, sending the oldest collected data performed very badly in terms of battery lifetime and left a relatively high number of nodes that did not manage to upload their file. We concluded that nodes should not share their data with all available nodes at all times, and should be restrictive in the amount of data they share with other nodes to conserve battery.
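The main finding, that capping the fanout saves energy, can be sketched with a toy cost model (hypothetical parameters and policy names, not the authors' simulator): each chunk costs one unit of energy per receiver, and a restrictive policy sends to at most k of the reachable neighbors.

```python
import random

random.seed(1)
ENERGY_PER_SEND = 1.0
chunks = 50
# nodes within reach at each step (hypothetical mobility snapshot)
neighbor_counts = [random.randint(3, 12) for _ in range(chunks)]

def battery_used(fanout_limit):
    # each chunk is sent to at most fanout_limit of the reachable neighbors
    return sum(ENERGY_PER_SEND * min(n, fanout_limit) for n in neighbor_counts)

share_all = battery_used(10**9)  # no limit: share with every node in reach
share_k3 = battery_used(3)       # restrictive policy: at most 3 receivers

assert share_k3 < share_all
print(f"energy, share-all: {share_all:.0f}; limited fanout: {share_k3:.0f}")
```

This only captures the transmission-energy side of the trade-off; the simulation in the report also tracks upload time and success rate under mobility.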
Editor’s Note
This study aims to describe how knowledge-intensive companies manage the knowledge that consultants contribute at introduction and in daily work, and how they enable the transfer of that knowledge when the consultant leaves the company. A case study of two large international companies was conducted, and empirical data were collected through semi-structured interviews. The results show that the companies have similar introduction processes, focused on organizational and individual knowledge. The consultant's purpose within the company affects how knowledge is integrated into the work. The organizational structure and the consultant's placement within the company also affect knowledge integration. The consultant contributes flexibility, which means that the company should strive to carry out integrating practices during the course of the work, rather than focusing on the handover only when the consultant leaves the company.
EC Maritime Transport Policy and Regulation
When designing robust controllers, H-infinity synthesis is a common tool to use. The controllers that result from these algorithms are typically of very high order, which complicates implementation. However, if a constraint on the maximum order of the controller is set that is lower than the order of the (augmented) system, the problem becomes nonconvex and relatively hard to solve. These problems become very complex even when the order of the system is low. The approach used in this work is based on formulating the constraint on the maximum order of the controller as a polynomial (or rational) equation. This equality constraint is added to the optimization problem of minimizing an upper bound on the H-infinity norm of the closed-loop system subject to linear matrix inequality (LMI) constraints. The problem is then solved by reformulating it as a partially augmented Lagrangian problem, in which the equality constraint is moved into the objective function while the LMIs are kept as constraints. The proposed method is evaluated together with two well-known methods from the literature. The results indicate that the proposed method has comparable performance in most cases, especially if the synthesized controller has many parameters, which is the case if the system to be controlled has many input and output signals.
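The augmented-Lagrangian idea at the core of the approach can be sketched on a scalar toy problem (a conceptual illustration, not the paper's LMI-constrained formulation): the equality constraint h(x) = 0 is moved into the objective with a multiplier and a quadratic penalty, and the multiplier is updated from the constraint residual after each inner solve.

```python
# Toy problem: minimize f(x) = (x - 2)^2  subject to  h(x) = x - 1 = 0.
# Augmented Lagrangian: (x-2)^2 + lam*(x-1) + (mu/2)*(x-1)^2.
def solve(mu=10.0, iters=50):
    lam = 0.0
    for _ in range(iters):
        # inner minimization, available in closed form for this scalar toy
        # (the paper instead solves an LMI-constrained subproblem)
        x = (4.0 - lam + mu) / (2.0 + mu)
        lam += mu * (x - 1.0)  # multiplier update from constraint residual
    return x, lam

x, lam = solve()
assert abs(x - 1.0) < 1e-9 and abs(lam - 2.0) < 1e-9
print(f"x* = {x:.6f}, lambda* = {lam:.6f}")
```

The iterates converge to the constrained minimizer x* = 1 with multiplier lambda* = 2; in the paper the same outer structure is kept, but the inner problem retains the LMI constraints while only the nonconvex order constraint is penalized.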
A Parallel Riccati Factorization Algorithm with Applications to Model Predictive Control
Model Predictive Control (MPC) is increasing in popularity in industry as
more efficient algorithms for solving the related optimization problem are
developed. The main computational bottleneck in on-line MPC is often the
computation of the search step direction, i.e., the Newton step, which is often
done using generic sparsity-exploiting algorithms or Riccati recursions.
However, as parallel hardware is becoming increasingly popular the demand for
efficient parallel algorithms for solving the Newton step is increasing. In
this paper a tailored, non-iterative parallel algorithm for computing the
Riccati factorization is presented. The algorithm exploits the special
structure in the MPC problem, and when sufficiently many processing units are
available, the complexity of the algorithm scales logarithmically in the
prediction horizon. Computing the Newton step is the main computational
bottleneck in many MPC algorithms, and the proposed algorithm can significantly
reduce the computational cost of popular state-of-the-art MPC algorithms.
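The serial recursion being parallelized can be sketched for a scalar system (an illustrative stand-in; the paper's contribution is the parallel variant for the structured MPC problem): a backward Riccati sweep produces the feedback gains used to compute the Newton step, and for this system it converges to the fixed point of the discrete-time algebraic Riccati equation.

```python
import math

# Hypothetical scalar LTI system x+ = A x + B u with stage weights Q, R.
A, B, Q, R = 1.0, 1.0, 1.0, 1.0
N = 60  # prediction horizon

P = Q        # terminal cost P_N = Q
gains = []
for _ in range(N):  # backward sweep k = N-1, ..., 0
    K = A * P * B / (R + B * P * B)      # feedback gain K_k
    P = Q + A * P * A - K * (A * P * B)  # Riccati update for P_k
    gains.append(K)

# for these values the recursion converges to the DARE fixed point,
# which satisfies P^2 - P - 1 = 0, i.e. the golden ratio
assert abs(P - (1 + math.sqrt(5)) / 2) < 1e-9
print(f"P_0 = {P:.6f}")
```

Each update depends on the previous one, which is what makes the sweep inherently serial; the paper's algorithm breaks this chain so that, with enough processing units, the cost scales logarithmically in N.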
The Law of Sports. By John Weistart and Cym Lowell.
This report gives an overview of methods and approaches applicable to the UAV flight path and sensor aiming planning problem for search and track of multiple ground targets. The main focus of the survey is on stochastic optimal control, dynamic programming, partially observable Markov decision processes, sensor scheduling, bearings-only tracking, search and exploration. References to standard texts, as well as more recent research results, are given
Foreword
This paper offers a critical look at how energy security-, food and agriculture-, and climate change-oriented international organizations frame biomass energy production in developing countries, in particular, ethanol production in Brazil. Using the world-economy system as a theoretical lens, the paper raises a concern as to whether the way these global institutions frame bioenergy's role in developing regions manifests energy and ecological inequalities between the core and the periphery, as well as creates internal contradictions that perpetuate the unequal exchange embedded in the system. Simultaneously, these organizations frame Brazil as a semi-peripheral state that, while successful in finding a niche concurring with the core's demand for cheap energy and cost-effective decarbonization strategies, is not necessarily a suitable role model for the periphery's socio-economic development.
Original Publication: Magdalena Kuchler, Unravelling the argument for bioenergy production in developing countries: A world-economy perspective, 2010, Ecological Economics, (69), 6, 1336-1343. http://dx.doi.org/10.1016/j.ecolecon.2010.01.011 Copyright: Elsevier Science B.V., Amsterdam. http://www.elsevier.com/
The Politics of Bioenerg