Quantum Tetrahedra
We discuss in detail the role of the Wigner 6j symbol as the basic building
block unifying such different fields as state sum models for quantum geometry,
topological quantum field theory, statistical lattice models and quantum
computing. The apparent twofold nature of the 6j symbol displayed in quantum
field theory and quantum computing (a quantum tetrahedron and a computational
gate) is shown to merge in a unified quantum-computational SU(2) state sum
framework.
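As a concrete entry point, the 6j symbol itself is straightforward to evaluate numerically. The sketch below implements Racah's single-sum formula in plain Python, restricted to integer spins so that every factorial argument is an integer (the function names are ours, not the paper's); SymPy's `sympy.physics.wigner.wigner_6j` covers the general half-integer case.

```python
from math import factorial, sqrt

def triangle(a, b, c):
    # Triangle coefficient Delta(a, b, c) appearing in Racah's formula.
    return (factorial(a + b - c) * factorial(a - b + c) * factorial(-a + b + c)
            / factorial(a + b + c + 1))

def wigner_6j(j1, j2, j3, j4, j5, j6):
    # Racah's single-sum formula; integer spins only, so all factorial
    # arguments below are non-negative integers.
    pre = sqrt(triangle(j1, j2, j3) * triangle(j1, j5, j6)
               * triangle(j4, j2, j6) * triangle(j4, j5, j3))
    t_min = max(j1 + j2 + j3, j1 + j5 + j6, j4 + j2 + j6, j4 + j5 + j3)
    t_max = min(j1 + j2 + j4 + j5, j2 + j3 + j5 + j6, j3 + j1 + j6 + j4)
    s = 0.0
    for t in range(t_min, t_max + 1):
        s += (-1) ** t * factorial(t + 1) / (
            factorial(t - j1 - j2 - j3) * factorial(t - j1 - j5 - j6)
            * factorial(t - j4 - j2 - j6) * factorial(t - j4 - j5 - j3)
            * factorial(j1 + j2 + j4 + j5 - t) * factorial(j2 + j3 + j5 + j6 - t)
            * factorial(j3 + j1 + j6 + j4 - t))
    return pre * s

# All six spins equal to 1: the "equilateral" quantum tetrahedron.
print(wigner_6j(1, 1, 1, 1, 1, 1))
```

For all six spins equal to 1 the formula gives 1/6, the standard tabulated value.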
Qualitative Analysis of Concurrent Mean-payoff Games
We consider concurrent games played by two players on a finite-state graph,
where in every round the players simultaneously choose a move, and the current
state along with the joint moves determine the successor state. We study a
fundamental objective, namely the mean-payoff objective, where a reward is
associated with each transition; the goal of player 1 is to maximize the
long-run average of the rewards, and the objective of player 2 is strictly the
opposite. The path constraint for player 1 could be qualitative, i.e., the
mean-payoff is the maximal reward, or arbitrarily close to it; or quantitative,
i.e., a given threshold between the minimal and maximal reward. We consider the
computation of the almost-sure (resp. positive) winning sets, where player 1
can ensure that the path constraint is satisfied with probability 1 (resp.
positive probability). Our main results for qualitative path constraints are as
follows: (1) we establish qualitative determinacy results that show that for
every state either player 1 has a strategy to ensure almost-sure (resp.
positive) winning against all player-2 strategies, or player 2 has a spoiling
strategy to falsify almost-sure (resp. positive) winning against all player-1
strategies; (2) we present optimal strategy complexity results that precisely
characterize the classes of strategies required for almost-sure and positive
winning for both players; and (3) we present quadratic time algorithms to
compute the almost-sure and the positive winning sets, matching the best known
bound of algorithms for much simpler problems (such as reachability
objectives). For quantitative constraints we show that a polynomial time
solution for the almost-sure or the positive winning set would imply a solution
to a long-standing open problem (the value problem for turn-based deterministic
mean-payoff games) that is not known to be solvable in polynomial time.
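The two-player algorithms above are beyond a short sketch, but the quantity being optimized is easy to illustrate in the degenerate one-player case (player 2 has a single move everywhere): the best achievable long-run average is then the maximum cycle mean of the weighted graph, computable by Karp's algorithm. A minimal sketch, with an illustrative graph:

```python
def max_mean_cycle(n, edges):
    """Karp's algorithm: maximum cycle mean of a weighted digraph on nodes
    0..n-1.  edges is a list of (u, v, w) triples; returns None if acyclic."""
    NEG = float("-inf")
    # D[k][v] = maximum weight of any walk with exactly k edges ending at v
    D = [[NEG] * n for _ in range(n + 1)]
    D[0] = [0.0] * n
    for k in range(1, n + 1):
        for u, v, w in edges:
            if D[k - 1][u] > NEG and D[k - 1][u] + w > D[k][v]:
                D[k][v] = D[k - 1][u] + w
    best = None
    for v in range(n):
        if D[n][v] == NEG:
            continue  # no walk of length n ends here
        mu = min((D[n][v] - D[k][v]) / (n - k)
                 for k in range(n) if D[k][v] > NEG)
        if best is None or mu > best:
            best = mu
    return best

# Cycle 0 -> 1 -> 0 has mean (1 + 3) / 2 = 2; the self-loop at 2 has mean 1.
print(max_mean_cycle(3, [(0, 1, 1.0), (1, 0, 3.0), (1, 2, 5.0), (2, 2, 1.0)]))
```

In the example graph the cycle 0 → 1 → 0 attains mean payoff 2, beating the self-loop of weight 1, so the function returns 2.0. The adversarial moves of player 2 and the probabilistic semantics are exactly what make the full problem in the abstract harder than this special case.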
Chern-Simons Forms in Gravitation Theories
The Chern-Simons (CS) form evolved from an obstruction in mathematics into an
important object in theoretical physics. In fact, the presence of CS terms in
physics is more common than one may think: they seem to play an important role
in high Tc superconductivity and in recently discovered topological insulators.
In classical physics, the minimal coupling in electromagnetism and the
action for a mechanical system in Hamiltonian form are examples of CS
functionals. CS forms are also the natural generalization of the minimal
coupling between the electromagnetic field and a point charge when the source
is not point-like but an extended fundamental object, a membrane. They are
found in relation with anomalies in quantum field theories, and as Lagrangians
for gauge fields, including gravity and supergravity. A cursory review of the
role of CS forms in gravitation theories is presented at an introductory level.
Comment: Author-created, un-copyedited version of an article published in CQG;
41 pages, no figures.
Value Iteration for Long-run Average Reward in Markov Decision Processes
Markov decision processes (MDPs) are standard models for probabilistic
systems with non-deterministic behaviours. Long-run average rewards provide a
mathematically elegant formalism for expressing long term performance. Value
iteration (VI) is one of the simplest and most efficient algorithmic approaches
for MDPs with other objectives, such as reachability. Unfortunately,
a naive extension of VI does not work for MDPs with long-run average rewards,
as there is no known stopping criterion. In this work our contributions are
threefold. (1) We refute a conjecture related to stopping criteria for MDPs
with long-run average rewards. (2) We present two practical algorithms for MDPs
with long-run average rewards based on VI. First, we show that a combination of
applying VI locally for each maximal end-component (MEC) and VI for
reachability objectives can provide approximation guarantees. Second, extending
the above approach with a simulation-guided on-demand variant of VI, we present
an anytime algorithm that is able to deal with very large models. (3) Finally,
we present experimental results showing that our methods significantly
outperform the standard approaches on several benchmarks.
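The paper's algorithms combine MEC decomposition with VI; the toy sketch below instead runs classical relative value iteration on a small unichain MDP (all names and data are illustrative, not from the paper). Its stopping rule, the span of the Bellman differences, is sound only in restricted settings such as this one; the absence of such a criterion for general MDPs is exactly the difficulty the abstract describes.

```python
def relative_value_iteration(states, actions, reward, trans,
                             eps=1e-9, max_iter=100_000):
    """Relative VI for the long-run average reward (gain) of a unichain MDP.
    trans[s][a] maps successor states to probabilities."""
    h = {s: 0.0 for s in states}
    ref = states[0]  # reference state used to renormalize the bias
    for _ in range(max_iter):
        Th = {s: max(reward[s][a]
                     + sum(p * h[t] for t, p in trans[s][a].items())
                     for a in actions[s])
              for s in states}
        diffs = [Th[s] - h[s] for s in states]
        if max(diffs) - min(diffs) < eps:
            # the optimal gain lies between min(diffs) and max(diffs)
            return 0.5 * (max(diffs) + min(diffs))
        h = {s: Th[s] - Th[ref] for s in states}
    raise RuntimeError("no convergence (e.g. multichain or periodic MDP)")

# Toy unichain MDP: every policy eventually reaches the absorbing state s1,
# where the only action yields reward 2 forever, so the optimal gain is 2.
states = ["s0", "s1"]
actions = {"s0": ["stay", "go"], "s1": ["stay"]}
reward = {"s0": {"stay": 1.0, "go": 0.0}, "s1": {"stay": 2.0}}
trans = {"s0": {"stay": {"s0": 0.9, "s1": 0.1}, "go": {"s1": 1.0}},
         "s1": {"stay": {"s1": 1.0}}}
print(relative_value_iteration(states, actions, reward, trans))
```

Since the absorbing state s1 pays reward 2 on every step, the long-run average is 2.0 under every policy that reaches it, and the iteration returns 2.0.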
Going with Coase against Coase: The Dynamic Approach to the Internalization of External Effects
The article develops R. H. Coase's insight that the level of transaction costs in the market determines the amount of externalities, thus providing arguments against government intervention. Contrary to Coase, however, we argue that the level of transaction costs cannot be considered as given, and that there is therefore a case for selective and innovative government intervention to reduce such transaction costs. Externalities are approached as intrinsically new and dynamic impacts, whose transaction costs diminish over time, a process that can be accelerated by appropriate government action. In contrast, internalization by means of public intervention through Pigouvian taxation is shown to be epistemologically untenable: if externalities had sufficient information content to allow governments to determine optimal tax levels, these same externalities would already have been fully internalized by the market. The final part of the article proposes two internalization strategies based on a dynamic re-interpretation of the Coasean approach. The first aims at developing feedback mechanisms between generators of externalities and those affected by them through media other than the market. The second seeks to reduce transaction costs in order to extend the domain in which markets can operate effectively by proposing codification strategies for the informational complexities characterizing externalities.
Keywords: external effects, incomplete information, environmental economics, transaction costs, codification, dynamic internalization.
Elegance with substance
Subject: The education in mathematics, its failure and costs, and how to redesign this market. The political economy of mathematics education.
Method: We do not require statistics to show that mathematics education fails but can look at the math itself. Criticism of mathematics itself can only succeed if it results in better mathematics; the same holds for the didactics of mathematics. Proof is provided that the mathematics that is taught is often cumbersome and illogical. It is rather impossible to provide good didactics for what is inherently illogical.
Basic observations: We would presume that school mathematics would be clear and didactically effective. A closer look shows that it is cumbersome and illogical. (1) This is illustrated here with some twenty examples from a larger stock of potential topics. (2) It appears possible to formulate additional shopping lists for improvement on both content and didactic method. (3) Improvements appear possible with respect to mathematics itself, on logic, voting theory, trigonometry and calculus. The latter two improvements directly originate from a didactic approach and it is amazing that they have not been noted earlier by conventional mathematics. (4) What is called mathematics thus is not really mathematics. Pupils and students are psychologically tortured and withheld from proper mathematical insight and competence. Spatial sense and understanding, algebraic sense and competence, logical sense and the competence in reasoning, they all are hindered and obstructed. Mathematics forms a core element in education and destroys much of school life of pupils and students in their formative years.
Basic analysis: This situation arises not because it is only school math, where mathematics must be simpler of necessity, but because of the failure of mathematicians to deliver. The failure can be traced to a deep-rooted tradition and culture in mathematics. Didactics requires a mindset that is sensitive to empirical observation, which is not what mathematicians are trained for. Psychology will play a role in the filtering out of those students who will later become mathematicians. Their tradition and culture condition mathematicians to see what they are conditioned to see.
Higher order observations: When mathematicians deal with empirical issues then problems arise in general. The failure in education is only one example in a whole range. The stock market crash in 2008 was caused by many factors, including mismanagement by bank managers and failing regulation, but also by mathematicians and 'rocket scientists' mistaking abstract models for reality (Mandelbrot & Taleb 2009). Another failure arises in the modelling of the economics of the environment, where an influx of mathematical approaches causes too much emphasis on elegant form and easy notions of risk and insufficient attention to reality, statistics and real risk (Tinbergen & Hueting 1991). Improvements in mathematics itself appear possible in logic and voting theory, with consequences for civic discourse and democracy, where the inspiration for the improvement comes from realism (Colignatus 2007). Economics as a science suffers from bad math and the maltreatment of its students, and most likely this is also true for the other sciences. Professors and teachers of mathematics (or at least 99.9% of them) apparently cannot diagnose their collective failure themselves and apparently 'blame the victims' for not understanding mathematics. The other scientific professions are advised to verify these points.
Higher order analysis: Application of economic theory helps to understand that the markets for education and ideas tend to be characterized by monopolistic competition and natural monopolies. Regulations are important. Apparently the industry of mathematics education is currently not adequately regulated. The regulation of financial markets is a hot topic nowadays, but the persistent failure of mathematics education should be high on the list as well. It will be important to let the industry become more open to society. Without adjustment of regulations at the macro level it is rather useless to try to improve mathematics education and didactics at the micro level. Mathematical tradition and culture create a mindset, and mathematicians are like lemmings that are set to go in one direction. Trying to micro-manage change with some particular lemmings will not help in any way. An example layout is provided of how the industry could be regulated.
Conundrum: Mathematicians might be the first to recognize the improvements in mathematics and didactics presented here. Mathematical tradition clearly is an improvement over alchemy and astrology. Most people will also tend to let the professors and teachers decide on whether these items are indeed improvements. It is tempting to conclude that the system then works: an improvement is proposed, it is recognized, and eventually will be implemented. This approach however takes a risk with respect to potential future changes. With the present failure and the analysis of its cause we should rather be wary of that risk. We had better regulate the industry of mathematics education in a robust manner. The mathematical examples presented here can be understood in principle by anyone with a high-school level of mathematics. They are targeted to explain didactically to a large audience how big the failure in the education in mathematics actually is.
Advice: The economic consequences are huge. National parliaments are advised to do something about this, starting with an enquiry. Parents are advised to write to their representative. The professional associations of mathematics and economics are advised to write to their parliament in support of that enquiry.