Quantum Logic and the Histories Approach to Quantum Theory
An extended analysis is made of the Gell-Mann and Hartle axioms for a
generalised `histories' approach to quantum theory. Emphasis is placed on
finding equivalents of the lattice structure that is employed in standard
quantum logic. Particular attention is given to `quasi-temporal' theories in
which the notion of time-evolution is less rigid than in conventional
Hamiltonian physics; theories of this type are expected to arise naturally in
the context of quantum gravity and quantum field theory in a curved space-time.
The quasi-temporal structure is coded in a partial semi-group of `temporal
supports' that underpins the lattice of history propositions. Non-trivial
examples include quantum field theory on a non-globally-hyperbolic spacetime,
and a simple cobordism approach to a theory of quantum topology.
It is shown how the set of history propositions in standard quantum theory
can be realised in such a way that each history proposition is represented by a
genuine projection operator. This provides valuable insight into the possible
lattice structure in general history theories, and also provides a number of
potential models for theories of this type.

Comment: TP/92-93/39, 36 pages + one page of diagrams (I could email an Apple laser printer PostScript file to anyone who is especially keen)
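The projector realisation mentioned in the abstract can be made concrete in the HPO (history projection operator) picture: a homogeneous history, specified by single-time propositions $\alpha_{t_1}, \ldots, \alpha_{t_n}$, is represented by the tensor product of the corresponding single-time projectors. A minimal sketch of the standard construction (not a claim about the paper's full generalisation):

\[
(\alpha_{t_1}, \alpha_{t_2}, \ldots, \alpha_{t_n}) \;\longmapsto\;
\hat{\alpha}_{t_1} \otimes \hat{\alpha}_{t_2} \otimes \cdots \otimes \hat{\alpha}_{t_n}
\quad \text{acting on} \quad
\mathcal{H} \otimes \mathcal{H} \otimes \cdots \otimes \mathcal{H},
\]

and since a tensor product of projection operators is itself a projection operator, every such history proposition is indeed represented by a genuine projector on the $n$-fold tensor product Hilbert space.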
Extreme value analysis for the sample autocovariance matrices of heavy-tailed multivariate time series
We provide some asymptotic theory for the largest eigenvalues of a sample
covariance matrix of a p-dimensional time series where the dimension p = p_n
converges to infinity as the sample size n increases. We give a short
overview of the literature on the topic both in the light- and heavy-tailed
cases when the data have finite (infinite) fourth moment, respectively. Our
main focus is on the heavy-tailed case. In this case, one has a theory for the
point process of the normalized eigenvalues of the sample covariance matrix in
the iid case but also when rows and columns of the data are linearly dependent.
We provide limit results for the weak convergence of these point processes to
Poisson or cluster Poisson processes. Based on this convergence we can also
derive the limit laws of various functionals of the ordered eigenvalues such
as the joint convergence of a finite number of the largest order statistics,
the joint limit law of the largest eigenvalue and the trace, limit laws for
successive ratios of ordered eigenvalues, etc. We also develop some limit
theory for the singular values of the sample autocovariance matrices and their
sums of squares. The theory is illustrated for simulated data and for the
components of the S&P 500 stock index.

Comment: in Extremes: Statistical Theory and Applications in Science, Engineering and Economics; ISSN 1386-1999; (2016)
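The heavy-tailed regime described above is easy to explore numerically. The following sketch, assuming iid symmetrized Pareto entries with tail index alpha < 4 (infinite fourth moment) and the standard n^{2/alpha} normalization for regularly varying entries, inspects the largest eigenvalues of the sample covariance matrix; the parameter values are illustrative only:

```python
import numpy as np

# Sketch: largest eigenvalues of a sample covariance matrix built from
# iid heavy-tailed entries. alpha < 4 puts us in the infinite-fourth-moment
# (heavy-tailed) regime discussed in the abstract.
rng = np.random.default_rng(0)
alpha = 1.5          # tail index; P(|X| > x) ~ x^{-alpha}
p, n = 50, 1000      # dimension and sample size

# Symmetrized Pareto (Lomax) entries
X = rng.pareto(alpha, size=(p, n)) * rng.choice([-1.0, 1.0], size=(p, n))

S = X @ X.T                                # unnormalized sample covariance matrix
eigs = np.sort(np.linalg.eigvalsh(S))[::-1]  # eigenvalues, largest first
normalized = eigs / n ** (2 / alpha)       # standard heavy-tailed normalization

print(normalized[:5])                      # a few largest normalized eigenvalues
```

In simulations of this kind the largest normalized eigenvalues separate sharply from the bulk, which is the point-process behaviour the paper formalizes.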
Kinematic Diffraction from a Mathematical Viewpoint
Mathematical diffraction theory is concerned with the analysis of the
diffraction image of a given structure and the corresponding inverse problem of
structure determination. In recent years, the understanding of systems with
continuous and mixed spectra has improved considerably. Simultaneously, their
relevance has grown in practice as well. In this context, the phenomenon of
homometry shows various unexpected new facets. This is particularly so for
systems with stochastic components. After the introduction to the mathematical
tools, we briefly discuss pure point spectra, based on the Poisson summation
formula for lattice Dirac combs. This provides an elegant approach to the
diffraction formulas of infinite crystals and quasicrystals. We continue by
considering classic deterministic examples with singular or absolutely
continuous diffraction spectra. In particular, we recall an isospectral family
of structures with continuously varying entropy. We close with a summary of
more recent results on the diffraction of dynamical systems of algebraic or
stochastic origin.

Comment: 30 pages, invited review
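For the pure point case mentioned above, the key identity is the Poisson summation formula for lattice Dirac combs. A standard statement, sketched here for an ideal crystal on a lattice $\Gamma \subset \mathbb{R}^d$ with dual lattice $\Gamma^{*}$:

\[
\widehat{\delta_{\Gamma}} \;=\; \operatorname{dens}(\Gamma)\, \delta_{\Gamma^{*}},
\qquad \text{where } \delta_{\Gamma} = \sum_{x \in \Gamma} \delta_{x}.
\]

The autocorrelation of $\delta_{\Gamma}$ is $\gamma = \operatorname{dens}(\Gamma)\, \delta_{\Gamma}$, so the diffraction measure is

\[
\widehat{\gamma} \;=\; \operatorname{dens}(\Gamma)^{2}\, \delta_{\Gamma^{*}},
\]

a pure point measure supported on the dual lattice, recovering the classical crystallographic diffraction pattern.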
Teaching programming with computational and informational thinking
Computers are the dominant technology of the early 21st century: pretty well all aspects of economic, social and personal life are now unthinkable without them. In turn, computer hardware is controlled by software, that is, code written in programming languages. Programming, the construction of software, is thus a fundamental activity, in which millions of people are engaged worldwide, and the teaching of programming is long established in international secondary and higher education. Yet, going on 70 years after the first computers were built, there is no well-established pedagogy for teaching programming.
There has certainly been no shortage of approaches. However, these have often been driven by fashion, an enthusiastic amateurism or a wish to follow best industrial practice, which, while appropriate for mature professionals, is poorly suited to novice programmers. Much of the difficulty lies in the very close relationship between problem solving and programming. Once a problem is well characterised it is relatively straightforward to realise a solution in software. However, teaching problem solving is, if anything, less well understood than teaching programming.
Problem solving seems to be a creative, holistic, dialectical, multi-dimensional, iterative process. While there are well established techniques for analysing problems, arbitrary problems cannot be solved by rote, by mechanically applying techniques in some prescribed linear order. Furthermore, historically, approaches to teaching programming have failed to account for this complexity in problem solving, focusing strongly on programming itself and, if at all, only partially and superficially exploring problem solving.
Recently, an integrated approach to problem solving and programming called Computational Thinking (CT) (Wing, 2006) has gained considerable currency. CT has the enormous advantage over prior approaches of strongly emphasising problem solving and of making explicit core techniques. Nonetheless, there is still a tendency to view CT as prescriptive rather than creative, engendering scholastic arguments about the nature and status of CT techniques. Programming at heart is concerned with processing information, but many accounts of CT emphasise processing over information rather than seeing them as intimately related.
In this paper, while acknowledging and building on the strengths of CT, I argue that understanding the form and structure of information should be primary in any pedagogy of programming.
Approximations from Anywhere and General Rough Sets
Not all approximations arise from information systems. The problem of fitting
approximations, subjected to some rules (and related data), to information
systems in a rough scheme of things is known as the \emph{inverse problem}. The
inverse problem is more general than the duality (or abstract representation)
problems and was introduced by the present author in her earlier papers. From
the practical perspective, a few (as opposed to one) theoretical frameworks may
be suitable for formulating the problem itself. \emph{Granular operator spaces}
have been recently introduced and investigated by the present author in her
recent work in the context of antichain based and dialectical semantics for
general rough sets. The nature of the inverse problem is examined from
number-theoretic and combinatorial perspectives in a higher order variant of
granular operator spaces and some necessary conditions are proved. The results
and the novel approach would be useful in a number of unsupervised and
semi-supervised learning contexts and algorithms.

Comment: 20 pages. Scheduled to appear in IJCRS'2017 LNCS Proceedings, Springer
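For readers unfamiliar with the terminology: in the classical Pawlak setting, an information system induces a partition of the universe into granules, and every subset is approximated from below and above by unions of granules; the inverse problem discussed above asks, conversely, when a given pair of approximation operators can be realized by some such system. A minimal sketch of the classical case (the function name and example data are illustrative, not from the paper):

```python
# Minimal sketch of classical Pawlak-style rough approximations.
# Granules are the equivalence classes induced by an attribute function.

def rough_approximations(universe, attr, target):
    """Lower and upper approximations of `target` with respect to the
    partition (granules) induced by the attribute function `attr`."""
    granules = {}
    for x in universe:
        granules.setdefault(attr(x), set()).add(x)
    blocks = list(granules.values())
    # lower: union of granules fully inside the target
    lower = set().union(*(b for b in blocks if b <= target))
    # upper: union of granules meeting the target
    upper = set().union(*(b for b in blocks if b & target))
    return lower, upper

# Example: universe {0,...,7} with granules {0,1,2}, {3,4,5}, {6,7}
universe = set(range(8))
lower, upper = rough_approximations(universe, lambda x: x // 3, {0, 1, 2, 3})
print(lower, upper)  # lower = {0,1,2}, upper = {0,1,2,3,4,5}
```

Any target set is sandwiched between its lower and upper approximations, which is the basic order-theoretic structure that granular operator spaces generalize.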