Information-Based Physics: An Observer-Centric Foundation
It is generally believed that physical laws, reflecting an inherent order in
the universe, are ordained by nature. However, in modern physics the observer
plays a central role, raising the question of how an observer-centric physics
can give rise to laws apparently worthy of a universal, nature-centric physics.
Over the last decade, we have found that the consistent apt quantification of
algebraic and order-theoretic structures results in calculi that possess
constraint equations taking the form of what are often considered to be
physical laws. I review recent derivations of the formal relations among
relevant variables central to special relativity, probability theory and
quantum mechanics in this context by considering a problem where two observers
form consistent descriptions of and make optimal inferences about a free
particle that simply influences them. I show that this approach to describing
such a particle based only on available information leads to the mathematics of
relativistic quantum mechanics as well as a description of a free particle that
reproduces many of the basic properties of a fermion. The result is an approach
to foundational physics where laws derive from both consistent descriptions and
optimal information-based inferences made by embedded observers.
Comment: To be published in Contemporary Physics. The manuscript consists of 43 pages and 9 Figures
On the and as Bound States and Approximate Nambu-Goldstone Bosons
We reconsider the two different facets of and mesons as
bound states and approximate Nambu-Goldstone bosons. We address several topics,
including masses, mass splittings between and and between and
, meson wavefunctions, charge radii, and the wavefunction overlap.
Comment: 15 pages, latex
Run Generation Revisited: What Goes Up May or May Not Come Down
In this paper, we revisit the classic problem of run generation. Run
generation is the first phase of external-memory sorting, where the objective
is to scan through the data, reorder elements using a small buffer of size M,
and output runs (contiguously sorted chunks of elements) that are as long as
possible.
We develop algorithms for minimizing the total number of runs (or
equivalently, maximizing the average run length) when the runs are allowed to
be sorted or reverse sorted. We study the problem in the online setting, both
with and without resource augmentation, and in the offline setting.
(1) We analyze alternating-up-down replacement selection (runs alternate
between sorted and reverse sorted), which was studied by Knuth as far back as
1963. We show that this simple policy is asymptotically optimal. Specifically,
we show that alternating-up-down replacement selection is 2-competitive and no
deterministic online algorithm can perform better.
(2) We give online algorithms having smaller competitive ratios with resource
augmentation. Specifically, we exhibit a deterministic algorithm that, when
given a buffer of size 4M, is able to match or beat any optimal algorithm
having a buffer of size M . Furthermore, we present a randomized online
algorithm which is 7/4-competitive when given a buffer twice that of the
optimal.
(3) We demonstrate that performance can also be improved with a small amount
of foresight. We give an algorithm, which is 3/2-competitive, with
foreknowledge of the next 3M elements of the input stream. For the extreme case
where all future elements are known, we design a PTAS for computing the optimal
strategy a run generation algorithm must follow.
(4) Finally, we present algorithms tailored for nearly sorted inputs, which
are guaranteed to have optimal solutions with sufficiently long runs.
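The classic replacement-selection policy at the heart of this problem can be sketched as follows. This is the standard ascending-runs-only variant, not the paper's alternating-up-down policy (which additionally reverses the direction of every other run); it uses a min-heap buffer of M elements:

```python
import heapq
import itertools

_DONE = object()  # sentinel marking an exhausted input stream

def replacement_selection(stream, M):
    """Classic replacement selection with a buffer of M elements.
    Emits ascending runs; on random input the average run length
    is roughly 2*M, twice the buffer size."""
    it = iter(stream)
    buf = list(itertools.islice(it, M))   # fill the buffer
    heapq.heapify(buf)
    runs, run, frozen = [], [], []        # frozen: elements deferred to the next run
    while buf:
        smallest = heapq.heappop(buf)
        run.append(smallest)              # output the minimum of the buffer
        x = next(it, _DONE)
        if x is not _DONE:
            if x >= smallest:
                heapq.heappush(buf, x)    # can still extend the current run
            else:
                frozen.append(x)          # too small for this run: freeze it
        if not buf:                       # run complete; recycle frozen elements
            runs.append(run)
            run, buf, frozen = [], frozen, []
            heapq.heapify(buf)
    return runs
```

On an already sorted stream no element is ever frozen, so the algorithm emits a single run regardless of M.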
Systemic Therapy of Colorectal Carcinoma
Summary: Drug therapy for colorectal carcinoma has made impressive progress over the last 10 years. Alongside the long-established 5-fluorouracil, new cytostatic agents such as irinotecan and oxaliplatin are now available. Monoclonal antibodies such as bevacizumab and cetuximab have successfully found their way into current treatment strategies. On the basis of randomized clinical trials, rational treatment strategies can now be formulated, as presented in this article.
An O(M(n) log n) algorithm for the Jacobi symbol
The best known algorithm to compute the Jacobi symbol of two n-bit integers
runs in time O(M(n) log n), using Sch\"onhage's fast continued fraction
algorithm combined with an identity due to Gauss. We give a different O(M(n)
log n) algorithm based on the binary recursive gcd algorithm of Stehl\'e and
Zimmermann. Our implementation - which to our knowledge is the first to run in
time O(M(n) log n) - is faster than GMP's quadratic implementation for inputs
larger than about 10000 decimal digits.
Comment: Submitted to ANTS IX (Nancy, July 2010)
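As a baseline for comparison, the textbook binary algorithm for the Jacobi symbol (quadratic time, in the spirit of GMP's implementation rather than the O(M(n) log n) method of the paper) can be sketched as:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via the binary algorithm:
    repeatedly strip factors of 2 (using (2/n) = +-1 according to
    n mod 8) and swap numerator and denominator with quadratic
    reciprocity. Quadratic time in the bit length of the inputs."""
    if n <= 0 or n % 2 == 0:
        raise ValueError("n must be a positive odd integer")
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:             # pull out factors of two
            a //= 2
            if n % 8 in (3, 5):       # (2/n) = -1 iff n = +-3 (mod 8)
                result = -result
        a, n = n, a                   # quadratic reciprocity: swap
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0    # gcd(a, n) > 1 gives symbol 0
```

For a prime modulus p the result can be cross-checked against Euler's criterion, jacobi(a, p) == a^((p-1)/2) mod p.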
Map equation for link community
Community structure exists in many real-world networks and has been reported
to be related to several functional properties of the networks. The
conventional approach partitions nodes into communities, while some recent
studies instead partition links to find overlapping communities of nodes
efficiently. We extended the map equation method, which
was originally developed for node communities, to find link communities in
networks. This method is tested on various kinds of networks and compared with
the metadata of the networks, and the results show that our method can identify
the overlapping role of nodes effectively. The advantage of this method is that
the node community scheme and link community scheme can be compared
quantitatively by measuring the unknown information left in the networks
besides the community structure. It can be used to decide quantitatively
whether or not the link community scheme should be used instead of the node
community scheme. Furthermore, this method can easily be extended to directed
and weighted networks since it is based on the random walk.
Comment: 9 pages, 5 figures
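The two-level map equation in its standard node-community form (which the paper extends to link communities) can be evaluated directly for an undirected, unweighted network; a minimal sketch, using degree-proportional visit rates for the random walker:

```python
from collections import defaultdict
from math import log2

def map_equation(edges, partition):
    """Two-level map equation L(M), in bits, for an undirected,
    unweighted network. edges: list of (u, v) pairs;
    partition: dict mapping node -> module id."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    two_m = sum(deg.values())
    p = {a: deg[a] / two_m for a in partition}     # stationary visit rates
    mod_nodes = defaultdict(list)
    for a, m in partition.items():
        mod_nodes[m].append(a)
    cut = defaultdict(int)                         # inter-module edge ends per module
    for u, v in edges:
        if partition[u] != partition[v]:
            cut[partition[u]] += 1
            cut[partition[v]] += 1

    def plogp(x):
        return x * log2(x) if x > 0 else 0.0

    q = {m: cut[m] / two_m for m in mod_nodes}     # module exit probabilities
    q_tot = sum(q.values())
    # index codebook: used at rate q_tot to name the module being entered
    L = -q_tot * sum(plogp(q[m] / q_tot) for m in mod_nodes) if q_tot > 0 else 0.0
    # module codebooks: name nodes within a module, plus the exit codeword
    for m, nodes in mod_nodes.items():
        tot = q[m] + sum(p[a] for a in nodes)
        H = -plogp(q[m] / tot) - sum(plogp(p[a] / tot) for a in nodes)
        L += tot * H
    return L
```

On a network of two triangles joined by a single edge, splitting the triangles into two modules gives a shorter description than keeping all nodes in one module, as expected.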
Noncooperative algorithms in self-assembly
We show the first non-trivial positive algorithmic results (i.e., programs
whose output is larger than their size) in a model of self-assembly that has
so far resisted many attempts at formal analysis or programming: the planar
non-cooperative variant of Winfree's abstract Tile Assembly Model.
This model has been the center of several open problems and conjectures in
the last fifteen years, and the first fully general results on its
computational power were only proven recently (SODA 2014). These results, as
well as ours, exemplify the intricate connections between computation and
geometry that can occur in self-assembly.
In this model, tiles can stick to an existing assembly as soon as one of
their sides matches the existing assembly. This feature contrasts with the
general cooperative model, where it can be required that tiles match on
\emph{several} of their sides in order to bind.
In order to describe our algorithms, we also introduce a generalization of
regular expressions called Baggins expressions. Finally, we compare this model
to other automata-theoretic models.
Comment: A few bug fixes and typo corrections
Revealing Relationships among Relevant Climate Variables with Information Theory
A primary objective of the NASA Earth-Sun Exploration Technology Office is to
understand the observed Earth climate variability, thus enabling the
determination and prediction of the climate's response to both natural and
human-induced forcing. We are currently developing a suite of computational
tools that will allow researchers to calculate, from data, a variety of
information-theoretic quantities such as mutual information, which can be used
to identify relationships among climate variables, and transfer entropy, which
indicates the possibility of causal interactions. Our tools estimate these
quantities along with their associated error bars, the latter of which is
critical for describing the degree of uncertainty in the estimates. This work
is based upon optimal binning techniques that we have developed for
piecewise-constant, histogram-style models of the underlying density functions.
Two useful side benefits have already been discovered. The first allows a
researcher to determine whether there exist sufficient data to estimate the
underlying probability density. The second permits one to determine an
acceptable degree of round-off when compressing data for efficient transfer and
storage. We also demonstrate how mutual information and transfer entropy can be
applied so as to allow researchers not only to identify relations among climate
variables, but also to characterize and quantify their possible causal
interactions.
Comment: 14 pages, 5 figures, Proceedings of the Earth-Sun System Technology Conference (ESTC 2005), Adelphi, MD
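The abstract's central quantity, mutual information between two discretized variables, can be illustrated with a bare plug-in histogram estimate. This sketch omits the optimal-binning and error-bar machinery the abstract emphasizes:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples:
    I = sum over (x, y) of p(x,y) * log2( p(x,y) / (p(x) p(y)) ).
    A bare histogram estimate: no bias correction, no error bars."""
    assert len(xs) == len(ys)
    n = len(xs)
    cx, cy = Counter(xs), Counter(ys)
    cxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in cxy.items():
        # c*n / (count(x)*count(y)) equals p(x,y) / (p(x) p(y))
        mi += (c / n) * log2(c * n / (cx[x] * cy[y]))
    return mi
```

A perfectly dependent pair of binary variables yields 1 bit, and an independent pair yields 0, matching the intended interpretation of mutual information as a relationship detector.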
Guessing probability distributions from small samples
We propose a new method for the calculation of statistical properties, such
as the entropy, of unknown generators of symbolic sequences. The probability
distribution of the elements of a population can be approximated by
the frequencies of a sample provided the sample is long enough so that
each element occurs many times. Our method yields an approximation if this
precondition does not hold. For a given we recalculate the Zipf--ordered
probability distribution by optimization of the parameters of a guessed
distribution. We demonstrate that our method yields reliable results.
Comment: 10 pages, uuencoded compressed PostScript
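As a point of comparison (not the paper's method), the naive frequency (plug-in) entropy estimate and the standard Miller-Madow correction, which partially compensates for the downward bias that arises when each element does not occur many times:

```python
from collections import Counter
from math import log, log2

def entropy_estimates(sample):
    """Return (plug-in, Miller-Madow) entropy estimates in bits.
    The plug-in estimate uses raw sample frequencies and is biased
    low for small samples; Miller-Madow adds the first-order
    correction (K - 1) / (2 N ln 2), where K is the number of
    observed distinct symbols and N the sample size."""
    n = len(sample)
    counts = Counter(sample)
    k = len(counts)
    h_plugin = -sum((c / n) * log2(c / n) for c in counts.values())
    h_mm = h_plugin + (k - 1) / (2 * n * log(2))
    return h_plugin, h_mm
```

The correction vanishes as the sample grows, so both estimates agree in the well-sampled regime the abstract contrasts with.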
