Diffusion-induced vortex filament instability in 3-dimensional excitable media
We studied the stability of linear vortex filaments in 3-dimensional (3D)
excitable media, using both analytical and numerical methods. We found an
intrinsic 3D instability of vortex filaments that is diffusion-induced, and is
due to the slower diffusion of the inhibitor. This instability can result
either in a single helical filament or in chaotic scroll breakup, depending on
the specific kinetic model. When the 2-dimensional dynamics were in the chaotic
regime, filament instability occurred via on-off intermittency, a failure of
chaos synchronization in the third dimension.
Comment: 5 pages, 5 figures, to appear in PRL (September, 1999)
The Turing test as interactive proof
In 1950, Alan Turing proposed his eponymous test based on indistinguishability of verbal behavior as a replacement for the question "Can machines think?" Since then, two mutually contradictory but well-founded attitudes towards the Turing Test have arisen in the philosophical literature. On the one hand is the attitude that has become philosophical conventional wisdom, viz., that the Turing Test is hopelessly flawed as a sufficient condition for intelligence, while on the other hand is the overwhelming sense that were a machine to pass a real live full-fledged Turing Test, it would be a sign of nothing but our orneriness to deny it the attribution of intelligence. The arguments against the sufficiency of the Turing Test for determining intelligence rely on showing that some extra conditions are logically necessary for intelligence beyond the behavioral properties exhibited by an agent under a Turing Test. Therefore, it cannot follow logically from passing a Turing Test that the agent is intelligent. I argue that these extra conditions can be revealed by the Turing Test, so long as we allow a very slight weakening of the criterion from one of logical proof to one of statistical proof under weak realizability assumptions. The argument depends on the notion of interactive proof developed in theoretical computer science, along with some simple physical facts that constrain the information capacity of agents. Crucially, the weakening is so slight as to make no conceivable difference from a practical standpoint. Thus, the Gordian knot between the two opposing views of the sufficiency of the Turing Test can be cut.
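The move from logical to statistical proof rests on a standard soundness-amplification argument from interactive proof theory, which a toy calculation can illustrate. The sketch below is not from the paper; the function name and the per-round detection probability are hypothetical, chosen only to show how repeated independent rounds of questioning drive the probability of being fooled down exponentially.

```python
# Toy soundness amplification (hypothetical numbers, not the paper's model):
# if each round of interrogation independently exposes a non-intelligent
# impostor with probability p, the chance it survives k rounds is (1-p)^k.

def residual_doubt(p_catch_per_round: float, rounds: int) -> float:
    """Probability that an impostor passes every one of `rounds`
    independent rounds, each of which catches it with the given
    probability."""
    return (1.0 - p_catch_per_round) ** rounds

# Even a weak per-round test leaves negligible doubt after enough rounds.
print(residual_doubt(0.5, 1))   # 0.5
print(residual_doubt(0.5, 40))  # ~9.1e-13
```

In this sense the statistical criterion is "slightly weaker" than a logical one: the residual doubt is never exactly zero, but it can be made smaller than any practically meaningful threshold.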
Putting the Semantics into Semantic Versioning
The long-standing aspiration for software reuse has made astonishing strides
in the past few years. Many modern software development ecosystems now come
with rich sets of publicly-available components contributed by the community.
Downstream developers can leverage these upstream components, boosting their
productivity.
However, components evolve at their own pace. This imposes obligations on, and
yields benefits for, downstream developers, especially since changes can be
breaking, requiring additional downstream work to adapt. Upgrading too late
leaves downstream developers vulnerable to security issues and missing out on
useful improvements; upgrading too early results in excess work. Semantic versioning
has been proposed as an elegant mechanism to communicate levels of
compatibility, enabling downstream developers to automate dependency upgrades.
While it is questionable whether a version number can adequately characterize
version compatibility in general, we argue that developers would greatly
benefit from tools such as semantic version calculators to help them upgrade
safely. The time is now for the research community to develop such tools: large
component ecosystems exist and are accessible, component interactions have
become observable through automated builds, and recent advances in program
analysis make the development of relevant tools feasible. In particular,
contracts (both traditional and lightweight) are a promising input to semantic
versioning calculators, which can suggest whether an upgrade is likely to be
safe.
Comment: to be published as Onward! Essays 202
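The compatibility contract that version numbers are meant to communicate can be sketched in a few lines. This is a minimal illustration of the Semantic Versioning 2.0.0 rules (MAJOR for breaking changes, MINOR for additive ones, PATCH for fixes), not the semantic version calculator the essay calls for; the function names are hypothetical.

```python
# Minimal sketch of the SemVer 2.0.0 compatibility claim: an upgrade is
# *declared* compatible when the major version is unchanged (and non-zero,
# since 0.y.z releases make no stability promises). Hypothetical API.

from typing import Tuple

def parse(version: str) -> Tuple[int, int, int]:
    """Parse a MAJOR.MINOR.PATCH string into an integer triple."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def declared_compatible(current: str, candidate: str) -> bool:
    """True if `candidate` claims backward compatibility with `current`:
    same non-zero major version and not a downgrade."""
    cur, cand = parse(current), parse(candidate)
    if cur[0] == 0 or cand[0] == 0:
        return False  # pre-1.0 versions promise nothing
    return cand[0] == cur[0] and cand >= cur

print(declared_compatible("1.4.2", "1.5.0"))  # True: additive change
print(declared_compatible("1.4.2", "2.0.0"))  # False: breaking change
```

The essay's point is precisely that this declared compatibility is only a claim: a semantic version calculator would check the actual interfaces and contracts of the two versions rather than trust the number.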
The Generic Model of Computation
Over the past two decades, Yuri Gurevich and his colleagues have formulated
axiomatic foundations for the notion of algorithm, be it classical,
interactive, or parallel, and formalized them in the new generic framework of
abstract state machines. This approach has recently been extended to suggest a
formalization of the notion of effective computation over arbitrary countable
domains. The central notions are summarized herein.
Comment: In Proceedings DCM 2011, arXiv:1207.682
