65 research outputs found
Proving the power of postselection
It is a widely believed, though unproven, conjecture that the capability of
postselection increases the language recognition power of both probabilistic
and quantum polynomial-time computers. It is also unknown whether
polynomial-time quantum machines with postselection are more powerful than
their probabilistic counterparts with the same resource restrictions. We
approach these problems by imposing additional constraints on the resources to
be used by the computer, and are able to prove for the first time that
postselection does augment the computational power of both classical and
quantum computers, and that quantum does outperform probabilistic in this
context, under simultaneous time and space bounds in a certain range. We also
look at postselected versions of space-bounded classes, as well as those
corresponding to error-free and one-sided error recognition, and provide
classical characterizations. It is also shown that two of these classes would coincide if the randomized machines had the postselection capability.
Comment: 26 pages. This is a heavily improved version of arXiv:1102.066
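For intuition, postselection can be emulated classically by rejection sampling: run the randomized machine repeatedly, discard every run that does not raise the postselection flag, and measure acceptance only among the surviving runs. The following is a minimal sketch with a hypothetical machine whose probabilities are invented for illustration (they are not taken from the paper); a tiny gap between acceptance probabilities 0.0006 and 0.0004 becomes a bounded-error gap between 0.6 and 0.4 after postselection.

```python
import random

def run_machine(x):
    """Toy randomized machine (hypothetical, not from the paper).
    Returns (accept, postselect) for one probabilistic run."""
    r = random.random()
    if x == "in-language":
        return (r < 0.0006, r < 0.001)   # Pr[accept | postselect] = 0.6
    return (r < 0.0004, r < 0.001)       # Pr[accept | postselect] = 0.4

def postselected_accept_prob(x, trials=500_000):
    """Estimate Pr[accept | postselect] by rejection sampling:
    keep only the runs in which the postselection flag is raised."""
    accepted = surviving = 0
    for _ in range(trials):
        acc, post = run_machine(x)
        if post:
            surviving += 1
            accepted += acc
    return accepted / surviving if surviving else 0.0

print(postselected_accept_prob("in-language"))      # ~0.6, bounded away from 1/2
print(postselected_accept_prob("not-in-language"))  # ~0.4
```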
Classical and quantum Merlin-Arthur automata
We introduce Merlin-Arthur (MA) automata, in which Merlin provides a single certificate that Arthur scans before reading the input. We define
Merlin-Arthur deterministic, probabilistic, and quantum finite state automata
(resp., MA-DFAs, MA-PFAs, MA-QFAs) and postselecting MA-PFAs and MA-QFAs
(resp., MA-PostPFA and MA-PostQFA). We obtain several results using different
certificate lengths.
We show that MA-DFAs use constant-length certificates and are equivalent to multi-entry DFAs. Thus, they recognize all and only regular languages, but they can be exponentially and polynomially more state-efficient than DFAs over binary and unary languages, respectively. With sublinear-length certificates, MA-PFAs can
recognize several nonstochastic unary languages with cutpoint 1/2. With linear-length certificates, MA-PostPFAs recognize the same nonstochastic unary languages with bounded error. With arbitrarily long certificates, bounded-error MA-PostPFAs verify every decidable unary language. With sublinear-length certificates, bounded-error MA-PostQFAs verify several nonstochastic unary languages. With linear-length certificates, they can verify every unary language and some NP-complete binary languages. With exponential-length certificates, they can verify every binary language.
Comment: 14 pages
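The equivalence between MA-DFAs and multi-entry DFAs is easy to visualize: a constant-length certificate carries no more information than a choice among finitely many start states. Below is a minimal sketch under that reading, for a toy language of our own choosing (words whose length is divisible by 2 or by 3); the multi-entry automaton uses five states, whereas a single-entry DFA would count length mod 6 with six.

```python
# A Merlin-Arthur DFA viewed as a multi-entry DFA: Merlin's constant-length
# certificate merely selects the entry state, after which Arthur runs an
# ordinary DFA on the input. Toy language (our choice, not from the paper):
# words whose length is divisible by 2 or by 3.

DELTA = {  # two disjoint cycles: p* counts length mod 2, q* counts mod 3
    "p0": "p1", "p1": "p0",
    "q0": "q1", "q1": "q2", "q2": "q0",
}
ENTRIES = {"even": "p0", "triple": "q0"}  # certificate -> entry state
ACCEPT = {"p0", "q0"}

def arthur_runs(certificate, word):
    state = ENTRIES[certificate]
    for _ in word:  # transitions here depend only on input length
        state = DELTA[state]
    return state in ACCEPT

def ma_dfa_verifies(word):
    """Accept iff some certificate convinces Arthur; soundness holds
    because a word outside the language fails from every entry state."""
    return any(arthur_runs(c, word) for c in ENTRIES)

for n in range(8):
    print(n, ma_dfa_verifies("a" * n))  # True iff n % 2 == 0 or n % 3 == 0
```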
Randomness, information encoding, and shape replication in various models of DNA-inspired self-assembly
Self-assembly is the process by which simple, unorganized components autonomously combine to form larger, more complex structures. Researchers are turning to self-assembly for the design of ever smaller, more complex, and more precise nanoscale devices, and it is emerging as a fundamental tool for nanotechnology.
We introduce the robust random number generation problem, the problem of encoding a target string of bits in the form of a bit-string pad, and the problem of shape replication in various models of tile-based self-assembly. We also present preliminary results in each of these directions, along with a discussion of possible directions for future work.
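As a rough illustration of the random number generation idea (our simplification, not one of the constructions studied here): if two tile types both match the glue exposed at the frontier of a growing assembly and are present at equal concentrations, each attachment is effectively a fair coin flip, so a completed length-n assembly encodes a uniform random n-bit string.

```python
import random

def assemble_random_bits(n, seed=None):
    """Grow a 1-D assembly left to right. At each frontier site the exposed
    glue matches both a 0-tile and a 1-tile; with equal tile concentrations
    the attachment choice is a fair coin flip, so the finished assembly
    encodes a uniform random n-bit string."""
    rng = random.Random(seed)
    assembly = []
    for _ in range(n):
        tile = rng.choice(["0-tile", "1-tile"])  # competing attachments
        assembly.append(tile[0])                 # the tile's bit label
    return "".join(assembly)

print(assemble_random_bits(16))
```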
Military Space Mission Design and Analysis in a Multi-Body Environment: An Investigation of High-Altitude Orbits as Alternative Transfer Paths, Parking Orbits for Reconstitution, and Unconventional Mission Orbits
High-altitude satellite trajectories are analyzed in the Earth-Moon circular restricted three-body problem. The equations of motion for this dynamical model possess no known closed-form analytical solution; therefore, numerical methods are employed. To gain insight into the dynamics of high-altitude trajectories in this multi-body dynamical environment, periapsis Poincaré maps are generated at particular values of the Jacobi constant. These maps are employed as visual aids to generate initial guesses for orbital transfers and to assess the predictability of the long-term behavior of a spacecraft's trajectory. Results of the current investigation demonstrate that high-altitude transfers may be performed for comparable, and in some cases lower, ΔV than conventional transfers. Additionally, transfers are found that are more timely than a launch-on-demand capability requiring 30 days of lead time. The ability of satellites in such orbits to provide remote sensing coverage of the surface of the Earth is also assessed and found to be low relative to that of a satellite at geostationary altitude (35,786 km); however, intervals of high performance exist. The current investigation demonstrates not only the potential utility of high-altitude satellite trajectories for military applications but also an effective implementation of methods from dynamical systems theory.
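For reference, a minimal numerical sketch of the dynamical model described above: the planar circular restricted three-body equations of motion in the rotating frame, in nondimensional units with an assumed Earth-Moon mass ratio, integrated with an off-the-shelf solver. The initial state is an arbitrary test value, not a trajectory from the study; the point of the printout is that the Jacobi constant stays fixed along the path, which is what makes it a natural index for periapsis Poincaré maps.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.012150585  # Earth-Moon mass ratio, nondimensional (assumed value)

def crtbp_eom(t, s, mu=MU):
    """Planar CR3BP in the rotating frame:
        xddot - 2*ydot = dU/dx,   yddot + 2*xdot = dU/dy,
    with pseudo-potential U = (x^2 + y^2)/2 + (1-mu)/r1 + mu/r2."""
    x, y, xd, yd = s
    r1 = np.hypot(x + mu, y)        # distance to Earth at (-mu, 0)
    r2 = np.hypot(x - 1 + mu, y)    # distance to Moon at (1-mu, 0)
    Ux = x - (1 - mu)*(x + mu)/r1**3 - mu*(x - 1 + mu)/r2**3
    Uy = y - (1 - mu)*y/r1**3 - mu*y/r2**3
    return [xd, yd, 2*yd + Ux, -2*xd + Uy]

def jacobi_constant(s, mu=MU):
    """C = 2U - v^2; constant along any trajectory of the model."""
    x, y, xd, yd = s
    r1, r2 = np.hypot(x + mu, y), np.hypot(x - 1 + mu, y)
    U = (x**2 + y**2)/2 + (1 - mu)/r1 + mu/r2
    return 2*U - (xd**2 + yd**2)

s0 = [0.5, 0.0, 0.0, 0.8]  # arbitrary test state, not from the study
sol = solve_ivp(crtbp_eom, (0.0, 10.0), s0, rtol=1e-10, atol=1e-12)
print(jacobi_constant(s0), jacobi_constant(sol.y[:, -1]))  # nearly equal
```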
Data-Driven Programming Abstractions and Optimization for Multi-Core Platforms
Multi-core platforms have spread to all corners of the computing industry, and trends in design and power indicate that the shift to multi-core will become even more widespread in the future. As the number of cores on a chip rises, the complexity of memory systems and on-chip interconnects increases drastically. The programmer inherits this complexity in the form of new responsibilities for task decomposition, synchronization, and data movement within an application, which hitherto have been concealed by complex processing pipelines or deemed unimportant since tasks were largely executed sequentially. To some extent, the need for explicit parallel programming is inevitable, due to limits in the instruction-level parallelism that can be automatically extracted from a program. However, these challenges create a great opportunity for the development of new programming abstractions that hide the low-level architectural complexity while exposing intuitive high-level mechanisms for expressing parallelism. Many models of parallel programming fall into the category of data-centric models, where the structure of an application depends on the role of data and communication in the relationships between tasks. The utilization of the inter-core communication networks and effective scaling to large data sets are decidedly important in designing efficient implementations of parallel applications. The questions of how many low-level architectural details should be exposed to the programmer, and how much parallelism in an application a programmer should expose to the compiler, remain open-ended, with different answers depending on the architecture and the application in question. I propose that the key to unlocking the capabilities of multi-core platforms is the development of abstractions and optimizations that match the patterns of data movement in applications with the inter-core communication capabilities of the platforms. After a comparative analysis that confirms and stresses the importance of finding a good match between the programming abstraction, the application, and the architecture, this dissertation proposes two techniques that showcase the power of leveraging data dependency patterns in parallel performance optimizations. Flexible Filters dynamically balance load in stream programs by creating flexibility in the runtime data flow through the addition of redundant stream filters. This technique combines a static mapping with dynamic flow control to achieve lightweight, distributed, and scalable throughput optimization. The properties of stream communication, i.e., FIFO pipes, enable flexible filters by exposing the backpressure dependencies between tasks. Next, I present Huckleberry, a novel recursive programming abstraction that allows programmers to expose data locality in divide-and-conquer algorithms at a high level of abstraction. Huckleberry automatically converts sequential recursive functions with explicit data partitioning into parallel implementations that can be ported across changes in the underlying architecture, including the number of cores and the amount of on-chip memory. I then present a performance model for multi-core applications which provides an efficient means to evaluate the trade-offs between the computational and communication requirements of applications together with the hardware resources of a target multi-core architecture.
The model encompasses all data-driven abstractions that can be reduced to a task graph representation and is extensible to performance techniques, such as Flexible Filters, that alter an application's original task graph. Flexible Filters and Huckleberry address the challenges of parallel programming on multi-core architectures by taking advantage of properties specific to the stream and recursive paradigms, and the performance model creates a unifying framework based on the communication between tasks in parallel applications. Combined, these contributions demonstrate that specialization with respect to communication patterns enhances the ability of parallel programming abstractions and optimizations to harness the power of multi-core platforms.
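Huckleberry's actual interface is not reproduced here; the sketch below only illustrates the general pattern it automates, a sequential recursive function with explicit data partitioning (merge sort, chosen as a stand-in example) converted into a parallel implementation whose partitioning tracks the number of cores.

```python
from concurrent.futures import ProcessPoolExecutor

def merge(left, right):
    """Standard two-way merge of sorted lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def mergesort_seq(data):
    """The sequential recursive function with explicit data partitioning."""
    if len(data) <= 1:
        return data
    mid = len(data) // 2
    return merge(mergesort_seq(data[:mid]), mergesort_seq(data[mid:]))

def mergesort_par(data, workers=4):
    """Parallel version: partition once per worker, sort the partitions
    on separate cores, then merge the sorted results."""
    chunk = max(1, len(data) // workers)
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        sorted_parts = list(pool.map(mergesort_seq, parts))
    result = []
    for part in sorted_parts:
        result = merge(result, part)
    return result

if __name__ == "__main__":
    import random
    xs = [random.random() for _ in range(50_000)]
    assert mergesort_par(xs) == sorted(xs)
```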
Concurrent Probabilistic Simulation of High Temperature Composite Structural Response
A computational structural/material analysis and design tool that would meet industry's future demand for expedience and reduced cost is presented. This unique software, GENOA, is dedicated to parallel and high-speed analysis for the probabilistic evaluation of the high-temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives achieved in the development were: (1) utilization of the power of parallel processing and static/dynamic load-balancing optimization to make the complex simulation of the structure, material, and processing of high-temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics, and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism and to increase convergence rates through high- and low-level processor assignment; (4) creation of a framework for a portable parallel architecture spanning machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid, and distributed workstation-type computers; and (5) market evaluation. The results of the Phase 2 effort provide a good basis for continuation and warrant a Phase 3 government and industry partnership.
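The computational pattern at the heart of this description, Monte Carlo sampling of uncertain inputs split statically across processors, can be sketched as follows. The response function, input distributions, and failure threshold are toy assumptions for illustration and bear no relation to GENOA's actual micromechanics models.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def composite_response(sample):
    """Toy limit-state: 'fails' when thermal degradation drops strength
    below a threshold. sample = (fiber_strength, temperature); both are
    hypothetical stand-ins, not GENOA quantities."""
    strength, temp = sample
    degraded = strength * (1.0 - 0.002 * (temp - 20.0))  # toy degradation law
    return degraded < 300.0                               # True = failure

def worker(n, seed):
    """One processor's statically assigned share of the samples."""
    rng = np.random.default_rng(seed)
    strength = rng.normal(500.0, 50.0, n)  # MPa, assumed distribution
    temp = rng.normal(150.0, 30.0, n)      # deg C, assumed distribution
    return sum(composite_response(s) for s in zip(strength, temp))

if __name__ == "__main__":
    n_total, n_proc = 400_000, 4
    per = n_total // n_proc  # static load balancing: equal shares per core
    with ProcessPoolExecutor(max_workers=n_proc) as pool:
        fails = sum(pool.map(worker, [per] * n_proc, range(n_proc)))
    print("estimated failure probability:", fails / (per * n_proc))
```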
Frontiers of Membrane Computing: Open Problems and Research Topics
This is a list of open problems and research topics collected after the Twelfth Conference on Membrane Computing, CMC 2011 (Fontainebleau, France, 23-26 August 2011), meant initially to be working material for the Tenth Brainstorming Week on Membrane Computing, Sevilla, Spain (January 30 - February 3, 2012). The result was circulated in several versions before the brainstorming and then modified according to the discussions held in Sevilla and to the progress made during the meeting. In its present form, the list gives a picture of the key research directions currently active in membrane computing.