The RAM equivalent of P vs. RP
One of the fundamental open questions in computational complexity is whether
the class of problems solvable by use of stochasticity under the Random
Polynomial time (RP) model is larger than the class of those solvable in
deterministic polynomial time (P). However, this question is only open for
Turing Machines, not for Random Access Machines (RAMs).
Simon (1981) was able to show that for a sufficiently equipped Random Access
Machine, the ability to switch states nondeterministically does not entail any
computational advantage. However, in the same paper, Simon describes a
different (and arguably more natural) scenario for stochasticity under the RAM
model. According to Simon's proposal, instead of receiving a new random bit at
each execution step, the RAM program is able to execute the pseudofunction
RAND(y), which returns a uniformly distributed random integer in the
range {0, ..., y-1}. Whether the ability to allot a random integer in this fashion is
more powerful than the ability to allot a random bit remained an open question
for the last 30 years.
In this paper, we close Simon's open problem by fully characterising the
class of languages recognisable in polynomial time by each of the RAMs
regarding which the question was posed. We show that for some of these,
stochasticity entails no advantage, but, more interestingly, we show that for
others it does.
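As a minimal sketch of the two randomness models being contrasted (the name RAND and the rejection-sampling simulation are illustrative assumptions, not the paper's formal RAM definitions):

```python
import random

# Model 1: the program receives one fresh random bit per execution step.
def coin_flip() -> int:
    """A single uniformly random bit, the only stochastic primitive."""
    return random.randrange(2)

# Model 2 (Simon's proposal): one call to RAND(y) returns a uniformly
# distributed integer in {0, ..., y-1} in a single step.
def RAND(y: int) -> int:
    return random.randrange(y)

# Simulating RAND(y) from coin flips costs about log2(y) bits per draw,
# plus rejection when y is not a power of two.
def rand_from_bits(y: int) -> int:
    while True:
        value, bound = 0, 1
        while bound < y:
            value = 2 * value + coin_flip()
            bound *= 2
        if value < y:           # reject out-of-range values to stay uniform
            return value

if __name__ == "__main__":
    print(RAND(10), rand_from_bits(10))
```

The contrast is visible here: a single RAND(y) call draws as much entropy as roughly log2(y) coin flips, and whether that one-step draw buys any polynomial-time recognition power is exactly the question the paper settles.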
A software system for laboratory experiments in image processing
Laboratory experiments for image processing courses are usually software implementations of processing algorithms, but students of image processing come from diverse backgrounds with widely differing software experience. To avoid learning overhead, the software system should be easy to learn and use, even for those with no exposure to mathematical programming languages or object-oriented programming. The class library for image processing (CLIP) supports users with knowledge of C by providing three C++ types with small public interfaces, including natural and efficient operator overloading. CLIP programs are compact and fast. Experience in using the system in undergraduate and graduate teaching indicates that it supports subject-matter learning with little distraction from language/system learning.
Experimental Bayesian Quantum Phase Estimation on a Silicon Photonic Chip
Quantum phase estimation is a fundamental subroutine in many quantum
algorithms, including Shor's factorization algorithm and quantum simulation.
However, results so far have cast doubt on its practicability for near-term,
non-fault-tolerant quantum devices. Here we report experimental results
demonstrating that this intuition need not be true. We implement a recently
proposed adaptive Bayesian approach to quantum phase estimation and use it to
simulate molecular energies on a silicon quantum photonic device. The approach
is verified to be well suited for pre-threshold quantum processors by
investigating its superior robustness to noise and decoherence compared to the
iterative phase estimation algorithm. This shows a promising route to unlocking
the power of quantum phase estimation much sooner than previously believed.
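As an illustrative sketch of the approach (a grid-based Bayesian update with the textbook likelihood P(0 | phi; m, theta) = (1 + cos(m*phi + theta))/2; the adaptive experiment-design heuristic below is an assumption, not the paper's protocol):

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_PHASE = 1.234                      # unknown phase the device encodes

def prob_zero(phi, m, theta):
    """Textbook likelihood of measuring outcome 0 given phase phi."""
    return (1.0 + np.cos(m * phi + theta)) / 2.0

# Discretised uniform prior over phi in [0, 2*pi).
grid = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
posterior = np.full_like(grid, 1.0 / grid.size)

for _ in range(60):
    # Adaptive design (simplified heuristic): measure longer as the
    # posterior sharpens, centring theta on the current circular mean.
    mu = np.sum(posterior * np.exp(1j * grid))
    sigma = np.sqrt(max(-2.0 * np.log(max(np.abs(mu), 1e-12)), 1e-12))
    m = min(100, max(1, int(np.ceil(1.0 / sigma))))
    theta = -np.angle(mu)

    # Simulate one shot on the device, then apply Bayes' rule.
    measured_zero = rng.random() < prob_zero(TRUE_PHASE, m, theta)
    likelihood = prob_zero(grid, m, theta)
    posterior *= likelihood if measured_zero else 1.0 - likelihood
    posterior /= posterior.sum()

estimate = np.angle(np.sum(posterior * np.exp(1j * grid))) % (2.0 * np.pi)
print(f"estimate: {estimate:.4f}   true: {TRUE_PHASE:.4f}")
```

Because each shot updates the full posterior, noise tends to show up as a broader posterior rather than a wrong answer, which is the robustness property the experiment probes against iterative phase estimation.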
Consensus clustering and functional interpretation of gene-expression data
Microarray analysis using clustering algorithms can suffer from lack of inter-method consistency in assigning related gene-expression profiles to clusters. Obtaining a consensus set of clusters from a number of clustering methods should improve confidence in gene-expression analysis. Here we introduce consensus clustering, which provides such an advantage. When coupled with a statistically based gene functional analysis, our method allowed the identification of novel genes regulated by NFκB and the unfolded protein response in certain B-cell lymphomas.
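As a minimal sketch of the consensus idea (using a co-association matrix over repeated k-means runs, a standard construction rather than the specific method introduced here):

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs

# Toy stand-in for expression profiles: rows = genes, columns = conditions.
X, _ = make_blobs(n_samples=120, n_features=8, centers=3, random_state=0)
n, k, runs = len(X), 3, 20

# Co-association matrix: fraction of runs in which two profiles co-cluster.
coassoc = np.zeros((n, n))
for seed in range(runs):
    labels = KMeans(n_clusters=k, n_init=5, random_state=seed).fit_predict(X)
    coassoc += labels[:, None] == labels[None, :]
coassoc /= runs

# Consensus partition: cluster the (1 - co-association) distance matrix.
consensus = AgglomerativeClustering(
    n_clusters=k, metric="precomputed", linkage="average"
).fit_predict(1.0 - coassoc)
print(np.bincount(consensus))           # sizes of the consensus clusters
```

Profiles that land in the same cluster across most runs stay together in the consensus partition, which is the confidence gain the abstract argues for.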
Parallel H.263 Encoder in Normal Coding Mode
A parallel H.263 video encoder, which utilises spatial parallelism,
has been modelled using a multi-threaded program. Spatial
parallelism is a technique where an image is subdivided into equal
parts (as far as physically possible) and each part is processed by
a separate processor computing motion and texture coding, with
each processor acting on a different part of the image.
This method leads to a performance increase which is roughly in
proportion to the number of parallel processors used.
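A minimal sketch of spatial parallelism in this sense (the strip decomposition and thread pool are generic stand-ins, not the paper's encoder):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def encode_strip(strip: np.ndarray) -> bytes:
    """Stand-in for per-strip motion estimation and texture coding."""
    # A real encoder would run block matching and DCT/quantisation here;
    # this placeholder just returns a slice of the strip's raw bytes.
    return strip.tobytes()[:64]

def encode_frame(frame: np.ndarray, workers: int = 4) -> list:
    # Subdivide the frame into near-equal horizontal strips, one per worker.
    strips = np.array_split(frame, workers, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode_strip, strips))

frame = np.random.randint(0, 256, size=(288, 352), dtype=np.uint8)  # CIF frame
print(len(encode_frame(frame)), "strips encoded")
```

The roughly linear speedup the abstract reports follows from this decomposition when each strip runs on its own processor; in CPython specifically, the thread pool only illustrates the pattern, since the global interpreter lock serialises pure-Python work.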
- …