Zeno machines and hypercomputation
This paper reviews the Church-Turing Thesis (or rather, theses) with
reference to their origin and application and considers some models of
"hypercomputation", concentrating on perhaps the most straight-forward option:
Zeno machines (Turing machines with accelerating clock). The halting problem is
briefly discussed in a general context and the suggestion that it is an
inevitable companion of any reasonable computational model is emphasised. It is
hinted that claims to have "broken the Turing barrier" could be toned down and
that the important and well-founded role of Turing computability in the
mathematical sciences stands unchallenged.
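As a one-line gloss of the accelerating clock (our illustration, not the
paper's notation): if the machine performs its n-th Turing step in 2^{-n}
seconds, then infinitely many steps fit into finite external time, since

    \sum_{n=1}^{\infty} 2^{-n} = 1,

so a Zeno machine would, in principle, observe the entire (possibly
non-halting) run of an ordinary Turing machine within one second of external
time, which is what gives the claimed solution to the halting problem its
force.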
Efficiency Theory: a Unifying Theory for Information, Computation and Intelligence
The paper serves as the first contribution towards the development of the
theory of efficiency: a unifying framework for the currently disjoint theories
of information, complexity, communication and computation. Recognizing the
defining role of the brute-force approach in the fundamental concepts of all
the above-mentioned fields, the paper suggests using efficiency, i.e.
improvement over the brute-force algorithm, as a common unifying factor
necessary for the creation of a unified theory of information manipulation. By
defining such diverse terms as randomness, knowledge, intelligence and
computability in terms of a common denominator, we are able to bring together
contributions from Shannon, Levin, Kolmogorov, Solomonoff, Chaitin, Yao and
many others under the common umbrella of efficiency theory.
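As a toy illustration of the proposed common denominator (our example, not
the paper's): efficiency can be read as the measurable gap between a
brute-force solver and a cleverer algorithm for the same task.

    import bisect
    import random
    import time

    # Task: membership queries on a sorted list of n integers.
    n = 10_000
    data = sorted(random.sample(range(10 * n), n))
    queries = [random.randrange(10 * n) for _ in range(100)]

    def brute_force(xs, q):
        # The baseline every field implicitly measures against: O(n) scan.
        return any(x == q for x in xs)

    def efficient(xs, q):
        # O(log n) binary search.
        i = bisect.bisect_left(xs, q)
        return i < len(xs) and xs[i] == q

    t0 = time.perf_counter()
    for q in queries:
        brute_force(data, q)
    t_brute = time.perf_counter() - t0

    t0 = time.perf_counter()
    for q in queries:
        efficient(data, q)
    t_eff = time.perf_counter() - t0

    print(f"improvement over brute force: {t_brute / t_eff:.0f}x")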
Construction of an NP Problem with an Exponential Lower Bound
In this paper we present the Hashed-Path Traveling Salesperson Problem
(HPTSP), a new type of problem with the interesting property of having no
polynomial-time solutions. Next we show that HPTSP is in the class NP by
demonstrating that local information about sub-routes is insufficient to
compute the complete value of each route. As a consequence, via Ladner's
theorem, we show that the class NPI is non-empty.
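A minimal sketch of the hashed-route idea as we read it (the hash choice and
scoring are our assumptions, not the paper's construction): when a route's
value is derived from a cryptographic hash of the entire vertex sequence,
sub-route values no longer compose, so dynamic-programming shortcuts gain
nothing and exhaustive search is the only obvious strategy.

    import hashlib
    from itertools import permutations

    def route_value(route):
        # The value depends on a hash of the *whole* path: knowing the
        # values of sub-routes reveals nothing about the full route.
        digest = hashlib.sha256("-".join(map(str, route)).encode()).hexdigest()
        return int(digest, 16) % 10_000

    cities = range(7)
    best = min(permutations(cities), key=route_value)
    print(best, route_value(best))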
Revisiting clustering as matrix factorisation on the Stiefel manifold
This paper studies clustering for possibly high-dimensional data (e.g. images, time series, gene expression data, and many other settings) and rephrases it as low-rank matrix estimation in the PAC-Bayesian framework. Our approach leverages the well-known Burer-Monteiro factorisation strategy from large-scale optimisation, in the context of low-rank estimation. Moreover, our Burer-Monteiro factors are shown to lie on a Stiefel manifold. We propose a new generalized Bayesian estimator for this problem and prove novel prediction bounds for clustering. We also devise a componentwise Langevin sampler on the Stiefel manifold to compute this estimator.
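A minimal numpy sketch of the geometric ingredient (our simplification; the
objective, step size, noise scale and QR retraction are assumptions, not the
paper's estimator): a Euclidean Langevin step followed by a retraction keeps
the iterate on the Stiefel manifold St(n, k), i.e. U^T U = I_k.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 50, 3                     # n points, k clusters
    X = rng.normal(size=(n, n))
    M = X @ X.T                      # toy similarity matrix to factorise

    def qr_retraction(U):
        # Project back onto the Stiefel manifold: U^T U = I_k.
        Q, R = np.linalg.qr(U)
        return Q * np.sign(np.diag(R))   # fix column signs for uniqueness

    U = qr_retraction(rng.normal(size=(n, k)))
    step, temp = 1e-3, 1e-2
    for _ in range(200):
        grad = -2 * M @ U            # gradient of -trace(U^T M U), a
                                     # Burer-Monteiro-style objective
        noise = np.sqrt(2 * step * temp) * rng.normal(size=(n, k))
        U = qr_retraction(U - step * grad + noise)   # Langevin step + retraction

    print(np.allclose(U.T @ U, np.eye(k)))   # iterate stays on the manifold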
Cryptanalysis of RSA: Integer Prime Factorization Using Genetic Algorithms
In recent years, researchers have been exploring alternative methods for solving Integer Prime Factorization, the decomposition of an integer into its prime factors. This has direct application to the cryptanalysis of RSA, as one means of breaking such a cryptosystem requires the factorization of a large number that is the product of two primes. This paper applies three different genetic algorithms to this problem, utilizing mathematical knowledge about the distribution of primes to improve the algorithms. The best of the three genetic algorithms has a chromosome that represents m in the equation prime = 6m ± 1, and is able to factor a number of up to 22 decimal digits. This is a significantly larger number than the largest factored by comparable methods in earlier work. This leads to the conclusion that approaches such as genetic algorithms are a promising avenue of research into the problem of integer factorization.
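A minimal sketch of the 6m ± 1 encoding (a toy, mutation-only search of our
own; the population size, mutation scheme and fitness are assumptions, not
the paper's algorithms): since every prime above 3 has the form 6m ± 1, the
chromosome can be the integer m rather than the candidate factor itself.

    import random

    N = 10007 * 10009                   # toy semiprime to factor
    LIMIT = int(N ** 0.5) // 6 + 1      # m need only reach sqrt(N) / 6

    def fitness(m):
        # Remainder of N against both 6m - 1 and 6m + 1; zero means a factor.
        return min(N % (6 * m - 1), N % (6 * m + 1))

    population = [random.randrange(1, LIMIT) for _ in range(200)]
    for generation in range(1000):
        population.sort(key=fitness)
        m = population[0]
        if fitness(m) == 0:
            p = 6 * m - 1 if N % (6 * m - 1) == 0 else 6 * m + 1
            print(f"found factor {p} in generation {generation}")
            break
        survivors = population[:50]
        # Refill the population by jittering randomly chosen survivors.
        population = survivors + [
            max(1, s + random.randrange(-50, 51))
            for s in random.choices(survivors, k=150)
        ]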
An Accuracy-Assured Privacy-Preserving Recommender System for Internet Commerce
Recommender systems, tools for predicting users' potential preferences from
historical data and user interests, are of increasing importance in various
Internet applications such as online shopping. As a well-known recommendation
method, neighbourhood-based collaborative filtering has attracted
considerable attention recently, and the risk of revealing users' private
information during the filtering process has attracted noticeable research
interest. Among the current solutions, probabilistic techniques have shown a
powerful privacy-preserving effect. However, when facing the nearest-neighbour
(NN) attack, all existing methods provide no data-utility guarantee, owing to
the introduction of global randomness. In this paper, to overcome the problem
of recommendation accuracy loss, we propose a novel approach, Partitioned
Probabilistic Neighbour Selection, to ensure a required prediction accuracy
while maintaining high security against the NN attack. We define the sum of
the neighbours' similarities as the accuracy metric alpha, and the number of
user partitions across which we select the neighbours as the security metric
beta. We generalise the NN attack to a beta-k-nearest-neighbours attack.
Differing from the existing approach, which selects neighbours at random
across the entire candidate list, our method selects neighbours from each
exclusive partition of a given size with a decreasing probability. Theoretical
and experimental analysis shows that, to provide an accuracy-assured
recommendation, our Partitioned Probabilistic Neighbour Selection method
yields a better trade-off between recommendation accuracy and system
security.
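A minimal sketch of the selection rule as described (the partitioning, decay
schedule and toy similarity are our assumptions, not the paper's exact
scheme): rank the candidates by similarity, split them into beta exclusive
partitions, and draw neighbours from each partition with a probability that
decreases for less-similar partitions.

    import random

    def partitioned_neighbour_selection(candidates, similarity, k, beta, seed=0):
        """Pick k neighbours from beta exclusive partitions, favouring
        more-similar partitions with geometrically decreasing weights."""
        rng = random.Random(seed)
        ranked = sorted(candidates, key=similarity, reverse=True)
        size = -(-len(ranked) // beta)                   # ceiling division
        partitions = [ranked[i:i + size] for i in range(0, len(ranked), size)]
        weights = [2.0 ** -j for j in range(len(partitions))]  # assumed decay
        chosen = []
        while len(chosen) < k:
            partition = rng.choices(partitions, weights=weights)[0]
            pick = rng.choice(partition)
            if pick not in chosen:
                chosen.append(pick)
        return chosen

    users = list(range(100))
    sim = lambda u: 1.0 / (1 + u)    # toy similarity: lower id, more similar
    print(partitioned_neighbour_selection(users, sim, k=10, beta=5))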
Personal mobile grids with a honeybee inspired resource scheduler
This thesis was submitted for the degree of Doctor of Philosophy and awarded
by Brunel University. The overall aim of the thesis has been to introduce
Personal Mobile Grids (PM-Grids) as a novel paradigm in grid computing that
scales grid infrastructures to mobile devices and extends grid entities to
individual personal users. In this thesis, architectural designs as well as
simulation models for PM-Grids are developed.
The core of any grid system is its resource scheduler. However, virtually all current conventional grid schedulers do not address the non-clairvoyant scheduling problem, where job information is not available before the end of execution. This thesis therefore proposes a honeybee-inspired resource scheduling heuristic for PM-Grids (HoPe), incorporating a radical approach to grid resource scheduling to tackle this problem. A detailed design and implementation of HoPe, with a decentralised self-management and adaptive policy, are presented.
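A minimal sketch of a honeybee-style allocation rule (our reading of the
foraging metaphor; HoPe's actual policy and parameters are specified in the
thesis, not reproduced here): scout jobs probe resources blindly, and
follower jobs are then recruited to resources in proportion to the
profitability the scouts observed, mirroring the waggle dance.

    import random

    def honeybee_schedule(jobs, resources, scout_fraction=0.2, seed=0):
        # Non-clairvoyant: service times are measured after the fact,
        # never predicted in advance.
        rng = random.Random(seed)
        n_scouts = max(1, int(len(jobs) * scout_fraction))
        profitability = {r: 0.0 for r in resources}
        assignment = {}
        for job in jobs[:n_scouts]:                  # scouting phase
            r = rng.choice(resources)
            service_time = rng.uniform(0.5, 2.0) * r
            profitability[r] += 1.0 / service_time   # "waggle dance" report
            assignment[job] = r
        weights = [profitability[r] for r in resources]
        for job in jobs[n_scouts:]:                  # recruitment phase
            assignment[job] = rng.choices(resources, weights=weights)[0]
        return assignment

    jobs = [f"job{i}" for i in range(20)]
    resources = [1, 2, 4]                            # relative node slowness
    print(honeybee_schedule(jobs, resources))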
Among the other main contributions are a comprehensive taxonomy of grid systems and a detailed analysis of the honeybee colony and its nectar acquisition process (NAP) from the resource scheduling perspective, which, to the best of our knowledge, have not been presented in any previous work.
PM-Grid designs and the HoPe implementation were evaluated thoroughly through a strictly controlled empirical evaluation framework, with a well-established heuristic in high-throughput computing, the opportunistic scheduling heuristic (OSH), as a benchmark algorithm. Comparisons with optimal values and worst bounds were conducted to gain a clear insight into HoPe's behaviour, in terms of stability, throughput, turnaround time and speedup, under different running conditions of number of jobs and grid scales.
Experimental results demonstrate the superiority of HoPe, which successfully maintained optimum stability and throughput in more than 95% of the experiments, performing three times better than the OSH under extremely heavy loads. Regarding turnaround time and speedup, HoPe effectively achieved less than 50% of the turnaround time incurred by the OSH, while doubling its speedup in more than 60% of the experiments.
These results indicate the potential of both PM-Grids and HoPe in realising futuristic grid visions. Therefore, the deployment of PM-Grids in real-life scenarios and the utilisation of HoPe in other parallel processing and high-throughput computing systems are recommended.