Completeness Results for Parameterized Space Classes
The parameterized complexity of a problem is considered "settled" once it has
been shown to lie in FPT or to be complete for a class in the W-hierarchy or a
similar parameterized hierarchy. Several natural parameterized problems have,
however, resisted such a classification. In at least some cases, the reason is
that recently obtained upper and lower bounds on their parameterized space
complexity rule out completeness results for parameterized time classes. In
this paper, we make progress in this direction by proving that
the associative generability problem and the longest common subsequence problem
are complete for parameterized space classes. These classes are defined in
terms of different forms of bounded nondeterminism and in terms of simultaneous
time--space bounds. As a technical tool we introduce a "union operation" that
translates between problems complete for classical complexity classes and for
W-classes.
On Measuring Non-Recursive Trade-Offs
We investigate the phenomenon of non-recursive trade-offs between
descriptional systems in an abstract fashion. We aim at categorizing
non-recursive trade-offs by bounds on their growth rate, and show how to deduce
such bounds in general. We also identify criteria which, in the spirit of
abstract language theory, allow us to deduce non-recursive trade-offs from
effective closure properties of language families on the one hand, and
differences in the decidability status of basic decision problems on the other.
We develop a qualitative classification of non-recursive trade-offs in order to
obtain a better understanding of this very fundamental behaviour of
descriptional systems.
Computing with and without arbitrary large numbers
In the study of random access machines (RAMs) it has been shown that the
availability of an extra input integer, having no special properties other than
being sufficiently large, is enough to reduce the computational complexity of
some problems. However, this has only been shown so far for specific problems.
We provide a characterization of the power of such extra inputs for general
problems. To do so, we first correct a classical result by Simon and Szegedy
(1992) as well as one by Simon (1981). In the former, we point out mistakes in
the proof and correct them with an entirely new construction, without
substantially changing the results. In the latter, the original proof direction stands with only
minor modifications, but the new results are far stronger than those of Simon
(1981). In both cases, the new constructions provide the theoretical tools
required to characterize the power of arbitrary large numbers.
Small Universal Accepting Networks of Evolutionary Processors with Filtered Connections
In this paper, we present some results regarding the size complexity of
Accepting Networks of Evolutionary Processors with Filtered Connections
(ANEPFCs). We show that there are universal ANEPFCs of size 10, by devising a
method for simulating 2-Tag Systems. This result significantly improves the
previously known upper bound of 18 on the size of universal ANEPFCs.
We also propose a new, computationally and descriptionally efficient
simulation of nondeterministic Turing machines by ANEPFCs. More precisely, we
describe (informally, due to space limitations) how ANEPFCs with 16 nodes can
simulate in O(f(n)) time any nondeterministic Turing machine of time complexity
f(n). Thus the known upper bound for the number of nodes in a network
simulating an arbitrary Turing machine is decreased from 26 to 16.
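Since the universality result hinges on simulating 2-tag systems, the following minimal Python sketch recalls what such a system computes; the production table, halting symbol, and step limit below are illustrative choices, not taken from the paper.

```python
def run_2tag(word, productions, halt_symbol="H", max_steps=10_000):
    """Run a 2-tag system: read the first symbol, append its production to the
    end of the word, then delete the first two symbols.  Halts when the word
    starts with the halting symbol or becomes shorter than two symbols."""
    word = list(word)
    for _ in range(max_steps):
        if len(word) < 2 or word[0] == halt_symbol:
            return "".join(word)
        word.extend(productions[word[0]])  # append the production of the first symbol
        del word[:2]                       # deletion number of a 2-tag system
    raise RuntimeError("step limit reached")

# Toy example (illustrative rules, not from the paper); halts with "H".
print(run_2tag("aaaa", {"a": "b", "b": "H"}))
```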
Solving the subset-sum problem with a light-based device
We propose a special computational device which uses light rays for solving
the subset-sum problem. The device has a graph-like representation, and the
light traverses it by following the routes given by the connections between
nodes. The nodes are connected by arcs in a special way that lets us generate
all possible subsets of the given set. To each arc we assign either a number
from the given set or a predefined constant. When light passes through an arc,
it is delayed by the amount of time indicated by the number assigned to that
arc. At the destination node we check whether there is a ray whose total delay
equals the target value of the subset-sum problem (plus some constants).
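The arithmetic behind the device (though not its optics) can be simulated directly: every ray corresponds to one choice of arcs, and its accumulated delay is compared against the target shifted by the constant delays it picked up. A minimal brute-force sketch, with a hypothetical `skip_delay` standing in for the paper's predefined constant:

```python
from itertools import product

def light_subset_sum(numbers, target, skip_delay=1):
    """Simulate the delay bookkeeping of the device: for every element a ray
    either crosses an arc delayed by that number (element taken) or a bypass
    arc with a fixed delay (element skipped).  A ray whose total delay equals
    the target plus the accumulated bypass delays witnesses a subset-sum
    solution.  `skip_delay` is an illustrative stand-in for the device's
    predefined constant."""
    solutions = []
    for choices in product([0, 1], repeat=len(numbers)):      # one route per ray
        delay = sum(x if c else skip_delay for c, x in zip(choices, numbers))
        bypasses = choices.count(0)
        if delay == target + bypasses * skip_delay:           # arrival-time check
            solutions.append([x for c, x in zip(choices, numbers) if c])
    return solutions

print(light_subset_sum([3, 5, 9, 14], 23))   # -> [[9, 14]]
```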
Exact Cover with light
We suggest a new optical solution for solving the YES/NO version of the Exact
Cover problem by using the massive parallelism of light. The idea is to build
an optical device which can generate all possible solutions of the problem and
then pick the correct one. In our case the device has a graph-like
representation, and the light traverses it by following the routes given by
the connections between nodes. The nodes are connected by arcs in a special
way that lets us generate all possible covers (exact or not) of the given set.
To select the correct solution, we assign a special integer to each item of
the set to be covered. These numbers represent the delays induced on the light
as it passes through the arcs. A solution corresponds to a subray arriving at
a particular moment at the destination node, which tells us whether an exact
cover exists.
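The selection step can likewise be simulated with ordinary arithmetic: if each item gets a weight in a base larger than the number of sets, no carries can occur, so a collection of sets is an exact cover precisely when its total weight matches the weight of the whole universe. A minimal brute-force sketch of this idea (the concrete weights used by the optical device are not reproduced here):

```python
from itertools import combinations

def light_exact_cover(universe, sets):
    """Each item is weighted in a base larger than the number of sets, so the
    per-item digits of a total weight can never overflow; the total equals the
    weight of the universe iff every item is covered exactly once."""
    base = len(sets) + 1
    weight = {item: base ** i for i, item in enumerate(sorted(universe))}
    set_delay = [sum(weight[x] for x in s) for s in sets]
    target = sum(weight.values())                      # every item covered once
    for r in range(len(sets) + 1):
        for idx in combinations(range(len(sets)), r):  # one "subray" per choice of sets
            if sum(set_delay[i] for i in idx) == target:
                return [sets[i] for i in idx]          # exact cover found
    return None

U = {1, 2, 3, 4}
S = [{1, 2}, {3, 4}, {2, 3}, {4}]
print(light_exact_cover(U, S))   # -> [{1, 2}, {3, 4}]
```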
Computational capabilities of multilayer committee machines
We obtained an analytical expression for the computational complexity of multilayer committee machines with a finite number of hidden layers (L < 8), using the generalization complexity measure introduced by Franco et al. (2006, IEEE Trans. Neural Netw. 17 578). Although our result is valid in the large-size limit and for an ultrametric overlap synaptic matrix, it provides a useful tool for inferring the architecture a network must have in order to reproduce an arbitrary realizable Boolean function.
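The measure underlying this result rewards Boolean functions whose output changes on many neighbouring inputs. As a rough illustration only, the sketch below computes the fraction of Hamming-distance-1 input pairs on which a function flips its output, which is the ingredient of the first-order term of the generalization complexity measure of Franco et al.; the exact normalization and the higher-order terms are omitted here.

```python
from itertools import product

def boundary_fraction(f, n):
    """Fraction of Hamming-distance-1 input pairs on which the Boolean
    function f (on n inputs) changes its output.  Each unordered pair is
    counted twice in both numerator and denominator, so the ratio is
    unaffected."""
    differing = 0
    total = 0
    for x in product([0, 1], repeat=n):
        fx = f(x)
        for i in range(n):                           # flip one input bit
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            differing += fx != f(y)
            total += 1
    return differing / total

# Example: parity changes its output on every neighbouring pair of inputs.
parity = lambda x: sum(x) % 2
print(boundary_fraction(parity, 4))                  # -> 1.0
```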
Validation, Deployment, and Real-World Implementation of a Modular Toolbox for Alzheimer’s Disease Detection and Dementia Risk Reduction: The AD-RIDDLE Project
The Real-World Implementation, Deployment, and Validation of Early Detection Tools and Lifestyle Enhancement (AD-RIDDLE) project, recently launched with the support of the EU Innovative Health Initiative (IHI) public-private partnership and UK Research and Innovation (UKRI), aims to develop, test, and deploy a modular toolbox platform that can reduce existing barriers to timely detection of, and therapeutic approaches to, Alzheimer’s disease (AD), thus accelerating AD innovation. By focusing on health system and health worker practices, AD-RIDDLE seeks to improve and smooth AD management at and between each key step of the clinical pathway and across the disease continuum, from at-risk asymptomatic stages to early symptomatic ones. This includes innovation and improvement in AD awareness, risk reduction and prevention, detection, diagnosis, and intervention. The 24 partners in the AD-RIDDLE interdisciplinary consortium will develop and test the AD-RIDDLE toolbox platform and its components individually and in combination in six European countries. Expected results from this cross-sectoral research collaboration include tools for earlier detection and accurate diagnosis; validated, novel digital cognitive and blood-based biomarkers; and improved access to individualized preventative interventions (including multimodal interventions and symptomatic/disease-modifying therapies) across diverse populations, within the framework of precision medicine. Overall, the AD-RIDDLE toolbox platform will advance the management of AD, improving outcomes for patients and their families and reducing costs.