A motivational model of BCI-controlled heuristic search
Several researchers have proposed a new application for human augmentation: providing human supervision to autonomous artificial intelligence (AI) systems. In this paper, we introduce a framework to implement this proposal, which consists of using Brain–Computer Interfaces (BCI) to influence AI computation via some of its core algorithmic components, such as heuristic search. Our framework is based on a joint analysis of philosophical proposals characterising the behaviour of autonomous AI systems and recent research in cognitive neuroscience that supports the design of appropriate BCIs. It is defined as a motivational approach which, on the AI side, influences the shape of the solution produced by heuristic search using a BCI motivational signal reflecting the user’s disposition towards the anticipated result. The actual mapping is based on a measure of prefrontal asymmetry, which is translated into a non-admissible variant of the heuristic function. Finally, we discuss results from a proof-of-concept experiment using functional near-infrared spectroscopy (fNIRS) to capture prefrontal asymmetry and control the progression of AI computation on traditional heuristic search problems.
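One common way to obtain a non-admissible variant of a heuristic is to inflate it by a weight, as in weighted A*. The sketch below illustrates that general idea, with the weight `w` standing in for a hypothetical scalar derived from a BCI motivational signal; it is not the paper's actual mapping, only a minimal illustration of how such a signal could bias search.

```python
import heapq

def motivated_astar(start, goal, neighbors, h, w):
    """A* search with the non-admissible heuristic h' = (1 + w) * h.

    w >= 0 is a hypothetical motivational weight (e.g. derived from a
    prefrontal-asymmetry measure). w = 0 recovers plain A*; larger w
    makes the search greedier, trading optimality guarantees for
    faster progress toward the anticipated goal.
    """
    frontier = [(h(start) * (1 + w), 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue  # already reached this state more cheaply
        best_g[node] = g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + (1 + w) * h(nxt), g2, nxt, path + [nxt]))
    return None  # goal unreachable
```

Since `(1 + w) * h` may overestimate the remaining cost, the returned path is not guaranteed optimal, which is exactly the trade-off a motivational bias introduces.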
Evolving structure-function mappings in cognitive neuroscience using genetic programming
A challenging goal of psychology and neuroscience is to map cognitive functions onto neuroanatomical structures. This paper shows how computational methods based upon evolutionary algorithms can facilitate the search for satisfactory mappings by efficiently combining constraints from neuroanatomy and physiology (the structures) with constraints from behavioural experiments (the functions). This methodology involves the creation of a database coding for known neuroanatomical and physiological constraints, for mental programs made of primitive cognitive functions, and for typical experiments with their behavioural results. The evolutionary algorithms evolve theories mapping structures to functions in order to optimize the fit with the actual data. These theories lead to new, empirically testable predictions. The role of the prefrontal cortex in humans is discussed as an example. This methodology can be applied to the study of structures or functions alone, and can also be used to study other complex systems.
(This article does not exactly replicate the final version published in the Swiss Journal of Psychology. It is not a copy of the original published article and is not suitable for citation.)
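The core loop described above, evolving candidate mappings so that their fit with observed data improves, can be sketched as a toy genetic algorithm. Everything here is invented for illustration (the target assignment, genome encoding, and fitness function are hypothetical stand-ins for the paper's database-driven constraints):

```python
import random

# Hypothetical "ground truth": which of two functions each of four
# structures implements, as would be inferred from behavioural data.
TARGET = [1, 0, 1, 1]

def fitness(genome):
    """Count how many structure-to-function assignments match the data."""
    return sum(int(a == b) for a, b in zip(genome, TARGET))

def evolve(pop_size=20, generations=50, mutation=0.1, seed=0):
    """Evolve binary mapping genomes via selection, crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))  # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if rng.random() < mutation else g
                             for g in child])    # bit-flip mutation
        pop = parents + children
    return max(pop, key=fitness)
```

In the paper's setting the genome would encode a full structure-function theory and the fitness would compare simulated behavioural results against experimental ones; the skeleton of select, recombine, mutate, and re-evaluate is the same.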
Efficient Implementation of a Synchronous Parallel Push-Relabel Algorithm
Motivated by the observation that FIFO-based push-relabel algorithms are able
to outperform highest label-based variants on modern, large maximum flow
problem instances, we introduce an efficient implementation of the algorithm
that uses coarse-grained parallelism to avoid the problems of existing parallel
approaches. We demonstrate good relative and absolute speedups of our algorithm
on a set of large graph instances taken from real-world applications. On a
modern 40-core machine, our parallel implementation outperforms existing
sequential implementations by up to a factor of 12 and other parallel
implementations by factors of up to 3.
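For reference, a minimal sequential FIFO push-relabel routine looks like the sketch below. This is a textbook-style baseline, not the paper's coarse-grained parallel implementation; the dense adjacency-matrix representation is chosen for brevity and would not scale to the large instances the paper targets.

```python
from collections import deque

def max_flow(n, edges, s, t):
    """FIFO push-relabel: active vertices are discharged in queue order.

    n: number of vertices; edges: list of (u, v, capacity); returns
    the maximum s-t flow value.
    """
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = [[0] * n for _ in range(n)]
    height, excess = [0] * n, [0] * n
    height[s] = n                      # source starts at height n
    active = deque()
    for v in range(n):                 # saturate all source edges
        if cap[s][v] > 0:
            flow[s][v] = cap[s][v]
            flow[v][s] = -cap[s][v]
            excess[v] += cap[s][v]
            excess[s] -= cap[s][v]
            if v != t:
                active.append(v)
    while active:
        u = active.popleft()
        while excess[u] > 0:           # discharge u completely
            pushed = False
            for v in range(n):
                residual = cap[u][v] - flow[u][v]
                if residual > 0 and height[u] == height[v] + 1:
                    d = min(excess[u], residual)   # push d units u -> v
                    flow[u][v] += d
                    flow[v][u] -= d
                    excess[u] -= d
                    excess[v] += d
                    if v not in (s, t) and v not in active:
                        active.append(v)
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:             # relabel: lift u above lowest neighbour
                height[u] = 1 + min(height[v] for v in range(n)
                                    if cap[u][v] - flow[u][v] > 0)
    return excess[t]
```

The parallel variants discussed in the paper discharge many active vertices concurrently; the FIFO queue above is the sequential order they generalise.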
Perron vector optimization applied to search engines
In recent years, Google's PageRank optimization problems have been
extensively studied. In that case, the ranking is given by the invariant
measure of a stochastic matrix. In this paper, we consider the more general
situation in which the ranking is determined by the Perron eigenvector of a
nonnegative, but not necessarily stochastic, matrix, in order to cover
Kleinberg's HITS algorithm. We also give some results for Tomlin's HOTS
algorithm. The problem then consists in finding an optimal outlink strategy
subject to design constraints for a given search engine.
We study relaxed versions of these problems, in which weighted hyperlinks
are allowed. We provide an efficient algorithm for the computation of the
matrix of partial derivatives of the criterion, which exploits the low-rank
property of this matrix. We give a scalable algorithm that couples
gradient and power iterations and gives a local minimum of the Perron vector
optimization problem. We prove convergence by considering it as an approximate
gradient method.
We then show that the optimal linkage strategies of the HITS and HOTS
optimization problems satisfy a threshold property. We report numerical
results on fragments of the real web graph for these search engine
optimization problems.
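The power-iteration half of the coupled scheme can be sketched in a few lines. This is a generic Perron-vector computation for a nonnegative matrix (assumed irreducible so the positive eigenvector is unique up to scaling); the gradient steps the paper couples it with are not reproduced here.

```python
def perron_vector(A, tol=1e-12, max_iter=10000):
    """Power iteration for the Perron eigenvalue/eigenvector of a
    nonnegative matrix A (list of rows), assumed irreducible.

    Returns (eigenvalue, vector) with the vector normalised to sum 1,
    so that sum(A @ x) converges to the Perron root itself.
    """
    n = len(A)
    x = [1.0 / n] * n                  # positive start vector
    lam = 0.0
    for _ in range(max_iter):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        s = sum(y)                     # since sum(x) == 1, s -> lambda
        y = [v / s for v in y]
        lam = s
        if max(abs(a - b) for a, b in zip(y, x)) < tol:
            return lam, y
        x = y
    return lam, x                      # best estimate after max_iter
```

For a stochastic matrix this reduces to the PageRank-style invariant-measure computation mentioned above; the generalisation to arbitrary nonnegative matrices is what covers HITS-like rankings.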