Evolution with Drifting Targets
We consider the question of the stability of evolutionary algorithms to
gradual changes, or drift, in the target concept. We define an algorithm to be
resistant to drift if, for some inverse polynomial drift rate in the target
function, it converges to accuracy 1 - \epsilon with polynomial resources,
and then stays within that accuracy indefinitely, except with probability
\epsilon, at any one time. We show that every evolution algorithm, in the
sense of Valiant (2007; 2009), can be converted, using the Correlational Query
technique of Feldman (2008), into such a drift-resistant algorithm. For certain
evolutionary algorithms, such as for Boolean conjunctions, we give bounds on
the rates of drift that they can resist. We develop some new evolution
algorithms that are resistant to significant drift. In particular, we give an
algorithm for evolving linear separators over the spherically symmetric
distribution that is resistant to a drift rate of O(\epsilon/n), and another
algorithm over the more general product normal distributions that resists a
smaller drift rate.
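Read as a definition, the drift-resistance requirement can be stated schematically as follows; the notation is illustrative rather than quoted from the paper. An algorithm is resistant to a drift rate \Delta(\epsilon, n) if, for every target sequence f_1, f_2, \ldots with
\[ \Pr_{x \sim D}[\, f_t(x) \neq f_{t+1}(x) \,] \;\le\; \Delta(\epsilon, n) \quad \text{for all } t, \]
there is a polynomial bound g(n, 1/\epsilon) such that for every generation t \ge g(n, 1/\epsilon) the maintained hypothesis h_t satisfies
\[ \Pr[\, \mathrm{Perf}_D(h_t, f_t) \ge 1 - \epsilon \,] \;\ge\; 1 - \epsilon, \]
with resources per generation polynomial in n and 1/\epsilon.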
The above translation result can also be interpreted as one on the robustness
of the notion of evolvability itself under changes of definition. As a second
result in that direction we show that every evolution algorithm can be
converted to a quasi-monotonic one that can evolve from any starting point
without the performance ever dipping significantly below that of the starting
point. This permits the somewhat unnatural feature of arbitrary performance
degradations to be removed from several known robustness translations.
Asynchronous Graph Pattern Matching on Multiprocessor Systems
Pattern matching on large graphs is the foundation for a variety of
application domains. Strict latency requirements and continuously increasing
graph sizes demand the usage of highly parallel in-memory graph processing
engines that need to consider non-uniform memory access (NUMA) and concurrency
issues to scale up on modern multiprocessor systems. To tackle these aspects,
graph partitioning becomes increasingly important. Hence, in this paper we present a
technique for processing graph pattern matching on NUMA systems. As a
scalable pattern matching infrastructure, we leverage a
data-oriented architecture that preserves data locality and minimizes
concurrency-related bottlenecks on NUMA systems. We show in detail how graph
pattern matching can be asynchronously processed on a multiprocessor system.
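As an illustration of the general shape of partitioned, asynchronous pattern matching, the sketch below matches a simple two-hop (wedge) pattern with one worker per partition. The hash-based vertex partitioning, the pattern, and all names are invented here for illustration; this is not the data-oriented engine described in the paper.

    from collections import defaultdict
    from concurrent.futures import ProcessPoolExecutor

    def build_adjacency(edges):
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
        return dict(adj)

    def match_wedges(args):
        # Match the pattern a -> b -> c for centre vertices b owned by one
        # partition; each partition is processed independently of the others.
        adj, owned = args
        results = []
        for b in owned:
            predecessors = [a for a, nbrs in adj.items() if b in nbrs]
            for a in predecessors:
                for c in adj.get(b, ()):
                    results.append((a, b, c))
        return results

    def partitioned_match(edges, num_partitions=4):
        adj = build_adjacency(edges)
        # Hash-partition vertices, mimicking placement on separate sockets.
        parts = [set() for _ in range(num_partitions)]
        for v in adj:
            parts[hash(v) % num_partitions].add(v)
        with ProcessPoolExecutor(max_workers=num_partitions) as pool:
            futures = [pool.submit(match_wedges, (adj, p)) for p in parts]
            return [m for f in futures for m in f.result()]

    if __name__ == "__main__":
        edges = [(1, 2), (2, 3), (2, 4), (3, 4)]
        print(partitioned_match(edges))

Each worker touches only the vertices it owns, which is the locality-preserving idea the abstract describes, although a real NUMA-aware engine would also pin workers and their data to specific sockets.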
A Neuroidal Architecture for Cognitive Computation
An architecture is described for designing systems that acquire and manipulate large amounts of unsystematized, or so-called commonsense, knowledge. Its aim is to exploit to the full those aspects of computational learning that are known to offer powerful solutions in the acquisition and maintenance of robust knowledge bases. The architecture makes explicit the requirements on the basic computational tasks that are to be performed and is designed to make these computationally tractable even for very large databases. The main claims are that (i) the basic learning tasks are tractable and (ii) tractable learning offers viable approaches to a range of issues that have been previously identified as problematic for artificial intelligence systems that are entirely programmed. In particular, attribute efficiency holds a central place in the definition of the learning tasks, as does the capability to handle relational information efficiently. Among the issues that learning offers to resolve are robustness to inconsistencies, robustness to incomplete information, and resolving among alternatives.
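The attribute efficiency emphasized here is the property exemplified by multiplicative-update learners such as Winnow, whose mistake bounds grow only logarithmically in the total number of attributes. The sketch below is a generic illustration of that style of learner, not code belonging to the architecture itself.

    def winnow_train(examples, n, threshold=None, alpha=2.0):
        # Learn a monotone disjunction over n Boolean attributes.
        # examples: iterable of (x, y), x a length-n tuple of 0/1 values, y in {0, 1}.
        # The mistake bound is O(k log n) for a k-literal target, so irrelevant
        # attributes are cheap; this is the sense in which the learner is
        # attribute-efficient.
        if threshold is None:
            threshold = n / 2.0
        w = [1.0] * n
        for x, y in examples:
            score = sum(wi for wi, xi in zip(w, x) if xi)
            prediction = 1 if score >= threshold else 0
            if prediction == 1 and y == 0:
                # False positive: demote the weights of the active attributes.
                w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
            elif prediction == 0 and y == 1:
                # False negative: promote the weights of the active attributes.
                w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
        return w, threshold

On examples labelled by a small disjunction over, say, a thousand attributes, repeated passes drive the few relevant weights above the threshold while the total number of mistakes stays small despite the many irrelevant attributes.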
Efficient solvability of Hamiltonians and limits on the power of some quantum computational models
We consider quantum computational models defined via a Lie-algebraic theory.
In these models, specified initial states are acted on by Lie-algebraic quantum
gates and the expectation values of Lie algebra elements are measured at the
end. We show that these models can be efficiently simulated on a classical
computer in time polynomial in the dimension of the algebra, regardless of the
dimension of the Hilbert space where the algebra acts. Similar results hold for
the computation of the expectation value of operators implemented by a
gate-sequence. We introduce a Lie-algebraic notion of generalized mean-field
Hamiltonians and show that they are efficiently ("exactly") solvable by means
of a Jacobi-like diagonalization method. Our results generalize earlier ones on
fermionic linear optics computation and provide insight into the source of the
power of the conventional model of quantum computation.
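The efficiency claim rests on a standard Heisenberg-picture observation; the formulation below is a generic statement of that mechanism, with notation chosen here rather than taken from the paper. Let \mathfrak{g} be a Lie algebra with basis A_1, \ldots, A_L acting on the Hilbert space, and suppose the Hamiltonian is such that iH \in \mathfrak{g} (up to the usual real-versus-complexified convention). Then the evolved observables remain in the algebra,
\[ e^{iHt} A_j e^{-iHt} \;=\; \sum_{k=1}^{L} R_{jk}(t)\, A_k, \qquad R(t) = e^{\,t\,\mathrm{ad}_{iH}}, \]
where \mathrm{ad}_{iH} is the L \times L matrix of the adjoint action, determined by the structure constants alone. Expectation values therefore propagate as \langle A_j \rangle_t = \sum_k R_{jk}(t) \langle A_k \rangle_0, at a cost polynomial in L = \dim \mathfrak{g} and independent of the Hilbert-space dimension.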
Quantum computers that can be simulated classically in polynomial time
A model of quantum computation based on unitary matrix operations was introduced by Feynman and Deutsch. It has been asked whether the power of this model exceeds that of classical Turing machines. We show here that a significant class of these quantum computations can be simulated classically in polynomial time. In particular we show that two-bit operations characterized by 4 x 4 matrices in which the sixteen entries obey a set of five polynomial relations can be composed according to certain rules to yield a class of circuits that can be simulated classically in polynomial time. This contrasts with the known universality of two-bit operations, and demonstrates that efficient quantum computation of restricted classes is reconcilable with the Polynomial Time Turing Hypothesis. In other words it is possible that quantum phenomena can be used in a scalable fashion to make computers but that they do not have superpolynomial speedups compared to Turing machines for any problem. The techniques introduced bring the quantum computational model within the realm of algebraic complexity theory. In a manner consistent with one view of quantum physics, the wave function is simulated deterministically, and randomization arises only in the course of making measurements. The results generalize the quantum model in that they do not require the matrices to be unitary. In a different direction these techniques also yield deterministic polynomial time algorithms for the decision and parity problems for certain classes of read-twice Boolean formulae. All our results are based on the use of gates that are defined in terms of their graph matching properties.
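A convenient later characterization of such restricted two-bit gates is the matchgate form used in follow-up work (Terhal and DiVincenzo; Jozsa and Miyake); it is shown here purely as an illustrative example of a gate family obeying algebraic constraints of this kind, not as a restatement of the five relations in the paper:
\[
G(A,B) \;=\;
\begin{pmatrix}
 a_{11} & 0 & 0 & a_{12} \\
 0 & b_{11} & b_{12} & 0 \\
 0 & b_{21} & b_{22} & 0 \\
 a_{21} & 0 & 0 & a_{22}
\end{pmatrix},
\qquad \det A = \det B,
\]
where A acts on the even-parity subspace spanned by |00\rangle and |11\rangle, and B on the odd-parity subspace spanned by |01\rangle and |10\rangle. Circuits of such gates applied to nearest-neighbour qubits, with product-state inputs and simple final measurements, admit classical simulation in polynomial time, which is the flavour of restriction the abstract describes.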
What must a global theory of cortex explain?
At present there is no generally accepted theory of how cognitive phenomena arise from computations in cortex. Further, there is no consensus on how the search for one should be refocussed so as to make it more fruitful. In this short piece we observe that research in computer science over the last several decades has shown that significant computational phenomena need to circumvent significant inherent quantitative impediments, such as those of computational complexity. We argue that computational neuroscience has to be informed by the same quantitative concerns for it to succeed. It is conceivable that the brain is the one computation that does not need to circumvent any such obstacles, but if that were the case then quantitatively plausible theories of cortex would now surely abound and be driving experimental investigations.
Introduction
That computing is the right framework for understanding the brain became clear to many soon after the discovery of universal computing by Turing [1], who was himself motivated by the question of understanding the scope of human mental activity. McCulloch and Pitts [2] made a first attempt to formalize neural computation, pointing out that their networks were of equivalent expressive power to Turing machines. By the 1950s it was widely recognized that any science of cognition would have to be based on computation. It would probably come as a shock to the earliest pioneers, were they to return today, that more progress has not been made towards a generally agreed computational theory of cortex. They may have expected, short of such a generally agreed theory, that today there would at least exist a variety of viable competing theories. Understanding cortex is surely among the most important questions ever posed by science. Astonishingly, the question of proposing general theories of cortex and subjecting them to experimental examination is currently not even a mainstream scientific activity.
Our review here is informed by the observation that since Marr's time computer science has made very substantial progress in certain quantitative directions. The following four phenomena are clearly critical for the brain: communication, computation, learning and evolution. Over the last few decades all four have been subject to quantitative analysis, and are now known to be subject to hard quantitative constraints. We do not believe that there can be any doubt that the theory sought has to be computational in the general sense of Turing. The question that arises is: In what way does Marr's articulation of the computational approach fall short? Our answer is that, exactly as in other domains of computation, a successful theory will have to show, additionally, how the quantitative challenges that need to be faced are solved in cortex. If these challenges were nonexistent or insignificant then plausible theories would now abound and the only task remaining for us would be to establish which one nature is using.
This augmented set of requirements is quite complex in that many issues have to be faced simultaneously. We suggest the following as a streamlined working formulation for the present:
(i) Specify a candidate set of quantitatively challenging cognitive tasks that cortex may be using as the primitives from which it builds cognition. At a minimum, this set has to include the task of memorization, and some additional tasks that use the memories created. The task set needs to encompass both the learning and the execution of the capabilities in question.
(ii) Explain how, on a model of computation that faithfully reflects the quantitative resources that cortex has available, instances of these tasks can be realized by explicit algorithms.
(iii) Provide some plausible experimental approach to confirming or falsifying the theory as it applies to cortex.
(iv) Explain how there may be an evolutionary path to the brain having acquired these capabilities.
To illustrate that this complex of requirements can be pursued systematically together we shall briefly describe the framework developed for this by the author.
Positive representations
In order to specify computational tasks in terms of input-output behavior one needs to start with a representation for each task. It is necessary to ensure that for any pair of tasks where the input of one is the output of the other there is a common representation at that interface. Here we shall take the convenient course of having a common representation for all the tasks that will be considered, so that their composability will follow. In a positive representation [5] a real world item (a concept, event, individual, etc.) is represented by a set S of r neurons. A concept being processed corresponds to the members of S firing in a distinct way. Positive representations come in two varieties: disjoint, which means that the S's of distinct concepts are disjoint, and shared, which means that the S's can share neurons. Disjointness makes computation easier but requires small r (such as r = 50) if large numbers of concepts are to be represented. The shared representation allows for more concepts to be represented (especially necessary if r is very large, such as several percent of the total number of neurons) but can be expected to make computation, without interference among the task instances, more challenging.
Random access versus local tasks
We believe that cortex is communication bounded in the sense that: (i) each neuron is connected to a minute fraction of all the other neurons, (ii) each individual synapse typically has weak influence, in that a presynaptic action potential will make only a small contribution to the threshold potential needed to be overcome in the postsynaptic cell, and (iii) there is no global addressing mechanism as computers have. We call tasks that potentially require communication between arbitrary memorized concepts random-access tasks. Such tasks, for example, an association between an arbitrary pair of concepts, are the most demanding in communication and therefore quantitatively the most challenging for the brain to realize. The arbitrary knowledge structures in the world will have to be mapped, by the execution of a sequence of random-access tasks that only change synaptic weights, to the available connections among the neurons that are largely fixed at birth.
We distinguish between two categories of tasks. Tasks from the first category assign neurons to a new item. We have just one task of this type, which we call Hierarchical Memorization and define as follows: for any stored items A, B, allocate neurons to a new item C and make appropriate changes in the circuit so that in future A and B active will cause C to be active also. The second category of tasks makes modifications to the circuits so as to relate in a new way items to which neurons have already been assigned. We consider the following three.
Association: for any stored items A, B, change the circuit so that in future when A is active then B will be caused to be active also.
Supervised Memorization of Conjunctions: for stored items A, B, C, change the circuits so that in future A and B active will cause C to be active also.
Inductive Learning of Simple Threshold Functions: for one stored item A, learn a criterion in terms of the others.
This third operation is the one that achieves generalization, in that appropriate performance even on inputs never before seen is expected. The intention is that any new item to be stored will be stored in the first instance as a conjunction of items previously memorized (which may be visual, auditory, conceptual, etc.). Once an item has neurons allocated, it becomes an equal citizen with items previously stored in its ability to become a constituent in future actions. These actions can be the creation of further concepts using the hierarchical memorization operation, or establishing relationships among the items stored using one of the operations of the second kind, such as association. The latter operations can be regarded as the workhorses of the cognitive system, building up complex data structures reflecting the relations that exist in the world among the items represented. However, each such operation requires each item it touches to have been allocated in the first instance by a task of the first kind.
Random-access tasks are the most appropriate for our study here since, almost by definition, they are the most challenging for any communication-bound system. For tasks that require only local communication, such as aspects of low-level vision, viable computational solutions may be more numerous, and quantitative studies may be less helpful in identifying the one nature has chosen. We emphasize that for the candidate set it is desirable to target from the start a mixed set of different task types as here, since such sets are more likely to form a sufficient set of primitives for cognition. Previous approaches have often focused on a single task type.
The neuroidal model
Experience in computer science suggests that models of computation need to be chosen carefully to fit the problem at hand. The criterion of success is the ultimate usefulness of the model in illuminating the relevant phenomena. In neuroscience we will, no doubt, ultimately need a variety of models at different levels. The neuroidal model is designed to explicate phenomena around the random-access tasks we have described, where the constraints are dictated by the gross communication constraints on cortex rather than the detailed computations inside neurons. The neuroidal model has three main numerical parameters: n, the number of neurons; d, the number of connections per neuron; and k, the minimum number of presynaptic neurons needed to cause an action potential in a postsynaptic neuron (in other words, the maximum synaptic strength is 1/k times the neuron threshold). Each neuron can be in one of a finite number of states and each synapse has some strength. These states and strengths are updated according to purely local rules using computationally weak steps. Each update will be influenced by the firing pattern of the presynaptic neurons according to a function that is symmetric in those inputs. There is a weak timing mechanism that allows the neurons to count time accurately enough to stay synchronized with other neurons for a few steps.
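A back-of-the-envelope way to see how the parameters n, d and k interact is to simulate the first-category task on a random graph. The sketch below is hypothetical code written for this summary, not code from the article: it allocates a new item C as the set of neurons receiving at least k connections from the 2r neurons representing A and B together, which is one simple reading of the Hierarchical Memorization step.

    import random

    def random_connections(n, d, seed=0):
        # Each neuron receives d incoming connections chosen uniformly at random.
        rng = random.Random(seed)
        return {v: set(rng.sample(range(n), d)) for v in range(n)}

    def hierarchical_memorization(incoming, item_a, item_b, k):
        # Allocate item C: the neurons with at least k inputs from A union B,
        # excluding the neurons already used to represent A and B themselves.
        active = item_a | item_b
        return {v for v, sources in incoming.items()
                if v not in active and len(sources & active) >= k}

    if __name__ == "__main__":
        n, d, k, r = 10000, 100, 4, 50      # toy values, far smaller than cortex
        incoming = random_connections(n, d)
        item_a = set(range(r))              # r neurons representing item A
        item_b = set(range(r, 2 * r))       # r neurons representing item B
        item_c = hierarchical_memorization(incoming, item_a, item_b, k)
        print(len(item_c))                  # size of the newly allocated set C

Varying n, d and k in such a simulation makes the communication constraints concrete: with sparse connectivity the allocated set can easily come out empty or implausibly large, which is exactly the kind of quantitative feasibility question the article argues a theory of cortex must answer.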
Static Data Structure Lower Bounds Imply Rigidity
We show that static data structure lower bounds in the group (linear) model
imply semi-explicit lower bounds on matrix rigidity. In particular, we prove
that a sufficiently strong explicit lower bound on the cell-probe complexity of
linear data structures in the group model, even against arbitrarily small
linear space, would already imply a semi-explicit construction of rigid
matrices with significantly better parameters than the current state of the art
(Alon, Panigrahy and Yekhanin, 2009). Our results further assert that
polynomial data structure lower bounds against near-optimal space would imply
super-linear circuit lower bounds for log-depth linear circuits (a four-decade
open question). In the succinct space regime, we show that any improvement on
current cell-probe lower bounds in the linear model would also imply new
rigidity bounds. Our results rely on a new connection between the "inner" and
"outer" dimensions of a matrix (Paturi and Pudlak, 2006), and on a new
reduction from worst-case to average-case rigidity, which is of independent
interest.
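For orientation, matrix rigidity in the standard sense (going back to Valiant, 1977) can be stated as follows; this is the general definition rather than a formula from the abstract:
\[ \mathcal{R}_M(r) \;=\; \min \{\, \|S\|_0 \;:\; \mathrm{rank}(M + S) \le r \,\}, \]
where \|S\|_0 counts the nonzero entries of the change matrix S; that is, \mathcal{R}_M(r) is the minimum number of entries of M that must be altered to bring its rank down to at most r. Explicit matrices with sufficiently high rigidity would yield super-linear lower bounds for log-depth linear circuits, which is the connection the abstract exploits.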
The broad spectrum antiviral compound ST-669 affects vesicular trafficking in Chlamydia-infected cells
Our laboratory is exploring the mechanism of action of a novel broad spectrum antiviral, ST-669. This compound has activity against a variety of different viruses and the obligate intracellular bacterium, Chlamydia. In this study, we explored the effects of ST-669 when the cell cycle of the host cells was altered. Chlamydia spp. were grown in Vero cells in the presence of various compounds that inhibit the eukaryotic cell cycle, and examined for inclusion structure and production of bacteria. ST-669 was used to examine the inclusion structure of C. caviae: when treated with ST-669, C. caviae appears to fuse vacuoles to form an inclusion with a single lobe. Most of the cell cycle inhibitors did not alter the anti-chlamydial effects of ST-669. However, treatment of infected cells with vincristine led to an increase in bacteria production and a change in inclusion morphology in the presence of ST-669. It is hypothesized that a protein that is a target of ST-669 is differently present or activated when Vero cells are treated with vincristine. These results open the door for future proteomic studies that might elucidate ST-669's mechanism of action.
Keywords: ST-669, Anti-chlamydial, inclusion, Chlamydia