Interpolation in Valiant's theory
We investigate the following question: if a polynomial can be evaluated at
rational points by a polynomial-time boolean algorithm, does it have a
polynomial-size arithmetic circuit? We argue that this question is certainly
difficult. Answering it negatively would indeed imply that the constant-free
versions of the algebraic complexity classes VP and VNP defined by Valiant are
different. Answering this question positively would imply a transfer theorem
from boolean to algebraic complexity. Our proof method relies on Lagrange
interpolation and on recent results connecting the (boolean) counting hierarchy
to algebraic complexity classes. As a byproduct we obtain two additional
results: (i) The constant-free, degree-unbounded version of Valiant's
hypothesis that VP and VNP differ implies the degree-bounded version. This
result was previously known to hold for fields of positive characteristic only.
(ii) If exponential sums of easy-to-compute polynomials can be computed efficiently, then the same is true of exponential products. We point out an application of this result to the P=NP problem in the Blum-Shub-Smale model of computation over the field of complex numbers.
Comment: 13 pages
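For context, the classical Lagrange interpolation formula that the proof method builds on: a polynomial of degree at most d is recovered exactly from its values y_i = p(x_i) at any d+1 distinct points x_0, ..., x_d (a standard identity, not the paper's precise construction):

```latex
p(X) = \sum_{i=0}^{d} y_i \prod_{\substack{0 \le j \le d \\ j \neq i}} \frac{X - x_j}{x_i - x_j}
```

Roughly, this is what lets values computed at rational points by a boolean algorithm be reassembled into an algebraic object, which is the bridge between boolean and algebraic complexity at issue in the abstract.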
Instance-based prediction of real-valued attributes
Instance-based representations have been applied to numerous classification tasks with a fair amount of success. These tasks predict a symbolic class based on observed attributes. This paper presents a method for predicting a numeric value based on observed attributes. We prove that if the numeric values are generated by continuous functions with bounded slope, then the predicted values are accurate approximations of the actual values. We demonstrate the utility of this approach by comparing it with standard approaches for value prediction. The approach requires no background knowledge.
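The general idea described above can be illustrated with a minimal, hypothetical sketch (not the paper's exact method): store training instances verbatim, then predict a numeric target by distance-weighted averaging over the k most similar stored instances.

```python
# Instance-based prediction of a real-valued attribute: a distance-weighted
# k-nearest-neighbor sketch. Names and parameter choices are illustrative.
import math

def predict(stored, query, k=3):
    """stored: list of (attributes, value) pairs; query: attribute tuple."""
    # Keep the k nearest stored instances by Euclidean distance.
    nearest = sorted((math.dist(x, query), y) for x, y in stored)[:k]
    # An exact match determines the prediction outright.
    for d, y in nearest:
        if d == 0:
            return y
    # Otherwise average neighbor values, weighting by inverse distance.
    weights = [1.0 / d for d, _ in nearest]
    return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)
```

If the target is generated by a function with bounded slope, nearby instances have nearby values, which is why this kind of local averaging approximates the true value well.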
A study of instance-based algorithms for supervised learning tasks : mathematical, empirical, and psychological evaluations
This dissertation introduces a framework for specifying instance-based algorithms that can solve supervised learning tasks. These algorithms input a sequence of instances and yield a partial concept description, which is represented by a set of stored instances and associated information. This description can be used to predict values for subsequently presented instances. The thesis of this framework is that extensional concept descriptions and lazy generalization strategies can support efficient supervised learning behavior.
The instance-based learning framework consists of three components. The pre-processor component transforms an instance into a more palatable form for the performance component, which computes the instance's similarity with a set of stored instances and yields a prediction for its target value(s). The similarity and prediction functions therefore impose generalizations on the stored instances to inductively derive predictions. The learning component assesses the accuracy of these predictions and updates partial concept descriptions to improve their predictive accuracy.
This framework is evaluated in four ways. First, its generality is evaluated by mathematically determining the classes of symbolic concepts and numeric functions that can be closely approximated by IB_1, a simple algorithm specified by this framework. Second, the framework is empirically evaluated for its ability to specify algorithms that improve IB_1's learning efficiency. Significant efficiency improvements are obtained by instance-based algorithms that reduce storage requirements, tolerate noisy data, and learn domain-specific similarity functions. Alternative component definitions for these algorithms are empirically analyzed in a set of five high-level parameter studies. Third, the framework is evaluated for its ability to specify psychologically plausible process models for categorization tasks. Results from subject experiments indicate a positive correlation between a model's ability to utilize attribute correlation information and its ability to explain psychological phenomena. Finally, the framework is evaluated for its ability to explain and relate a dozen prominent instance-based learning systems. The survey shows that this framework requires only slight modifications to fit these highly diverse systems. Relationships with edited nearest neighbor algorithms, case-based reasoners, and artificial neural networks are also described.
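The three-component structure described above can be sketched in a few lines for the simplest algorithm the framework specifies, IB_1: the concept description is just the set of stored instances, generalization is lazy (performed at prediction time by the similarity function), and learning appends each observed instance. This is a hypothetical minimal rendering, not the dissertation's code.

```python
# IB_1 sketch: nearest-neighbor prediction over an extensional concept
# description (the raw stored instances). Function names are illustrative.
import math

def ib1_predict(stored, query):
    """Performance component: return the class of the most similar
    stored instance, using Euclidean distance as (dis)similarity."""
    _, label = min(stored, key=lambda inst: math.dist(inst[0], query))
    return label

def ib1_train(stored, instance):
    """Learning component: IB_1 simply stores every training instance,
    updating the partial concept description in place."""
    stored.append(instance)
    return stored
```

Storage-reducing and noise-tolerant variants mentioned in the abstract would change only `ib1_train` (deciding which instances to keep), leaving the lazy performance component untouched.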
The complexity of counting edge colorings and a dichotomy for some higher domain Holant problems
We show that an effective version of Siegel's Theorem on finiteness of integer solutions and an application of elementary Galois theory are key ingredients in a complexity classification of some Holant problems. These Holant problems, denoted by Holant(f), are defined by a symmetric ternary function f that is invariant under any permutation of the κ ≥ 3 domain elements. We prove that Holant(f) exhibits a complexity dichotomy. This dichotomy holds even when restricted to planar graphs. A special case of this result is that counting edge κ-colorings is #P-hard over planar 3-regular graphs for κ ≥ 3. In fact, we prove that counting edge κ-colorings is #P-hard over planar r-regular graphs for all κ ≥ r ≥ 3. The problem is polynomial-time computable in all other parameter settings. The proof of the dichotomy theorem for Holant(f) depends on the fact that a specific polynomial p(x, y) has an explicitly listed finite set of integer solutions, and on the determination of the Galois groups of some specific polynomials. In the process, we also encounter the Tutte polynomial, medial graphs, Eulerian partitions, Puiseux series, and a certain lattice condition on (the logarithm of) the roots of polynomials.
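To make the counting problem concrete, here is a brute-force illustration (mine, not the paper's): a proper edge κ-coloring assigns one of κ colors to each edge so that edges sharing a vertex receive distinct colors. The abstract's point is that counting these is #P-hard already on planar 3-regular graphs for κ ≥ 3, so exhaustive enumeration like this is only feasible for tiny inputs.

```python
# Count proper edge k-colorings of a small graph by exhaustive enumeration.
# edges: list of vertex pairs; exponential in the number of edges.
from itertools import product

def count_edge_colorings(edges, k):
    m = len(edges)
    count = 0
    for coloring in product(range(k), repeat=m):
        # Valid iff every pair of incident edges gets distinct colors.
        if all(coloring[i] != coloring[j]
               for i in range(m)
               for j in range(i + 1, m)
               if set(edges[i]) & set(edges[j])):
            count += 1
    return count
```

For example, the triangle's three edges are pairwise incident, so for κ = 3 each coloring must use all three colors, giving 3! = 6 colorings.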
A Robotic System for Learning Visually-Driven Grasp Planning (Dissertation Proposal)
We use findings in machine learning, developmental psychology, and neurophysiology to guide a robotic learning system's level of representation both for actions and for percepts. Visually-driven grasping is chosen as the experimental task since it has general applicability and it has been extensively researched from several perspectives. An implementation of a robotic system with a gripper, compliant instrumented wrist, arm and vision is used to test these ideas. Several sensorimotor primitives (vision segmentation and manipulatory reflexes) are implemented in this system and may be thought of as the innate perceptual and motor abilities of the system.
Applying empirical learning techniques to real situations brings up such important issues as observation sparsity in high-dimensional spaces, arbitrary underlying functional forms of the reinforcement distribution and robustness to noise in exemplars. The well-established technique of non-parametric projection pursuit regression (PPR) is used to accomplish reinforcement learning by searching for projections of high-dimensional data sets that capture task invariants.
We also pursue the following problem: how can we use human expertise and insight into grasping to train a system to select both appropriate hand preshapes and approaches for a wide variety of objects, and then have it verify and refine its skills through trial and error? To accomplish this learning we propose a new class of Density Adaptive reinforcement learning algorithms. These algorithms use statistical tests to identify possibly interesting regions of the attribute space in which the dynamics of the task change. They automatically concentrate the building of high resolution descriptions of the reinforcement in those areas, and build low resolution representations in regions that are either not populated in the given task or are highly uniform in outcome.
Additionally, the use of any learning process generally implies failures along the way. Therefore, the mechanics of the untrained robotic system must be able to tolerate mistakes during learning and not damage itself. We address this by the use of an instrumented, compliant robot wrist that controls impact forces
A Holant Dichotomy: Is the FKT Algorithm Universal?
We prove a complexity dichotomy for complex-weighted Holant problems with an
arbitrary set of symmetric constraint functions on Boolean variables. This
dichotomy is specifically to answer the question: Is the FKT algorithm under a
holographic transformation a \emph{universal} strategy to obtain
polynomial-time algorithms for problems over planar graphs that are intractable
in general? This dichotomy is a culmination of previous ones, including those
for Spin Systems, Holant, and #CSP. A recurring theme has been that a
holographic reduction to FKT is a universal strategy. Surprisingly, for planar
Holant, we discover new planar tractable problems that are not expressible by a
holographic reduction to FKT.
In previous work, an important tool was a dichotomy for #CSP^d, which denotes
#CSP where every variable appears a multiple of d times. However, its proof
violates planarity. We prove a dichotomy for planar #CSP^2. We apply this
planar #CSP^2 dichotomy in the proof of the planar Holant dichotomy.
As a special case of our new planar tractable problems, counting perfect
matchings (#PM) over k-uniform hypergraphs is polynomial-time computable when
the incidence graph is planar and k >= 5. The same problem is #P-hard when k=3
or k=4, which is also a consequence of our dichotomy. When k=2, it becomes #PM
over planar graphs and is tractable again. More generally, over hypergraphs
with specified hyperedge sizes and the same planarity assumption, #PM is
polynomial-time computable if the greatest common divisor of all hyperedge
sizes is at least 5.
Comment: 128 pages, 36 figures
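The quantity #PM discussed above can be illustrated with a naive counter (my sketch, unrelated to the FKT algorithm, which achieves polynomial time on planar graphs): a perfect matching is a set of edges covering every vertex exactly once.

```python
# Brute-force #PM: count perfect matchings of a small graph by trying every
# subset of n/2 edges. Exponential time; for illustration only.
from itertools import combinations

def count_perfect_matchings(n_vertices, edges):
    if n_vertices % 2:
        return 0  # odd vertex count: no perfect matching exists
    count = 0
    for subset in combinations(edges, n_vertices // 2):
        covered = [v for e in subset for v in e]
        # A perfect matching covers all n vertices with no repeats.
        if len(set(covered)) == n_vertices:
            count += 1
    return count
```

For instance, the 4-cycle has exactly two perfect matchings (its two pairs of opposite edges), while the complete graph K4 has three.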
Prior knowledge and statistical models of learning
The research reported here describes the effects of prior knowledge on how people form categories and learn continuous mappings. Chapter 2 is a review of the past research on knowledge effects in the statistical and psychological literature. Chapter 3 presents simulations of a set of experiments carried out by Heit and Bott (2000) into how knowledge is selected in a category learning task. The model was shown to account for the results of Heit and Bott and to generate several new predictions concerning blocking effects with the use of prior knowledge. However, empirical testing of these predictions failed to demonstrate these effects. Chapter 4 describes work testing Delosh, McDaniel and Busemeyer's (1997) model of function learning, the Extrapolation Associative Learning Model (EXAM). Experiments were carried out demonstrating that a model that assumes only linear extrapolation, such as EXAM, is inadequate as a generic model of function learning. An alternative model to EXAM is presented which is constructed of several components, each module applying different quantities of prior knowledge to the task. Chapter 5 presents experiments investigating the extent to which participants abstract and apply functions in transfer tasks. The results demonstrate that models of function learning must be able to restrict their range of allowable solutions in psychologically plausible ways.
Randomised algorithms for counting and generating combinatorial structures
SIGLE. Available from the British Library Document Supply Centre (BLDSC), DSC:D85048, United Kingdom.
Differences between machine and human testing of shock absorbing systems.
This thesis documents a study of the sources of the differences found between results from machine testing and walking testing of shock absorbing systems. A complex programme of experiments was conducted at the Institute of Biomechanics of Valencia to explore the four most prominent explanations proposed in this respect:
1. Machine tests do not accurately simulate impacts. This was investigated by comparing results from machine tests of materials simulating impact forces with results from walking tests.
2. In use, materials degrade and their properties change, and existing machine testing methods cannot replicate material properties during walking. A new testing method was developed to measure the recovery ability of materials by simulating plantar pressures, and its results were compared with walking tests.
3. The shoe's effect on walking kinematics and heel pad confinement has a greater influence on shock absorption than material properties. An instrumented pendulum was developed to study the heel pad. Insole materials were evaluated in walking tests, in pendulum tests and in different machine tests, including the new method simulating plantar pressures, and the results were compared.
4. Accommodation to impact conditions occurs according to a controlled proprioceptive feedback model. Accommodation, impact perception, comfort, walking and passive biomechanical variables and material properties were studied in relation to the system's input, output and goal.
Accurate simulation of impacts improved the ability of machine tests to predict the walking performance of materials, but not upper body shock transmission. Properties of materials such as recovery ability, stiffness and hardness play an important role in concepts and passive interaction, but mainly by influencing accommodation. Accommodation was identified as the source of the differences in results between machine and walking tests of shock absorbing materials. The human body was described as comprising two independent mechanical systems. One system, governed by the elasticity and hardness of materials, is defined by impact forces and accelerations that are inversely related to upper body transmission, and controls the perceived impact through foot position and knee bend. The other system is defined by heel pad stiffness, insole properties at initial loading and passive interaction, which regulate upper body shock transmission by ankle inversion for comfort control. Passive interaction is defined in this thesis as the mechanical coupling between insole and heel pad that determines the properties of the system either through heel pad confinement or compression. Machine tests appear to predict results with respect to the first system but not the second, which required passive human testing.
For insole use, high-energy absorption materials are preferred. These are capable of increasing elastic deformation to reduce impact forces and accelerations without increasing initial-maximal stiffness by passive interaction, thus avoiding any increase of head transmission due to accommodation. Heel pad properties were described by three mechanical components accounting for 93.08% of total variance: an elastic component, a viscoelastic component and a component related to elastic deformation at low stiffness. Differences were found between shod and barefoot test results. With barefoot, there was an initial low stiffness (18-50 kNm⁻¹) response that was not evident in the shod tests, which showed elastic deformation related to final stiffness. With barefoot, the elastic component accounted for impact forces variance (> 70%) and the initial deformation component for peak force time (> 60%), while shod impact forces were related mainly to the elastic deformation component (> 60%), and rate of loading and acceleration were related to the initial-maximal stiffness component (> 20%). Differences in heel pad mechanics due to age, gender and obesity were observed. Although the heel pad properties degraded with age, losses appeared to be compensated by obesity.