Humans decompose tasks by trading off utility and computational cost
Human behavior emerges from planning over elaborate decompositions of tasks
into goals, subgoals, and low-level actions. How are these decompositions
created and used? Here, we propose and evaluate a normative framework for task
decomposition based on the simple idea that people decompose tasks to reduce
the overall cost of planning while maintaining task performance. Analyzing
11,117 distinct graph-structured planning tasks, we find that our framework
justifies several existing heuristics for task decomposition and makes
predictions that can be distinguished from two alternative normative accounts.
We report a behavioral study of task decomposition that uses 30
randomly sampled graphs, a larger and more diverse set than that of any
previous behavioral study on this topic. We find that human responses are more
consistent with our framework for task decomposition than alternative normative
accounts and are most consistent with a heuristic -- betweenness centrality --
that is justified by our approach. Taken together, our results provide new
theoretical insight into the computational principles underlying the
intelligent structuring of goal-directed behavior.
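The betweenness-centrality heuristic that the framework justifies can be sketched in a few lines (illustrative code, not the paper's implementation): a node that lies on many shortest paths between other node pairs scores high, and the top-scoring node is a natural subgoal. The five-node graph below, two "rooms" joined at a bottleneck, is a made-up example.

```python
from collections import deque, defaultdict
from itertools import permutations

def shortest_paths(graph, s, t):
    """All shortest s-t paths in an unweighted graph, via BFS predecessor lists."""
    dist, preds = {s: 0}, defaultdict(list)
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
            if dist[w] == dist[u] + 1:
                preds[w].append(u)
    if t not in dist:
        return []
    paths = []
    def walk(v, suffix):
        if v == s:
            paths.append([s] + suffix)
        else:
            for p in preds[v]:
                walk(p, [v] + suffix)
    walk(t, [])
    return paths

def betweenness(graph):
    """Unnormalized betweenness: each interior node accrues the fraction
    of shortest s-t paths passing through it, summed over ordered pairs."""
    score = defaultdict(float)
    for s, t in permutations(list(graph), 2):
        paths = shortest_paths(graph, s, t)
        for path in paths:
            for v in path[1:-1]:  # interior nodes only
                score[v] += 1.0 / len(paths)
    return dict(score)

# Two "rooms" {0, 1} and {3, 4} joined by bottleneck node 2.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
scores = betweenness(graph)
subgoal = max(scores, key=scores.get)  # the bottleneck, node 2
```

Every shortest path between the two rooms must cross node 2, so it dominates the centrality ranking and is the decomposition point the heuristic would propose.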
Integrating Testing and Interactive Theorem Proving
Using an interactive theorem prover to reason about programs involves a
sequence of interactions where the user challenges the theorem prover with
conjectures. Invariably, many of the conjectures posed are in fact false, and
users often spend considerable effort examining the theorem prover's output
before realizing this. We present a synergistic integration of testing with
theorem proving, implemented in the ACL2 Sedan (ACL2s), for automatically
generating concrete counterexamples. Our method uses the full power of the
theorem prover and associated libraries to simplify conjectures; this
simplification can transform conjectures for which finding counterexamples is
hard into conjectures where finding counterexamples is trivial. In fact, our
approach even leads to better theorem proving, e.g. if testing shows that a
generalization step leads to a false conjecture, we force the theorem prover to
backtrack, allowing it to pursue more fruitful options that may yield a proof.
The focus of the paper is on the engineering of a synergistic integration of
testing with interactive theorem proving; this includes extending ACL2 with new
functionality that we expect to be of general interest. We also discuss our
experience in using ACL2s to teach freshman students how to reason about their
programs.

Comment: In Proceedings ACL2 2011, arXiv:1110.447
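ACL2s works in ACL2's logic, but the random-testing half of the idea is easy to sketch in isolation (illustrative code, not ACL2s's API; the `rev`/`conjecture` names are made up for the example): pose a conjecture, throw small random inputs at it, and report the first falsifying instance as a concrete counterexample.

```python
import random

def rev(xs):
    return list(reversed(xs))

# A conjecture a user might pose: rev distributes over append
# componentwise. It is false (the correct identity swaps the operands),
# so testing should expose a concrete counterexample.
def conjecture(x, y):
    return rev(x + y) == rev(x) + rev(y)

def find_counterexample(prop, trials=1000, seed=0):
    """Random testing: generate small random list pairs and return the
    first pair that falsifies the property, or None if none is found."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.randint(0, 3) for _ in range(rng.randint(0, 4))]
        y = [rng.randint(0, 3) for _ in range(rng.randint(0, 4))]
        if not prop(x, y):
            return x, y
    return None

cex = find_counterexample(conjecture)  # a falsifying (x, y) pair
```

What this sketch omits is the paper's key ingredient: ACL2s first uses the theorem prover to simplify the conjecture, which can turn a property whose counterexamples are rare under random generation into one where they are trivial to hit.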