Learning First-Order Definitions of Functions
First-order learning involves finding a clause-form definition of a relation
from examples of the relation and relevant background information. In this
paper, a particular first-order learning system is modified to customize it for
finding definitions of functional relations. This restriction leads to faster
learning times and, in some cases, to definitions that have higher predictive
accuracy. Other first-order learning systems might benefit from similar
specialization.
Comment: See http://www.jair.org/ for any accompanying files
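The functional restriction described above can be illustrated with a minimal sketch (not the paper's actual system): once the target relation is known to be a function, every pair that maps an input to some other output is an implicit negative example, so explicit negatives need not be supplied, and any candidate definition that assigns two outputs to one input can be pruned immediately. All names below are illustrative.

```python
def is_functional(pairs):
    """True iff the relation assigns at most one output per input."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

def implicit_negatives(pairs, output_domain):
    """Negative examples derived for free from the functional assumption:
    for each known (x, y), every (x, y') with y' != y is negative."""
    pos = dict(pairs)
    return [(x, y) for x in pos for y in output_domain if y != pos[x]]

# Toy target: double(X, Y) with Y = 2*X, given only positive examples.
examples = [(0, 0), (1, 2), (2, 4)]
negatives = implicit_negatives(examples, range(5))
```

Pruning on `is_functional` is what shrinks the search space and yields the faster learning times reported in the abstract.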
Teaching Students to Communicate with the Precise Language of Mathematics: A Focus on the Concept of Function in Calculus Courses
The use of precise language is one of the defining characteristics of mathematics that is often missing in mathematics classrooms. This lack of precision results in poorly constructed concepts that limit comprehension of essential mathematical definitions and notation. One important concept that frequently lacks the precision required by mathematics is the concept of function. Functions are foundational in the study of undergraduate mathematics and are essential to other areas of modern mathematics. Because of its pivotal role, the concept of function is given particular attention in the three articles that comprise this study.
A unit on functions that focuses on using precise language was developed and presented to a class of 50 first-semester calculus students during the first two weeks of the semester. This unit includes a learning goal, a set of specific objectives, a collection of learning activities, and an end-of-unit assessment. The results of the implementation of this unit and the administration of the assessment indicated that when students were able to construct the concept of function themselves and formulate a formal definition, they had a deeper and more meaningful understanding of the concept.
In order to demonstrate its validity, the assessment was analyzed for its relevance, its reliability, and the effectiveness of its test items in discriminating between different levels of achievement. The results of this analysis indicated that the assessment was relevant to both the mathematical content and the learning levels indicated by the unit's objectives and had a high level of reliability. Additionally, the test items contained in the assessment had a reasonable level of effectiveness in discriminating between different levels of student achievement.
ACL2(ml): Machine-Learning for ACL2
ACL2(ml) is an extension for the Emacs interface of ACL2. This tool uses
machine learning to help the ACL2 user during proof development. Namely,
ACL2(ml) gives hints to the user in the form of families of similar theorems,
and generates auxiliary lemmas automatically. In this paper, we present the two
most recent extensions of ACL2(ml). First, ACL2(ml) can now suggest families
of similar function definitions, in addition to the families of similar
theorems. Second, the lemma generation tool implemented in ACL2(ml) has been
improved with a method to generate preconditions using the guard mechanism of
ACL2. The user of ACL2(ml) can also invoke the latter extension directly to
obtain preconditions for their own conjectures.
Comment: In Proceedings ACL2 2014, arXiv:1406.123
Proof-Pattern Recognition and Lemma Discovery in ACL2
We present a novel technique for combining statistical machine learning for
proof-pattern recognition with symbolic methods for lemma discovery. The
resulting tool, ACL2(ml), gathers proof statistics and uses statistical
pattern recognition to pre-process data from libraries, and then suggests
auxiliary lemmas in new proofs by analogy with already seen examples. This
paper presents the implementation of ACL2(ml) alongside theoretical
descriptions of the proof-pattern recognition and lemma discovery methods
involved in it.
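The proof-pattern recognition step can be pictured with a small sketch, under the assumption (stated loosely in the abstract) that theorems are represented as numeric vectors of proof statistics and that similar vectors indicate analogous proofs. The feature choices and theorem names below are invented for illustration and are not taken from ACL2(ml)'s implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical proof statistics per library theorem:
# (induction steps, rewrite steps, arithmetic lemmas used).
library = {
    "app-assoc":  (1, 4, 0),
    "rev-rev":    (1, 5, 0),
    "plus-comm":  (1, 1, 3),
    "times-comm": (1, 2, 4),
}

def similar_theorems(goal_stats, library, k=2):
    """Rank library theorems by similarity to the current goal's statistics,
    mimicking the 'families of similar theorems' hints."""
    ranked = sorted(library, key=lambda n: cosine(goal_stats, library[n]),
                    reverse=True)
    return ranked[:k]
```

A goal whose proof so far is arithmetic-heavy, e.g. statistics `(1, 1, 4)`, would be matched with the arithmetic family rather than the list-processing one.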
Invariant Synthesis for Incomplete Verification Engines
We propose a framework for synthesizing inductive invariants for incomplete
verification engines, which soundly reduce logical problems in undecidable
theories to decidable theories. Our framework is based on the
counterexample-guided inductive synthesis (CEGIS) principle and allows verification engines to
communicate non-provability information to guide invariant synthesis. We show
precisely how the verification engine can compute such non-provability
information and how to build effective learning algorithms when invariants are
expressed as Boolean combinations of a fixed set of predicates. Moreover, we
evaluate our framework in two verification settings, one in which verification
engines need to handle quantified formulas and one in which verification
engines have to reason about heap properties expressed in an expressive but
undecidable separation logic. Our experiments show that our invariant synthesis
framework based on non-provability information can both effectively synthesize
inductive invariants and adequately strengthen contracts across a large suite
of programs.
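The CEGIS principle behind this framework can be sketched in miniature, assuming invariants are conjunctions over a fixed predicate set (the paper's learner handles general Boolean combinations, real verification engines, and non-provability information; the bounded enumeration below is purely illustrative). The toy program is `x := 0; while x < 8: x := x + 2`, with safety property `x != 7`.

```python
from itertools import combinations

# Fixed predicate set over which candidate invariants are built.
predicates = {
    "x >= 0":     lambda x: x >= 0,
    "x % 2 == 0": lambda x: x % 2 == 0,
    "x <= 8":     lambda x: x <= 8,
}

def holds(conj, x):
    """A conjunction holds at x iff all of its predicates do."""
    return all(predicates[p](x) for p in conj)

def verify(conj):
    """Stub verifier: checks initiation, consecution, and safety over a
    bounded state space, returning a labeled counterexample on failure."""
    if not holds(conj, 0):                       # initiation: x = 0
        return ("pos", 0)
    for x in range(-10, 20):
        if holds(conj, x) and x < 8 and not holds(conj, x + 2):
            return ("pos", x + 2)                # consecution failure
        if holds(conj, x) and x == 7:
            return ("neg", 7)                    # safety violation
    return None

def cegis(max_rounds=20):
    """Alternate between a learner (strongest consistent conjunction)
    and the verifier, feeding counterexamples back to the learner."""
    pos, neg = set(), set()
    names = list(predicates)
    for _ in range(max_rounds):
        candidate = None
        for size in range(len(names), -1, -1):   # prefer stronger invariants
            for conj in combinations(names, size):
                if all(holds(conj, x) for x in pos) and \
                   not any(holds(conj, x) for x in neg):
                    candidate = conj
                    break
            if candidate is not None:
                break
        if candidate is None:
            return None
        cex = verify(candidate)
        if cex is None:
            return candidate                     # inductive and safe
        kind, x = cex
        (pos if kind == "pos" else neg).add(x)
    return None
```

Here the full conjunction is already inductive and safe, so the loop converges immediately; on harder instances the counterexamples returned by `verify` are what steer the learner, which is the role non-provability information plays for the incomplete engines discussed in the abstract.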