The use of sequencing information in software specification for verification
Software requirements specifications, virtual machine definitions, and algorithmic design all place constraints on the sequence of operations that are permissible during a program's execution. This paper discusses how these constraints can be captured and used to aid in the program verification process. The sequencing constraints can be expressed as a grammar over the alphabet of program operations. Several techniques can be used in support of testing or verification based on these specifications. Dynamic analysis and static analysis are considered here. The automatic generation of some of these aids is feasible; the means of doing so is described.
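The abstract does not fix a notation, but a minimal sketch of the idea is a regular grammar over a hypothetical alphabet of file operations, checked dynamically against an observed execution trace (all names below are illustrative, not from the paper):

```python
import re

# Hypothetical alphabet of program operations; the sequencing constraint
# "open, then any number of reads/writes, then close" is a regular grammar
# over this alphabet, expressed here as a regular expression.
LEGAL_SEQUENCE = re.compile(r"^o[rw]*c$")

OP_CODES = {"open": "o", "read": "r", "write": "w", "close": "c"}

def trace_is_legal(trace):
    """Dynamic analysis: encode the observed operation trace as a string
    and match it against the sequencing grammar."""
    encoded = "".join(OP_CODES[op] for op in trace)
    return bool(LEGAL_SEQUENCE.match(encoded))

print(trace_is_legal(["open", "read", "write", "close"]))  # True
print(trace_is_legal(["read", "open", "close"]))           # False
```

A static analyzer would instead compare this grammar against the paths of the program's flow graph rather than a single runtime trace.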
Systems Technology Laboratory (STL) compendium of utilities
Multipurpose programs, routines, and operating systems are described. Data conversion and character string comparison subroutines are included. Graphics packages and file maintenance programs are also included.
Implementing embedded artificial intelligence rules within algorithmic programming languages
Most integrations of artificial intelligence (AI) capabilities with non-AI (usually FORTRAN-based) application programs require the latter to execute separately, running as a subprogram or, at best, as a coroutine of the AI system. In many cases this organization is unacceptable; instead, the requirement is for an AI facility that runs in embedded mode, i.e., is called as a subprogram by the application program. The design and implementation of a Prolog-based AI capability that can be invoked in embedded mode are described. The significance of this system is twofold. First, provision of Prolog-based symbol-manipulation and deduction facilities makes a powerful symbolic reasoning mechanism available to application programs written in non-AI languages. Second, the power of Prolog's deductive and non-procedural descriptive capabilities, which allow the user to describe the problem to be solved rather than the solution, is to a large extent vitiated by the absence of the standard control structures provided by other languages; embedding invocations of Prolog rule bases in programs written in non-AI languages makes it possible to put Prolog calls inside DO loops and similar control constructs. The resulting merger of non-AI and AI languages thus yields a symbiotic system in which the advantages of both programming systems are retained and their deficiencies largely remedied.
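The embedded-mode pattern can be sketched without the paper's actual Prolog system: below, a toy forward-chaining rule engine stands in for the rule base, and the "application program" calls it as an ordinary subprogram inside a loop. The engine, rule, and fact names are all illustrative assumptions, not from the paper:

```python
# Toy forward-chaining rule engine standing in for the Prolog-based facility.
def infer(facts, rules):
    """Apply rules (premise-set -> conclusion) until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# A single illustrative rule: hot sensor with the fan off raises an alarm.
rules = [({"sensor_hot", "fan_off"}, "alarm")]

# Embedded mode: the application program owns the control flow (the loop)
# and invokes the rule engine as a subprogram on each iteration.
alarms = []
for readings in [{"sensor_hot", "fan_on"}, {"sensor_hot", "fan_off"}]:
    derived = infer(readings, rules)
    alarms.append("alarm" in derived)

print(alarms)  # [False, True]
```

The point of the pattern is the inversion of control: the deductive facility supplies reasoning on demand, while loops and other control structures stay in the host language.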
GRAAL - A graph algorithmic language
A FORTRAN-based version, FGRAAL, of the graph algorithmic language GRAAL is described.
A NASA/RAE cooperation in the development of a real-time knowledge-based autopilot
As part of a US/UK cooperative aeronautical research program, a joint activity between the NASA Dryden Flight Research Facility and the Royal Aerospace Establishment on knowledge-based systems was established. This joint activity is concerned with tools and techniques for the implementation and validation of real-time knowledge-based systems. The proposed next stage of this research is described, in which some of the problems of implementing and validating a knowledge-based autopilot for a generic high-performance aircraft are investigated.
Automatic differentiation in machine learning: a survey
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in
machine learning. Automatic differentiation (AD), also called algorithmic
differentiation or simply "autodiff", is a family of techniques similar to but
more general than backpropagation for efficiently and accurately evaluating
derivatives of numeric functions expressed as computer programs. AD is a small
but established field with applications in areas including computational fluid
dynamics, atmospheric sciences, and engineering design optimization. Until very
recently, the fields of machine learning and AD have largely been unaware of
each other and, in some cases, have independently discovered each other's
results. Despite its relevance, general-purpose AD has been missing from the
machine learning toolbox, a situation slowly changing with its ongoing adoption
under the names "dynamic computational graphs" and "differentiable
programming". We survey the intersection of AD and machine learning, cover
applications where AD has direct relevance, and address the main implementation
techniques. By precisely defining the main differentiation techniques and their
interrelationships, we aim to bring clarity to the usage of the terms
"autodiff", "automatic differentiation", and "symbolic differentiation" as
these are encountered more and more in machine learning settings.
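The forward mode of AD surveyed above is commonly explained via dual numbers: augmenting each value with an ε-coefficient (ε² = 0) that the arithmetic rules propagate exactly as the chain rule does. A minimal self-contained sketch, not taken from the survey:

```python
import math

class Dual:
    """Dual number a + b*eps with eps**2 = 0; the eps coefficient carries the
    derivative, giving forward-mode automatic differentiation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        # Product rule emerges from (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps.
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def sin(x):
    # Chain rule for a primitive: d/dx sin(u) = cos(u) * u'.
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def derivative(f, x):
    """Evaluate f and f' at x in a single pass by seeding dot = 1."""
    return f(Dual(x, 1.0)).dot

# d/dx [x*sin(x) + x**2] = sin(x) + x*cos(x) + 2x, evaluated at x = 1.5
f = lambda x: x * sin(x) + x * x
x0 = 1.5
analytic = math.sin(x0) + x0 * math.cos(x0) + 2 * x0
print(abs(derivative(f, x0) - analytic) < 1e-12)  # True
```

Unlike symbolic differentiation, no expression for f' is ever built, and unlike numerical differencing the result is exact to machine precision; reverse mode (backpropagation) instead propagates adjoints from outputs to inputs.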
Lattice Perturbation Theory by Computer Algebra: A Three-Loop Result for the Topological Susceptibility
We present a scheme for the analytic computation of renormalization functions
on the lattice, using a symbolic manipulation computer language. Our first
nontrivial application is a new three-loop result for the topological
susceptibility.