AutoBayes: A System for Generating Data Analysis Programs from Statistical Models
Data analysis is an important scientific task which is required whenever information needs to be extracted from raw data. Statistical approaches to data analysis, which use methods from probability theory and numerical analysis, are well-founded but difficult to implement: the development of a statistical data analysis program for any given application is time-consuming and requires substantial knowledge and experience in several areas. In this paper, we describe AutoBayes, a program synthesis system for the generation of data analysis programs from statistical models. A statistical model specifies the properties for each problem variable (i.e., observation or parameter) and its dependencies in the form of a probability distribution. It is a fully declarative problem description, similar in spirit to a set of differential equations. From such a model, AutoBayes generates optimized and fully commented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Code is produced by a schema-guided deductive synthesis process. A schema consists of a code template and applicability constraints which are checked against the model during synthesis using theorem-proving technology. AutoBayes augments schema-guided synthesis with symbolic-algebraic computation and can thus derive closed-form solutions for many problems. It is well suited to tasks such as estimating best-fitting model parameters for given data. Here, we describe AutoBayes's system architecture, in particular the schema-guided synthesis kernel. Its capabilities are illustrated by a number of advanced textbook examples and benchmarks.
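To make the closed-form-solutions claim concrete, here is a minimal sketch (ours, not AutoBayes's schema machinery, and in Python rather than the generated C/C++) of the kind of derivation its symbolic-algebraic layer automates: maximum-likelihood estimators for a Gaussian model are derived symbolically instead of being hand-coded. The three-observation model is an assumption made purely for illustration.

```python
# Hedged sketch: deriving closed-form maximum-likelihood estimators
# symbolically, in the spirit of AutoBayes's symbolic-algebraic layer.
# Model (illustrative assumption): x1, x2, x3 ~ N(mu, sigma^2).
import sympy as sp

mu = sp.Symbol("mu", real=True)
sigma = sp.Symbol("sigma", positive=True)
xs = sp.symbols("x1 x2 x3", real=True)

# Log-likelihood of the sample, up to additive constants.
log_lik = sum(-sp.log(sigma) - (x - mu) ** 2 / (2 * sigma**2) for x in xs)

# Stationarity conditions d/dmu = 0 and d/dsigma = 0, solved symbolically.
sol = sp.solve([sp.diff(log_lik, mu), sp.diff(log_lik, sigma)], [mu, sigma], dict=True)
print(sol)  # mu -> (x1 + x2 + x3)/3 (the sample mean); sigma -> RMS deviation from it
```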
A study of the very high order natural user language (with AI capabilities) for the NASA space station common module
The requirements are identified for a very high order natural language to be used by crew members on board the Space Station. The hardware facilities, databases, real-time processes, and software support are discussed. The operations and capabilities that will be required in both normal (routine) and abnormal (non-routine) situations are evaluated. A structure and syntax for an interface (front-end) language to satisfy the above requirements are recommended.
The nature and evaluation of commercial expert system building tools, revision 1
This memorandum reviews the factors that constitute an Expert System Building Tool (ESBT) and evaluates current tools in terms of these factors. The evaluation is based on the tools' structure and on their alternative forms of knowledge representation, inference mechanisms, and developer and end-user interfaces. Next, functional capabilities, such as diagnosis and design, are related to alternative forms of mechanization. The characteristics and capabilities of existing commercial tools are then reviewed in terms of these criteria.
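As a point of reference for the "inference mechanisms" such tools provide, here is a hedged sketch of one of the most common: forward chaining over if-then production rules. The rules and fact names are invented for illustration and do not come from the memorandum.

```python
# Minimal sketch of forward chaining, one inference mechanism an ESBT
# might offer. Rules and facts are hypothetical diagnosis examples.
RULES = [
    ({"battery_dead"}, "no_power"),
    ({"no_power"}, "engine_wont_start"),
    ({"fuel_empty"}, "engine_wont_start"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Fire every rule whose conditions hold until a fixpoint is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"battery_dead"}))
# {'battery_dead', 'no_power', 'engine_wont_start'}
```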
CLP-based protein fragment assembly
The paper investigates a novel approach, based on Constraint Logic Programming (CLP), to predict the 3D conformation of a protein via fragment assembly. The fragments are extracted by a preprocessor (also developed for this work) from a database of known protein structures, which clusters and classifies the fragments according to similarity and frequency. The problem of assembling fragments into a complete conformation is mapped to a constraint solving problem and solved using CLP. The constraint-based model uses a medium-discretization-degree Cα/side-chain-centroid protein model that offers efficiency and a good approximation for space filling. The approach adapts existing energy models to the protein representation used and applies a large neighborhood search strategy. The results show the feasibility and efficiency of the method. The declarative nature of the solution makes it easy to include future extensions, e.g., fragments of different sizes for better accuracy.
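The core idea of the mapping can be illustrated with a deliberately tiny sketch. The paper's solver is written in CLP; the Python backtracking search below only mimics the shape of the problem (one fragment choice per segment, compatibility constraints between consecutive choices, lowest energy wins). The fragment library, clash list, and energies are all invented for the example.

```python
# Toy sketch of fragment assembly as constraint solving (the paper uses
# CLP; this is an illustrative Python analogue). Data are hypothetical.
FRAGMENTS = {                    # segment index -> (fragment, energy) candidates
    0: [("f0a", 0.0), ("f0b", 1.5)],
    1: [("f1a", 0.5), ("f1b", 0.2)],
    2: [("f2a", 1.0), ("f2b", 0.1)],
}
INCOMPATIBLE = {("f0b", "f1a"), ("f1b", "f2a")}   # geometric clashes (invented)

def compatible(prev, cur):
    return prev is None or (prev, cur) not in INCOMPATIBLE

def assemble(segment=0, prev=None, energy=0.0, choice=()):
    """Backtracking search over fragment choices under the constraints."""
    if segment == len(FRAGMENTS):
        yield energy, choice
        return
    for frag, e in FRAGMENTS[segment]:
        if compatible(prev, frag):            # prune choices violating a constraint
            yield from assemble(segment + 1, frag, energy + e, choice + (frag,))

best = min(assemble())                        # lowest-energy complete conformation
print(best)                                   # (0.3, ('f0a', 'f1b', 'f2b'))
```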
Modelling student errors in physics problem-solving
The motivation for this work has been the development of knowledge about the behaviour of human problem-solvers that would enable an intelligent machine tutor to be designed. In the domain of Newtonian Mechanics, this breaks down into two necessary sub-tasks: how do people decide which equation to generate, and what do they produce when they try to generate one? Although these are psychologically separate questions, an automatic tutor for the domain would need to make use of both kinds of knowledge.
Therefore, strategies for controlling search in physics problem-solving are investigated, and a computational model of erroneous solutions is described. Experimental data is used to evaluate the model. Errors in the domain are classified, and the behaviour of problem-solvers is predicted under certain circumstances.
Prediction of novice errors is a crucial ability for an intelligent tutorial system, and the error analysis implemented in the NEWT program is the main contribution of this thesis.
The investigation has two principal aims:
(1) To develop a model that allows a student's future behaviour to be predicted from an analysis of his past actions. It is argued that this is a necessary prerequisite for the construction of an intelligent tutorial system.
(2) To identify the psychological mechanisms used by problem-solvers working in the domain.
The thesis attempts to achieve these aims in two main ways:
(1) A computer program called NEWT has been constructed, which solves problems of Newtonian Mechanics correctly, or in one of a number of erroneous ways. This allows human errors to be matched, classified, and in some cases predicted.
(2) An analysis of published data leads to the formulation of a control strategy termed "planstacking". This is compared to alternative control strategies, and shown to explain existing data more adequately.
The program is evaluated both as a psychological theory and as a proposed student model for use in a computer-based tutorial system. The NEWT program was developed from the MECHO program written by Bundy, Byrd, Luger, Mellish and Palmer (1979) at the Department of Artificial Intelligence, Edinburgh University. This program was adapted to produce erroneous problem solutions by the inclusion of procedures to implement malrules observed in the domain.
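To illustrate the malrule idea in miniature (NEWT itself is a Prolog program descended from MECHO; this sketch is ours), a malrule can be treated as a systematic distortion of a correct rule, and the distorted answers it generates can be matched against what a student actually wrote. Both malrules below are invented examples, not ones catalogued in the thesis.

```python
# Hedged sketch: malrules as systematic distortions of a correct rule,
# used to predict (and later match) erroneous student answers.
import sympy as sp

m, a, g = sp.symbols("m a g", positive=True)

def correct_force(mass, accel):
    return mass * accel                              # Newton's second law, F = m*a

MALRULES = {                                         # invented example malrules
    "omit_mass": lambda mass, accel: accel,          # student drops the mass
    "add_gravity": lambda mass, accel: mass * (accel + g),  # spurious g term
}

def predicted_errors(mass, accel):
    """Enumerate the erroneous answers each malrule would produce."""
    return {name: rule(mass, accel) for name, rule in MALRULES.items()}

student_answer = m * (a + g)
assert sp.simplify(correct_force(m, a) - student_answer) != 0  # not the correct answer
for name, expr in predicted_errors(m, a).items():
    if sp.simplify(expr - student_answer) == 0:
        print("student answer matches malrule:", name)   # -> add_gravity
```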
An overview of decision table literature 1982-1995.
This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference, an author-supplied abstract, a number of keywords and a classification are provided. In some cases, our own comments are added. The purpose of these comments is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication) and the language of the document. After a description of the scope of the review, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.
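For readers coming to the survey cold, a hedged sketch of the artefact itself may help: a decision table pairs combinations of condition outcomes with actions, one rule per column (rendered here as rows of a Python list). The discount example is invented.

```python
# Hedged sketch of a decision table: each rule maps a combination of
# condition outcomes to an action; None means "don't care". Invented example.
DECISION_TABLE = [
    # (is_member, big_order)   action
    ((True,  True),            "discount_15"),
    ((True,  False),           "discount_10"),
    ((False, None),            "no_discount"),
]

def decide(is_member: bool, big_order: bool) -> str:
    """Return the action of the first rule whose conditions all match."""
    for conditions, action in DECISION_TABLE:
        if all(c is None or c == v for c, v in zip(conditions, (is_member, big_order))):
            return action
    raise ValueError("incomplete decision table")

print(decide(True, False))   # discount_10
print(decide(False, True))   # no_discount
```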
Recommendations for NASA research and development in artificial intelligence
Basic artificial intelligence (AI) research, AI applications, engineering, institutional management, and previously impractical missions enabled by AI are discussed.
A Universal Multiphase Mission Execution Automaton (MEA) with Prolog Implementation for Unmanned Untethered Vehicles
Proceedings of the 17th International Symposium on Unmanned Untethered Submersible Technology, Portsmouth, NH, August 2011.