MERCURY: Distributed Incremental Attribute Grammar Evaluation
This technical report consists of the two most recent papers from the MERCURY project. "Multiuser, Distributed Language-Based Environments" explains the application of incremental attribute grammar evaluation algorithms to the generation of distributed programming environments and describes the implementation of the MERCURY system. "Version and Configuration Control in Distributed Language-Based Environments" presents new algorithms that permit MERCURY to support multiple versions and configurations of modules and to propagate changes to aggregate attributes more efficiently.
Incremental Attribute Evaluation for Multi-User Semantics-Based Editors
This thesis addresses two fundamental problems associated with performing incremental attribute evaluation in multi-user editors based on the attribute grammar formalism: (1) multiple asynchronous modifications of the attributed derivation tree, and (2) segmentation of the tree into separate modular units. Solutions to these problems make it possible to construct semantics-based editors for use by teams of programmers developing or maintaining large software systems. Multi-user semantics-based editors improve software productivity by reducing communication costs and snafus. The objectives of an incremental attribute evaluation algorithm for multiple asynchronous changes are that (a) all attributes of the derivation tree have correct values when evaluation terminates, and (b) the cost of evaluating the attributes necessary to reestablish a correctly attributed derivation tree is minimized. We present a family of algorithms that differ in how they balance the tradeoff between algorithm efficiency and expressiveness of the attribute grammar. This is important because multi-user editors seem a practical basis for many areas of computer-supported cooperative work, not just programming. Different application areas may have distinct definitions of efficiency, and may impose different requirements on the expressiveness of the attribute grammar. The characteristics of the application domain can then be used to select the most efficient strategy for each particular editor. To address the second problem, we define an extension of classical attribute grammars that allows the specification of interface consistency checking for programs composed of many modules. Classical attribute grammars can specify the static semantics of monolithic programs or modules, but not inter-module semantics; the latter was done in the past using ad hoc techniques.
Extended attribute grammars support programming-in-the-large constructs found in real programming languages, including textual inclusion, multiple kinds of modular units, and nested modular units. We discuss attribute evaluation in the context of programming-in-the-large, particularly the separation of concerns between the local evaluator for each modular unit and the global evaluator that propagates attribute flows across module boundaries. The result is a uniform approach to formal specification of both intra-module and inter-module static semantic properties, with the ability to use attribute evaluation algorithms to carry out a complete static semantic analysis of a multi-module program.
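The incremental evaluation idea underlying this line of work can be sketched as change propagation over attribute dependencies: recompute an attribute only when one of the attributes it depends on has actually changed value. The class and function names below are our own illustration, not code from MERCURY or the thesis.

```python
# Minimal sketch of incremental attribute re-evaluation via a worklist.
# Each attribute has a rule over its dependencies; a user edit changes
# one attribute, and only genuinely affected attributes are recomputed.

class Attr:
    def __init__(self, name, rule=None, deps=(), value=None):
        self.name, self.rule, self.value = name, rule, value
        self.deps = list(deps)
        self.dependents = []
        for d in self.deps:
            d.dependents.append(self)

def reevaluate(changed):
    """Worklist change propagation: recompute a dependent attribute and
    keep propagating only while the recomputed value differs."""
    work = list(changed)
    while work:
        a = work.pop()
        for dep in a.dependents:
            new = dep.rule(*(d.value for d in dep.deps))
            if new != dep.value:      # propagate only real changes
                dep.value = new
                work.append(dep)

# Tiny example: a declared type, a use site, and a consistency attribute.
decl = Attr("decl_type", value="int")
use  = Attr("use_type",  value="int")
ok   = Attr("type_ok", rule=lambda d, u: d == u, deps=(decl, use))
ok.value = ok.rule(decl.value, use.value)

decl.value = "float"      # a user edit to the derivation tree
reevaluate([decl])
print(ok.value)           # → False
```

Real evaluators schedule recomputation over a derivation tree and must handle the asynchronous, multi-module setting the thesis studies; the worklist above only conveys the core "stop when values stop changing" discipline.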
A change-oriented architecture for mathematical authoring assistance
The computer-assisted authoring of mathematical documents using a scientific text editor requires new mathematical knowledge management and transformation techniques to organize the overall workflow of an assistance system like the ΩMEGA system. The challenge is that, throughout the system, various kinds of given and derived knowledge units occur in different formats and with different dependencies. If changes occur in these pieces of knowledge, they need to be propagated effectively. We present a Change-Oriented Architecture for mathematical authoring assistance. Documents are used as interfaces, and the components of the architecture interact by actively changing the interface documents and by reacting to those changes. In order to optimize this style of interaction, we present two essential methods in this thesis. First, we develop an efficient method for the computation of weighted semantic changes between two versions of a document. Second, we present an invertible grammar formalism for the automated bidirectional transformation between interface documents. The presented architecture provides an adequate basis for the computer-assisted authoring of mathematical documents with semantic annotations and a controlled mathematical language.
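The first method, computing weighted changes between two document versions, can be illustrated with a small sketch that uses Python's `difflib` as a stand-in for the thesis's semantic-change computation; the weighting function here is an illustrative assumption (e.g. making markup commands cost more), not the thesis's weighting.

```python
# Sketch: compute an edit script between two token sequences and a
# weighted total cost. Real semantic change computation works over
# structured documents, not flat token lists.
import difflib

def change_ops(old, new, weight=lambda tok: 2.0 if tok.startswith("\\") else 1.0):
    """Return (list of non-equal opcodes, total weighted cost)."""
    sm = difflib.SequenceMatcher(a=old, b=new, autojunk=False)
    ops, cost = [], 0.0
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            continue
        ops.append((tag, old[i1:i2], new[j1:j2]))
        cost += sum(weight(t) for t in old[i1:i2] + new[j1:j2])
    return ops, cost

old = ["Let", "x", "be", "even"]
new = ["Let", "x", "be", "odd"]
ops, cost = change_ops(old, new)
print(ops, cost)   # one 'replace' op, weighted cost 2.0
```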
Machine learning and its applications in reliability analysis systems
In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving RA performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both neural network learning and symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches have been discussed for creating RAs. According to the results of our survey, we suggest that currently the best design of RAs is to embed model-based RAs, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still improvements that can be made through the application of Machine Learning. By implanting a 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition and inconsistency checking, among other capabilities. To conclude the thesis, we propose an architecture for La MORA.
Reinforcement Learning
Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data, or directly proposing and performing actions. Learning is a very important aspect. This book is about reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters of this book describe and extend the scope of reinforcement learning. The remaining 11 chapters show that there is already wide usage in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers can deal with technical complexities, the task of human operators shifts to specifying goals at increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of theory and applications, and it should stimulate and encourage new research in the field.
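The goal-directed learning the book describes can be made concrete with tabular Q-learning on a toy five-state chain; every name and parameter below (states, actions, learning rate, and so on) is our own illustrative choice, not an example from the book.

```python
# Tabular Q-learning sketch: an agent on states 0..4 learns to walk
# right toward a rewarded goal state, purely from trial and error.
import random

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)               # move left or right (clamped at the ends)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2
random.seed(0)

for _ in range(300):             # episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # temporal-difference update toward reward + discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)   # one action per non-goal state; +1 means "move right"
```

The learned value for moving right out of the state next to the goal approaches 1, while the value of moving away stays strictly lower, so the greedy policy at that state points toward the goal.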
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures adopted in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them.
Comment: Published in Journal of AI Research (JAIR), volume 61, pp 75-170. 118 pages, 8 figures, 1 table.
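The survey's definition of NLG, generating text from non-linguistic input, can be illustrated with a minimal template-based realiser; real NLG pipelines involve the richer stages the survey covers (content selection, aggregation, surface realisation), and the record fields below are invented for the example.

```python
# Toy data-to-text rule: turn a structured weather record into a sentence.
def realise(record):
    trend = "rising" if record["delta"] > 0 else "falling"
    return (f"In {record['city']}, the temperature is {record['temp']}°C "
            f"and {trend}.")

print(realise({"city": "Malta", "temp": 21, "delta": 1.5}))
# → In Malta, the temperature is 21°C and rising.
```

Data-driven NLG systems learn such mappings from corpora instead of hand-writing them, which is precisely the shift the survey documents.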
Une approche par boosting à la sélection de modèles pour l'analyse syntaxique statistique [A boosting approach to model selection for statistical parsing]
In this work we present our approach to model selection for statistical parsing via boosting. The method targets the inefficiency of current feature selection methods: it allows a constant feature-selection time at each iteration, rather than the increasing selection time of standard forward wrapper methods. With the aim of performing feature selection on very high-dimensional data, in particular for parsing morphologically rich languages, we test the approach, which uses the multiclass AdaBoost algorithm SAMME (Zhu et al., 2006), on French data from the French Treebank, using a multilingual discriminative constituency parser (Crabbé, 2014). Current results show that the method is far more efficient than a naïve method, and the performance of the models produced is promising, with F-scores comparable to carefully selected manual models. We provide some perspectives for improving on these results in future work.
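The boosting-as-feature-selection idea can be sketched with the SAMME update the paper builds on, using one-feature decision stumps so that each boosting round "selects" a single feature. The data, stump learner, and two-class setup below are illustrative simplifications (with two classes the SAMME weight reduces to standard AdaBoost, since log(K-1) = 0), not the paper's parsing setup.

```python
# SAMME-style boosting loop with single-feature stumps: the feature
# chosen at each round is recorded, giving a greedy feature ranking.
import numpy as np

def fit_stump(X, y, w):
    """Pick the (feature, threshold) stump with lowest weighted error."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            pred = np.where(X[:, f] <= t, 0, 1)   # toy 2-leaf stump
            err = w[pred != y].sum()
            if best is None or err < best[0]:
                best = (err, f, t)
    return best

def samme(X, y, n_rounds=5, n_classes=2):
    w = np.full(len(y), 1 / len(y))               # uniform example weights
    chosen = []
    for _ in range(n_rounds):
        err, f, t = fit_stump(X, y, w)
        err = max(err, 1e-10)
        # SAMME learner weight: log((1 - err) / err) + log(K - 1)
        alpha = np.log((1 - err) / err) + np.log(n_classes - 1)
        pred = np.where(X[:, f] <= t, 0, 1)
        w = w * np.exp(alpha * (pred != y))       # up-weight mistakes
        w = w / w.sum()
        chosen.append(f)
    return chosen

X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
y = X[:, 1]            # label equals feature 1, so it should be selected
chosen = samme(X, y)
print(chosen)          # feature 1 is picked every round
```

The constant per-iteration cost the paper exploits comes from the fact that each round scans the candidate features once against the current weights, rather than re-fitting ever-larger feature subsets as forward wrapper selection does.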