Automatically correcting syntactic and semantic errors in ATL transformations using multi-objective optimization
Model-driven engineering (MDE) is a software development paradigm that promotes the use of models as first-class artifacts and automated processes to derive other artifacts from them, such as code, documentation, and test cases. Model transformation is an important element of MDE since it makes it possible to manipulate the abstract representations that models are. Model transformations, like other programs, are subject to both syntactic and semantic errors. Fixing those errors is difficult and time-consuming because transformations depend both on the transformation language, such as ATL, and on the modeling languages in which input and output models are expressed. Existing work on transformation repair targets either syntactic or semantic errors, one error at a time, and defines patch templates manually. The main goal of our research is to propose a generic framework for fixing multiple syntactic and semantic errors automatically. To achieve this goal, we reformulate the repair of model transformations as a multi-objective optimization problem and solve it by means of evolutionary algorithms. To adapt the framework to the two categories of errors, we use different types of objectives and sophisticated strategies to guide the exploration of the solution space.
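As a concrete illustration of this reformulation, the following is a minimal Python sketch of multi-objective evolutionary search over candidate patches, keeping the Pareto-optimal front under two competing objectives. The patch encoding and both objective functions are invented stand-ins for illustration; they are not the framework's actual objectives, operators, or algorithm.

```python
"""Toy multi-objective evolutionary search over candidate patches."""
import random

random.seed(0)

def objectives(patch):
    # Hypothetical stand-ins for "remaining syntactic errors" and
    # "remaining semantic errors" as functions of the patch genes.
    syntactic = sum(abs(g - 3) for g in patch)
    semantic = sum(abs(g - 7) for g in patch)
    return (syntactic, semantic)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mutate(patch):
    """Randomly perturb one gene of a candidate patch."""
    child = list(patch)
    child[random.randrange(len(child))] += random.choice((-1, 1))
    return child

# Evolve a small population and keep the non-dominated (Pareto) front.
population = [[random.randint(0, 10) for _ in range(4)] for _ in range(20)]
for _ in range(100):
    population += [mutate(p) for p in population]
    scored = [(objectives(p), p) for p in population]
    front = [p for o, p in scored
             if not any(dominates(o2, o) for o2, _ in scored)]
    population = (front + random.sample(population, max(0, 20 - len(front))))[:20]

print(sorted({objectives(p) for p in population}))
```

Because neither objective is privileged, the search returns a front of trade-off patches rather than a single repair, which is what lets one framework cover both error categories at once.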
New Tool for Proliferation Resistance Evaluation Applied to Uranium and Thorium Fueled Fast Reactor Fuel Cycles
The comparison of nuclear facilities based on their barriers to nuclear material
proliferation has remained a difficult endeavor, often requiring expert elicitation for each
system under consideration. However, objectively comparing systems by using a set of computable metrics to derive a single number representing each system is not, in essence, a problem specific to nuclear nonproliferation, and significant research on it has been performed for business models. For instance, Multi-Attribute Utility Analysis (MAUA) methods have previously been used to provide objective insight into the barriers to proliferation.
In this paper, the Proliferation Resistance Analysis and Evaluation Tool for Observed
Risk (PRAETOR), a multi-tiered analysis tool based on the multiplicative MAUA
method, is presented. It folds sixty-three mostly independent metrics over three levels of detail into a single metric for comparing nonproliferation performance. To reduce analyst bias, the weights among the various metrics were obtained by surveying thirty-three nonproliferation specialists and nonspecialists from fields such as particle physics, international policy, and industrial engineering. PRAETOR was used to evaluate the Fast Breeder Reactor Fuel Cycle (FBRFC). The
results obtained using these weights are compared against a uniform weight approach.
Results are presented for five nuclear material diversion scenarios: four examples
include a diversion attempt on various components of a PUREX fast reactor cycle and
one scenario involves theft from a PUREX facility in a LWR cycle. The FBRFC was
evaluated with uranium-plutonium fuel and a second time using thorium-uranium fuel.
These diversion scenarios were tested with both uniform and expert weights, with and
without safeguards in place. The numerical results corroborate established nonproliferation expectations and provide insight into the proliferation resistance of fast reactor facilities relative to known standards.
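As a rough illustration of how a multi-tiered multiplicative fold can collapse many metrics into one number, the Python sketch below combines per-metric utilities as weighted geometric means, tier by tier. The metric scores, weights, and two-group tiering are illustrative assumptions; they do not reproduce PRAETOR's actual sixty-three metrics, its three levels, or its elicited weights.

```python
"""Sketch of a multiplicative multi-attribute aggregation."""
import math

def fold(utilities, weights):
    """Weighted geometric mean of per-metric utilities in [0, 1]."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return math.prod(u ** w for u, w in zip(utilities, weights))

# Tier 1: leaf metrics scored on [0, 1] (higher = more resistant); all invented.
material_barriers = fold([0.8, 0.6, 0.9], [0.5, 0.3, 0.2])
technical_barriers = fold([0.4, 0.7], [0.6, 0.4])

# Higher tiers: fold group scores into a single comparison number.
overall = fold([material_barriers, technical_barriers], [0.55, 0.45])
print(f"overall proliferation-resistance score: {overall:.3f}")
```

One property of a multiplicative fold, unlike an additive one, is that a near-zero score on any single barrier drags the aggregate toward zero instead of being averaged away, which suits barriers that cannot compensate for one another.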
a case study
Computer simulations have become increasingly popular in many different areas over the years, owing mainly to more effective and cheaper machines. In many cases, the trend seems to be that computer simulations are replacing experiments, at least in areas in which experiments are very difficult (expensive) or impossible. One such area is that of attempting to foresee what will happen in the future. Such analyses are particularly important for long-lived constructions such as a repository for spent nuclear fuel. In the modelling effort, several computer codes are used, and input data are often used without scrutiny. However, this work shows that even the rather simple task of calculating the solubility of a solid phase in a given water is affected by several sources of uncertainty. These uncertainties may make the calculated solubility vary by several orders of magnitude. Thus the input to the more complex
codes, simulating processes in connection with the repository, will also be affected.
This report presents some computer programs for uncertainty and sensitivity analysis of
solubility calculations. They are then illustrated by numerical simulations and estimation of uncertainty intervals for a case at the Äspö site in Sweden. Some of the input data treated as uncertain parameters are the stability constants for the reactions between the metal ion concerned and the elements present in the selected water or the rock. Stability constants and the enthalpies and entropies of reaction for the thorium-water-acetylacetone-phosphate system have been determined experimentally. In addition to the values determined for these entities, uncertainty intervals are also estimated. A complexing mechanism for the thorium phosphates at pH 8 is also suggested.
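The order-of-magnitude spread described above can be reproduced with a simple Monte Carlo propagation: draw the uncertain stability constants from assumed distributions and re-evaluate the solubility for each draw. The one-complex solubility expression and all numerical values below are illustrative assumptions, not the report's actual programs or chemistry.

```python
"""Monte Carlo propagation of stability-constant uncertainty."""
import random
import statistics

random.seed(1)

def solubility(log_ksp, log_k1, ligand_conc):
    # Toy model: S = Ksp * (1 + K1 * [L]); real speciation is far richer.
    return 10 ** log_ksp * (1.0 + 10 ** log_k1 * ligand_conc)

draws = sorted(
    solubility(
        random.gauss(-9.0, 0.5),   # log Ksp, mean +/- 1 sigma (assumed)
        random.gauss(4.0, 0.3),    # log K1 of a single complex (assumed)
        1e-3,                      # ligand concentration, mol/L (assumed)
    )
    for _ in range(10_000)
)
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"median {statistics.median(draws):.2e} mol/L, "
      f"95% interval [{lo:.2e}, {hi:.2e}]")
```

Even the modest 0.5-unit standard deviation assumed on log Ksp here spreads the computed solubility across roughly two orders of magnitude, the same qualitative effect the report describes.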
A Pattern-based Foundation for Language-Driven Software Engineering
This work brings together two fundamental ideas for modelling, programming and analysing software systems. The first idea is of a methodological nature: engineering software by systematically creating and relating languages. The second idea is of a technical nature: using patterns as a practical foundation for computing. The goal is to show that the systematic creation and layering of languages can be reduced to the elementary operations of pattern matching and instantiation and that this pattern-based approach provides a formal and practical foundation for language-driven modelling, programming and analysis.
The underpinning of the work is a novel formalism for recognising, deconstructing, creating, searching, transforming and generally manipulating data structures. The formalism is based on typed sequences, a generic structure for representing trees. It defines basic pattern expressions for matching and instantiating atomic values and variables. Horizontal, vertical, diagonal and hierarchical operators are different ways of combining patterns. Transformations combine matching and instantiating patterns and they are patterns themselves. A quasiquotation mechanism allows arbitrary levels of meta-pattern functionality and forms the basis of pattern abstraction. Path polymorphic operators are used to specify fine-grained search of structures. A range of core concepts such as layering, parsing and pattern-based computing can naturally be defined through pattern expressions.
Three language-driven tools that utilise the pattern formalism showcase the applicability of the pattern approach. Concat is a self-sustaining (meta-)programming system in which all computations are expressed by matching and instantiation. This includes parsing, executing and optimising programs. By applying its language engineering tools to its own meta-language, Concat can extend itself from within. XMF (XML Modeling Framework) is a browser-based modelling and meta-modelling framework that provides flexible means to create and relate modelling languages and to query and validate models. The pattern functionality that makes this possible is partly exposed as a schema language and partly as a JavaScript library. CFR (Channel Filter Rule Language) implements a language-driven approach for layered analysis of communication in complex networked systems. The communication on each layer is visible in the language of an “abstract protocol” that is defined by communication patterns.
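To illustrate the two elementary operations everything above is reduced to, here is a minimal Python sketch of pattern matching and instantiation over trees. Trees are encoded as nested tuples and variables as strings starting with '?'; this encoding is a simplification for illustration, not the thesis's typed-sequence formalism or its combination operators.

```python
"""Sketch of pattern matching and instantiation over trees."""

def match(pattern, tree, env=None):
    """Return a variable binding if pattern matches tree, else None."""
    env = dict(env or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in env and env[pattern] != tree:
            return None            # variable already bound to something else
        env[pattern] = tree
        return env
    if isinstance(pattern, tuple) and isinstance(tree, tuple):
        if len(pattern) != len(tree):
            return None
        for p, t in zip(pattern, tree):
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pattern == tree else None

def instantiate(pattern, env):
    """Fill a pattern's variables from a binding, producing a new tree."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        return env[pattern]
    if isinstance(pattern, tuple):
        return tuple(instantiate(p, env) for p in pattern)
    return pattern

# A transformation is itself a pair of patterns: match, then instantiate.
env = match(("add", "?x", ("add", "?y", 0)), ("add", 1, ("add", 2, 0)))
print(instantiate(("add", "?x", "?y"), env))  # ('add', 1, 2)
```

The final two lines mirror the claim above that transformations combine a matching pattern with an instantiating pattern and are therefore patterns themselves.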
The synthesis of estuarine bathymetry from sparse sounding data
The two aims of the project were:
1. Devising a system for predicting areas of bathymetric change within the Fal estuary.
2. Formulating and evaluating a method for interpolating single-beam acoustic bathymetry that avoids interpolation artefacts.
To address these aims, sources of bathymetric data for the Fal estuary were identified: Truro Harbour Office, Cornwall County Council and the Environment Agency. The data collected from these sources included red-wavelength Lidar, aerial photography and single-beam acoustic bathymetry from a number of different years. These data were input into a Geographic Information System (GIS) and assessed for suitability for data comparison and hence for assessing temporal trends in bathymetry within the estuary.
Problems encountered during interpolation of the acoustic bathymetry gave rise to the second aim of the project: to formulate an interpolation system suitable for interpolating the single-beam bathymetric data in a realistic way, avoiding serious interpolation artefacts. This aim was met successfully through the following processes:
1. An interpolation system was developed, using polygonal zones bounded by channels and coastlines to prevent interpolation across these boundaries. This system, based on Inverse Distance Weighting (IDW) interpolation, was referred to as Zoned Inverse Distance Weighting (ZIDW); a sketch of the zoning idea follows this list.
2. ZIDW was found, by visual inspection, to eliminate the interpolation artefacts described above.
3. The processes of identifying sounding lines and channels, and of allocating soundings and output grid cells to polygons, were successfully automated to allow ZIDW to be applied to large and multiple data sets. Manual intervention was retained for the processes performed most successfully by the human brain, to optimise the results of ZIDW.
4. To formalise the theory of ZIDW, it was applied to a range of idealised, mathematically defined channels. For simple straight and regularly curved mathematical channels, interpolation by the standard TIN method was found to perform as well as ZIDW.
5. Investigation of sinusoidal channels within a rectangular estuary, however, revealed that the TIN method begins to produce serious interpolation artefacts where sounding lines are not parallel to the centre lines of channels and ridges. Hence, ZIDW was determined mathematically to be the optimal overall method of interpolation for single-beam bathymetric data.
6. Finally, ZIDW was refined, using data from the Humber and Gironde estuaries, to achieve universal applicability for interpolating single-beam echo-sounding data from any estuary.
7. The refinements involved allowing for non-continuous, flood- and ebb-type channels; consideration of the effects of the scale of the estuary; smoothing of the channels using cubic splines; interpolation using a 'smart' ellipse; and the option to reconstruct sounding lines from data that had previously been re-ordered.
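As referenced in point 1 above, here is a minimal Python sketch of the zoning idea behind ZIDW: ordinary inverse-distance weighting, except that a grid cell only draws on soundings inside its own polygonal zone, so depths are never interpolated across a channel boundary. Zone membership is given here as precomputed labels; deriving the polygons from channels and coastlines, and the later refinements (spline smoothing, the 'smart' ellipse), are not shown.

```python
"""Sketch of Zoned Inverse Distance Weighting (ZIDW)."""

def zidw(cell_xy, cell_zone, soundings, power=2.0):
    """IDW over soundings sharing the cell's zone; soundings are (x, y, z, zone)."""
    num = den = 0.0
    for x, y, z, zone in soundings:
        if zone != cell_zone:
            continue  # never interpolate across a zone (channel) boundary
        d2 = (x - cell_xy[0]) ** 2 + (y - cell_xy[1]) ** 2
        if d2 == 0.0:
            return z  # grid cell coincides with a sounding
        w = 1.0 / d2 ** (power / 2.0)
        num += w * z
        den += w
    return num / den if den else None  # None: no soundings in this zone

# Two zones separated by a channel: the deep point on the far side is ignored.
soundings = [(0.0, 0.0, 5.0, "A"), (1.0, 0.0, 6.0, "A"), (0.5, 2.0, 20.0, "B")]
print(zidw((0.5, 0.1), "A", soundings))  # ~5.5 m, unaffected by the 20 m point
```

With plain IDW the 20 m sounding across the channel would pull the estimate sharply deeper; restricting the weighting to zone "A" is what removes that class of artefact.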