Converting Ontologies into DSLs
This paper presents a project whose main objective is to explore the ontology-based development of Domain-Specific Languages (DSLs), more precisely, of their underlying grammar.
After reviewing the basic concepts characterizing ontologies and Domain-Specific Languages, we introduce a tool, Onto2Gra, that takes advantage of the knowledge described by the ontology and automatically generates a grammar for a DSL in which one can talk about the domain described by that ontology.
This approach represents a rigorous method to create, in a safe and effective way, a grammar for a new specialized language restricted to a concrete domain. The usual process of creating a grammar from scratch is, like every creative activity, difficult, slow, and error-prone; this proposal is therefore, from a Grammar Engineering point of view, of the utmost importance.
After the grammar generation phase, the Grammar Engineer can manipulate the grammar to add syntactic sugar, improving the final language quality, or even to add semantic actions.
The Onto2Gra project is composed of three engines. The main one is OWL2DSL, the component that converts an OWL ontology into an attribute grammar. The two additional modules are Onto2OWL, which converts ontologies written in OntoDL (a lightweight DSL for describing ontologies) into standard OWL, and DDesc2OWL, which converts domain instances, written in the DSL generated by OWL2DSL, into the initial OWL ontology.
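The class-to-production mapping performed by OWL2DSL is not detailed in this abstract; as a minimal sketch of the general idea, assuming a toy ontology where each class carries a list of named properties (the function and the production shape below are illustrative assumptions, not the actual OWL2DSL algorithm):

```python
# Hypothetical sketch: derive one grammar production per ontology class,
# with one labelled slot per property of that class.
def class_to_production(cls, properties):
    """Map a class name and its properties to a BNF-like production string."""
    rhs = " ".join(f"'{p}' {p}_value" for p in properties)
    return f"{cls.lower()} : '{cls}' '{{' {rhs} '}}' ;"

# Toy ontology fragment: a Book class with two data properties.
print(class_to_production("Book", ["title", "year"]))
# book : 'Book' '{' 'title' title_value 'year' year_value '}' ;
```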
Automatic Test Generation for Space
The European Space Agency (ESA) uses an engine to perform tests on the Ground Segment infrastructure, especially on the Operational Simulator. This engine uses many different tools to support the development of a regression-testing infrastructure, and these tests perform black-box testing of the C++ simulator implementation. VST (VisionSpace Technologies) is one of the companies that provides these services to ESA, and it needs a tool to automatically infer tests from the existing C++ code, instead of manually writing test scripts. With this motivation in mind, this paper explores automatic testing approaches and tools in order to propose a system that satisfies VST's needs.
Visualization/animation of programs in Alma: obtaining different results
Alma, a system for program animation, receives a computer program as input and produces a sequence of visualizations that describe its functionality. The system automatically generates program animations, basing this process on the internal representation of those programs. The back-end of the system works over an execution tree (DAST, a Decorated Abstract Syntax Tree), implementing the animation algorithm. This algorithm uses two rule bases: visualizing rules (to associate graphical representations with program elements, creating a visual description of the program state) and rewriting rules (to change the program state).
In this paper the main goal is to present the extensibility of the system, in the sense of adding or modifying inputs and outputs. We also discuss the characteristics of Alma's architecture that make this possible.
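The interplay of the two rule bases can be sketched on a toy tree; the node encoding and the two rules below are illustrative assumptions, not Alma's actual DAST or rule bases:

```python
# Toy sketch of Alma-style animation: one rewriting step changes the
# program state, and a visualizing rule renders each state as text.

def rewrite(node):
    """Rewriting rule: reduce an addition of two literals to a literal
    (one step of the animated state change)."""
    if isinstance(node, tuple) and node[0] == "add":
        _, left, right = node
        if isinstance(left, int) and isinstance(right, int):
            return left + right
    return node

def visualize(node):
    """Visualizing rule: map a tree node to a textual picture of the state."""
    if isinstance(node, int):
        return str(node)
    _, left, right = node
    return f"({visualize(left)} + {visualize(right)})"

state = ("add", 2, 3)
print(visualize(state))   # (2 + 3)  -- picture before the step
state = rewrite(state)
print(visualize(state))   # 5        -- picture after the step
```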
Profile Detection Through Source Code Static Analysis
The present article reflects the progress of an ongoing master's dissertation on language engineering. The main goal of the work described here is to infer a programmer's profile through the analysis of his source code. After such analysis, the programmer is placed on a scale that characterizes his language abilities. There are several potential applications for such profiling, namely the evaluation of a programmer's skills and proficiency in a given language, or the continuous evaluation of a student's progress in a programming course. Throughout the course of this project, and as a proof of concept, a tool that allows the automatic profiling of a Java programmer is under development. This tool is also introduced in the paper and its preliminary outcomes are discussed.
Algorithm animation made systematic
In this article we propose the architecture of the Alma system, a system for animating algorithms written in different programming languages, offering a visualization interface that the user can control. In our opinion, what characterizes this proposal is: independence from the application and from the programming language; and the existence of a versatile visualization language (allowing the system to be adapted to the user's needs).
Among the possible applications of Alma, we highlight: the animation of algorithms, as a support for teaching programming and as a tool for mathematics didactics; answer analysis, to support the grading of assessment tests; and the (visual) interpretation of annotated documents.
Probabilistic SynSet Based Concept Location
Concept location is a common task in program comprehension techniques, essential in many approaches used for software maintenance and software evolution. An important goal of this process is to discover a mapping between source code and human-oriented concepts.
Although programs are written in a strict and formal language, natural-language terms and sentences, such as identifiers (variable or function names), constant strings, or comments, can still be found embedded in programs. Using terminology concepts and natural language processing techniques, these terms can be exploited to discover clues about which real-world concepts the source code is addressing.
This work extends the symbol tables built by compilers with ontology-driven constructs, and extends the synonym sets defined in linguistics with Probabilistic SynSets created automatically from software-domain parallel corpora. Then, using relational algebra, it creates semantic bridges between program elements and human-oriented concepts to enhance concept location tasks.
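A probabilistic SynSet can be pictured with a minimal sketch, assuming a simplified representation (the counts, concept, and normalization below are illustrative assumptions, not the paper's actual construction from parallel corpora):

```python
# Toy sketch: turn raw co-occurrence counts of identifier terms with a
# concept into term -> probability that the term denotes that concept.
from collections import Counter

def probabilistic_synset(term_counts):
    """Normalize raw counts into a probability distribution over terms."""
    total = sum(term_counts.values())
    return {term: count / total for term, count in term_counts.items()}

# Toy counts: identifiers observed aligned with the concept "customer".
counts = Counter({"client": 6, "customer": 3, "cust": 1})
synset = probabilistic_synset(counts)
print(synset["client"])  # 0.6
```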
XML templates for constraints (XTC): an abstraction level for constraint specification languages
DTDs make it possible to tag a text and to validate its structure against a grammar.
At a higher level, constraint specification languages (XML Constraint Specification Languages), namely XCSL, Schematron, and XML-Schema, already make it possible to validate non-structural aspects of XML documents, such as: relations between elements, or attributes, belonging to different contexts; invariants over data models; and restrictions on the values of elements or attributes.
The XCSL (XML Constraint Specification Language) system was born within our research group [7]. This language was nevertheless tested on an equal footing with Schematron and XML-Schema. A considerable set of case studies was used to test and compare these three languages in terms of: the kinds of constraints that can be specified; ease of learning and use; and the information returned to the user. The most significant results were described in [3].
While making this comparison, we realized that, in each language and for each kind of constraint, there is a fixed text and a set of variable parts, the latter being common to the various languages. Taking these variable parts into account, we created templates for each kind of constraint in each of the three languages. With these templates it is possible to generate the constraint specification, in any of those languages, from a finite set of parameters.
In this article we show the templates for each constraint-type/language pair. From the common parts of those templates we built a set of generic templates, named XTC (XML Templates for Constraints), one for each kind of constraint, independently of the chosen language. From an XTC document, all the constraint specification files can be generated, that is, one specification file for each language. We then present several examples written in XTC.
The final implementation uses what we call a third-generation XSL stylesheet system: three levels of stylesheets. With the first stylesheet (the XTC one) and the XTC document we generate the specification document in the intended language; with the latter and the second stylesheet (specific to the intended language) we generate the third stylesheet (the document with which the semantics of the various instances can finally be validated); lastly, we apply this last stylesheet to the various documents of the family under study.
We end the article by showing how we built this architecture based solely on XML and XSL.
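The fixed-text-plus-variable-parts idea behind these templates can be sketched with a minimal example, assuming simplified, hypothetical templates (the real XTC, XCSL, and Schematron templates are the ones shown in the article): one finite parameter set instantiates the template of each target language.

```python
# Hypothetical sketch of the template idea: each target language keeps a
# fixed text with named holes, and the same parameters fill all of them.
from string import Template

# Assumed, simplified templates for one constraint type ("value range").
TEMPLATES = {
    "Schematron-like": Template(
        '<rule context="$context"><assert test="$test">$message</assert></rule>'
    ),
    "XCSL-like": Template(
        '<constraint selector="$context" condition="$test" action="$message"/>'
    ),
}

# One finite set of parameters generates a specification in every language.
params = {"context": "book/year", "test": ". >= 1450", "message": "year too small"}

for lang, tpl in TEMPLATES.items():
    print(lang, "->", tpl.substitute(params))
```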
Data flow analysis applied to optimize generic workflow problems
Compilation, the process that transforms a program written in a high-level language into assembly or binary code, is an elaborate process that mixes several powerful technologies, some of them developed specifically for this area. Nowadays, compilers are highly developed systems that can analyze and improve the source code quite efficiently, profiting from all the potential of the new processor architectures. This paper introduces a common type of analysis, Data Flow Analysis, which is used to compute flow-sensitive information about programs, whose results are essential to produce many code optimizations. It is also argued that the problem of analyzing the data flow in software programs has many similarities with the problems found in industrial engineering, planning, and management. As a consequence, it is possible to apply analysis and optimization techniques used by compilers in these areas.
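A small, self-contained instance of data-flow analysis is backward liveness analysis, computed as a fixed point of the classic equations live_out(b) = union of live_in(s) over successors s, and live_in(b) = use(b) | (live_out(b) - def(b)). The tiny control-flow graph below is an illustrative example, not one taken from the paper:

```python
# Backward liveness analysis over a tiny CFG, iterated to a fixed point.
def liveness(blocks, succ):
    """blocks: name -> (use set, def set); succ: name -> successor names."""
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:                       # iterate until nothing changes
        changed = False
        for b in blocks:
            out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
            use, defs = blocks[b]
            inn = use | (out - defs)
            if inn != live_in[b] or out != live_out[b]:
                live_in[b], live_out[b] = inn, out
                changed = True
    return live_in, live_out

# b1: a = 1         (def a)         -> b2
# b2: b = a + 1     (use a, def b)  -> b3
# b3: return b      (use b)
blocks = {"b1": (set(), {"a"}), "b2": ({"a"}, {"b"}), "b3": ({"b"}, set())}
succ = {"b1": ["b2"], "b2": ["b3"], "b3": []}
live_in, live_out = liveness(blocks, succ)
print(live_out["b1"])  # {'a'} : a is live right after b1
```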
DOLPHIN-FEW - An example of a Web system to analyze and study compilers behavior
DOLPHIN is a framework conceived to develop and test compiler components. DOLPHIN-FEW (Front-End for the Web) is the DOLPHIN module that integrates all Web-related functionality. Initially conceived to monitor the behavior of some routines of the compiler's back-end, it is nowadays also usable as a visual tool to teach how those code analysis, optimization, and code generation routines work. This paper introduces DOLPHIN-FEW, a software system that takes advantage of the Web environment and associated technologies to be a powerful pedagogical tool for teaching compiler construction topics.
Applying compiler technology to solve generic
Compilers are tools that transform high-level programming languages into assembly or binary code. The essence of the process is carried out by the interpretation and code generation steps, but nowadays most compilers also have a strong code-optimization component, which exploits as much as possible the potential of the computer architectures for which the compiler must generate code. These optimizations are based on the information provided by several analysis processes. This paper presents some of these code analyses and optimizations, and shows how they can be used to solve problems or improve the quality of solutions used in areas such as industrial engineering and planning.
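One of the optimizations such analyses enable can be sketched on a toy expression tree: constant folding replaces operations on known constants with their result at compile time. The AST encoding below is an illustrative assumption, not a representation used in the paper:

```python
# Constant folding on a toy expression AST: tuples are operator nodes,
# ints are literals, and strings stand for runtime variables.
import operator

OPS = {"+": operator.add, "*": operator.mul}

def fold(expr):
    """Recursively fold constant sub-expressions: ('+', 2, 3) -> 5."""
    if isinstance(expr, tuple):
        op, left, right = expr
        left, right = fold(left), fold(right)
        if isinstance(left, int) and isinstance(right, int):
            return OPS[op](left, right)
        return (op, left, right)
    return expr

print(fold(("*", ("+", 2, 3), "x")))  # ('*', 5, 'x')
```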
