Supporting fine-grained generative model-driven evolution
In the standard generative Model-driven Architecture (MDA), adapting the models of an existing system requires re-generation and restarting of that system. This is due to a strong separation between the modeling environment and the runtime environment. Certain current approaches remove this separation, allowing a system to be changed smoothly when the model changes. These approaches are, however, based on interpretation of modeling information rather than on generation, as in MDA. This paper describes an architecture that supports fine-grained evolution combined with generative model-driven development. Fine-grained changes are applied in a generative model-driven way to a system that has itself been developed in this way. To achieve this, model changes must be propagated correctly toward impacted elements. The impact of a model change flows along three dimensions: implementation, data (instances), and modeled dependencies. These three dimensions are explicitly represented in an integrated modeling-runtime environment to enable traceability. This implies a fundamental rethinking of MDA.
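To make the propagation idea above concrete, the following is a minimal Python sketch of tracing a model change along the three impact dimensions named in the abstract (implementation, data, and modeled dependencies). All names (ModelElement, propagate_change, the example elements) are hypothetical illustrations, not the architecture proposed in the paper.

from dataclasses import dataclass, field

@dataclass
class ModelElement:
    # A model element with links along the three impact dimensions.
    name: str
    implementation: list = field(default_factory=list)   # generated artifacts
    instances: list = field(default_factory=list)        # runtime data conforming to the element
    dependents: list = field(default_factory=list)       # model elements that depend on this one

def propagate_change(element, seen=None):
    # Collect everything impacted by a change to `element`, following the
    # implementation, data, and dependency links transitively.
    seen = seen if seen is not None else set()
    if element.name in seen:
        return seen
    seen.add(element.name)
    seen.update(element.implementation)   # code that must be re-generated
    seen.update(element.instances)        # data that must be migrated
    for dependent in element.dependents:  # transitively impacted model elements
        propagate_change(dependent, seen)
    return seen

# Changing `Customer` impacts its generated class, its stored instances,
# and the dependent `Order` element (and Order's artifacts in turn).
order = ModelElement("Order", implementation=["order.py"], instances=["order#1"])
customer = ModelElement("Customer", implementation=["customer.py"],
                        instances=["customer#42"], dependents=[order])
print(sorted(propagate_change(customer)))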
Some issues in the 'archaeology' of software evolution
During a software project's lifetime, the software goes through many changes, as components are added, removed and modified to fix bugs and add new features. This paper is intended as a lightweight introduction to some of the issues arising from an 'archaeological' investigation of software evolution. We use our own work to look at some of the challenges faced, techniques used, findings obtained, and lessons learnt when measuring and visualising the historical changes that happen during the evolution of software.
Traceability for Model Driven, Software Product Line Engineering
Traceability is an important challenge for software organizations. This is true for traditional software development and even more so in new approaches that introduce a greater variety of artefacts, such as Model-Driven Development or Software Product Lines. In this paper we look at some aspects of the interaction of traceability, Model-Driven Development, and Software Product Lines.
Integrating the common variability language with multilanguage annotations for web engineering
Web application development involves managing a high diversity of files and resources, such as code, pages, or style sheets, implemented in different languages. To deal with the automatic generation of custom-made configurations of web applications, industry usually adopts annotation-based approaches, even though the majority of studies encourage the use of composition-based approaches to implement Software Product Lines. Recent work tries to combine both approaches to obtain their complementary benefits. However, technology companies are reluctant to adopt new development paradigms such as feature-oriented programming or aspect-oriented programming. Moreover, it is extremely difficult, or even impossible, to apply these programming models to web applications, mainly because of their multilingual nature: their development involves multiple types of source code (Java, Groovy, JavaScript), templates (HTML, Markdown, XML), style sheet files (CSS and its variants, such as SCSS), and other files (JSON, YML, shell scripts). We propose to use the Common Variability Language as a composition-based approach and to integrate annotations to manage the fine-grained variability of a Software Product Line for web applications. In this paper, we (i) show that existing composition- and annotation-based approaches, including some well-known combinations, are not appropriate to model and implement the variability of web applications; and (ii) present a combined approach that effectively integrates annotations into a composition-based approach for web applications. We implement our approach and show its applicability with an industrial real-world system.

Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
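As a rough illustration of how fine-grained, annotation-based variability can coexist with multilingual web artifacts, the sketch below resolves feature annotations written inside the native comment syntax of each file type. The annotation markers (#if / #endif) and the resolver are hypothetical assumptions, not the notation or tooling used in the paper.

import re

# Comment syntaxes per file type, as regex fragments. The #if / #endif
# annotation markers are a hypothetical syntax for this sketch only.
COMMENT = {"js": r"//", "css": r"/\*", "html": r"<!--"}

def resolve(source, filetype, features):
    # Keep or drop annotated regions depending on the selected features.
    marker = COMMENT[filetype]
    out, keep_stack = [], [True]
    for line in source.splitlines():
        m_if = re.match(rf"\s*{marker}\s*#if\s+(\w+)", line)
        m_end = re.match(rf"\s*{marker}\s*#endif", line)
        if m_if:
            keep_stack.append(keep_stack[-1] and m_if.group(1) in features)
        elif m_end:
            keep_stack.pop()
        elif all(keep_stack):
            out.append(line)
    return "\n".join(out)

js = """
function checkout() {
  // #if PAYPAL
  payWithPaypal();
  // #endif
  payWithCard();
}
"""
print(resolve(js, "js", features={"PAYPAL"}))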
Using the DiaSpec design language and compiler to develop robotics systems
A Sense/Compute/Control (SCC) application is one that interacts with the physical environment. Such applications are pervasive in domains such as building automation, assisted living, and autonomic computing. Developing an SCC application is complex because: (1) the implementation must address both the interaction with the environment and the application logic; (2) any evolution in the environment must be reflected in the implementation of the application; (3) correctness is essential, as effects on the physical environment can have irreversible consequences. The SCC architectural pattern and the DiaSpec domain-specific design language propose a framework to guide the design of such applications. From a design description in DiaSpec, the DiaSpec compiler is capable of generating a programming framework that guides the developer in implementing the design and that provides runtime support. In this paper, we report on an experiment using DiaSpec (both the design language and compiler) to develop a standard robotics application. We discuss the benefits and problems of using DiaSpec in a robotics setting and present some changes that would make DiaSpec a better framework in this setting.

Comment: DSLRob'11: Domain-Specific Languages and models for ROBotic systems (2011)
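The Sense/Compute/Control decomposition described above can be sketched in a few lines. The code below only illustrates the layering with hypothetical names (Sensor, AverageDistanceContext, BrakeController); DiaSpec itself generates a programming framework from a design description, so this is not DiaSpec output.

from dataclasses import dataclass, field

@dataclass
class Sensor:
    # Sense layer: publishes raw readings to subscribed context operators.
    name: str
    subscribers: list = field(default_factory=list)

    def publish(self, value):
        for callback in self.subscribers:
            callback(value)

class AverageDistanceContext:
    # Compute layer: refines raw range readings into an averaged distance.
    def __init__(self, controller):
        self.readings = []
        self.controller = controller

    def on_reading(self, value):
        self.readings = (self.readings + [value])[-5:]   # sliding window of 5
        self.controller.on_distance(sum(self.readings) / len(self.readings))

class BrakeController:
    # Control layer: acts on the physical environment through an actuator.
    def on_distance(self, distance):
        if distance < 0.5:
            print("actuator: brake")

controller = BrakeController()
context = AverageDistanceContext(controller)
front_ranger = Sensor("front_ranger", subscribers=[context.on_reading])
for reading in (0.9, 0.6, 0.4, 0.3, 0.2):
    front_ranger.publish(reading)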
Kevoree Modeling Framework (KMF): Efficient modeling techniques for runtime use
The creation of Domain-Specific Languages (DSLs) counts as one of the main goals in the field of Model-Driven Software Engineering (MDSE). The main purpose of these DSLs is to facilitate the manipulation of domain-specific concepts, by providing developers with specific tools for their domain of expertise. A natural approach to creating DSLs is to reuse existing modeling standards and tools. In this area, the Eclipse Modeling Framework (EMF) has rapidly become the de facto standard in MDSE for building Domain-Specific Languages and tools based on generative techniques. However, the use of EMF-generated tools in domains like the Internet of Things (IoT), Cloud Computing, or Models@Runtime reaches several limitations. In this paper, we identify several properties the generated tools must comply with to be usable in domains other than desktop-based software systems. We then challenge EMF on these properties and describe our approach to overcoming the limitations. Our approach, implemented in the Kevoree Modeling Framework (KMF), is finally evaluated according to the identified properties and compared to EMF.

Comment: ISBN 978-2-87971-131-7; N° TR-SnT-2014-11 (2014)
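One of the concerns behind runtime-oriented modeling frameworks such as KMF is keeping generated model objects cheap enough to hold in memory at runtime (for example on IoT devices or in Models@Runtime scenarios). The snippet below is only a loose, Python-level illustration of that kind of per-object overhead; KMF itself targets the JVM and none of these names come from it.

import sys

class NodeWithDict:
    # Plain Python object: every instance carries an attribute dictionary.
    def __init__(self, name, started):
        self.name = name
        self.started = started

class NodeWithSlots:
    # Fixed-slot object: no per-instance dictionary, smaller footprint.
    __slots__ = ("name", "started")
    def __init__(self, name, started):
        self.name = name
        self.started = started

a, b = NodeWithDict("node0", True), NodeWithSlots("node0", True)
print(sys.getsizeof(a) + sys.getsizeof(a.__dict__))  # object plus its attribute dict
print(sys.getsizeof(b))                              # fixed-slot object only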
DiAMoNDBack: Diffusion-denoising Autoregressive Model for Non-Deterministic Backmapping of Cα Protein Traces
Coarse-grained molecular models of proteins permit access to length and time scales unattainable by all-atom models and the simulation of processes that occur on long time scales, such as aggregation and folding. The reduced resolution realizes computational accelerations, but an atomistic representation can be vital for a complete understanding of mechanistic details. Backmapping is the process of restoring all-atom resolution to coarse-grained molecular models. In this work, we report DiAMoNDBack (Diffusion-denoising Autoregressive Model for Non-Deterministic Backmapping) as an autoregressive denoising diffusion probability model to restore all-atom details to coarse-grained protein representations retaining only Cα coordinates. The autoregressive generation process proceeds from the protein N-terminus to C-terminus in a residue-by-residue fashion, conditioned on the Cα trace and previously backmapped backbone and side-chain atoms within the local neighborhood. The local and autoregressive nature of our model makes it transferable between proteins. The stochastic nature of the denoising diffusion process means that the model generates a realistic ensemble of backbone and side-chain all-atom configurations consistent with the coarse-grained Cα trace. We train DiAMoNDBack over 65k+ structures from the Protein Data Bank (PDB) and validate it in applications to a hold-out PDB test set, intrinsically disordered protein structures from the Protein Ensemble Database (PED), molecular dynamics simulations of fast-folding mini-proteins from D. E. Shaw Research, and coarse-grained simulation data. We achieve state-of-the-art reconstruction performance in terms of correct bond formation, avoidance of side-chain clashes, and diversity of the generated side-chain configurational states. We make the DiAMoNDBack model publicly available as a free and open source Python package.
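The residue-by-residue generation loop described in the abstract can be sketched as follows. The denoise callable stands in for the trained denoising diffusion model, and the function signature, neighborhood radius, and stub are hypothetical; this is not the API of the released DiAMoNDBack package.

import numpy as np

def backmap(ca_trace, denoise, neighborhood=10.0):
    # Autoregressive, residue-by-residue backmapping sketch. `ca_trace` holds one
    # C-alpha coordinate per residue (shape [n_residues, 3]). `denoise(ca, context)`
    # is assumed to return all-atom coordinates for one residue given its C-alpha
    # position and the already-placed nearby atoms (hypothetical interface).
    placed = []                          # all-atom coordinates per backmapped residue
    for ca in ca_trace:                  # proceed from N-terminus to C-terminus
        # condition on previously generated atoms within the local neighborhood
        context = [atoms for atoms in placed
                   if np.linalg.norm(atoms.mean(axis=0) - ca) < neighborhood]
        placed.append(denoise(ca, context))
    return placed

# Toy usage with a stub "denoiser" that just copies the C-alpha position to 5 atoms.
trace = np.random.rand(10, 3) * 20.0
stub = lambda ca, context: np.tile(ca, (5, 1))
atoms = backmap(trace, stub)
print(len(atoms), atoms[0].shape)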
Automatic generation of software interfaces for supporting decision-making processes. An application of domain engineering & machine learning
Data analysis is a key process to foster knowledge generation in particular domains or fields of study. With a strong informative foundation derived from the analysis of collected data, decision-makers can make strategic choices with the aim of obtaining valuable benefits in their specific areas of action. However, given the steady growth of data volumes, data analysis needs to rely on powerful tools to enable knowledge extraction.

Information dashboards offer a software solution to analyze large volumes of data visually, to identify patterns and relations, and to make decisions according to the presented information. But decision-makers may have different goals and, consequently, different necessities regarding their dashboards. Moreover, the variety of data sources, structures, and domains can hamper the design and implementation of these tools.

This Ph.D. thesis tackles the challenge of improving the development process of information dashboards and data visualizations while enhancing their quality and features in terms of personalization, usability, and flexibility, among others.

Several research activities have been carried out to support this thesis. First, a systematic literature mapping and review was performed to analyze different methodologies and solutions related to the automatic generation of tailored information dashboards. The outcomes of the review led to the selection of a model-driven approach in combination with the software product line paradigm to deal with the automatic generation of information dashboards.

In this context, a meta-model was developed following a domain engineering approach. This meta-model represents the skeleton of information dashboards and data visualizations through the abstraction of their components and features, and it has been the backbone of the subsequent generative pipeline for these tools.

The meta-model and generative pipeline have been tested through their integration in different scenarios, both theoretical and practical. Regarding the theoretical dimension of the research, the meta-model has been successfully integrated with another meta-model to support knowledge generation in learning ecosystems, and used as a framework to conceptualize and instantiate information dashboards in different domains.

In terms of practical applications, the focus has been put on how to transform the meta-model into an instance adapted to a specific context, and how to finally transform this latter model into code, i.e., the final, functional product. These practical scenarios involved the automatic generation of dashboards in the context of a Ph.D. programme, the application of Artificial Intelligence algorithms in the process, and the development of a graphical instantiation platform that combines the meta-model and the generative pipeline into a visual generation system.

Finally, different case studies have been conducted in the employment and employability, health, and education domains. The number of applications of the meta-model across theoretical and practical dimensions and domains is also a result in itself. Every outcome associated with this thesis is driven by the dashboard meta-model, which also proves its versatility and flexibility when it comes to conceptualizing, generating, and capturing knowledge related to dashboards and data visualizations.
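A minimal sketch of the meta-model-to-code idea discussed in the thesis summary: the classes below stand in for meta-model concepts, an instance captures one concrete dashboard, and a trivial generator turns the instance into placeholder front-end code. All names and the output format are illustrative assumptions, not the meta-model or pipeline developed in the thesis.

from dataclasses import dataclass

@dataclass
class Visualization:
    kind: str          # e.g. "bar", "line"
    data_source: str   # where the component reads its data from
    title: str

@dataclass
class Dashboard:
    name: str
    components: list

def generate(dashboard):
    # Transform a model instance into (placeholder) front-end code.
    lines = ["// dashboard: " + dashboard.name]
    for c in dashboard.components:
        lines.append(f"render('{c.kind}', source='{c.data_source}', title='{c.title}');")
    return "\n".join(lines)

phd_dashboard = Dashboard(
    name="phd-programme",
    components=[Visualization("bar", "publications.csv", "Publications per year"),
                Visualization("line", "enrolments.csv", "Enrolments over time")],
)
print(generate(phd_dashboard))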
Specifying information dashboards’ interactive features through meta-model instantiation
Information dashboards can be leveraged to make informed decisions with the goal of improving policies, processes, and results in different contexts. However, the design process of these tools can be convoluted, given the variety of profiles that can be involved in decision-making processes. The educational context is one of the contexts that can benefit from the use of information dashboards, but given the diversity of actors within this area (teachers, managers, students, researchers, etc.), it is necessary to take different factors into account to deliver useful and effective tools. This work describes an approach to generate information dashboards with interactivity capabilities in different contexts through meta-modeling. Having the possibility of specifying interaction patterns within the generative workflow makes the personalization process more fine-grained, allowing very specific user requirements to be matched. An example of application within the context of Learning Analytics is presented to demonstrate the viability of this approach.
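As a hedged illustration of what specifying interactive features at the model level could look like, the instance below declares interaction patterns as plain data that a generator could later wire into the dashboard. The field names and patterns are hypothetical, not the meta-model described in the work.

# A model instance declaring interaction patterns between two components.
instance = {
    "dashboard": "learning-analytics",
    "components": ["grades_histogram", "activity_timeline"],
    "interactions": [
        {"pattern": "brush-and-filter",
         "source": "activity_timeline",
         "event": "range_selected",
         "targets": ["grades_histogram"]},
        {"pattern": "details-on-demand",
         "source": "grades_histogram",
         "event": "bar_clicked",
         "targets": ["activity_timeline"]},
    ],
}

# A generator could then emit the event wiring for each declared pattern.
for interaction in instance["interactions"]:
    print(f"wire {interaction['source']}.{interaction['event']} "
          f"-> {', '.join(interaction['targets'])} ({interaction['pattern']})")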