trackr: A Framework for Enhancing Discoverability and Reproducibility of Data Visualizations and Other Artifacts in R
Research is an incremental, iterative process, with new results relying on and
building upon previous ones. Scientists need to find, retrieve, understand, and
verify results in order to confidently extend them, even when the results are
their own. We present the trackr framework for organizing, automatically
annotating, discovering, and retrieving results. We identify sources of
automatically extractable metadata for computational results, and we define an
extensible system for organizing, annotating, and searching for results based
on these and other metadata. We present an open-source implementation of these
concepts for plots, computational artifacts, and woven dynamic reports
generated in the R statistical computing language.
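As a rough illustration of the idea behind such a framework (written in Python rather than R, with hypothetical names such as ArtifactStore, record, and find; this is not the package's actual API), a minimal sketch of an automatically annotating, searchable artifact store might look like this:

```python
# Sketch only: store artifacts alongside automatically extracted
# metadata, plus user-supplied tags, and make them searchable.
import hashlib
import time


class ArtifactStore:
    def __init__(self):
        self.records = []

    def record(self, artifact: bytes, user_tags=None):
        """Store an artifact with automatically extracted metadata."""
        meta = {
            "sha1": hashlib.sha1(artifact).hexdigest(),      # identity
            "created": time.strftime("%Y-%m-%d %H:%M:%S"),   # provenance
            "size": len(artifact),
            "tags": set(user_tags or ()),                    # user metadata
        }
        self.records.append((artifact, meta))
        return meta["sha1"]

    def find(self, tags=()):
        """Return metadata of artifacts whose tags include all terms."""
        terms = set(tags)
        return [m for _, m in self.records if terms <= m["tags"]]


store = ArtifactStore()
store.record(b"<svg>...</svg>", user_tags=["scatterplot", "iris"])
print(store.find(tags=["iris"]))
```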
Verification of Agent-Based Artifact Systems
Artifact systems are a novel paradigm for specifying and implementing
business processes described in terms of interacting modules called artifacts.
Artifacts consist of data and lifecycles, accounting respectively for the
relational structure of the artifacts' states and their possible evolutions
over time. In this paper we put forward artifact-centric multi-agent systems, a
novel formalisation of artifact systems in the context of multi-agent systems
operating on them. Unlike the usual process-based models of services,
the semantics we give explicitly accounts for the data structures on which
artifact systems are defined. We study the model checking problem for
artifact-centric multi-agent systems against specifications written in a
quantified version of temporal-epistemic logic expressing the knowledge of the
agents in the exchange. We begin by noting that the problem is undecidable in
general. We then identify two noteworthy restrictions, one syntactical and one
semantical, that enable us to find bisimilar finite abstractions and therefore
reduce the model checking problem to the instance on finite models. Under these
assumptions we show that the model checking problem for these systems is
EXPSPACE-complete. We then introduce artifact-centric programs, compact and
declarative representations of the programs governing both the artifact system
and the agents. We show that, while these in principle generate infinite-state
systems, under natural conditions their verification problem can be solved on
finite abstractions that can be effectively computed from the programs. Finally
we exemplify the theoretical results of the paper through a mainstream
procurement scenario from the artifact systems literature.
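To make the reduction concrete, here is a toy Python sketch (the states, transitions, and property are illustrative stand-ins, not taken from the paper) showing that once a finite abstraction is available, checking a simple invariant amounts to graph search:

```python
# Check an invariant ("phi holds in every reachable state") over an
# explicitly given finite transition system by breadth-first search.
from collections import deque


def check_invariant(initial, transitions, phi):
    """Return a counterexample path to a violating state, or None."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not phi(state):
            # Reconstruct the path witnessing the violation.
            path, s = [], state
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for nxt in transitions.get(state, ()):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None


# Toy abstraction of a procurement flow: states are (stage, approved).
trans = {
    ("draft", False): [("submitted", False)],
    ("submitted", False): [("approved", True), ("rejected", False)],
    ("approved", True): [("ordered", True)],
}
# Invariant: goods are never ordered without approval.
violation = check_invariant(("draft", False), trans,
                            lambda s: not (s[0] == "ordered" and not s[1]))
print(violation)  # None: the invariant holds on this finite model
```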
Sciduction: Combining Induction, Deduction, and Structure for Verification and Synthesis
Even with impressive advances in automated formal methods, certain problems
in system verification and synthesis remain challenging. Examples include the
verification of quantitative properties of software involving constraints on
timing and energy consumption, and the automatic synthesis of systems from
specifications. The major challenges include environment modeling,
incompleteness in specifications, and the complexity of underlying decision
problems.
This position paper proposes sciduction, an approach to tackle these
challenges by integrating inductive inference, deductive reasoning, and
structure hypotheses. Deductive reasoning, which leads from general rules or
concepts to conclusions about specific problem instances, includes techniques
such as logical inference and constraint solving. Inductive inference, which
generalizes from specific instances to yield a concept, includes algorithmic
learning from examples. Structure hypotheses are used to define the class of
artifacts, such as invariants or program fragments, generated during
verification or synthesis. Sciduction constrains inductive and deductive
reasoning using structure hypotheses, and actively combines inductive and
deductive reasoning: for instance, deductive techniques generate examples for
learning, and inductive reasoning is used to guide the deductive engines.
We illustrate this approach with three applications: (i) timing analysis of
software; (ii) synthesis of loop-free programs; and (iii) controller synthesis
for hybrid systems. Some future applications are also discussed.
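A much-simplified Python sketch of this inductive/deductive interplay, in the spirit of counterexample-guided synthesis (the candidate space, specification, and the exhaustive check standing in for a constraint solver are our own illustrative assumptions, not the paper's algorithms):

```python
# Inductive step: keep candidates consistent with known examples.
# Deductive step: a verifier either proves a candidate correct or
# returns a counterexample, which feeds back into learning.
from itertools import product

DOMAIN = range(-4, 5)
# Hypothetical space of loop-free expressions over inputs x, y.
CANDIDATES = [
    ("x + y", lambda x, y: x + y),
    ("x - y", lambda x, y: x - y),
    ("max(x, y)", lambda x, y: max(x, y)),
    ("abs(x - y)", lambda x, y: abs(x - y)),
]


def spec(x, y):
    """Target behaviour the synthesized program must match."""
    return x if x >= y else y


def verify(f):
    """Deductive step: find an input where the candidate differs."""
    for x, y in product(DOMAIN, DOMAIN):
        if f(x, y) != spec(x, y):
            return (x, y)
    return None


examples = [(0, 0)]
for name, f in CANDIDATES:
    # Inductive step: discard candidates inconsistent with examples.
    if all(f(x, y) == spec(x, y) for x, y in examples):
        cex = verify(f)
        if cex is None:
            print("synthesized:", name)
            break
        examples.append(cex)  # counterexample drives further learning
```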
Building a Flexible Software Factory Using Partial Domain Specific Models
This paper describes some experiences in building a software factory by defining multiple small domain-specific languages (DSLs) and having multiple small models per DSL. This contrasts sharply with traditional approaches that use
monolithic models, e.g. written in UML. In our approach, models behave like source code to a large extent, leading to an easy way to manage the model(s) of large systems.
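As a hypothetical sketch of what "models behaving like source code" can mean in practice (the tiny entity DSL and the merge rules below are our own assumptions, not the paper's languages), partial models can live in separate files, owned by different teams, and be composed on demand like compilation units:

```python
# Parse small textual model fragments and merge them into one model.
def parse_fragment(text):
    """Parse lines of the form 'Entity.field: Type' into a dict."""
    model = {}
    for line in text.strip().splitlines():
        left, ftype = (p.strip() for p in line.split(":"))
        entity, field = left.split(".")
        model.setdefault(entity, {})[field] = ftype
    return model


def merge(fragments):
    """Compose partial models; conflicting field types are an error."""
    merged = {}
    for frag in fragments:
        for entity, fields in frag.items():
            target = merged.setdefault(entity, {})
            for field, ftype in fields.items():
                if target.get(field, ftype) != ftype:
                    raise ValueError(f"conflict on {entity}.{field}")
                target[field] = ftype
    return merged


billing = parse_fragment("Customer.id: Int\nCustomer.name: String")
shipping = parse_fragment("Customer.address: String\nParcel.weight: Float")
print(merge([billing, shipping]))
```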
Keeping Continuous Deliveries Safe
Allowing swift release cycles, Continuous Delivery has become popular in
application software development and is starting to be applied in
safety-critical domains such as the automotive industry. These domains require
thorough analysis regarding safety constraints, which can be achieved by formal
verification and the execution of safety tests resulting from a safety analysis
on the product. With continuous delivery in place, such tests need to be
executed with every build to ensure the latest software still fulfills all
safety requirements. Even more though, the safety analysis has to be updated
with every change to ensure the safety test suite is still up-to-date. We thus
propose that a safety analysis should be treated no differently from other
deliverables such as source code and dependencies, formulate guidelines on how
to achieve this, and highlight areas where future research is needed.
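One way such a guideline could look in a build pipeline (a sketch only; the manifest file name, its format, and the gate logic are assumptions for illustration, not the paper's proposal): a step that fails whenever the versioned safety analysis no longer matches the code revision it covers.

```python
# Gate a continuous-delivery build on the safety analysis deliverable.
import json
import sys


def safety_gate(manifest_path, current_code_revision):
    """Fail the build unless the safety analysis covers this revision."""
    with open(manifest_path) as fh:
        # e.g. {"analysis_rev": "abc123", "tests": ["door_lock", ...]}
        manifest = json.load(fh)
    if manifest["analysis_rev"] != current_code_revision:
        print("safety analysis is stale: update it before releasing")
        return 1
    failing = [t for t in manifest["tests"] if not run_safety_test(t)]
    if failing:
        print("failing safety tests:", failing)
        return 1
    return 0


def run_safety_test(name):
    # Placeholder: a real pipeline would invoke the test runner here.
    return True


if __name__ == "__main__":
    sys.exit(safety_gate("safety_manifest.json", sys.argv[1]))
```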
Applying model-driven paradigm: CALIPSOneo experience
The Model-Driven Engineering paradigm has been used by the research community in recent years, yielding suitable results. However, there are few practical experiences in the enterprise field. This paper presents the use of this paradigm in an aeronautical PLM project named CALIPSOneo, currently under development at Airbus. In this context, the NDT methodology was adapted for use by the development team. The paper presents this process and the results we are obtaining from the project. We also draw some relevant lessons learned from the trenches.
Integrating the common variability language with multilanguage annotations for web engineering
The development of web applications involves managing a high diversity of files and resources, such as code, pages, or style sheets, implemented in different languages. To deal with the automatic generation of
custom-made configurations of web applications, industry usually adopts annotation-based approaches, even though the majority of studies encourage the use of composition-based approaches to implement
Software Product Lines. Recent work tries to combine both approaches to obtain their complementary benefits. However, technology companies are reluctant to adopt new development paradigms
such as feature-oriented programming or aspect-oriented programming.
Moreover, it is extremely difficult, or even impossible, to apply
these programming models to web applications, mainly because of
their multilingual nature, since their development involves multiple
types of source code (Java, Groovy, JavaScript), templates (HTML,
Markdown, XML), style sheet files (CSS and its variants, such as
SCSS), and other files (JSON, YML, shell scripts). We propose to
use the Common Variability Language as a composition-based approach
and integrate annotations to manage the fine-grained variability
of a Software Product Line for web applications. In this paper, we (i)
show that existing composition and annotation-based approaches,
including some well-known combinations, are not appropriate to
model and implement the variability of web applications; and (ii)
present a combined approach that effectively integrates annotations
into a composition-based approach for web applications. We implement
our approach and show its applicability with an industrial
real-world system.
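A rough Python sketch of the annotation half of such an approach (the #if/#endif marker syntax and the comment table are assumptions for illustration, not the paper's notation): the same feature markers are wrapped in each file type's own comment syntax, and a preprocessor resolves them against the selected product configuration.

```python
# Resolve feature-annotated blocks across multilanguage source files.
import re

COMMENT = {".java": "//", ".js": "//", ".css": "/*", ".html": "<!--"}


def resolve(text, ext, enabled_features):
    """Drop blocks whose feature is not selected in this configuration."""
    prefix = re.escape(COMMENT[ext])
    block = re.compile(
        prefix + r"\s*#if (\w+).*?\n(.*?)" + prefix + r"\s*#endif.*?\n",
        re.DOTALL,
    )

    def keep_or_drop(m):
        feature, body = m.group(1), m.group(2)
        return body if feature in enabled_features else ""

    return block.sub(keep_or_drop, text)


java_src = """\
class Cart {
// #if PAYPAL
    void payWithPaypal() {}
// #endif
}
"""
# The PAYPAL block is removed for a configuration without that feature.
print(resolve(java_src, ".java", {"CREDIT_CARD"}))
```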
Dynamic Model-based Management of Service-Oriented Infrastructure
Models are an effective tool for systems and software design. They allow software architects to abstract from non-relevant details. Those qualities are also useful for the technical management of networks, systems, and software, such as those that compose service-oriented architectures. Models can provide a set of well-defined abstractions over the distributed heterogeneous service infrastructure that enable its automated management. We propose to use the managed system as a source of dynamically generated runtime models, and to decompose management processes into compositions of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes, and transforms system models to apply the required actions, while remaining oblivious to the low-level details. An instrumentation layer automatically builds these models and translates the planned management actions into operations on the system. We illustrate these concepts with a distributed service update operation.
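A condensed Python sketch of this management loop (the model shape and action names are illustrative assumptions, not the paper's architecture): the instrumentation layer abstracts live state into a runtime model, and a model transformation derives deployment actions from the delta between current and target models.

```python
# Runtime-model-driven management: model, compare, plan, execute.
def runtime_model(services):
    """Instrumentation step: abstract live state into a model."""
    return {name: {"version": s["version"]} for name, s in services.items()}


def plan_update(current, target):
    """Model transformation: derive actions from model deltas."""
    actions = []
    for name, spec in target.items():
        if name not in current:
            actions.append(("deploy", name, spec["version"]))
        elif current[name]["version"] != spec["version"]:
            actions.append(("update", name, spec["version"]))
    return actions


live = {"catalog": {"version": "1.2"}, "orders": {"version": "2.0"}}
desired = {"catalog": {"version": "1.3"}, "orders": {"version": "2.0"},
           "billing": {"version": "1.0"}}
print(plan_update(runtime_model(live), desired))
# [('update', 'catalog', '1.3'), ('deploy', 'billing', '1.0')]
```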