Feature interaction in composed systems. Proceedings. ECOOP 2001 Workshop #08 in association with the 15th European Conference on Object-Oriented Programming, Budapest, Hungary, June 18-22, 2001
Feature interaction is nothing new and is not limited to computer science. The problem of undesirable feature interaction (the feature interaction problem) has long been investigated in the telecommunication domain. Our goal is to investigate feature interaction in component-based systems beyond telecommunication. This technical report comprises all position papers accepted at the ECOOP 2001 workshop no. 08 on "Feature Interaction in Composed Systems". The workshop was held on June 18, 2001 in Budapest, Hungary.
From modularity to emergence: a primer on the design and science of complex systems
Electrical networks, flocking birds, transportation hubs, weather patterns, commercial organisations, swarming robots... Increasingly, many of the systems that we want to engineer or understand are said to be ‘complex’. These systems are often considered to be intractable because of their unpredictability, non-linearity, interconnectivity, heterarchy and ‘emergence’. Such attributes are often framed as a problem, but can also be exploited to encourage systems to efficiently exhibit intelligent, robust, self-organising behaviours. But what does it mean to describe systems as complex? How do these complex systems differ from the more easily understood ‘modular’ systems that we are familiar with? What are the underlying similarities between different systems, whether modular or complex? Answering these questions is a first step in approaching the design and science of complexity. However, to do so, it is necessary to look beyond the specifics of any particular system or field of study. We need to consider the fundamental nature of systems, looking for a common way to view ostensibly different phenomena.
This primer introduces a domain-neutral framework and diagrammatic scheme for characterising the ways in which systems are modular or complex. Rather than seeing modularity and complexity as inherent attributes of systems, we instead see them as ways in which those systems are characterised by those who are interested in them. The framework is not tied to any established mode of representation (e.g. networks, equations, formal modelling languages) nor to any domain-specific terminology (e.g. ‘vertex’, ‘eigenvector’, ‘entropy’). Instead, it consists of basic system constructs and three fundamental attributes of modular system architecture, namely structural encapsulation, function-structure mapping and interfacing. These constructs and attributes encourage more precise descriptions of different aspects of complexity (e.g. emergence, self-organisation, heterarchy). This allows researchers and practitioners from different disciplines to share methods, theories and findings related to the design and study of different systems, even when those systems appear superficially dissimilar.
Architecture Analysis
This chapter also explains what the added value of enterprise architecture analysis techniques is in addition to existing, more detailed, and domain-specific ones for business processes or software, for example. Analogous to the idea of using the ArchiMate enterprise modelling language to integrate detailed design models, the chapter demonstrates that analysis, when considered at a global architectural level, can play a role in the integration of existing detailed techniques or of their results
Clafer: Lightweight Modeling of Structure, Behaviour, and Variability
Embedded software is growing fast in size and complexity, leading to an intimate mixture of complex architectures and complex control. Consequently, software specification requires modeling both the structure and the behaviour of systems. Unfortunately, existing languages do not integrate these aspects well, usually prioritizing one of them; it is common to develop a separate language for each of these facets. In this paper, we contribute Clafer: a small language that attempts to tackle this challenge. It combines rich structural modeling with state-of-the-art behavioural formalisms. We are not aware of any other modeling language that seamlessly combines these facets common to system and software modeling. We show how Clafer, in a single unified syntax and semantics, allows capturing feature models (variability), component models, discrete control models (automata) and variability encompassing all these aspects. The language is built on top of first-order logic with quantifiers over basic entities (for modeling structures) combined with linear temporal logic (for modeling behaviour). On top of this semantic foundation we build a simple but expressive syntax, enriched with carefully selected syntactic expansions that cover hierarchical modeling, associations, automata, scenarios, and Dwyer's property patterns. We evaluate Clafer using a power window case study, comparing it against other notations that substantially overlap with its scope (SysML, AADL, Temporal OCL and Live Sequence Charts), and discussing the benefits and perils of using a single notation for this purpose.
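The mix of structural variability and cross-cutting constraints that the abstract describes can be approximated, at a much smaller scale, by enumerating feature configurations against propositional constraints. The feature names and constraints below are illustrative assumptions of ours, not taken from the paper's power-window model:

```python
from itertools import product

# Hypothetical feature model for a car window, in the spirit of
# Clafer's mixed structural/variability modelling. The feature names
# and constraints are our own toy stand-ins.
features = ["manual", "express", "expressUp", "pinchProtection"]

def valid(cfg):
    """Structural constraints over a configuration (a set of features)."""
    has = cfg.__contains__
    return (
        (not has("expressUp") or has("express"))            # child requires parent
        and (not has("express") or has("pinchProtection"))  # cross-tree constraint
        and (has("manual") or has("express"))               # at least one mode
    )

# Enumerate all 2^n candidate configurations and keep the valid ones.
configs = [
    frozenset(f for f, on in zip(features, bits) if on)
    for bits in product([False, True], repeat=len(features))
]
products = [cfg for cfg in configs if valid(cfg)]
```

A real Clafer model would additionally attach automata and temporal-logic properties to these features; this sketch only covers the variability side.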
Rethinking Nudge: Not One But Three Concepts
Nudge is a concept of policy intervention that originates in Thaler and Sunstein's (2008) popular eponymous book. Following their own hints, we distinguish three properties of nudge interventions: they redirect individual choices by only slightly altering choice conditions (here nudge 1), they use rationality failures instrumentally (here nudge 2), and they alleviate the unfavourable effects of these failures (here nudge 3). We explore each property in semantic detail and show that no entailment relation holds between them. This calls into question the theoretical unity of nudge, as intended by Thaler and Sunstein and most followers. We eventually recommend pursuing each property separately, both in policy research and at the foundational level. We particularly emphasize the need to reconsider the respective roles of decision theory and behavioural economics in order to delineate nudge 2 correctly. The paper differs from most of the literature in focusing on the definitional rather than the normative problems of nudge.
Approximate model composition for explanation generation
This thesis presents a framework for the formulation of knowledge models to support the generation of explanations for engineering systems that are represented by the resulting models. Such models are automatically assembled from instantiated generic component descriptions, known as model fragments. The model fragments are of sufficient detail to satisfy, in general, the requirements of information content as identified by the user asking for explanations.
Through a combination of fuzzy-logic-based evidence preparation, which exploits the history of prior user preferences, and an approximate reasoning inference engine with a Bayesian evidence propagation mechanism, different uncertainty sources can be handled. Model fragments, each representing structural or behavioural aspects of a component of the domain system of interest, are organised in a library. Fragments that represent the same domain system component, albeit at different levels of representation detail, form parts of the same assumption class in the library. Selected fragments are assembled to form an overall system model, prior to the extraction of any textual information upon which to base the explanations. The thesis proposes and examines the techniques that support the fragment selection mechanism and the assembly of these fragments into models.
In particular, a Bayesian network-based model fragment selection mechanism is described that forms the core of the work. The network structure is determined manually, prior to any inference, based on schematic information regarding the connectivity of the components present in the domain system under consideration. The elicitation of network probabilities, on the other hand, is completely automated using probability elicitation heuristics. These heuristics aim to provide the information required to select fragments that are maximally compatible with the given evidence of the fragments preferred by the user. Given such initial evidence, an existing evidence propagation algorithm is employed. The preparation of the evidence for the selection of certain fragments, based on user preference, is performed by a fuzzy reasoning evidence fabrication engine. This engine uses a set of fuzzy rules and standard fuzzy reasoning mechanisms, attempting to guess the information needs of the user and suggesting the selection of fragments of sufficient detail to satisfy those needs. Once the evidence is propagated, a single fragment is selected for each of the domain system components and, hence, the final model of the entire system is constructed. Finally, a highly configurable XML-based mechanism is employed to extract explanation content from the newly formulated model and to structure the explanatory sentences for the final explanation that will be communicated to the user.
The framework is illustratively applied to a number of domain systems and is compared qualitatively to existing compositional modelling methodologies. A further empirical assessment of the performance of the evidence propagation algorithm is carried out to determine its performance limits. Performance is measured against the number of fragments that represent each of the components of a large domain system, and the amount of connectivity permitted in the Bayesian network between the nodes that stand for the selection or rejection of these fragments. Based on this assessment, recommendations are made as to how the framework may be optimised to cope with real-world applications.
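The core selection step (one fragment per component, chosen for compatibility with user-preference evidence) can be sketched minimally as follows. The library layout and the distance-based score are toy assumptions of ours; the thesis's actual mechanism uses a Bayesian network with fuzzy evidence preparation:

```python
# Toy sketch of model-fragment selection: pick, per component, the
# fragment whose detail level best matches the level suggested by the
# user-preference evidence. Library contents are illustrative.
library = {
    "pump": {
        "pump_simple":   {"detail": 1},
        "pump_detailed": {"detail": 3},
    },
    "valve": {
        "valve_simple":   {"detail": 1},
        "valve_detailed": {"detail": 2},
    },
}

def select_model(library, preferred_detail):
    """Assemble a system model by choosing one fragment per assumption
    class (component), minimising distance to the preferred detail."""
    model = {}
    for component, fragments in library.items():
        model[component] = min(
            fragments,
            key=lambda name: abs(fragments[name]["detail"] - preferred_detail),
        )
    return model

model = select_model(library, preferred_detail=2)
```

In the thesis, this score would instead be a posterior computed by evidence propagation over the network, and the preferred detail would come from the fuzzy evidence-fabrication engine rather than a fixed number.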
Lessons learned: structuring knowledge codification and abstraction to provide meaningful information for learning
Purpose – To increase the spread and reuse of lessons learned (LLs), the purpose of this paper is to develop a standardised information structure to facilitate concise capture of the critical elements needed to engage secondary learners and help them apply lessons to their contexts.
Design/methodology/approach – Three workshops with industry practitioners, an analysis of over 60 actual lessons from private and public sector organisations, and seven practitioner interviews provided evidence of actual practice. Design science was used to develop a repeatable/consistent information model of LL content/structure. Workshop analysis and theory provided the coding template. Situation theory and normative analysis were used to define the knowledge and rule logic to standardise fields.
Findings – Comparing evidence from practice against theoretical prescriptions in the literature highlighted important enhancements to the standard LL model. These were a consistent/concise rule and context structure, appropriate emotional language, and reuse and control criteria to ensure lessons were transferrable and reusable in new situations.
Research limitations/implications – Findings are based on a limited sample. Long-term benefits of standardisation and use need further research. A larger sample/longitudinal usage study is planned.
Practical implications – The implementation of the LL structure was well received in one government user site, and other industry user sites are pending. Practitioners validated the design logic for improving the capture and reuse of lessons to render them easily translatable to a new learner's context.
Originality/value – The new LL structure is uniquely grounded in user needs, developed from existing best practice, and is an original application of normative and situation theory to provide consistent rule logic for context/content structure.
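One way to picture such a standardised LL record is as a typed structure whose fields paraphrase the elements the abstract reports (rule, context, emotional framing, reuse criteria). The field names and the example lesson below are hypothetical renderings of ours, not the paper's actual template:

```python
from dataclasses import dataclass, field

@dataclass
class LessonLearned:
    """Hypothetical standardised lesson-learned record."""
    situation: str                 # what happened, in its original context
    rule: str                      # normative "if X then do Y" guidance
    context_conditions: list[str] = field(default_factory=list)  # when the rule applies
    emotional_framing: str = ""    # language chosen to engage secondary learners
    reuse_criteria: list[str] = field(default_factory=list)      # tests for transfer to a new setting

lesson = LessonLearned(
    situation="Late supplier delivery stalled integration testing.",
    rule="If a single supplier gates the critical path, contract a fallback.",
    context_conditions=["fixed launch date", "single-source component"],
)
```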
Supervisory controller synthesis for product lines using CIF 3
Using the CIF 3 toolset, we illustrate the general idea of controller synthesis for product line engineering for a prototypical example of a family of coffee machines. The challenge is to integrate a number of given components into a family of products such that the resulting behaviour is guaranteed to respect an attributed feature model as well as additional behavioural requirements. The proposed correctness-by-construction approach incrementally restricts the composed behaviour by subsequently incorporating feature constraints, attribute constraints and temporal constraints. The procedure as presented focusses on synthesis, but leaves ample opportunity to handle, for example, uncontrollable behaviour, dynamic reconfiguration, and product- and family-based analysis.
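The idea of correctness by construction as incremental restriction can be sketched in miniature: start from the composed behaviour, remove transitions forbidden by a feature constraint, then prune states that can no longer reach a marked state. The coffee-machine events and constraints below are our own toy stand-ins, not the actual CIF 3 model, and real synthesis must additionally respect event controllability:

```python
# Composed behaviour of a toy coffee machine as a transition map
# (state, event) -> next state. Names are illustrative assumptions.
transitions = {
    ("idle", "insert_coin"): "paid",
    ("paid", "coffee"): "done",
    ("paid", "cappuccino"): "done",   # only present in the Milk-feature variant
    ("done", "reset"): "idle",
}
marked = {"idle"}

def synthesise(transitions, marked, forbidden_events):
    """Restrict behaviour by a feature constraint, then iteratively
    prune states that cannot reach a marked state (non-blocking)."""
    trans = {k: v for k, v in transitions.items() if k[1] not in forbidden_events}
    while True:
        # Compute the co-reachable states (those that can reach `marked`).
        coreachable = set(marked)
        changed = True
        while changed:
            changed = False
            for (src, _event), dst in trans.items():
                if dst in coreachable and src not in coreachable:
                    coreachable.add(src)
                    changed = True
        pruned = {k: v for k, v in trans.items()
                  if k[0] in coreachable and v in coreachable}
        if pruned == trans:
            return trans
        trans = pruned

# A product without the Milk feature forbids the cappuccino event:
supervisor = synthesise(transitions, marked, forbidden_events={"cappuccino"})
```

Incorporating attribute and temporal constraints, as the paper does, amounts to further restriction passes of the same shape over the composed behaviour.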