2,455 research outputs found
Investigating modularity in the analysis of process algebra models of biochemical systems
Compositionality is a key feature of process algebras which is often cited as
one of their advantages as a modelling technique. It is certainly true that in
biochemical systems, as in many other systems, model construction is made
easier in a formalism which allows the problem to be tackled compositionally.
In this paper we consider the extent to which the compositional structure which
is inherent in process algebra models of biochemical systems can be exploited
during model solution. In essence this means using the compositional structure
to guide decomposed solution and analysis.
Unfortunately the dynamic behaviour of biochemical systems exhibits strong
interdependencies between the components of the model making decomposed
solution a difficult task. Nevertheless we believe that if such decomposition
based on process algebras could be established it would demonstrate substantial
benefits for systems biology modelling. In this paper we present our
preliminary investigations based on a case study of the pheromone pathway in
yeast, modelled in the stochastic process algebra Bio-PEPA.
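The kind of compositional structure the abstract refers to can be illustrated with a toy sketch: two components given as labelled transition systems, composed in parallel so that shared actions synchronise while local actions interleave. The component names, states, and actions below are invented for illustration and are not Bio-PEPA syntax.

```python
from itertools import product

# Toy labelled transition systems: state -> {action: next_state}.
# Names are illustrative only, not taken from the pheromone pathway model.
receptor = {"free": {"bind": "bound"}, "bound": {"unbind": "free"}}
ligand   = {"out":  {"bind": "in"},   "in":   {"degrade": "out"}}

shared = {"bind"}  # actions on which the components must synchronise

def compose(a, b, shared):
    """CSP-style parallel composition: shared actions fire jointly,
    all other actions interleave."""
    states = {}
    for sa, sb in product(a, b):
        moves = {}
        for act, ta in a[sa].items():
            if act in shared:
                if act in b[sb]:                 # both ready: synchronise
                    moves[act] = (ta, b[sb][act])
            else:
                moves[act] = (ta, sb)            # interleave a's local move
        for act, tb in b[sb].items():
            if act not in shared:
                moves[act] = (sa, tb)            # interleave b's local move
        states[(sa, sb)] = moves
    return states

system = compose(receptor, ligand, shared)
print(len(system))             # 4 composite states
print(system[("free", "out")])
```

The composed state space is the product of the component state spaces, which is exactly why decomposed solution is attractive: analysing components separately avoids building this product explicitly.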
The theory of international business: the role of economic models
This paper reviews the scope for economic modelling in international business studies. It argues for multi-level theory based on classic internalisation theory. It presents a systems approach that encompasses both firm-level and industry-level analysis.
An Empirical Study of Cohesion and Coupling: Balancing Optimisation and Disruption
Search-based software engineering has been extensively applied to the problem of finding improved modular structures that maximise cohesion and minimise coupling. However, there has, hitherto, been no longitudinal study of developers’ implementations over a series of sequential releases. Moreover, results validating whether developers respect the fitness functions are scarce, and the potentially disruptive effect of search-based remodularisation is usually overlooked. We present an empirical study of 233 sequential releases of 10 different systems; the largest empirical study reported in the literature so far, and the first longitudinal study. Our results provide evidence that developers do, indeed, respect the fitness functions used to optimise cohesion/coupling (they are statistically significantly better than arbitrary choices with p << 0.01), yet they also leave considerable room for further improvement (cohesion/coupling can be improved by 25% on average). However, we also report that optimising the structure is highly disruptive (on average more than 57% of the structure must change), while our results reveal that developers tend to avoid such disruption. Therefore, we introduce and evaluate a multi-objective evolutionary approach that minimises disruption while maximising cohesion/coupling improvement. This allows developers to balance reticence to disrupt existing modular structure against their competing need to improve cohesion and coupling. The multi-objective approach is able to find modular structures that improve the cohesion of developers’ implementations by 22.52%, while causing an acceptably low level of disruption (within that already tolerated by developers).
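As a concrete illustration of the kind of fitness function involved, the sketch below computes a modularisation-quality (MQ) style score, rewarding intra-module edges (cohesion) and penalising inter-module edges (coupling), over a hypothetical dependency graph and module assignment. This is a generic formulation commonly used in search-based remodularisation, not necessarily the exact measure used in the study.

```python
# Hypothetical dependency graph between classes, and a candidate
# assignment of classes to modules (all names invented).
deps = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
assignment = {"A": 1, "B": 1, "C": 1, "D": 2, "E": 2}

def modularisation_quality(deps, assignment):
    """MQ-style fitness: sum over modules of i / (i + j/2), where i counts
    intra-module edges and j counts edges crossing the module boundary."""
    intra, inter = {}, {}
    for u, v in deps:
        mu, mv = assignment[u], assignment[v]
        if mu == mv:
            intra[mu] = intra.get(mu, 0) + 1
        else:
            inter[mu] = inter.get(mu, 0) + 1   # edge leaves module mu
            inter[mv] = inter.get(mv, 0) + 1   # ...and enters module mv
    mq = 0.0
    for m in set(assignment.values()):
        i, j = intra.get(m, 0), inter.get(m, 0)
        if i:
            mq += i / (i + 0.5 * j)   # modularisation factor of module m
    return mq

print(round(modularisation_quality(deps, assignment), 3))
```

A search-based or multi-objective optimiser would explore alternative assignments to raise this score, while a disruption objective would penalise assignments that move many classes away from the developers' current structure.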
Mirroring or misting: On the role of product architecture, product complexity, and the rate of product component change
This paper contributes to the literature on the within-firm and across-firm mirroring hypothesis – the assumed architectural mapping between firms’ strategic choices of product architecture and firm architecture, and between firms’ architectural choices and the industry structures that emerge. Empirical evidence is both limited and mixed, and there is evidently a need for a more nuanced theory that embeds not only whether the mirroring hypothesis holds, but under what product architecture and component-level conditions it may or may not hold. We invoke an industrial economics perspective to develop a stylised product architecture typology and hypothesise how the combined effects of product architecture type, product complexity and the rate of product component change may be associated with phases of mirroring or misting. Our framework helps to reconcile much existing mixed evidence and provides the foundation for further empirical research.
Session Communication and Integration
The scenario-based specification of a large distributed system is usually
naturally decomposed into various modules. The integration of specification
modules contrasts to the parallel composition of program components, and
includes various ways such as scenario concatenation, choice, and nesting. The
recent development of multiparty session types for process calculi provides
useful techniques to accommodate the protocol modularisation, by encoding
fragments of communication protocols in the usage of private channels for a
class of agents. In this paper, we extend foregoing session type theories by
enhancing the session integration mechanism. More specifically, we propose a
novel synchronous multiparty session type theory, in which sessions are
separated into the communicating and integrating levels. Communicating sessions
record the message-based communications between multiple agents, whilst
integrating sessions describe the integration of communicating ones. A
two-level session type system is developed for pi-calculus with syntactic
primitives for session establishment, and several key properties of the type
system are studied. Applying the theory to system description, we show that a
channel safety property and a session conformance property can be analysed.
Also, to improve the utility of the theory, a process slicing method is used to
help identify the violated sessions during type checking. (Comment: a short version of this paper has been submitted for review.)
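To give a flavour of the multiparty-session idea, the sketch below writes a global session type as a sequence of interactions and projects it onto the local view of a single role. The protocol and role names are invented, and the paper's two-level theory (communicating versus integrating sessions) is considerably richer than this.

```python
# A global type as a sequence of (sender, receiver, label) interactions.
# All names here are illustrative, not from the paper.
global_type = [
    ("Client", "Server", "request"),
    ("Server", "Logger", "log"),
    ("Server", "Client", "response"),
]

def project(global_type, role):
    """Local type of one role: its sends (!) and receives (?), in order."""
    local = []
    for s, r, label in global_type:
        if s == role:
            local.append(("!", r, label))
        elif r == role:
            local.append(("?", s, label))
    return local

def conforms(trace, global_type):
    """Does an observed trace of interactions follow the global type?"""
    return trace == global_type

print(project(global_type, "Server"))
```

Type checking in a session type system roughly generalises this check: each agent is verified against its projection, and well-typedness of all agents guarantees that their composition follows the global protocol.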
Towards modular verification of pathways: fairness and assumptions
Modular verification is a technique used to face the state explosion problem
often encountered in the verification of properties of complex systems such as
concurrent interactive systems. The modular approach is based on the
observation that properties of interest often concern a rather small portion of
the system. As a consequence, reduced models can be constructed which
approximate the overall system behaviour thus allowing more efficient
verification.
Biochemical pathways can be seen as complex concurrent interactive systems.
Consequently, verification of their properties is often computationally very
expensive and could take advantage of the modular approach.
In this paper we report preliminary results on the development of a modular
verification framework for biochemical pathways. We view biochemical pathways
as concurrent systems of reactions competing for molecular resources. A modular
verification technique could be based on reduced models containing only
reactions involving molecular resources of interest.
For a proper description of the system behaviour we argue that it is
essential to consider a suitable notion of fairness, which is a
well-established notion in concurrency theory but novel in the field of pathway
modelling. We propose a modelling approach that includes fairness and we
identify the assumptions under which verification of properties can be done in
a modular way.
We prove the correctness of the approach and demonstrate it on the model of
the EGF receptor-induced MAP kinase cascade by Schoeberl et al. (In Proceedings MeCBIC 2012, arXiv:1211.347)
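The reduction idea can be sketched as a filter that keeps only reactions touching the molecular resources of interest. The species and reactions below are invented for illustration; real pathway models such as the MAP kinase cascade are far larger, and the paper's approach additionally accounts for fairness and the assumptions under which the reduction is sound.

```python
# Toy reaction network: name -> (reactant set, product set).
# Species names are illustrative placeholders, not the Schoeberl model.
reactions = {
    "bind":     ({"EGF", "EGFR"}, {"EGF:EGFR"}),
    "activate": ({"EGF:EGFR"},    {"EGFR*"}),
    "phosph":   ({"ERK", "MEK*"}, {"ERK*", "MEK*"}),
    "degrade":  ({"EGF"},         set()),
}

def reduced_model(reactions, resources_of_interest):
    """Keep only reactions whose reactants or products touch the
    given molecular resources of interest."""
    return {
        name: (reac, prod)
        for name, (reac, prod) in reactions.items()
        if (reac | prod) & resources_of_interest
    }

sub = reduced_model(reactions, {"EGF"})
print(sorted(sub))
```

The reduced model over-approximates the behaviour of the resources of interest, so properties verified on it can, under the paper's assumptions, be transferred to the full system at a fraction of the state-space cost.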
Event-B in the Institutional Framework: Defining a Semantics, Modularisation Constructs and Interoperability for a Specification Language
Event-B is an industrial-strength specification language for verifying
the properties of a given system’s specification. It is supported by its
Eclipse-based IDE, Rodin, and uses the process of refinement to model
systems at different levels of abstraction. Although a mature formalism,
Event-B has a number of limitations. In this thesis, we demonstrate that
Event-B lacks formally defined modularisation constructs. Additionally,
interoperability between Event-B and other formalisms has been
achieved in an ad hoc manner. Moreover, although a formal language,
Event-B does not have a formal semantics. We address each of these
limitations in this thesis using the theory of institutions.
The theory of institutions provides a category-theoretic way of representing
a formalism. Formalisms that have been represented as institutions
gain access to an array of generic specification-building operators
that can be used to modularise specifications in a formalism-independent
manner. In the theory of institutions, there are constructs
(known as institution (co)morphisms) that provide us with the facility to
create interoperability between formalisms in a mathematically sound
way.
The main contribution of this thesis is the definition of an institution
for Event-B, EVT, which allows us to address its identified limitations.
To this end, we formally define a translational semantics from Event-
B to EVT. We show how specification-building operators can provide
a unified set of modularisation constructs for Event-B. In fact, the institutional
framework that we have incorporated Event-B into is more
accommodating to modularisation than the current state-of-the-art for
Rodin. Furthermore, we present institution morphisms that facilitate interoperability between the respective institutions for Event-B and UML.
This approach is more generic than the current approach to interoperability
for Event-B and in fact, allows access to any formalism or logic
that has already been defined as an institution. Finally, by defining
EVT, we have outlined the steps required in order to include similar
formalisms into the institutional framework. Hence, this thesis acts as a
template for defining an institution for a specification language.
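As a rough illustration of what specification-building operators buy, the sketch below models a presentation as a pair of a signature and a set of axioms, with two generic operators: union of presentations and translation along a signature morphism (here, a simple renaming). All names are invented, and the categorical machinery of institutions (functors, satisfaction conditions) is abstracted away entirely.

```python
# A presentation is (signature, axioms): a set of symbols and a set of
# sentences over them. Axioms are plain whitespace-separated strings here.
def union(spec1, spec2):
    """Combine two presentations: pooled signature, pooled axioms."""
    sig1, ax1 = spec1
    sig2, ax2 = spec2
    return (sig1 | sig2, ax1 | ax2)

def translate(spec, sigma):
    """Rename symbols along a signature morphism sigma (a dict)."""
    sig, ax = spec
    rename = lambda s: " ".join(sigma.get(tok, tok) for tok in s.split())
    return ({sigma.get(x, x) for x in sig}, {rename(a) for a in ax})

# An invented specification, reused under renaming: the counter spec
# becomes a ticket dispenser without rewriting its axioms.
counter = ({"count", "inc"}, {"count >= 0"})
ticket = translate(counter, {"count": "ticket", "inc": "issue"})
print(sorted(ticket[0]))
```

This is the sense in which specification-building operators are formalism-independent: union and translation are defined once, over presentations, rather than re-invented inside each specification language.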
A foundation for ontology modularisation
There has been great interest in realising the Semantic Web. Ontologies are used to define Semantic Web applications, and they have grown so large and complex that they cause cognitive overload for humans, in understanding and maintenance, and for machines, in processing and reasoning. Furthermore, building ontologies from scratch is time-consuming and not always necessary: prospective ontology developers could consider reusing existing ontologies of good quality. However, an entire large ontology is not always required for a particular application; a subset of the knowledge may suffice. Modularity deals with simplifying an ontology, for a particular context or by structure, into smaller ontologies, thereby preserving the contextual knowledge. Modularising an ontology has a number of benefits, including simplified maintenance and machine processing, as well as collaborative efforts whereby work can be shared among experts. Modularity has been successfully applied to a number of different ontologies to improve usability and manage complexity. However, problems remain that have not been satisfactorily addressed. Currently, modularity tools generate large modules that do not exclusively represent the context, and partitioning tools, which ought to generate disjoint modules, sometimes create overlapping ones. These problems arise from several issues: different module types have not been clearly characterised, it is unclear what the properties of a 'good' module are, and it is unclear which evaluation criteria apply to specific module types. To solve these problems, a number of theoretical aspects have to be investigated: it is important to determine which ontology module types are the most widely used, to characterise each such type by distinguishing properties, and to identify the properties that a 'good' or 'usable' module meets.
In this thesis, we investigate these problems with modularity systematically. We begin by identifying dimensions for modularity that define its foundation: use-case, technique, type, property, and evaluation metric. Each dimension is populated with sub-dimensions as fine-grained values. The dimensions are used to create an empirically-based framework for modularity by classifying a set of ontologies with them, which reveals dependencies among the dimensions. The formal framework can be used to guide the user in modularising an ontology and as a starting point in the modularisation process. To address module quality, new and existing metrics were implemented in a novel tool, TOMM, and an experimental evaluation with a set of modules was performed, resulting in dependencies between the metrics and module types. These dependencies can be used to determine whether a module is of good quality. To improve on existing modularity techniques, we created five new algorithms and experimentally evaluated them. The algorithms of the resulting tool, NOMSA, perform as well as other tools for most performance criteria. Two of NOMSA's algorithms generate modules of good quality when compared to the expected dependencies of the framework; the modules of the remaining three algorithms correspond to some of the expected values for the metrics for the ontology set in question.
Solving these problems resulted in a formal foundation for modularity which comprises: an exhaustive set of modularity dimensions with dependencies between them; a framework for guiding the modularisation process and annotating modules; a way to measure the quality of modules using the novel TOMM tool, which implements new and existing evaluation metrics; the SUGOI tool for module management, which has been investigated for module interchangeability; and an implementation of new algorithms to fill the gaps left by insufficient tools and techniques.
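Two simple metrics of the kind such an evaluation tool might compute can be sketched as follows. The metric names and the toy ontology below are generic illustrations, not TOMM's actual catalogue: relative module size, and the fraction of axioms touching a module that lie entirely inside it (a rough self-containment measure).

```python
# A toy ontology as a set of entities plus subsumption-style axiom pairs.
ontology_entities = {"Animal", "Dog", "Cat", "Plant", "Tree", "Leaf"}
axioms = [("Dog", "Animal"), ("Cat", "Animal"), ("Tree", "Plant"),
          ("Leaf", "Tree"), ("Animal", "Plant")]

module = {"Animal", "Dog", "Cat"}   # a candidate module to evaluate

def relative_size(module, ontology_entities):
    """Module size as a fraction of the whole ontology."""
    return len(module) / len(ontology_entities)

def intra_edge_ratio(module, axioms):
    """Of the axioms touching the module, the fraction fully inside it."""
    touching = [a for a in axioms if set(a) & module]
    inside = [a for a in touching if set(a) <= module]
    return len(inside) / len(touching) if touching else 1.0

print(relative_size(module, ontology_entities))   # 0.5
print(intra_edge_ratio(module, axioms))
```

A small relative size with a high intra-edge ratio is the intuition behind a 'good' module: it is compact yet self-contained, with few axioms dangling across its boundary.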
Building Specifications in the Event-B Institution
This paper describes a formal semantics for the Event-B specification
language using the theory of institutions. We define an institution for
Event-B, EVT, and prove that it meets the validity requirements for
satisfaction preservation and model amalgamation. We also present a series of
functions that show how the constructs of the Event-B specification language
can be mapped into our institution. Our semantics sheds new light on the
structure of the Event-B language, allowing us to clearly delineate three
constituent sub-languages: the superstructure, infrastructure and mathematical
languages. One of the principal goals of our semantics is to provide access to
the generic modularisation constructs available in institutions, including
specification-building operators for parameterisation and refinement. We
demonstrate how these features subsume and enhance the corresponding features
already present in Event-B through a detailed study of their use in a worked
example. We have implemented our approach via a parser and translator for
Event-B specifications, EBtoEVT, which also provides a gateway to the Hets
toolkit for heterogeneous specification. (Comment: 54 pages, 25 figures)