Guidelines for the Specification and Design of Large-Scale Semantic Applications
This paper presents a set of guidelines to help software engineers with the specification and design of large-scale semantic applications by defining new Requirements Engineering and Design processes for semantic applications. To facilitate their use by software engineers who are not experts in semantic technologies, several aids are provided, namely, a characterization of large-scale semantic applications, common use cases that arise when developing this type of application, and a set of architectural patterns that can be used for modelling the architecture of semantic applications. The paper also presents an example of how these guidelines can be used and an evaluation of our contributions using the W3C Semantic Web use cases.
A Double Classification of Common Pitfalls in Ontologies
The application of methodologies for building ontologies has improved ontology quality. However, this quality is not fully guaranteed because of the difficulties involved in ontology modelling, which can lead to the inclusion of anomalies, or worst practices, in the model. In this context, our aim in this paper is twofold: (1) to provide a catalogue of common worst practices, which we call pitfalls, and (2) to present a double classification of these pitfalls. These two products serve ontology development in two ways: (a) to avoid the appearance of pitfalls during ontology modelling, and (b) to evaluate and correct ontologies to improve their quality.
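As an illustration of how such a pitfall catalogue might be operationalised, the sketch below checks a toy subclass hierarchy for one well-known modelling pitfall, a cycle in the class hierarchy. The dict-based ontology representation and the function name are our own assumptions for the example, not artefacts from the paper.

```python
# Minimal, hypothetical pitfall check: detect cycles in a subclass hierarchy.
# The dict-based ontology representation is an illustrative assumption.

def find_hierarchy_cycles(superclasses):
    """Return the set of classes that participate in a subclass cycle.

    `superclasses` maps each class name to a list of its direct superclasses.
    """
    cycles = set()

    def visit(cls, path):
        if cls in path:  # revisiting a class on the current path => cycle
            cycles.update(path[path.index(cls):])
            return
        for parent in superclasses.get(cls, []):
            visit(parent, path + [cls])

    for cls in superclasses:
        visit(cls, [])
    return cycles

# Example: Dog < Mammal < Animal, but Animal is (wrongly) a subclass of Dog.
ontology = {
    "Dog": ["Mammal"],
    "Mammal": ["Animal"],
    "Animal": ["Dog"],   # modelling pitfall: closes a cycle
    "Plant": ["Organism"],
}
print(sorted(find_hierarchy_cycles(ontology)))  # ['Animal', 'Dog', 'Mammal']
```

A real evaluation tool would run many such checks, one per catalogued pitfall, over an ontology loaded from OWL or RDF rather than a plain dict.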
A method for rigorous development of fault-tolerant systems
PhD Thesis. With the rapid development of information systems and our increasing dependency on computer-based systems, ensuring their dependability becomes one of the most important concerns during system development. This is especially true for the mission- and safety-critical systems on which we rely not to put significant resources and lives at risk.
Development of critical systems traditionally involves formal modelling as a fault prevention mechanism. At the same time, systems typically support fault tolerance mechanisms to mitigate runtime errors. However, fault tolerance modelling and, in particular, rigorous definitions of fault tolerance requirements, fault assumptions and system recovery have not been given enough attention during formal system development.
The main contribution of this research is a method for top-down formal design of fault-tolerant systems. The refinement-based method provides modelling guidelines in the following form:
- a set of modelling principles for systematic modelling of fault tolerance,
- a fault tolerance refinement strategy, and
- a library of generic modelling patterns assisting in disciplined integration of error detection and error recovery steps into models.
The method supports separation of normal and fault-tolerant system behaviour during modelling. It provides an environment for explicit modelling of fault tolerance and modal aspects of system behaviour, which ensures the rigour of the proposed development process. The method is supported by tools that are smoothly integrated into an industry-strength development environment.
The proposed method is demonstrated on two case studies; in particular, the evaluation is carried out using a medium-scale industrial case study from the aerospace domain. The method is shown to provide support for explicit modelling of fault tolerance, to reduce development effort during modelling, to support reuse of fault tolerance modelling, and to facilitate adoption of formal methods.
Funding: DEPLOY; the TrAmS grant. School of Computing Science, Newcastle University.
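To give a flavour of the kind of error-detection and error-recovery pattern such a method catalogues, here is a small, hypothetical sketch (our own construction, not one of the thesis's formal models) that keeps normal behaviour and fault-tolerant behaviour separate: a wrapper detects errors raised by the normal operation and runs a recovery step before retrying.

```python
# Hypothetical sketch of an error-detection / error-recovery pattern that
# separates normal behaviour from fault-tolerant behaviour. All names and
# the retry policy are illustrative assumptions.

def with_fault_tolerance(operation, recover, retries=2):
    """Run `operation`; on failure, run `recover` and retry up to `retries` times."""
    for attempt in range(retries + 1):
        try:
            return operation()          # normal behaviour
        except RuntimeError as error:   # error detection
            if attempt == retries:
                raise                   # fault assumption violated: give up
            recover(error)              # error recovery step
    raise AssertionError("unreachable")

# Toy example: a sensor that fails once before producing a reading.
state = {"failures_left": 1}

def read_sensor():
    if state["failures_left"] > 0:
        raise RuntimeError("transient sensor fault")
    return 42

def reset_sensor(error):
    state["failures_left"] -= 1        # recovery clears the transient fault

print(with_fault_tolerance(read_sensor, reset_sensor))  # 42
```

In a refinement-based development the same separation would be expressed as distinct normal and recovery events in the formal model, rather than as a runtime wrapper.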
Feasibility of EPC to BPEL Model Transformations Based on Ontology and Patterns
Model-Driven Engineering holds the promise of transforming business models into code automatically. This requires the concept of model transformation. In this paper, we assess the feasibility of model transformations from Event-driven Process Chain models to Business Process Execution Language specifications. To this purpose, we use a framework based on ontological analysis and workflow patterns in order to predict the possibilities and limitations of such a model transformation. The framework is validated by evaluating the transformation of several models, including a real-life case. The framework indicates several limitations for transformation. Eleven guidelines and an approach to apply them provide methodological support to improve the feasibility of model transformation from EPC to BPEL.
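As a rough illustration of what such a transformation involves, the sketch below maps a linear chain of EPC functions onto a BPEL-style `<sequence>` of `<invoke>` activities using Python's `xml.etree`. The element names follow BPEL conventions, but the one-function-per-invoke mapping is a simplified assumption of ours, not the framework from the paper.

```python
# Simplified, hypothetical EPC -> BPEL mapping (illustrative only): a linear
# chain of EPC functions becomes a BPEL <sequence> of <invoke> activities.
import xml.etree.ElementTree as ET

def epc_functions_to_bpel(process_name, functions):
    """Map an ordered list of EPC function names to a minimal BPEL process."""
    process = ET.Element("process", name=process_name)
    sequence = ET.SubElement(process, "sequence")
    for function in functions:
        # Each EPC function is assumed to correspond to one service invocation.
        ET.SubElement(sequence, "invoke", operation=function)
    return ET.tostring(process, encoding="unicode")

bpel = epc_functions_to_bpel(
    "HandleOrder", ["checkOrder", "reserveStock", "confirmOrder"]
)
print(bpel)
```

The feasibility problems the paper studies arise precisely where this naive mapping breaks down, e.g. for EPC connectors (XOR/OR splits and joins) that have no direct BPEL counterpart.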
Generic unified modelling process for developing semantically rich, dynamic and temporal models
Models play a vital role in supporting a range of activities in numerous domains. We rely on models to support the design, visualisation, analysis and representation of parts of the world around us, and as such significant research effort has been invested into numerous areas of modelling; including support for model semantics, dynamic states and behaviour, temporal data storage and visualisation. Whilst these efforts have increased our capabilities and allowed us to create increasingly powerful software-based models, the process of developing models, supporting tools and/or data structures remains difficult, expensive and error-prone. In this paper we define from literature the key factors in assessing a model's quality and usefulness: semantic richness, support for dynamic states and object behaviour, temporal data storage and visualisation. We also identify a number of shortcomings in both existing modelling standards and model development processes and propose a unified generic process to guide users through the development of semantically rich, dynamic and temporal models.
HIV treatment as prevention : models, data, and questions-towards evidence-based decision-making
Antiretroviral therapy (ART) for those infected with HIV can prevent onward transmission of infection, but biological efficacy alone is not enough to guide policy decisions about the role of ART in reducing HIV incidence. Epidemiology, economics, demography, statistics, biology, and mathematical modelling will be central in framing key decisions in the optimal use of ART. PLoS Medicine, with the HIV Modelling Consortium, has commissioned a set of articles that examine different aspects of HIV treatment as prevention with a forward-looking research agenda. Interlocking themes across these articles are discussed in this introduction. We hope that this article, and others in the collection, will provide a foundation upon which greater collaborations between disciplines will be formed, and will afford deeper insights into the key factors involved, to help strengthen the support for evidence-based decision-making in HIV prevention
Optimized Time Management for Declarative Workflows
Declarative process models are increasingly used since they fit better with the nature of flexible process-aware information systems and the requirements of the stakeholders involved. When managing business processes, in addition, support for representing time and reasoning about it becomes crucial. Given a declarative process model, users may choose among different ways to execute it, i.e., there exist numerous possible enactment plans, each one presenting specific values for the given objective functions (e.g., overall completion time). This paper suggests a method for generating optimized enactment plans (e.g., plans minimizing overall completion time) from declarative process models with explicit temporal constraints. The latter cover a number of well-known workflow time patterns. The generated plans can be used for different purposes, such as providing personal schedules to users, facilitating early detection of critical situations, or predicting execution times for process activities. The proposed approach is applied to a range of test models of varying complexity. Although the optimization of process execution is a highly constrained problem, results indicate that our approach produces a satisfactory number of suitable solutions, i.e., solutions that are optimal in many cases.
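To make the idea of an optimized enactment plan concrete, here is a small, hypothetical sketch (our own; the paper uses constraint-based techniques rather than brute force): it enumerates every ordering of a set of activities that respects the declared precedence constraints and picks the one minimizing total flow time, assuming activities run one at a time.

```python
# Hypothetical brute-force search for an optimized enactment plan. The
# activity names, durations, and precedence constraints are illustrative.
from itertools import permutations

def total_flow_time(order, durations):
    """Sum of completion times when activities run one after another."""
    elapsed, total = 0, 0
    for activity in order:
        elapsed += durations[activity]
        total += elapsed
    return total

def best_enactment_plan(durations, precedences):
    """Among orderings satisfying every (before, after) precedence pair,
    return the (plan, objective) pair minimizing total flow time."""
    best = None
    for order in permutations(durations):
        position = {activity: i for i, activity in enumerate(order)}
        if all(position[a] < position[b] for a, b in precedences):
            objective = total_flow_time(order, durations)
            if best is None or objective < best[1]:
                best = (order, objective)
    return best

# Illustrative declarative model: three activities, one precedence constraint.
plan, objective = best_enactment_plan({"A": 5, "B": 1, "C": 2}, [("A", "C")])
print(plan, objective)  # ('B', 'A', 'C') 15
```

Enumerating permutations is factorial in the number of activities, which is why realistic planners for this problem rely on constraint programming or heuristic search instead.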
Model-driven design, simulation and implementation of service compositions in COSMO
The success of software development projects to a large extent depends on the quality of the models that are produced in the development process, which in turn depends on the conceptual and practical support that is available for modelling, design and analysis. This paper focuses on model-driven support for service-oriented software development. In particular, it addresses how services and compositions of services can be designed, simulated and implemented. The support presented is part of a larger framework, called COSMO (COnceptual Service MOdelling). Whereas in previous work we reported on the conceptual support provided by COSMO, in this paper we proceed with a discussion of the practical support that has been developed. We show how reference models (model types) and guidelines (design steps) can be iteratively applied to design service compositions at a platform independent level and discuss what tool support is available for the design and analysis during this phase. Next, we present some techniques to transform a platform independent service composition model to an implementation in terms of BPEL and WSDL. We use the mediation scenario of the SWS challenge (concerning the establishment of a purchase order between two companies) to illustrate our application of the COSMO framework
Developing frameworks for protocol implementation
This paper presents a method to develop frameworks for protocol implementation. Frameworks are software structures developed for a specific application domain, which can be reused in the implementation of various concrete systems in that domain. The use of frameworks supports a protocol implementation process connected with formal design methods and produces implementation code that is easy to extend and to reuse.