
    Actions and Events in Concurrent Systems Design

    In this work, having in mind the construction of concurrent systems from components, we discuss the difference between actions and events. For this discussion, we propose an(other) architecture description language in which actions and events are made explicit in the description of a component and a system. Our work builds on the ideas set forth by the categorical approach to the construction of software-based systems from components advocated by Goguen and Burstall, in the context of institutions, and by Fiadeiro and Maibaum, in the context of temporal logic. In this context, we formalize a notion of a component as an element of an indexed category and we elicit a notion of a morphism between components as a morphism of this category. Moreover, we elaborate on how this formalization captures, in a convenient manner, the underlying structure of a component and the basic interaction mechanisms for putting components together. Further, we advance some ideas on how certain matters related to the openness and the compositionality of a component/system may be described in terms of classes of morphisms, thus potentially supporting compositional rely/guarantee reasoning. Comment: In Proceedings LAFM 2013, arXiv:1401.056
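
    As a rough illustration of the component-and-morphism idea in this abstract, the sketch below models a component with explicit action and event sets and a morphism that renames them into a larger system. The Component and Morphism names, and the well-definedness check, are illustrative simplifications, not the paper's indexed-category formalisation.

```python
# Minimal sketch: components with explicit actions and events, and a signature
# morphism between components. Illustrative only; not the paper's categorical
# (indexed-category) construction.
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    actions: frozenset   # actions the component controls
    events: frozenset    # events the component observes

@dataclass(frozen=True)
class Morphism:
    source: Component
    target: Component
    action_map: dict     # maps source actions to target actions
    event_map: dict      # maps source events to target events

    def is_well_defined(self) -> bool:
        # Every source action/event must be mapped into the target's vocabulary.
        return (set(self.action_map) == set(self.source.actions)
                and set(self.action_map.values()) <= set(self.target.actions)
                and set(self.event_map) == set(self.source.events)
                and set(self.event_map.values()) <= set(self.target.events))

buffer = Component("Buffer", frozenset({"put", "get"}), frozenset({"full"}))
system = Component("System", frozenset({"send", "receive"}), frozenset({"overflow"}))
m = Morphism(buffer, system,
             {"put": "send", "get": "receive"},
             {"full": "overflow"})
assert m.is_well_defined()
```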

    An integrated framework utilising software agent reasoning and ontology models for sensor based building monitoring

    Smart building monitoring demands a new software infrastructure that can exploit building domain knowledge in order to provide advanced and intelligent functionality. Conventional facility management (FM) software tools lack semantically rich components, which limits their ability to support automatic information sharing, resource negotiation and timely decision making. Recent hardware innovations in compact ZigBee sensor devices, together with software developments in ontologies and intelligent software agent paradigms, provide a good opportunity to develop tools that can further improve current FM practice. This paper introduces an integrated framework which includes a ZigBee-based sensor network and the underlying multi-agent software (MAS) components. Several different types of sensors were integrated with the ZigBee host devices to produce compact multi-functional sensor units. The MAS framework incorporates the belief-desire-intention (BDI) abstraction with ontology support (provided via explicit knowledge bases). Different software agent types have been developed to work with the sensor hardware to conduct resource negotiation, to optimize battery utilization, to monitor building space in a non-intrusive way and to reason about its usage through real-time ontology model queries. The deployed sensor network shows promising intelligent characteristics, and it has been applied in several on-going research projects as an underlying decision-making service. More applications and larger deployments have been planned for future work.
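
    The following sketch, under the assumption of a much-simplified setting, illustrates the kind of BDI cycle the abstract describes: an agent stores beliefs in a small triple-store stand-in for the ontology, derives intentions from a standing energy-saving desire, and acts on them. The KnowledgeBase and MonitoringAgent classes, the sensor feed, and the occupancy rule are hypothetical; the actual framework relies on ZigBee hardware and full ontology models.

```python
# Illustrative sketch of a BDI-style monitoring agent backed by a toy
# knowledge base. Names and rules are hypothetical, not the paper's framework.

class KnowledgeBase:
    """A toy stand-in for an ontology: facts are (subject, property, value) triples."""
    def __init__(self):
        self.facts = set()

    def tell(self, s, p, o):
        self.facts.add((s, p, o))

    def ask(self, s=None, p=None, o=None):
        return [(fs, fp, fo) for (fs, fp, fo) in self.facts
                if (s is None or fs == s)
                and (p is None or fp == p)
                and (o is None or fo == o)]

class MonitoringAgent:
    def __init__(self, kb):
        self.kb = kb                    # beliefs live in the knowledge base
        self.desires = ["save_energy"]  # standing goals
        self.intentions = []            # actions committed to this cycle

    def perceive(self, readings):
        # readings: {room_id: occupancy_count} from a (hypothetical) sensor layer
        for room, count in readings.items():
            self.kb.tell(room, "occupancy", count)

    def deliberate(self):
        if "save_energy" in self.desires:
            for room, _, count in self.kb.ask(p="occupancy"):
                if count == 0:
                    self.intentions.append(("dim_lights", room))

    def act(self):
        for action, room in self.intentions:
            print(f"agent -> {action}({room})")
        self.intentions.clear()

kb = KnowledgeBase()
agent = MonitoringAgent(kb)
agent.perceive({"room101": 0, "room102": 3})
agent.deliberate()
agent.act()
```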

    Policy implementation in higher education: an ideographic case study

    The research presented in this study considers policy implementation from an ideographic basis. The study focuses on a planned implementation initiative to introduce a learning outcomes paradigm within a university, in order to implement policy related to Bologna and to the Irish National Framework of Qualifications. By adopting an ideographic approach to policy, this study suggests that policy is not a static conception: policy is made and remade as it is encoded, interpreted and actioned by implementers. A processual/contextualist perspective on implementation, drawn from the literature on organisational change, is applied within this study. The research focuses on how policy is implemented in practice by those at two levels on the implementation staircase within the institution. The study is, therefore, a traditional implementation study focusing on the how of implementation; it does not evaluate the outcomes of implementation against the objectives of the reform. An objective of this study was to complete an intrinsic case study within the researcher’s university in the Republic of Ireland as a piece of independent institutional research. The findings of this study include the development of a case which adds to the empirical research into the institutional implementation of Bologna. A further finding relates to the application of a processual/contextualist perspective to the study of policy implementation. This study suggests that this perspective provides a constructive means by which an ideographic policy analysis can be conducted.

    Category Theory and Model-Driven Engineering: From Formal Semantics to Design Patterns and Beyond

    There is a hidden intrigue in the title. CT is one of the most abstract mathematical disciplines, sometimes nicknamed "abstract nonsense". MDE is a recent trend in software development, industrially supported by standards, tools, and the status of a new "silver bullet". Surprisingly, categorical patterns turn out to be directly applicable to mathematical modeling of structures appearing in everyday MDE practice. Model merging, transformation, synchronization, and other important model management scenarios can be seen as executions of categorical specifications. Moreover, the paper aims to elucidate a claim that relationships between CT and MDE are more complex and richer than is normally assumed for "applied mathematics". CT provides a toolbox of design patterns and structural principles of real practical value for MDE. We will present examples of how an elementary categorical arrangement of a model management scenario reveals deficiencies in the architecture of modern tools automating the scenario. Comment: In Proceedings ACCAT 2012, arXiv:1208.430
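
    One of the categorical patterns alluded to here, model merging, can be sketched as a pushout in the category of sets: two models are glued along a shared base model by identifying the images of each shared element. The element names and the set-based representation below are illustrative assumptions; real MDE models carry far more structure than plain sets.

```python
# Hedged sketch: model merging as a pushout in the category of sets.
# Models are sets of element names; correspondences are dicts.

def pushout(base, left, right, to_left, to_right):
    """Merge `left` and `right` along `base`, identifying to_left[b] with to_right[b]."""
    # Disjoint union, tagging elements by origin.
    merged = {("L", x) for x in left} | {("R", y) for y in right}
    # Union-find over the disjoint union.
    parent = {e: e for e in merged}
    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]
            e = parent[e]
        return e
    def union(a, b):
        parent[find(a)] = find(b)
    # Identify the two images of every shared (base) element.
    for b in base:
        union(("L", to_left[b]), ("R", to_right[b]))
    # Equivalence classes are the merged model's elements.
    classes = {}
    for e in merged:
        classes.setdefault(find(e), set()).add(e)
    return [sorted({name for _, name in cls}) for cls in classes.values()]

base = {"Person"}
left = {"Person", "Employee"}    # model A refines Person with Employee
right = {"Person", "Customer"}   # model B refines Person with Customer
print(pushout(base, left, right, {"Person": "Person"}, {"Person": "Person"}))
# -> three merged elements: the shared Person plus Employee and Customer
```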

    A logic for the stepwise development of reactive systems

    D↓ is a new dynamic logic combining regular modalities with the binder constructor typical of hybrid logic, which provides a smooth framework for the stepwise development of reactive systems. Actually, the logic is able to capture system properties at different levels of abstraction, from high-level safety and liveness requirements to constructive specifications representing concrete processes. The paper discusses its semantics, given in terms of reachable transition systems with initial states, its expressive power and a proof system. The methodological framework is indebted to the landmark work of D. Sannella and A. Tarlecki, instantiating the generic concepts of constructor and abstractor implementations by standard operators on reactive components, e.g. relabelling and parallel composition as constructors, and bisimulation for abstraction. This work was funded by ERDF, the European Regional Development Fund, through the COMPETE Programme, and by National Funds through FCT – Portuguese Foundation for Science and Technology – within projects POCI-01-0145-FEDER-016692 (DaLí – Dynamic logics for cyber-physical systems: towards contract based design) and UID/MAT/04106/2013 at CIDMA. Further support was given by the project SmartEGOV, NORTE-01-0145-FEDER000037, supported by the Norte Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the ERDF. The first author is also supported by FCT individual grant SFRH/BPD/103004/201
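
    As a small illustration of the abstraction notion mentioned at the end of the abstract, the sketch below computes the largest bisimulation over a finite labelled transition system by the naive greatest-fixpoint method. The states, labels and transitions are invented for the example; the paper's setting (reachable transition systems with initial states and hybrid-dynamic formulas over them) is considerably richer.

```python
# Hedged sketch: checking bisimilarity of two states of a finite labelled
# transition system via a naive greatest-fixpoint computation.

def bisimilar(states, labels, trans, s0, t0):
    """trans: set of (source, label, target) triples over `states`."""
    def succ(s, a):
        return {t for (p, l, t) in trans if p == s and l == a}

    # Start from the full relation and strip pairs violating the transfer condition.
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            ok = all(
                all(any((p2, q2) in rel for q2 in succ(q, a)) for p2 in succ(p, a)) and
                all(any((p2, q2) in rel for p2 in succ(p, a)) for q2 in succ(q, a))
                for a in labels
            )
            if not ok:
                rel.discard((p, q))
                changed = True
    return (s0, t0) in rel

states = {"s0", "s1", "t0", "t1", "t2"}
labels = {"req", "ack"}
trans = {("s0", "req", "s1"), ("s1", "ack", "s0"),
         ("t0", "req", "t1"), ("t1", "ack", "t2"), ("t2", "req", "t1")}
# Both systems alternate req/ack forever, so the initial states are bisimilar.
print(bisimilar(states, labels, trans, "s0", "t0"))   # True
```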

    A foundation for ontology modularisation

    There has been great interest in realising the Semantic Web. Ontologies are used to define Semantic Web applications. Ontologies have grown so large and complex that they cause cognitive overload for humans, in understanding and maintaining them, and for machines, in processing and reasoning over them. Furthermore, building ontologies from scratch is time-consuming and not always necessary. Prospective ontology developers could consider reusing existing ontologies of good quality. However, an entire large ontology is not always required for a particular application; only a subset of the knowledge may be relevant. Modularity deals with simplifying an ontology, for a particular context or by structure, into smaller ontologies, thereby preserving the contextual knowledge. There are a number of benefits to modularising an ontology, including simplified maintenance and machine processing, as well as collaborative efforts whereby work can be shared among experts. Modularity has been successfully applied to a number of different ontologies to improve usability and manage complexity. However, problems exist for modularity that have not been satisfactorily addressed. Currently, modularity tools generate large modules that do not exclusively represent the context. Partitioning tools, which ought to generate disjoint modules, sometimes create overlapping modules. These problems arise from a number of issues: different module types have not been clearly characterised, it is unclear what the properties of a 'good' module are, and it is unclear which evaluation criteria apply to specific module types. In order to solve the problem successfully, a number of theoretical aspects have to be investigated. It is important to determine which ontology module types are the most widely used and to characterise each such type by its distinguishing properties. One must also identify the properties that a 'good' or 'usable' module meets. In this thesis, we investigate these problems with modularity systematically. We begin by identifying dimensions for modularity that define its foundation: use-case, technique, type, property, and evaluation metric. Each dimension is populated with sub-dimensions as fine-grained values. The dimensions are used to create an empirically based framework for modularity by classifying a set of ontologies with them, which results in dependencies among the dimensions. The formal framework can be used to guide the user in modularising an ontology and as a starting point in the modularisation process. To address module quality, new and existing metrics were implemented in a novel tool, TOMM, and an experimental evaluation with a set of modules was performed, resulting in dependencies between the metrics and module types. These dependencies can be used to determine whether a module is of good quality. To address the shortcomings of existing modularity techniques, we created five new algorithms to improve on current tools and techniques and evaluated them experimentally. The algorithms of the resulting tool, NOMSA, perform as well as those of other tools for most performance criteria. Modules generated by two of NOMSA's algorithms are of good quality when compared to the expected dependencies of the framework, and the modules generated by the remaining three algorithms correspond to some of the expected values of the metrics for the ontology set in question. Solving these problems resulted in a formal foundation for modularity which comprises: an exhaustive set of modularity dimensions with dependencies between them; a framework for guiding the modularisation process and annotating modules; a way to measure the quality of modules using the novel TOMM tool, which has new and existing evaluation metrics; the SUGOI tool for module management, which has been investigated for module interchangeability; and an implementation of new algorithms to fill the gaps left by insufficient tools and techniques.
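
    To give a concrete, if naive, flavour of module extraction, the sketch below grows a module from a seed signature by repeatedly pulling in axioms that mention symbols already in the module. This syntactic approximation and the toy axioms are illustrative assumptions only; it is not one of the NOMSA algorithms nor a locality-based notion from the thesis.

```python
# Hedged sketch of signature-driven module extraction over a toy axiom set.
# Each axiom is represented simply as the set of term names it mentions.

def extract_module(axioms, seed_signature):
    """axioms: iterable of frozensets of term names; returns (module, signature)."""
    signature = set(seed_signature)
    module = set()
    changed = True
    while changed:
        changed = False
        for ax in axioms:
            if ax not in module and ax & signature:
                module.add(ax)
                signature |= ax   # the module's signature grows with each axiom
                changed = True
    return module, signature

ontology = [
    frozenset({"Sensor", "Device"}),              # Sensor is a Device
    frozenset({"Sensor", "measures", "Room"}),    # Sensor measures Room
    frozenset({"Building", "hasPart", "Room"}),   # Building hasPart Room
    frozenset({"Employee", "hasRole", "Role"}),   # unrelated to the seed
]
module, sig = extract_module(ontology, {"Sensor"})
print(len(module), sorted(sig))   # 3 axioms; the Employee axiom stays out
```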

    Multi-paradigm modelling for cyber–physical systems: a descriptive framework

    The complexity of cyber–physical systems (CPSs) is commonly addressed through complex workflows, involving models in a plethora of different formalisms, each with their own methods, techniques, and tools. Some workflow patterns, combined with particular types of formalisms and operations on models in these formalisms, are used successfully in engineering practice. To identify and reuse them, we refer to these combinations of workflow and formalism patterns as modelling paradigms. This paper proposes a unifying (Descriptive) Framework to describe these paradigms, as well as their combinations. This work is set in the context of Multi-Paradigm Modelling (MPM), which is based on the principle of modelling every part and aspect of a system explicitly, at the most appropriate level(s) of abstraction, using the most appropriate modelling formalism(s) and workflows. The purpose of the Descriptive Framework presented in this paper is to serve as a basis for reasoning about these formalisms, workflows, and their combinations. One crucial part of the framework is the ability to capture the structural essence of a paradigm through the concept of a paradigmatic structure. This is illustrated informally by means of two example paradigms commonly used in CPS engineering: Discrete Event Dynamic Systems and Synchronous Data Flow. The presented framework also identifies the need to establish whether a paradigm candidate follows, or qualifies as, a (given) paradigm. To illustrate the ability of the framework to support combining paradigms, the paper shows examples of both workflow and formalism combinations. The presented framework is intended as a basis for the characterisation and classification of paradigms, as a starting point for a rigorous formalisation of the framework (allowing formal analyses), and as a foundation for MPM tool development.
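
    As a minimal companion to one of the example paradigms named above, the sketch below implements a tiny discrete-event simulation kernel in the spirit of Discrete Event Dynamic Systems; the events and handlers are invented for the example and are not part of the paper's Descriptive Framework.

```python
# Hedged sketch: a minimal discrete-event simulation kernel (event queue ordered
# by timestamp). Illustrative only; the paper treats the paradigm descriptively.
import heapq

def simulate(initial_events, handlers, until=10.0):
    """initial_events: list of (time, event_name); handlers map an event name
    to a function of the current time returning follow-up (delay, event) pairs."""
    queue = list(initial_events)
    heapq.heapify(queue)
    trace = []
    while queue:
        time, event = heapq.heappop(queue)
        if time > until:
            break
        trace.append((time, event))
        for delay, nxt in handlers.get(event, lambda t: [])(time):
            heapq.heappush(queue, (time + delay, nxt))
    return trace

# A toy model: a sensor samples every 2 time units; every sample triggers a log.
handlers = {
    "sample": lambda t: [(2.0, "sample"), (0.1, "log")],
    "log":    lambda t: [],
}
for time, event in simulate([(0.0, "sample")], handlers, until=6.0):
    print(f"{time:4.1f}  {event}")
```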