
    Extending and Relating Semantic Models of Compensating CSP

    Business transactions involve multiple partners coordinating and interacting with each other. These transactions have hierarchies of activities which need to be orchestrated. Usual database approaches (e.g., checkpoint, rollback) are not applicable for handling faults in a long-running transaction due to the interaction with multiple partners. The compensation mechanism handles faults that can arise in a long-running transaction. Based on the framework of Hoare's CSP process algebra, Butler et al. introduced Compensating CSP (cCSP), a language to model long-running transactions. The language introduces a method to declare a transaction as a process, and it has constructs for the orchestration of compensation. Butler et al. also define a trace semantics for cCSP. In this thesis, the semantic models of Compensating CSP are extended by defining an operational semantics, describing how the state of a program changes during its execution. The semantics is encoded into Prolog to animate the specification. The semantic models are further extended to define the synchronisation of processes. The notion of partial behaviour is defined to model the deadlock behaviour that arises during process synchronisation. A correspondence relationship is then defined between the semantic models and proved by structural induction. Proving the correspondence means that any of the presentations can be accepted as a primary definition of the meaning of the language, and each definition can be used correctly at different times and for different purposes. The semantic models and their relationships are mechanised using the theorem prover PVS. The semantic models are embedded in PVS using shallow embedding. The relationships between the semantic models are proved by mutual structural induction. The mechanisation overcomes the problems of hand proofs and improves the scalability of the approach.
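    To make the compensation mechanism concrete, here is a minimal Python sketch (not from the thesis, which works in CSP and Prolog; the names are invented for illustration): each forward action is paired with a compensation, and when a later action fails, the compensations installed so far run in reverse order.

```python
class TransactionFailed(Exception):
    pass

def run_transaction(steps):
    """Run (action, compensation) pairs; on failure, compensate in reverse order."""
    installed = []                     # compensations for the actions that succeeded
    try:
        for action, compensation in steps:
            action()
            installed.append(compensation)
    except Exception:
        for compensation in reversed(installed):
            compensation()             # undo in reverse order of installation
        raise TransactionFailed("forward behaviour failed; compensations executed")

log = []

def pay():
    raise RuntimeError("payment declined")

steps = [
    (lambda: log.append("book flight"), lambda: log.append("cancel flight")),
    (lambda: log.append("book hotel"),  lambda: log.append("cancel hotel")),
    (pay,                               lambda: None),
]
try:
    run_transaction(steps)
except TransactionFailed:
    print(log)   # ['book flight', 'book hotel', 'cancel hotel', 'cancel flight']
```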

    Modelling electronic service systems using UML

    This paper presents a profile for modelling systems of electronic services using UML. Electronic services encapsulate business services (organisational units focused on delivering benefit to a consumer) to enhance communication, coordination and information management. Our profile is based on a formal, workflow-oriented description of electronic services that is abstracted from particular implementation technologies. The resulting models provide the basis for a formal analysis to verify behavioural properties of services. The models can also relate services to management components, including workflow managers and Electronic Service Management Systems (ESMSs), a novel concept drawn from experience of HP Service Composer and DySCo (Dynamic Service Composer), providing the starting point for integration and implementation tasks. Their UML basis and platform-independent nature are consistent with a Model-Driven Architecture (MDA) development strategy, appropriate to the challenge of developing electronic service systems using heterogeneous technology and incorporating legacy systems.
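    As a hedged illustration of the workflow-oriented view described above (the class names and the example workflow are invented, not taken from the paper's UML profile), an electronic service can be pictured as a business service exposed through an ordered set of activities that a workflow manager advances:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Activity:
    name: str
    done: bool = False

@dataclass
class ElectronicService:
    """Workflow-oriented view: a business service exposed as an ordered workflow."""
    business_service: str                      # the organisational capability being offered
    workflow: List[Activity] = field(default_factory=list)

    def next_activity(self) -> Optional[Activity]:
        """A workflow manager would call this to advance the service step by step."""
        return next((a for a in self.workflow if not a.done), None)

    def is_complete(self) -> bool:
        return all(a.done for a in self.workflow)

order = ElectronicService("order fulfilment",
                          [Activity("receive order"),
                           Activity("check stock"),
                           Activity("ship goods")])
while (activity := order.next_activity()) is not None:
    activity.done = True                       # completion would normally be reported by a partner
print(order.is_complete())                     # True
```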

    Philosophy of Blockchain Technology - Ontologies

    This essay is about the necessity and usefulness of developing a philosophy specific to blockchain technology, with emphasis on its ontological aspects. After an Introduction that highlights the main philosophical directions for this emerging technology, in Blockchain Technology I explain how the blockchain works and discuss ontological development directions of this technology in Designing and Modeling. The next section is dedicated to the main application of blockchain technology, Bitcoin, and the social implications of this cryptocurrency. There follows a section on Philosophy in which I identify blockchain technology with the concept of heterotopia developed by Michel Foucault and interpret it, in the light of the notational technology developed by Nelson Goodman, as a notational system. In the Ontology section, I present two developmental paths that I consider important: Narrative Ontology, based on the idea of the order and structure of history transmitted through Paul Ricoeur's narrative history, and the Enterprise Ontology system, based on concepts and models of an enterprise and specific to the semantic web, which I consider to be the most well developed and which will probably become the formal ontological system, at least for the economic and legal aspects of blockchain technology. In the Conclusions I discuss the future directions for developing the philosophy of blockchain technology in general, as an explanatory and robust theory that is phenomenologically consistent and allows testability, and its ontologies in particular, arguing for the need for global adoption of an ontological system in order to develop cross-cutting solutions and to make this technology profitable. CONTENTS: Abstract; Introduction; Blockchain Technology (Design, Models); Bitcoin; Philosophy; Ontologies (Narrative Ontologies, Enterprise Ontologies); Conclusions; Notes; Bibliography. DOI: 10.13140/RG.2.2.24510.3360

    Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure

    Big data research has attracted great attention in science, technology, industry and society. It is developing alongside the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenge have not been recognized, and its own methodology has not been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and the science paradigm? What is the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing. (Comment: 59 pages)

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the emergence of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided, for example for more intelligent retrieval, put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that there will be standards for task (or service) specifications in the medium term. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
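    To illustrate the kind of ontology merging and querying discussed above, here is a small, hedged Python sketch (rdflib and the toy vocabulary are assumptions for the example; it is not an AKT tool): two independently authored RDF fragments are merged into one graph and queried with SPARQL across the join.

```python
from rdflib import Graph

# Two small RDF fragments, imagined as coming from separate ontologies.
ONT_A = """
@prefix ex: <http://example.org/> .
ex:AKT ex:researches ex:KnowledgeManagement .
"""
ONT_B = """
@prefix ex: <http://example.org/> .
ex:KnowledgeManagement ex:uses ex:Ontologies .
"""

g = Graph()
g.parse(data=ONT_A, format="turtle")
g.parse(data=ONT_B, format="turtle")   # parsing into the same graph merges the triples

# A SPARQL query that only succeeds because the two fragments were merged.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?topic WHERE {
        ex:AKT ex:researches ?area .
        ?area  ex:uses        ?topic .
    }
""")
for row in results:
    print(row.topic)   # http://example.org/Ontologies
```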

    Coordinating Large Distributed Relational Process Structures

    Representing a business process as a collaboration of interacting processes has become feasible with the emergence of data-centric business process management paradigms. Usually, these interacting processes have relations and, thereby, form a complex relational process structure. The interactions of processes within this relational process structure need to be coordinated to arrive at a meaningful overall business goal. However, relational process structures may become arbitrarily large. With the use of cloud technology, they may additionally be distributed over multiple nodes, allowing for scalability. Coordination processes have been proposed to coordinate relational process structures, where processes may have one-to-many and many-to-many relations at run-time. This paper shows how multiple coordination processes can be used in a decentralized fashion to more efficiently coordinate large, distributed process structures. The main challenge of using multiple coordination processes is to effectively realize the coordination responsibility of each coordination process. Key components of the solution are the subsidiary principle and the hierarchy of the relational process structure. Finally, an implementation of the coordination process concept based on microservices was developed, which allows for fast and concurrent enactment of multiple, decentralized coordination processes in large, distributed process structures
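    As a hedged sketch of the idea (the classes and the ordering rule below are invented for illustration; the actual implementation is based on microservices), coordination responsibility can be split along the hierarchy of the relational process structure so that each coordination process only watches its own subtree, in line with the subsidiary principle:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProcessInstance:
    kind: str                                   # process type, e.g. "Order" or "Delivery"
    state: str = "created"
    children: List["ProcessInstance"] = field(default_factory=list)

@dataclass
class CoordinationProcess:
    """Coordinates only the subtree rooted at `root` (subsidiary principle)."""
    root: ProcessInstance

    def may_activate(self, child: ProcessInstance) -> bool:
        # Example coordination constraint: related child processes may only
        # start once the parent process has reached the state "approved".
        return self.root.state == "approved"

# One coordinator per subtree instead of a single global coordinator.
order = ProcessInstance("Order", state="approved",
                        children=[ProcessInstance("Delivery"), ProcessInstance("Invoice")])
coordinators: Dict[str, CoordinationProcess] = {"order-1": CoordinationProcess(order)}

for child in order.children:
    if coordinators["order-1"].may_activate(child):
        child.state = "running"

print([c.state for c in order.children])        # ['running', 'running']
```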

    Geospatial information infrastructures

    Manual of Digital Earth / Editors: Huadong Guo, Michael F. Goodchild, Alessandro Annoni. Springer, 2020. ISBN: 978-981-32-9915-3.
    Geospatial information infrastructures (GIIs) provide the technological, semantic, organizational and legal structure that allows for the discovery, sharing, and use of geospatial information (GI). In this chapter, we introduce the overall concept and surrounding notions such as geographic information systems (GIS) and spatial data infrastructures (SDI). We outline the history of GIIs in terms of the organizational and technological developments as well as the current state of the art, and reflect on some of the central challenges and possible future trajectories. We focus on the tension between increased needs for standardization and the ever-accelerating technological changes. We conclude that GIIs evolved as a strong underpinning contribution to implementation of the Digital Earth vision. In the future, these infrastructures are challenged to become flexible and robust enough to absorb and embrace technological transformations and the accompanying societal and organizational implications. With this contribution, we present the reader a comprehensive overview of the field and a solid basis for reflections about future developments.

    An Approach for Modeling and Coordinating Process Interactions

    In any enterprise, different entities collaborate to achieve common business objectives. The processes used to reach these objectives have relations and, therefore, depend on each other. Their proper coordination within a process-aware information system requires coping with heterogeneous granularity of processes, unclear process relations, and increased process model complexity due to the integration of coordination constraints into process models. This paper presents the concept of coordination processes, which constitute a means to handle the interactions between a multitude of interdependent processes running asynchronously to each other. Particularly, coordination processes leverage the clear identification of process relations, a defined granularity for processes, and the abstraction from details of the individual processes in order to provide a robust framework, enabling proper coordination support for interdependent processes

    Towards a Semantic-based Approach for Modeling Regulatory Documents in Building Industry

    Regulations in the Building Industry are becoming increasingly complex and involve more than one technical area. They cover products, components and project implementation. They also play an important role in ensuring the quality of a building and in minimizing its environmental impact. In this paper, we are particularly interested in modeling the regulatory constraints derived from the Technical Guides issued by CSTB and used to validate Technical Assessments. We first describe our approach for modeling regulatory constraints in the SBVR language and formalizing them in the SPARQL language. Second, we describe how we model the compliance-checking processes described in the CSTB Technical Guides. Third, we show how we implement these processes to assist industrial partners in drafting Technical Documents in order to acquire a Technical Assessment; a compliance report is automatically generated to explain the compliance or non-compliance of these Technical Documents.
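    To make the constraint-checking step concrete, here is a hedged sketch in Python with rdflib (the vocabulary, the sample document and the rule are invented for illustration and are not taken from the CSTB Technical Guides): an SBVR-style obligation is expressed as a SPARQL ASK query and evaluated against an RDF description of a Technical Document, yielding a compliant/non-compliant verdict.

```python
from rdflib import Graph

# A fragment of a fictitious Technical Document described in RDF.
DOCUMENT = """
@prefix ex: <http://example.org/building#> .
ex:panel1 a ex:FacadePanel ;
    ex:fireResistanceMinutes 25 .
"""

# Informal SBVR-style rule: "It is obligatory that each facade panel
# has a fire resistance of at least 30 minutes."
# Formalized as a SPARQL ASK query that looks for violations.
CONSTRAINT = """
PREFIX ex: <http://example.org/building#>
ASK {
    ?panel a ex:FacadePanel ;
           ex:fireResistanceMinutes ?minutes .
    FILTER (?minutes < 30)
}
"""

g = Graph()
g.parse(data=DOCUMENT, format="turtle")

violation_found = g.query(CONSTRAINT).askAnswer
print("non-compliant" if violation_found else "compliant")   # non-compliant
```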
