
    Analysis of Composite Web Services using Logging Facilities

    Web services are becoming increasingly complex, involving numerous interacting business objects within substantial processes. To fully exploit Web service business opportunities while ensuring correct and reliable modelling and execution, Web service interactions must be analysed and tracked so that they are well understood and controlled. Given the resulting event log, we then want to verify certain specified properties, providing knowledge about the context of, and the reasons for, discrepancies between services' behaviours and related instances. This paper advocates a novel technique for logging composite Web services and a formal approach, based on an algebraic specification of the discrete event calculus language DEC, to check behavioural properties of composite Web services against their execution log. The automated induction-based theorem prover SPIKE is used as the verification back-end.
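    The following stand-alone sketch illustrates the kind of behavioural property such an approach checks over an execution log: that every 'invoke' of a partner service is eventually answered by a matching 'reply' within the same process instance. The event-tuple format and the property itself are invented here for illustration; the paper's actual approach relies on an algebraic DEC specification discharged by the SPIKE prover, not on ad hoc code.

```python
# Toy sketch (hypothetical event format): check that every 'invoke' of a
# partner service is eventually followed by a matching 'reply' within the
# same process instance. Not the paper's DEC/SPIKE formalisation.
from collections import defaultdict

def check_invoke_reply(log):
    """log: time-ordered iterable of (instance_id, event, service) tuples.
    Returns the (instance_id, service) pairs whose 'invoke' was never
    answered by a 'reply'; an empty set means the property holds."""
    pending = defaultdict(int)  # (instance, service) -> open invokes
    for instance, event, service in log:
        if event == "invoke":
            pending[(instance, service)] += 1
        elif event == "reply" and pending[(instance, service)] > 0:
            pending[(instance, service)] -= 1
    return {key for key, count in pending.items() if count > 0}

# Example trace: instance i1 violates the property for 'payment'.
trace = [
    ("i1", "invoke", "inventory"), ("i1", "reply", "inventory"),
    ("i1", "invoke", "payment"),   # no matching reply ever logged
    ("i2", "invoke", "payment"),   ("i2", "reply", "payment"),
]
print(check_invoke_reply(trace))  # {('i1', 'payment')}
```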

    Service-oriented coordination platform for technology-enhanced learning

    It is currently difficult to coordinate learning processes, not only because multiple stakeholders are involved (such as students, teachers, administrative staff, and technical staff), but also because these processes are driven by sophisticated rules (such as rules on how to provide learning material, how to assess students’ progress, and how to share educational responsibilities). This is one of the reasons for the slow progress in technology-enhanced learning. Consequently, there is a clear demand for technological facilitation of the coordination of learning processes. In this work, we suggest solution directions based on SOA (Service-Oriented Architecture). In particular, we propose a coordination service pattern consistent with SOA and based on requirements that follow from an analysis of both learning processes and potentially useful support technologies. We present the service pattern considering both functional and non-functional issues, and we address policy enforcement as well. Finally, we complement our proposed architecture-level solution directions with an example. The example illustrates our ideas and is also used to identify: (i) a short list of educational IT services; and (ii) related non-functional concerns; both will be considered in future work.
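    To make the coordination-with-policy-enforcement idea more concrete, here is a minimal sketch of a coordinator that dispatches learning-process events to registered services only after all applicable policies approve them. The Coordinator class, the event format, and the sample teacher-only rule are hypothetical illustrations, not interfaces from the paper.

```python
# Hypothetical sketch: a coordination service that enforces policies before
# dispatching learning-process events to registered educational IT services.
from typing import Callable

Event = dict          # e.g. {"type": "publish_grade", "role": "teacher"}
Policy = Callable[[Event], bool]

class Coordinator:
    def __init__(self):
        self._handlers = {}   # event type -> service callback
        self._policies = []   # checked before any dispatch

    def register(self, event_type: str, handler: Callable[[Event], None]):
        self._handlers[event_type] = handler

    def add_policy(self, policy: Policy):
        self._policies.append(policy)

    def dispatch(self, event: Event):
        if not all(policy(event) for policy in self._policies):
            raise PermissionError(f"policy rejected {event['type']}")
        self._handlers[event["type"]](event)

# Usage: only teachers may publish grades.
c = Coordinator()
c.register("publish_grade", lambda e: print("grade published:", e))
c.add_policy(lambda e: e["type"] != "publish_grade" or e["role"] == "teacher")
c.dispatch({"type": "publish_grade", "role": "teacher"})   # allowed
```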

    A conceptual architecture for semantic web services development and deployment

    Several extensions of the Web Services Framework (WSF) have been proposed. Combining it with Semantic Web technologies introduces a notion of semantics that can enhance scalability through automation. The composition of services into processes is an equally important issue. Ontology technology, the core of the Semantic Web, can be the central building block of such an extension. As the development of service-based software systems within the WSF gains importance, we present a conceptual architecture for ontology-based Web service development and deployment, and show how ontologies can integrate models, languages, infrastructure, and activities within this architecture to support reuse and composition of semantic Web services.
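    A minimal sketch of how an ontology can support service discovery for composition: a service matches a request if its advertised output concept equals, or is subsumed by, the requested concept. The tiny subclass hierarchy and service registry below are invented for illustration; a real deployment would use a full ontology language such as OWL rather than a dict.

```python
# Hypothetical sketch: ontology-assisted service matching via a toy
# subsumption check over an invented concept hierarchy.
SUBCLASS_OF = {             # child concept -> parent concept
    "CreditCardPayment": "Payment",
    "Payment": "FinancialTransaction",
}

def subsumed_by(concept, ancestor):
    """True if 'concept' equals 'ancestor' or is a (transitive) subconcept."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = SUBCLASS_OF.get(concept)
    return False

services = {                # advertised service descriptions (invented)
    "payCard": {"output": "CreditCardPayment"},
    "audit":   {"output": "Report"},
}

def discover(requested_output):
    return [name for name, desc in services.items()
            if subsumed_by(desc["output"], requested_output)]

print(discover("Payment"))  # ['payCard']
```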

    A framework for selecting workflow tools in the context of composite information systems

    When an organization faces the need to integrate workflow-related activities into its information system, it becomes necessary to have at hand a well-defined informational model to use as a framework for determining the selection criteria onto which the requirements of the organization can be mapped. Some proposals provide such a framework, notably the WfMC reference model, but they are designed to be applicable when workflow tools are selected independently from other software, starting from a set of well-known requirements. Often this is not the case: workflow facilities are needed as part of the procurement of a larger, composite information system, and therefore the general goals of the system have to be analyzed, assigned to its individual components, and further detailed. In this paper we propose the MULTSEC method, which analyzes the initial goals of the system, determines the types of components that form the system architecture, builds quality models for each type, and then maps the goals into detailed requirements that can be measured using quality criteria. We develop in some detail the quality model (compliant with the ISO/IEC 9126-1 quality standard) for the workflow type of tools; we show how the quality model can be used to refine and clarify the requirements in order to guarantee a highly reliable selection result; and we use it to evaluate two particular workflow solutions available on the market (kept anonymous in the paper). We ground our proposal in a particular selection experience in which we have recently been involved, namely the procurement of a document management subsystem to be integrated into an academic data management information system for our university.
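    The final mapping step can be pictured with a small sketch: once goals have been refined into measurable quality criteria, candidate tools are compared by weighting those criteria. The three attributes, their weights, and the scores of the two anonymous tools below are made-up placeholders, not figures from the paper.

```python
# Hypothetical sketch: weighted-sum comparison of candidate workflow tools
# against quality criteria derived from system goals (values are invented).
weights = {"reliability": 0.5, "interoperability": 0.3, "usability": 0.2}

candidates = {   # criterion scores normalised to [0, 1]
    "tool_A": {"reliability": 0.9, "interoperability": 0.6, "usability": 0.7},
    "tool_B": {"reliability": 0.7, "interoperability": 0.9, "usability": 0.8},
}

def score(tool_scores):
    return sum(weights[c] * tool_scores[c] for c in weights)

for name, s in candidates.items():
    print(f"{name}: {score(s):.2f}")      # tool_A: 0.77, tool_B: 0.78
print("selected:", max(candidates, key=lambda n: score(candidates[n])))
```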

    A look at cloud architecture interoperability through standards

    Enabling cloud infrastructures to evolve into a transparent platform while preserving integrity raises interoperability issues: how components are connected needs to be addressed. Interoperability requires standard data models and communication encoding technologies compatible with the existing Internet infrastructure. To reduce vendor lock-in, cloud computing must adopt universal strategies regarding standards, interoperability, and portability. Open standards are of critical importance and need to be embedded into interoperability solutions. Interoperability is determined at the data level as well as the service level; the corresponding modelling standards and integration solutions are analysed.
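    A minimal sketch of data-level interoperability under these assumptions: vendor-specific resource descriptions are mapped onto one shared model, so consumers never depend on a vendor format. The field names and "vendors" are invented for illustration; a real platform would target an open standard such as OCCI or OVF.

```python
# Hypothetical sketch: adapters normalise vendor-specific VM records into a
# single common data model (all formats below are invented).
COMMON_FIELDS = ("name", "cpu_cores", "memory_mb")

def from_vendor_a(record):
    return {"name": record["vmName"], "cpu_cores": record["vcpus"],
            "memory_mb": record["ramMB"]}

def from_vendor_b(record):
    return {"name": record["label"], "cpu_cores": record["cores"],
            "memory_mb": record["memoryGB"] * 1024}

adapters = {"vendorA": from_vendor_a, "vendorB": from_vendor_b}

def normalise(vendor, record):
    vm = adapters[vendor](record)
    assert set(vm) == set(COMMON_FIELDS)   # the shared contract
    return vm

print(normalise("vendorA", {"vmName": "web-1", "vcpus": 2, "ramMB": 4096}))
print(normalise("vendorB", {"label": "db-1", "cores": 4, "memoryGB": 8}))
```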

    Designing Traceability into Big Data Systems

    Providing an appropriate level of accessibility and traceability to data or process elements (so-called Items) in large volumes of data, often Cloud-resident, is an essential requirement in the Big Data era. Enterprise-wide data systems need to be designed from the outset to support the use of such Items across the spectrum of business use, rather than from any specific application view. The design philosophy advocated in this paper is to drive the design process using a so-called description-driven approach, which enriches models with meta-data and descriptions and focuses the design process on Item re-use, thereby promoting traceability. Details are given of the description-driven design of Big Data systems at CERN, in health informatics, and in business process management. Evidence is presented that the approach leads to design simplicity and consequent ease of management, thanks to loose typing and the adoption of a unified approach to Item management and usage.
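    A loosely typed sketch of the description-driven idea, with invented names throughout: each Item carries machine-readable meta-data next to its payload, and every use is appended to a trace from which lineage can later be reconstructed. The class and fields are illustrative assumptions, far simpler than the systems the paper describes.

```python
# Hypothetical sketch: an Item enriched with a description (meta-data) and a
# usage trace, supporting traceability across applications.
from datetime import datetime, timezone

class Item:
    def __init__(self, payload, description):
        self.payload = payload          # loosely typed: anything goes
        self.description = description  # meta-data describing the payload
        self.trace = []                 # (timestamp, activity) history

    def record_use(self, activity):
        self.trace.append((datetime.now(timezone.utc).isoformat(), activity))

run = Item(payload=[1.2, 3.4],
           description={"kind": "sensor-readings", "unit": "mV",
                        "source": "det-07"})
run.record_use("calibration")
run.record_use("analysis-job-42")
for when, what in run.trace:
    print(when, what)
```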

    Management and Service-aware Networking Architectures (MANA) for Future Internet Position Paper: System Functions, Capabilities and Requirements

    Get PDF
    Future Internet (FI) research and development threads have recently been gaining momentum all over the world, and the international race to create a new-generation Internet is in full swing: GENI, Asia Future Internet, Future Internet Forum Korea, and the European Union Future Internet Assembly (FIA). This position paper identifies research orientations with a time horizon of 10 years, together with the key challenges for the capabilities in the Management and Service-aware Networking Architectures (MANA) part of the Future Internet, allowing for parallel and federated Internet(s).