
    Intermediate CONNECT Architecture

    Interoperability remains a fundamental challenge when connecting heterogeneous systems that encounter and spontaneously communicate with one another in pervasive computing environments. This challenge is exacerbated by the highly heterogeneous technologies employed by each of the interacting parties, i.e., in terms of hardware, operating system, middleware protocols, and application protocols. The key aim of the CONNECT project is to remove this heterogeneity barrier and achieve universal interoperability. Here we report on the activities of WP1 in developing the CONNECT architecture that will underpin this solution. In this respect, we present the following key contributions from the second year. Firstly, the intermediary CONNECT architecture, which presents a more concrete view of the technologies and principles employed to enable interoperability between heterogeneous networked systems. Secondly, the design and implementation of the discovery enabler, with emphasis on the approaches taken to match compatible networked systems. Thirdly, the realisation of CONNECTors that can be deployed in the environment; we provide domain-specific language solutions to generate and translate between middleware protocols. Fourthly, we highlight the role of ontologies within CONNECT and demonstrate how they crosscut all functionality within the CONNECT architecture.
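
    The CONNECTors mentioned above translate between heterogeneous middleware protocols. As a rough, hypothetical TypeScript sketch of that idea (not the project's actual DSL or generated code), the following mediator converts messages between two invented formats through a common abstract representation; all interface and field names are assumptions.

```ts
// Hypothetical sketch of a CONNECTor that mediates between two middleware
// message formats via a common abstract message; the interfaces and field
// names below are illustrative assumptions, not the project's DSL output.

// Common intermediate representation.
interface AbstractMessage {
  operation: string;
  payload: Record<string, unknown>;
}

// A middleware-specific codec parses native messages into the abstract form
// and serialises abstract messages back into the native form.
interface MiddlewareCodec<Native> {
  parse(native: Native): AbstractMessage;
  serialise(message: AbstractMessage): Native;
}

// Two invented "native" formats standing in for heterogeneous protocols.
type SoapLikeMessage = { action: string; body: Record<string, unknown> };
type RestLikeMessage = { method: string; path: string; json: Record<string, unknown> };

const soapCodec: MiddlewareCodec<SoapLikeMessage> = {
  parse: (m) => ({ operation: `${m.action}`, payload: m.body }),
  serialise: (m) => ({ action: m.operation, body: m.payload }),
};

const restCodec: MiddlewareCodec<RestLikeMessage> = {
  parse: (m) => ({ operation: `${m.method} ${m.path}`, payload: m.json }),
  serialise: (m) => {
    const [method = "GET", path = "/"] = m.operation.split(" ");
    return { method, path, json: m.payload };
  },
};

// The CONNECTor itself: translate an incoming message of one protocol into
// the equivalent message of the other.
function mediate<A, B>(from: MiddlewareCodec<A>, to: MiddlewareCodec<B>, incoming: A): B {
  return to.serialise(from.parse(incoming));
}

// Usage: a SOAP-like request becomes a REST-like one.
console.log(mediate(soapCodec, restCodec, { action: "GET /weather", body: { city: "Paris" } }));
```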

    A Resource Publication and Discovery Framework and Broker-Based Architecture for Network Virtualization Environment

    The Internet has achieved phenomenal success over the past few decades. However, the increasing demands on Internet usage and the rapid evolution of the applications and services provided over the Internet have demonstrated that the current Internet architecture is unsuitable for supporting many types of applications. Moreover, its ubiquity and multi-provider nature make it nearly impossible to introduce radical changes or improvements without coordination and consensus among many providers. Thus, any technological change to the current Internet architecture could result in unintended consequences for overall Internet usage. Network virtualization is considered a promising, yet challenging, solution to overcome these limitations. It commonly refers to the creation of several isolated logical networks that can coexist on the same shared physical network infrastructure. Its key concept is to enable several network architectures to run concurrently in a multi-role-oriented environment in which the role of the traditional Internet Service Provider (ISP) is decoupled into several roles such as infrastructure provider (InP), virtual network provider (VNP) and service provider (SP). Despite the promising benefits, this concept is associated with many challenges. These include, among others, the description, publication and discovery of the resources on which virtual networks are deployed. In this thesis, we define a broker-based architecture that provides functions for publishing, discovering and negotiating, as well as instantiating and managing, resources in a network virtualization environment. We also propose an information model that assists the various providers in describing the resources and services they offer, and we implement a proof-of-concept prototype to demonstrate the feasibility of the proposed architecture. Moreover, we conduct extensive experiments to evaluate the performance and scalability of the implemented system.
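
    To make the publication and discovery functions of such a broker more concrete, the following TypeScript sketch shows a minimal in-memory broker over a simplified, assumed information model; it is illustrative only and does not reproduce the thesis' architecture or prototype.

```ts
// Illustrative sketch only: an in-memory broker through which infrastructure
// providers (InPs) publish virtual resources and virtual network providers
// (VNPs) discover them. The information model is a simplified assumption,
// not the thesis' actual model.

interface ResourceDescription {
  providerId: string;               // the InP offering the resource
  nodeType: "router" | "host";
  cpuCores: number;
  bandwidthMbps: number;
  location: string;
}

interface DiscoveryQuery {
  nodeType?: "router" | "host";
  location?: string;
  minCpuCores?: number;
  minBandwidthMbps?: number;
}

class ResourceBroker {
  private catalogue: ResourceDescription[] = [];

  // Publication: an InP advertises a resource it can host.
  publish(resource: ResourceDescription): void {
    this.catalogue.push(resource);
  }

  // Discovery: a VNP asks for resources that meet its minimum requirements.
  discover(query: DiscoveryQuery): ResourceDescription[] {
    return this.catalogue.filter((r) =>
      (query.nodeType === undefined || r.nodeType === query.nodeType) &&
      (query.location === undefined || r.location === query.location) &&
      (query.minCpuCores === undefined || r.cpuCores >= query.minCpuCores) &&
      (query.minBandwidthMbps === undefined || r.bandwidthMbps >= query.minBandwidthMbps));
  }
}

// Usage: two providers publish, a VNP discovers matching resources.
const broker = new ResourceBroker();
broker.publish({ providerId: "InP-A", nodeType: "router", cpuCores: 8, bandwidthMbps: 1000, location: "eu-west" });
broker.publish({ providerId: "InP-B", nodeType: "host", cpuCores: 16, bandwidthMbps: 500, location: "eu-west" });
console.log(broker.discover({ nodeType: "router", minBandwidthMbps: 500 }));
```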

    Revised CONNECT Architecture

    Interoperability remains a fundamental challenge when connecting heterogeneous systems that encounter and spontaneously communicate with one another in pervasive computing environments. This challenge is exacerbated by the highly heterogeneous technologies employed by each of the interacting parties, i.e., in terms of hardware, operating system, middleware protocols, and application protocols. The key aim of the CONNECT project is to remove this heterogeneity barrier and achieve universal interoperability. Here we report on the revised CONNECT architecture, highlighting the work carried out to integrate the CONNECT enablers developed by the different partners; in particular, we present the progress of this work towards a finalised concrete architecture. In the third year this architecture has been enhanced to: i) produce concrete CONNECTors, ii) match networked systems based upon their goals and intent, and iii) use learning technologies to discover the affordance of a system. We also report on the application of the CONNECT approach to streaming-based systems, and further consider the exploitation of CONNECT in the mobile environment.
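
    As a loose illustration of goal-based matching (assumptions only, far simpler than CONNECT's ontology- and behaviour-based matching), the TypeScript sketch below pairs two networked systems when their affordances coincide and each side's required capabilities are provided by the other.

```ts
// Rough sketch (assumed structures only) of matching two networked systems by
// goal: a match requires the same affordance and mutually satisfied capabilities.

interface NetworkedSystem {
  name: string;
  affordance: string;             // high-level purpose, e.g. "photo-sharing"
  requiredCapabilities: string[]; // what the system needs from a peer
  providedCapabilities: string[]; // what the system offers
}

function goalsMatch(a: NetworkedSystem, b: NetworkedSystem): boolean {
  const sameDomain = a.affordance === b.affordance;
  const aSatisfied = a.requiredCapabilities.every((c) => b.providedCapabilities.includes(c));
  const bSatisfied = b.requiredCapabilities.every((c) => a.providedCapabilities.includes(c));
  return sameDomain && aSatisfied && bSatisfied;
}

// Usage: a client and a service with compatible goals.
const client: NetworkedSystem = {
  name: "MobileApp",
  affordance: "photo-sharing",
  requiredCapabilities: ["uploadPhoto"],
  providedCapabilities: [],
};
const service: NetworkedSystem = {
  name: "GalleryService",
  affordance: "photo-sharing",
  requiredCapabilities: [],
  providedCapabilities: ["uploadPhoto", "listAlbums"],
};
console.log(goalsMatch(client, service)); // true
```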

    Enhancement of the usability of SOA services for novice users

    Recently, the automation of service integration has provided a significant advantage in delivering services to novice users. This art of integrating various services is known as service composition, and its main purpose is to simplify the development process for web applications and facilitate the reuse of services. It is one of the paradigms that delivers services to end-users (i.e., service provisioning) through the outsourcing of web content, and it requires users to share and reuse services in more collaborative ways. Most service composers are effective at enabling the integration of web content, but they do not enable universal access across different groups of users. This is because existing content aggregators require complex interactions to create web applications (e.g., using the Web Services Business Process Execution Language (WS-BPEL)); as a result, not all users are able to use such tools. This trend demands changes in the web tools that end-users use to gain and share information. Hence, this research uses Mashups as a service composition technique to allow novice users to integrate publicly available Service Oriented Architecture (SOA) services with minimal active web application development. Mashups, being platforms that integrate disparate web Application Programming Interfaces (APIs) to create user-defined web applications, present a great opportunity for service provisioning. However, their usability for novice users remains unvalidated, since Mashup tools are not easy to use: they require basic programming skills, which makes the process of designing and creating Mashups difficult. This is because Mashup tools access heterogeneous web content using public web APIs, and the process of integrating them becomes complex since web APIs are tailored by different vendors. Moreover, the design of Mashup editors is unnecessarily complex; as a result, users do not know where to start when creating Mashups. This research addresses the gap between Mashup tools and usability by designing and implementing a semantically enriched Mashup tool to discover, annotate and compose APIs, thereby improving the utilization of SOA services by novice users. The researchers conducted an analysis of existing Mashup tools to identify the challenges and weaknesses experienced by novice Mashup users. The findings from this requirements analysis formulated the system usability requirements that informed the design and implementation of the proposed Mashup tool. The proposed architecture addresses three layers: composition, annotation and discovery. The researchers developed a simple Mashup tool, referred to as soa-Services Provisioner (SerPro), that allows novice users to create web applications flexibly; its usability and effectiveness were validated. The proposed Mashup tool enhanced the usability of SOA services, since data analysis and results showed that it was usable by novice users, achieving a System Usability Scale (SUS) score of 72.08. Furthermore, this research discusses the research limitations and future work for further improvements.
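
    The annotation and discovery layers can be pictured with the following hypothetical TypeScript sketch of a semantically annotated API registry; the registry, concepts and endpoint are invented for illustration and are not SerPro's actual design.

```ts
// Minimal sketch of the annotate/discover idea behind a semantically enriched
// Mashup tool. The registry, concepts and APIs here are hypothetical.

interface ApiDescription {
  name: string;
  endpoint: string;
  annotations: string[]; // semantic concepts attached to the API
}

class AnnotatedApiRegistry {
  private apis: ApiDescription[] = [];

  register(api: ApiDescription): void {
    this.apis.push(api);
  }

  // Annotation layer: attach a semantic concept to a registered API.
  annotate(name: string, concept: string): void {
    const api = this.apis.find((a) => a.name === name);
    if (api && !api.annotations.includes(concept)) api.annotations.push(concept);
  }

  // Discovery layer: a novice user searches by concept rather than by reading
  // heterogeneous API documentation.
  discover(concept: string): ApiDescription[] {
    return this.apis.filter((a) =>
      a.annotations.some((c) => c.toLowerCase() === concept.toLowerCase()));
  }
}

// Usage: the composition layer would then wire the discovered APIs together.
const registry = new AnnotatedApiRegistry();
registry.register({ name: "OpenWeather", endpoint: "https://api.example.com/weather", annotations: [] });
registry.annotate("OpenWeather", "WeatherForecast");
console.log(registry.discover("weatherforecast"));
```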

    Achieving Autonomic Web Service Compositions with Models at Runtime

    Over the last years, Web services have become increasingly popular. This is because they allow businesses to share data and business process (BP) logic through a programmatic interface across networks. In order to reach the full potential of Web services, they can be combined to achieve specific functionalities. Web services run in complex contexts where arising events may compromise the quality of the system (e.g., a sudden security attack). As a result, it is desirable to count on mechanisms to adapt Web service compositions (or simply service compositions) according to problematic events in the context. Since critical systems may require prompt responses, manual adaptations are unfeasible in large and intricate service compositions. Thus, it is suitable to have autonomic mechanisms to guide their self-adaptation. One way to achieve this is by implementing variability constructs at the language level. However, this approach may become tedious, difficult to manage, and error-prone as the number of configurations for the service composition grows. The goal of this thesis is to provide a model-driven framework to guide autonomic adjustments of context-aware service compositions. This framework spans design time and runtime to face arising known and unknown context events (i.e., foreseen and unforeseen at design time) in the closed and open worlds respectively. At design time, we propose a methodology for creating the models that guide autonomic changes. Since Service-Oriented Architecture (SOA) lacks support for systematic reuse of service operations, we represent service operations as Software Product Line (SPL) features in a variability model. As a result, our approach can support the construction of service composition families in mass-production environments. In order to reach optimum adaptations, the variability model and its possible configurations are verified at design time using Constraint Programming (CP). At runtime, when problematic events arise in the context, the variability model is leveraged to guide autonomic changes of the service composition. The activation and deactivation of features in the variability model result in changes to a composition model that abstracts the underlying service composition. Changes in the variability model are reflected in the service composition by adding or removing fragments of Web Services Business Process Execution Language (WS-BPEL) code, which are deployed at runtime. Model-driven strategies guide the safe migration of running service composition instances. Under the closed-world assumption, the possible context events are fully known at design time. These events will eventually trigger the dynamic adaptation of the service composition. Nevertheless, it is difficult to foresee all the possible situations arising in the uncertain contexts where service compositions run. Therefore, we extend our framework to cover the dynamic evolution of service compositions to deal with unexpected events in the open world. If model adaptations cannot solve uncertainty, the supporting models self-evolve according to abstract tactics that preserve the expected requirements.

    Alférez Salinas, GH. (2013). Achieving Autonomic Web Service Compositions with Models at Runtime [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34672
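
    The interplay between the variability model and the running composition can be sketched as follows. This is an illustrative TypeScript approximation under assumed feature names, constraints and fragment identifiers, not the thesis' implementation (which verifies configurations with Constraint Programming and deploys WS-BPEL fragments).

```ts
// Illustrative approximation only: a variability model whose feature
// activations are checked against constraints and then reflected onto the
// composition by selecting process fragments. All names are assumptions.

interface VariabilityModel {
  features: string[];
  requires: Array<[string, string]>; // [a, b]: if a is active, b must be active
  excludes: Array<[string, string]>; // [a, b]: a and b may never both be active
}

// Design-time style check of a candidate configuration against the model.
function isValidConfiguration(model: VariabilityModel, active: Set<string>): boolean {
  const requiresOk = model.requires.every(([a, b]) => !active.has(a) || active.has(b));
  const excludesOk = model.excludes.every(([a, b]) => !(active.has(a) && active.has(b)));
  return requiresOk && excludesOk;
}

// Runtime side: each feature maps to a fragment of the executable composition
// (standing in for a deployable WS-BPEL fragment).
const fragmentFor = new Map<string, string>([
  ["Payment", "fragment-invoke-payment-service"],
  ["FraudCheck", "fragment-invoke-fraud-check"],
  ["EncryptedChannel", "fragment-enable-ws-security"],
]);

// Returns the fragments that should be present after adapting to the new
// feature selection, rejecting selections that violate the model.
function reconfigure(model: VariabilityModel, active: Set<string>): string[] {
  if (!isValidConfiguration(model, active)) {
    throw new Error("Configuration violates the variability model");
  }
  const fragments: string[] = [];
  active.forEach((feature) => {
    const fragment = fragmentFor.get(feature);
    if (fragment) fragments.push(fragment);
  });
  return fragments;
}

// Usage: a context event (e.g., a security alert) activates FraudCheck.
const model: VariabilityModel = {
  features: ["Payment", "FraudCheck", "EncryptedChannel"],
  requires: [["FraudCheck", "Payment"]],
  excludes: [],
};
console.log(reconfigure(model, new Set(["Payment", "FraudCheck"])));
```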

    Development of an integrated product information management system

    This thesis reports on a research project, undertaken over a four-year period, investigating and developing a software framework and application for integrating and managing building product information for construction engineering. The research involved an extensive literature review, observation of industry practices, and interviews with construction industry practitioners and systems implementers to determine how best to represent and present product information to support the construction process. Applicable product models for information representation were reviewed and evaluated to determine their present suitability, and the IFC product model was found to be the most applicable. Investigations of technologies supporting the product model led to the development of a software tool, the IFC Assembly Viewer, which aided further investigations into the suitability of the product model (in its current state) for the exchange and sharing of product information. A software framework, or reusable software design and application, called the PROduct Information Management System (PROMIS), was developed based on a non-standard product model, but with the flexibility to work with the IFC product model once it is sufficiently mature. The software comprises three subsystems, namely ProductWeb, ModelManager.NET and Product/Project Service (or P2Service). The key features of this system are shared project databases, parametric product specification, integration of product information sources, and application interaction and integration through interface components. PROMIS was applied to and tested with a modular construction business for the management of product information and for the integration of product and project information through the design and construction (production) process.
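
    A parametric product specification, one of the features listed above, might be pictured with the following hypothetical TypeScript sketch; the types and parameter names are illustrative and are not drawn from PROMIS or the IFC schema.

```ts
// Hypothetical sketch of a parametric product specification.

interface ParameterDefinition {
  name: string;
  unit: string;
  defaultValue: number;
}

interface ProductType {
  name: string;
  parameters: ParameterDefinition[];
}

// Specifying a product for a project fixes its parameter values, falling
// back to the defaults declared by the product type.
function specifyProduct(type: ProductType, overrides: Partial<Record<string, number>>) {
  const values: Record<string, number> = {};
  for (const p of type.parameters) {
    values[p.name] = overrides[p.name] ?? p.defaultValue;
  }
  return { product: type.name, values };
}

// Usage: a wall panel type specialised for one project.
const wallPanel: ProductType = {
  name: "WallPanel",
  parameters: [
    { name: "length", unit: "mm", defaultValue: 2400 },
    { name: "thickness", unit: "mm", defaultValue: 100 },
  ],
};
console.log(specifyProduct(wallPanel, { length: 3000 }));
```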

    Adaptive object management for distributed systems

    This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirement for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992) and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding: components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility to explore bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects. The overhead of runtime binding and remote messaging will severely reduce performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment. Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components. Visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state. Adaptive management is made possible by changing the management policy according to this state; such policy changes affect the location of objects, their bindings, and the choice of messaging system.
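
    The idea of adapting the management policy to the environmental state can be illustrated with the following TypeScript sketch of a toy configuration model; the component names, binding registry and the two messaging policies are assumptions, not the thesis' architecture.

```ts
// Sketch only: a toy configuration model that manages bindings between
// pluggable components and adapts the messaging policy to the environment
// (co-located vs. remote components). All names are hypothetical.

interface Component {
  id: string;
  node: string; // which machine/process hosts the component
  receive(message: string): void;
}

type MessagingPolicy = (from: Component, to: Component, message: string) => void;

// Two interchangeable policies: direct calls for co-located components,
// and a (simulated) remote channel otherwise.
const localCall: MessagingPolicy = (_from, to, message) => to.receive(message);
const remoteCall: MessagingPolicy = (_from, to, message) => {
  // A real system would marshal the message and send it over the network here.
  console.log(`[network] delivering to ${to.id}`);
  to.receive(message);
};

class ConfigurationModel {
  private bindings = new Map<string, [Component, Component]>();

  bind(name: string, from: Component, to: Component): void {
    this.bindings.set(name, [from, to]);
  }

  // Adaptive management: the policy is chosen per binding from the
  // environmental state (here, simply the components' locations).
  send(name: string, message: string): void {
    const binding = this.bindings.get(name);
    if (!binding) throw new Error(`Unknown binding: ${name}`);
    const [from, to] = binding;
    const policy = from.node === to.node ? localCall : remoteCall;
    policy(from, to, message);
  }
}

// Usage
const sensor: Component = { id: "sensor", node: "plant-1", receive: (m) => console.log("sensor got", m) };
const logger: Component = { id: "logger", node: "control-room", receive: (m) => console.log("logger got", m) };
const config = new ConfigurationModel();
config.bind("sensor->logger", sensor, logger);
config.send("sensor->logger", "temperature=80");
```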

    Designing and prototyping WebRTC and IMS integration using open source tools

    WebRTC, or Web Real-time Communications, is a collection of web standards that detail the mechanisms, architectures and protocols that work together to deliver real-time multimedia services to the web browser. It represents a significant shift from the historical approach of using browser plugins, which, over time, have proven cumbersome and problematic. Furthermore, it adopts various Internet standards in areas such as identity management, peer-to-peer connectivity, data exchange and media encoding, to provide a system that is truly open and interoperable. Given that WebRTC enables the delivery of multimedia content to any Internet Protocol (IP)-enabled device capable of hosting a web browser, this technology could potentially be used and deployed over millions of smartphones, tablets and personal computers worldwide. This service and device convergence remains an important goal of telecommunication network operators who seek to enable it through a converged network that is based on the IP Multimedia Subsystem (IMS). IMS is an IP-based subsystem that sits at the core of a modern telecommunication network and acts as the main routing substrate for media services and applications such as those that WebRTC realises. The combination of WebRTC and IMS represents an attractive coupling, and as such, a protracted investigation could help to answer important questions around the technical challenges involved in their integration and the merits of the various design alternatives that present themselves. This thesis is the result of such an investigation and culminates in the presentation of a detailed architectural model that is validated with a prototypical implementation in an open source testbed. The model is built on six requirements which emerge from an analysis of the literature, including previous interventions in IMS networks and a key technical report on design alternatives. Furthermore, this thesis argues that the client architecture requires support for web-oriented signalling, identity and call handling techniques, leading to the potential for IMS networks to natively support these techniques as operator networks continue to grow and develop. The proposed model advocates the use of SIP over WebSockets for signalling and DTLS-SRTP for media to enable one-to-one communication, and can be extended through additional functions, resulting in a modular architecture. The model was implemented using open source tools which were assembled to create an experimental network testbed, and tests were conducted demonstrating successful cross-domain communications under various conditions. The thesis has a strong focus on enabling ordinary software developers to assemble a prototypical network such as the one presented, and aims to enable experimentation in application use cases for integrated environments.
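
    The signalling/media split advocated by the model can be sketched in browser-side TypeScript as follows; the gateway URL and the JSON envelope are assumptions (a full client would frame real SIP messages over the WebSocket, as in RFC 7118), while the media path relies on RTCPeerConnection, which WebRTC secures with DTLS-SRTP by default.

```ts
// Browser-side sketch (assumptions only): SIP-style signalling carried over a
// WebSocket to an assumed gateway, and media negotiated by RTCPeerConnection,
// which WebRTC protects with DTLS-SRTP by default. A full client would frame
// real SIP messages over the socket; a JSON envelope stands in for them here.

const signalling = new WebSocket("wss://ims-gateway.example.org/ws"); // hypothetical gateway
const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.org" }] });

// Relay local ICE candidates to the far end over the signalling channel.
pc.onicecandidate = (event) => {
  if (event.candidate) {
    signalling.send(JSON.stringify({ type: "candidate", candidate: event.candidate }));
  }
};

signalling.onopen = async () => {
  // Capture audio, attach it to the peer connection, and offer a session.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // In a real client this SDP would travel inside a SIP INVITE.
  signalling.send(JSON.stringify({ type: "invite", sdp: offer.sdp }));
};

signalling.onmessage = async (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === "answer") {
    await pc.setRemoteDescription({ type: "answer", sdp: msg.sdp });
  } else if (msg.type === "candidate") {
    await pc.addIceCandidate(msg.candidate);
  }
};
```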

    Interoperability of Enterprise Software and Applications
