
    A survey of QoS-aware web service composition techniques

    Web service composition can be briefly described as the process of aggregating services with disparate functionalities into a new composite service in order to meet the increasingly complex needs of users. The composition process has traditionally been concerned with combining services that offer disparate functionalities; over the years, however, the number of web services that exhibit similar functionality but varying Quality of Service (QoS) has increased significantly. The problem thus becomes how to select appropriate web services such that the QoS of the resulting composite service is maximized or, for attributes such as cost or response time, minimized. This selection problem is NP-hard. In this paper, a discussion of the concepts of web service composition and a holistic review of the service composition techniques proposed in the literature are presented. Our review spans several publications in the field and can serve as a road map for future research.
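    A minimal sketch of the selection problem the survey describes, assuming a sequential composition and a simple weighted-sum utility; the candidate services, QoS figures and weights below are hypothetical, not taken from the paper. One concrete service is chosen per abstract task so that the utility of the composite QoS is maximised.

        # QoS-aware service selection, illustrative only.
        from itertools import product

        # Hypothetical candidates: task -> list of (service name, QoS attributes).
        CANDIDATES = {
            "payment":  [("payA", {"response_ms": 120, "availability": 0.99}),
                         ("payB", {"response_ms": 300, "availability": 0.999})],
            "shipping": [("shipA", {"response_ms": 200, "availability": 0.95}),
                         ("shipB", {"response_ms": 150, "availability": 0.98})],
        }
        WEIGHTS = {"response_ms": -0.5, "availability": 0.5}  # negative weight penalises latency

        def utility(selection):
            # Sequential composition: response times add up, availabilities multiply.
            total_rt = sum(qos["response_ms"] for _, qos in selection)
            total_av = 1.0
            for _, qos in selection:
                total_av *= qos["availability"]
            return WEIGHTS["response_ms"] * total_rt / 1000 + WEIGHTS["availability"] * total_av

        def best_composition():
            # Exhaustive search over all combinations; real selectors use heuristic or
            # metaheuristic strategies because the general problem is NP-hard.
            return max(product(*CANDIDATES.values()), key=utility)

        for name, qos in best_composition():
            print(name, qos)

    Techniques surveyed in papers of this kind typically replace the exhaustive search with heuristics or metaheuristics to cope with the NP-hardness.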

    A Visual Design Method and its Application to High Reliability Hypermedia Systems

    This work addresses the problem of producing hypermedia documentation for applications that require high reliability, particularly technical documentation in safety-critical industries. One requirement of this application area is the availability of a task-based organisation, which can guide and monitor such activities as maintenance and repair. In safety-critical applications there must be some guarantee that such sequences are correctly presented. Conventional structuring and design methods for hypermedia systems do not allow such guarantees to be made. A formal design method based on a process algebra is proposed as a solution to this problem. Design methods of this kind need to be accessible to information designers. This is achieved by using a technique already familiar to them: the storyboard. By developing a storyboard notation that is syntactically equivalent to a process algebra, a bridge is made between information design and computer science, allowing formal analysis and refinement of the specification drafted by information designers. Process algebras produce imperative structures that do not map easily into the declarative formats used for some hypermedia systems, but they can be translated into concurrent programs. This translation process, into a language developed by the author called ClassiC, is illustrated, and the properties that make ClassiC a suitable implementation target are discussed. Other possible implementation targets are evaluated, and a comparative illustration is given of translation into another likely target, Java.
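    The guarantee described above concerns legal orderings of presentation steps. As a toy stand-in (not the thesis's process-algebra notation or its ClassiC translation), the sketch below encodes a maintenance procedure as an allowed-transition relation and checks that a navigation path through the hypermedia respects it; the step names are invented for illustration.

        # Checking that a navigation path follows the permitted task sequence.
        ALLOWED = {
            "start":         {"isolate_power"},
            "isolate_power": {"remove_panel"},
            "remove_panel":  {"inspect", "replace_part"},
            "inspect":       {"replace_part"},
            "replace_part":  {"refit_panel"},
            "refit_panel":   {"restore_power"},
        }

        def path_is_valid(path):
            """True if every consecutive pair of steps is an allowed transition."""
            return all(nxt in ALLOWED.get(cur, set()) for cur, nxt in zip(path, path[1:]))

        print(path_is_valid(["start", "isolate_power", "remove_panel",
                             "replace_part", "refit_panel", "restore_power"]))  # True
        print(path_is_valid(["start", "remove_panel"]))  # False: power was not isolated first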

    QoS Contract Negotiation in Distributed Component-Based Software

    Currently, several mature and commercial component models (e.g. EJB, .NET, COM+) exist on the market. These technologies were designed largely for applications with business-oriented non-functional requirements such as data persistence, confidentiality, and transactional support. They provide only limited support for the development of components and applications with non-functional properties (NFPs) like QoS (e.g. throughput, response time). The integration of QoS into component infrastructure requires, among other things, support for the specification, negotiation, and adaptation of components’ QoS contracts. This thesis focuses on contract negotiation. For applications in which the consideration of NFPs is essential (e.g. Video-on-Demand, eCommerce), a component-based solution demands the appropriate composition of the QoS contracts specified at the different ports of the collaborating components. The ports must be properly connected so that the QoS level required by one is matched by the QoS level provided by the other. Generally, QoS contracts of components depend on run-time resources (e.g. network bandwidth, CPU time) or quality attributes to be established dynamically, and they are usually specified in multiple QoS-Profiles. QoS contract negotiation enables the selection of appropriate concrete QoS contracts between collaborating components. In our approach, the component containers perform the contract negotiation at run-time. This thesis addresses the QoS contract negotiation problem by first modelling it as a constraint satisfaction optimization problem (CSOP). As a basis for this modelling, the provided and required QoS as well as the resource demand are specified at the component level. The notion of utility is applied to select a good solution according to some negotiation goal (e.g. the user’s satisfaction). We argue that performing QoS contract negotiation in multiple phases simplifies the negotiation process and makes it more efficient. Based on this phased approach, the thesis presents heuristic algorithms that comprise coarse-grained and fine-grained negotiations for collaborating components deployed on distributed nodes in the following scenarios: (i) single-client/single-server, (ii) multiple-client, and (iii) multi-tier scenarios. To motivate the problem as well as to validate the proposed approach, we have examined three componentized distributed applications: (i) video streaming, (ii) stock quote, and (iii) billing (to evaluate certain security properties). An experiment has been conducted to specify the QoS contracts of the collaborating components in one of the applications we studied. In a run-time system that implements our algorithm, we simulated different behaviors concerning: (i) the user’s QoS requirements and preferences, (ii) resource availability conditions on the client, the server, and the network, and (iii) the specified QoS-Profiles of the collaborating components. Under various conditions, the outcome of the negotiation confirms the claim we made with regard to obtaining a good solution.
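    A minimal sketch of the negotiation viewed as a constraint satisfaction optimisation problem, in the spirit of the abstract but not the thesis's actual algorithm: each side publishes a few QoS-Profiles, the containers keep only the pairs that satisfy the matching and resource constraints, and a utility function picks the best feasible pair. All profile values, resource figures and the frame-rate utility are invented for illustration.

        # Single-client/single-server QoS contract negotiation, illustrative only.
        SERVER_PROFILES = [            # provided frame rate, CPU share demanded
            {"fps": 15, "cpu": 0.2},
            {"fps": 25, "cpu": 0.4},
            {"fps": 30, "cpu": 0.7},
        ]
        CLIENT_PROFILES = [            # required frame rate, bandwidth demanded (Mbit/s)
            {"fps": 15, "bandwidth": 2.0},
            {"fps": 25, "bandwidth": 4.0},
            {"fps": 30, "bandwidth": 6.0},
        ]
        AVAILABLE = {"cpu": 0.5, "bandwidth": 5.0}   # current resource availability

        def utility(client, server):
            return min(client["fps"], server["fps"])  # user satisfaction grows with frame rate

        def negotiate():
            feasible = [
                (c, s)
                for c in CLIENT_PROFILES
                for s in SERVER_PROFILES
                if s["fps"] >= c["fps"]                       # provided level matches required level
                and s["cpu"] <= AVAILABLE["cpu"]              # server-side resource constraint
                and c["bandwidth"] <= AVAILABLE["bandwidth"]  # network resource constraint
            ]
            return max(feasible, key=lambda pair: utility(*pair)) if feasible else None

        print(negotiate())   # with the numbers above, the 25 fps profiles are selected

    Enumerating all profile pairs only works for small sets; the phased, heuristic negotiation described in the abstract exists precisely to avoid such exhaustive matching in larger, multi-client and multi-tier settings.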

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author’s and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.
    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)
    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan’s predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.
    Advanced Knowledge Technologies (AKT) aims to address this problem, to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.
    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.
    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central for the exploitation of those opportunities.
    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web as opposed to the creation and provision of technologies to manage knowledge.
    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.
    Ontologies will be a crucial tool for the SW. The AKT consortium brings a great deal of expertise on ontologies together, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.
    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.
    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
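    The ontology-handling issues mentioned above (merging independently created ontologies and eliminating conflicts of reference) can be pictured with a small RDF example. The sketch below is not an AKT tool; it merely merges two toy ontologies with rdflib and lists the terms that an owl:sameAs mapping declares to be co-referent. The namespaces and class names are invented.

        # Merging two small ontologies and surfacing co-referent terms (illustrative).
        from rdflib import Graph, Namespace
        from rdflib.namespace import OWL, RDF

        A = Namespace("http://example.org/a#")   # hypothetical source ontology 1
        B = Namespace("http://example.org/b#")   # hypothetical source ontology 2

        g1, g2 = Graph(), Graph()
        g1.add((A.Researcher, RDF.type, OWL.Class))
        g2.add((B.Academic, RDF.type, OWL.Class))
        g2.add((B.Academic, OWL.sameAs, A.Researcher))   # mapping between the vocabularies

        merged = g1 + g2   # graph merge: the union of both triple sets

        # Terms declared co-referent; applications would treat each pair as one concept.
        for s, _, o in merged.triples((None, OWL.sameAs, None)):
            print(f"{s} and {o} refer to the same concept")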

    Ubiquitous Computing

    The aim of this book is to give a treatment of the actively developed domain of Ubiquitous computing. Originally proposed by Mark D. Weiser, the concept of Ubiquitous computing enables real-time global sensing, context-aware information retrieval, multi-modal interaction with the user, and enhanced visualization capabilities. In effect, Ubiquitous computing environments give entirely new, almost futuristic abilities to observe and interact with our habitat at any time and from anywhere. In this domain, researchers are confronted with many foundational, technological and engineering issues that were not known before. Detailed cross-disciplinary coverage of these issues is needed today for further progress and a widening of the application range. This book collects twelve original works by researchers from eleven countries, clustered into four sections: Foundations, Security and Privacy, Integration and Middleware, and Practical Applications.

    Certifications of Critical Systems – The CECRIS Experience

    In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation and verification of critical systems, i.e. systems whose malfunctions or failures have critical consequences, both as risks to human life and as large economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on cost-effective verification and validation processes that were gained during work in the European research project CECRIS (Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on those aspects that turn out to be most difficult and most important for the current and future critical-systems industry: the effective use of methodologies, processes and tools. Starting from both the scientific and industrial state-of-the-art methodologies for system development and the impact of their usage on the verification, validation and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, setting guidelines to support engineers during the planning of the verification and validation phases. Topics covered include: Safety Assessment, Reliability Analysis, Critical Systems and Applications, Functional Safety, Dependability Validation, Dependable Software Systems, Embedded Systems, and System Certification.