
    Service broker for cloud service description language

    Cloud Service Description Language (CSDL) initiatives concentrate on deploying applications on various cloud platforms without modifying source code. Semantic topology and orchestration of applications give service providers the practical advantages of interoperability, portability and unified interfaces. However, this has also made it difficult for consumers to identify the appropriate services spread across a swarm of platforms. The advantage of a common CSDL, such as the Topology and Orchestration Specification for Cloud Applications (TOSCA), thus becomes problematic for consumers. Service providers offer different technical and business details, such as discovery, pricing, licensing or composition, depending on the deployment platform; service selection therefore becomes challenging and requires human effort. The service broker design is presented for the TOSCA framework only; however, the suggested scheme is generic and adaptable to accommodate similar CSDL standards.
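The selection problem the abstract describes — many providers deploying the same TOSCA-described service but differing in pricing and licensing — can be illustrated with a minimal sketch. The `ServiceOffer` record, its fields, and the selection policy (cheapest eligible offer) are all hypothetical assumptions for illustration; the paper's actual broker design is not specified here.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceOffer:
    """Hypothetical record of one provider's deployment of a TOSCA-described service."""
    provider: str
    platform: str
    monthly_price: float
    licenses: set = field(default_factory=set)

def select_offer(offers, required_licenses, budget):
    """Return the cheapest offer satisfying licensing and budget constraints, or None."""
    eligible = [o for o in offers
                if required_licenses <= o.licenses and o.monthly_price <= budget]
    return min(eligible, key=lambda o: o.monthly_price) if eligible else None

# Illustrative offers; names and figures are invented.
offers = [
    ServiceOffer("ProviderA", "OpenStack", 120.0, {"Apache-2.0"}),
    ServiceOffer("ProviderB", "AWS", 90.0, {"Apache-2.0", "MIT"}),
    ServiceOffer("ProviderC", "Azure", 60.0, {"GPL-3.0"}),
]
best = select_offer(offers, required_licenses={"Apache-2.0"}, budget=100.0)
```

A broker automating even this simple comparison removes the human effort the abstract identifies as the core problem.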

    A Case for a New IT Ecosystem: On-The-Fly Computing

    The complexity of development and deployment in today’s IT world is enormous. Despite the existence of so many pre-fabricated components, frameworks, cloud providers, etc., building IT systems remains a major challenge and most likely overtaxes even a single ambitious developer. Such development and deployment tasks are therefore spread over different team members with their own specializations. Nevertheless, not even highly competent IT personnel can easily succeed in developing and deploying a nontrivial application that comprises a multitude of different components running on different platforms (from frontend to backend). Current industry trends such as DevOps strive to keep development and deployment tasks tightly integrated. This, however, only partially addresses the underlying complexity of either of these two tasks. But would it not be desirable to simplify these tasks in the first place, enabling one person – maybe even a non-expert – to deal with all of them? Today’s approaches to the development and deployment of complex IT applications are not up to this challenge. “On-The-Fly Computing” offers an approach to tackle this challenge by providing complex IT services through largely automated configuration and execution. The configuration of such services is based on simple, flexibly combinable services that are provided by different software providers and traded in a market. This constitutes a highly relevant challenge for research in many branches of computer science, information systems, business administration, and economics. In this research note, we analyse which pieces of this new “On-The-Fly Computing” ecosystem already exist and where additional, often significant research efforts are necessary.
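The automated configuration of complex services from simple, combinable ones can be sketched as a search over a service catalogue. The catalogue, the type-matching rule, and the breadth-first composition strategy below are all assumptions for illustration; the On-The-Fly Computing programme's actual configuration mechanisms are far richer.

```python
from collections import deque

def compose(services, source, target):
    """Breadth-first search for a chain of services transforming `source` into `target`.
    `services` maps a service name to its (input_type, output_type) signature."""
    queue = deque([(source, [])])
    seen = {source}
    while queue:
        current, chain = queue.popleft()
        if current == target:
            return chain
        for name, (inp, out) in services.items():
            if inp == current and out not in seen:
                seen.add(out)
                queue.append((out, chain + [name]))
    return None  # no composition found

# Hypothetical market catalogue: service name -> (consumes, produces).
catalog = {
    "ocr":       ("image", "text"),
    "translate": ("text", "text_de"),
    "tts":       ("text_de", "audio"),
}
pipeline = compose(catalog, "image", "audio")
```

Even this toy matcher shows why the market metaphor matters: new services added to the catalogue immediately become candidates for automatically configured compositions.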

    A Resource Publication and Discovery Framework and Broker-Based Architecture for Network Virtualization Environment

    The Internet has achieved phenomenal success over the past few decades. However, the increasing demands on Internet usage and the rapid evolution of the applications and services provided over the Internet have demonstrated that the current Internet architecture is unsuitable for supporting many types of applications. Moreover, its ubiquity and multi-provider nature make the introduction of radical changes or improvements nearly impossible without coordination and consensus among many providers. Thus, any technological change to the current Internet architecture could result in unintended consequences for overall Internet usage. Network virtualization is considered a promising, yet challenging, solution to overcome these limitations. It commonly refers to the creation of several isolated logical networks that can coexist on the same shared physical network infrastructure. Its key concept is to enable several network architectures to run concurrently in a multi-role-oriented environment in which the role of the traditional Internet Service Provider (ISP) is decoupled into several roles such as infrastructure provider (InP), virtual network provider (VNP) and service provider (SP). Despite the promising benefits, this concept is associated with many challenges. These include, among others, the description, publication and discovery of the resources on which virtual networks are deployed. In this thesis, we define a broker-based architecture that provides functions for publishing, discovering and negotiating as well as instantiating and managing resources in a network virtualization environment. We propose an information model that assists various providers in describing the resources and services they offer, and we implemented a proof-of-concept prototype to demonstrate the feasibility of the proposed architecture. Moreover, we conducted extensive experiments to evaluate the performance and scalability of the implemented system.
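The publish/discover interaction between infrastructure providers and virtual network providers that the thesis brokers can be sketched minimally. The attribute set (CPU cores, bandwidth) and the matching rule are invented for illustration; the thesis's information model covers far more, including negotiation and instantiation.

```python
class ResourceBroker:
    """Minimal sketch of a broker registry: infrastructure providers (InPs) publish
    node resources; virtual network providers (VNPs) discover those meeting their
    constraints."""
    def __init__(self):
        self._registry = []

    def publish(self, provider, node_id, cpu_cores, bandwidth_mbps):
        """An InP advertises one substrate node and its capacities."""
        self._registry.append({"provider": provider, "node": node_id,
                               "cpu": cpu_cores, "bw": bandwidth_mbps})

    def discover(self, min_cpu, min_bw):
        """A VNP queries for nodes satisfying its minimum requirements."""
        return [r for r in self._registry
                if r["cpu"] >= min_cpu and r["bw"] >= min_bw]

broker = ResourceBroker()
broker.publish("InP-1", "n1", cpu_cores=8, bandwidth_mbps=1000)
broker.publish("InP-2", "n2", cpu_cores=2, bandwidth_mbps=100)
matches = broker.discover(min_cpu=4, min_bw=500)
```

Centralizing the registry in a broker is what lets VNPs query across multiple InPs without bilateral agreements with each one.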

    Responding to Cross Border Child Trafficking in South Asia: An Analysis of the Feasibility of a Technologically Enabled Missing Child Alert System

    This report examines the feasibility of a technologically enabled system to help respond to the phenomenon of cross-border child trafficking in South Asia, and makes recommendations on how to proceed with a pilot project in the selected areas of Bangladesh, Nepal and India. The study was commissioned by the Missing Child Alert (MCA) programme, an initiative led by Plan to address cross-border child trafficking in South Asia. The aim of the programme is to link existing institutions, mechanisms and resources in order to tackle the phenomenon from a regional perspective. To achieve this, Plan proposes to implement a technologically equipped, institutionalised system of alert that can assist in the rescue, rehabilitation, repatriation and reintegration of children who are at risk of, or are victims of, cross-border trafficking.

    Integrated intelligent systems for industrial automation: the challenges of Industry 4.0, information granulation and understanding agents

    The objective of this paper is to consider the challenges of the new automation paradigm Industry 4.0 and to review the state of the art in its enabling information and communication technologies, including cyber-physical systems, cloud computing, the Internet of Things and Big Data. Some ways of representing and analysing multi-dimensional, multi-faceted industrial Big Data are suggested. The fundamentals of Big Data processing using Granular Computing techniques are developed. The problem of constructing special cognitive tools to build artificial understanding agents for Integrated Intelligent Enterprises is also addressed.

    Context Aware Middleware Architectures: Survey and Challenges

    Context aware applications, which can adapt their behaviour to changing environments, are attracting more and more attention. To reduce the complexity of developing such applications, context aware middleware, which introduces context awareness into traditional middleware, provides a homogeneous interface with generic context management solutions. This paper surveys state-of-the-art context aware middleware architectures proposed from 2009 through 2015. First, preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features are explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security & privacy, context awareness level, and cloud-based big data analytics. The analysis shows that no existing context aware middleware architecture complies with all of these requirements. Finally, challenges are pointed out as open issues for future work.
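The core service such middleware offers applications — a context model plus notification when context changes — can be sketched as a small publish/subscribe store. The key-value context representation and the change-only notification policy are simplifying assumptions for illustration; the surveyed architectures use far richer modelling and reasoning.

```python
class ContextStore:
    """Toy key-value context store with change notification, illustrating the
    publish/subscribe pattern common to context aware middleware."""
    def __init__(self):
        self._context = {}
        self._subscribers = {}

    def subscribe(self, key, callback):
        """Register a callback fired whenever the value for `key` changes."""
        self._subscribers.setdefault(key, []).append(callback)

    def update(self, key, value):
        """Store a sensed context value; notify subscribers only on actual change."""
        old = self._context.get(key)
        self._context[key] = value
        if value != old:
            for cb in self._subscribers.get(key, []):
                cb(key, value)

events = []
store = ContextStore()
store.subscribe("location", lambda k, v: events.append((k, v)))
store.update("location", "office")
store.update("location", "office")   # unchanged value: no notification
store.update("location", "home")
```

Suppressing notifications for unchanged values is one tiny instance of the context-reasoning concerns the survey compares across architectures.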

    Current Trends and New Challenges of Databases and Web Applications for Systems Driven Biological Research

    The dynamic and rapidly evolving nature of systems-driven research imposes special requirements on the technology, approach, design and architecture of the computational infrastructure, including databases and Web applications. Several solutions have been proposed to meet these expectations, and novel methods have been developed to address the persisting problems of data integration. It is important for researchers to understand the different technologies and approaches: having familiarized themselves with the pros and cons of the existing technologies, researchers can exploit their capabilities to the maximum potential for integrating data. In this review we discuss the architecture, design and key technologies underlying some of the prominent databases and Web applications, describe their roles in the integration of biological data, and investigate some of the emerging design concepts and computational technologies that are likely to play a key role in the future of systems-driven biomedical research.

    Framework for Security Transparency in Cloud Computing

    The migration of sensitive data and applications from the on-premise data centre to a cloud environment increases cyber risks to users, mainly because the cloud environment is managed and maintained by a third party. In particular, the partial surrender of sensitive data and applications to a cloud environment creates numerous concerns related to a lack of security transparency. Security transparency involves the disclosure of information by cloud service providers about the security measures being put in place to protect assets and meet the expectations of customers. It establishes trust in the service relationship between cloud service providers and customers, and without evidence of continuous transparency, trust and confidence are affected and are likely to hinder extensive usage of cloud services. Also, insufficient security transparency is considered an added level of risk; it increases the difficulty of demonstrating conformance to customer requirements and of ensuring that cloud service providers adequately implement their security obligations. The research community has acknowledged the pressing need to address security transparency concerns, and although technical aspects of ensuring security and privacy have been researched widely, the focus on security transparency is still scarce. The relatively few existing studies mostly approach the issue of security transparency from the cloud providers’ perspective, while other works have contributed feasible techniques for comparison and selection of cloud service providers using metrics such as transparency and trustworthiness. However, there is still a shortage of research that focuses on improving security transparency from the cloud users’ point of view.
    In particular, there is still a gap in the literature that (i) dissects security transparency from the lens of conceptual knowledge up to implementation from organizational and technical perspectives and (ii) supports continuous transparency by enabling the vetting and probing of cloud service providers’ conformity to specific customer requirements. The significant growth in moving business to the cloud – due to its scalability and perceived effectiveness – underlines the dire need for research in this area. This thesis presents a framework that comprises the core conceptual elements that constitute security transparency in cloud computing. It contributes to the knowledge domain of security transparency in cloud computing as follows. Firstly, the research analyses the basics of cloud security transparency by exploring the notion and foundational concepts that constitute security transparency. Secondly, it proposes a framework which integrates various concepts from the requirements engineering domain, together with an accompanying process that can be followed to implement the framework. The framework and its process provide an essential set of conceptual ideas, activities and steps that can be followed at an organizational level to attain security transparency, based on the principles of industry standards and best practices. Thirdly, to ensure continuous transparency, the thesis proposes a tool that supports the collection and assessment of evidence from cloud providers, including the establishment of remedial actions for redressing deficiencies in cloud provider practices. The tool serves as a supplementary component of the proposed framework that enables continuous inspection of how predefined customer requirements are being satisfied. The thesis also validates the proposed security transparency framework and tool in terms of validity, applicability, adaptability, and acceptability using two different case studies. Feedback was collected from stakeholders and analysed using essential criteria such as ease of use, relevance, and usability. The result of the analysis illustrates the validity and acceptability of both the framework and the tool in enhancing security transparency in a real-world environment.
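The continuous-inspection step the thesis describes — vetting provider-supplied evidence against predefined customer requirements and deriving remedial actions for gaps — can be sketched as follows. The control names, expected values, and the exact-match comparison rule are hypothetical; the thesis's actual tool and requirement model are not reproduced here.

```python
def assess_transparency(requirements, evidence):
    """Compare customer security requirements against evidence reported by a cloud
    provider; return the satisfied controls and remedial actions for the gaps.
    Both arguments map control names to expected / observed values."""
    satisfied, remediation = [], []
    for control, expected in requirements.items():
        observed = evidence.get(control)
        if observed == expected:
            satisfied.append(control)
        else:
            remediation.append(
                f"{control}: expected {expected!r}, provider reports {observed!r}")
    return satisfied, remediation

# Illustrative requirements and provider evidence; values are invented.
requirements = {"encryption_at_rest": "AES-256", "audit_log_retention_days": 365}
evidence = {"encryption_at_rest": "AES-256", "audit_log_retention_days": 90}
ok, actions = assess_transparency(requirements, evidence)
```

Re-running such an assessment on each new evidence submission is one way a tool could realise the "continuous" aspect of transparency rather than a one-off audit.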