5,454 research outputs found

    Next Generation Cloud Computing: New Trends and Research Directions

    The landscape of cloud computing has changed significantly over the last decade. Not only have more providers and service offerings crowded the space, but cloud infrastructure that was traditionally limited to single-provider data centers is also evolving. In this paper, we first discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefits of decentralising computing away from data centers. These trends have created the need for a variety of new computing architectures that future cloud infrastructure will offer. These architectures are anticipated to impact areas such as connecting people and devices, data-intensive computing, the service space, and self-learning systems. Finally, we lay out a roadmap of challenges that will need to be addressed to realise the potential of next-generation cloud systems. Comment: Accepted to Future Generation Computer Systems, 07 September 201

    6G Network AI Architecture for Everyone-Centric Customized Services

    Mobile communication standards were developed to enhance transmission and network performance by using more radio resources and improving spectrum and energy efficiency. How to effectively address diverse user requirements and guarantee everyone's Quality of Experience (QoE) remains an open problem. Sixth Generation (6G) mobile systems will solve this problem by utilizing heterogeneous network resources and pervasive intelligence to support everyone-centric customized services anywhere and anytime. In this article, we first coin the concept of the Service Requirement Zone (SRZ) on the user side to characterize and visualize the integrated service requirements and preferences of specific tasks of individual users. On the system side, we further introduce the concept of the User Satisfaction Ratio (USR) to evaluate the system's overall ability to satisfy a variety of tasks with different SRZs. Then, we propose a network Artificial Intelligence (AI) architecture with integrated network resources and pervasive AI capabilities for supporting customized services with guaranteed QoEs. Finally, extensive simulations show that the proposed network AI architecture consistently offers higher USR performance than the cloud AI and edge AI architectures under different task scheduling algorithms, random service requirements, and dynamic network conditions.
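
The SRZ and USR concepts can be illustrated with a minimal sketch. The names, the per-metric ranges, and the all-metrics-in-range satisfaction criterion below are illustrative assumptions, not the paper's formal definitions: each task's SRZ is modeled as acceptable ranges over QoE metrics, and the USR is the fraction of tasks whose delivered QoE falls inside its SRZ.

```python
from dataclasses import dataclass

@dataclass
class SRZ:
    # Acceptable (lo, hi) range per QoE metric, e.g. latency_ms, throughput_mbps.
    bounds: dict  # metric name -> (lo, hi)

    def satisfied_by(self, qoe: dict) -> bool:
        # A task counts as satisfied only if every metric lies inside its range;
        # a missing metric is treated as unsatisfied.
        return all(lo <= qoe.get(m, float("-inf")) <= hi
                   for m, (lo, hi) in self.bounds.items())

def user_satisfaction_ratio(tasks):
    """USR = satisfied tasks / all tasks, over (srz, delivered_qoe) pairs."""
    if not tasks:
        return 0.0
    return sum(srz.satisfied_by(qoe) for srz, qoe in tasks) / len(tasks)

# Two illustrative tasks: one latency-sensitive, one throughput-heavy.
tasks = [
    (SRZ({"latency_ms": (0, 20), "throughput_mbps": (5, float("inf"))}),
     {"latency_ms": 12, "throughput_mbps": 30}),   # inside its SRZ
    (SRZ({"latency_ms": (0, 50), "throughput_mbps": (100, float("inf"))}),
     {"latency_ms": 40, "throughput_mbps": 60}),   # throughput too low
]
print(user_satisfaction_ratio(tasks))  # 0.5
```

Under this toy definition, comparing architectures reduces to comparing the delivered-QoE samples each one produces against the same set of SRZs.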

    From the oceans to the cloud: Opportunities and challenges for data, models, computation and workflows.

    © The Author(s), 2019. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Vance, T. C., Wengren, M., Burger, E., Hernandez, D., Kearns, T., Medina-Lopez, E., Merati, N., O'Brien, K., O'Neil, J., Potemra, J. T., Signell, R. P., & Wilcox, K. From the oceans to the cloud: Opportunities and challenges for data, models, computation and workflows. Frontiers in Marine Science, 6(211), (2019), doi:10.3389/fmars.2019.00211.

Advances in ocean observations and models mean increasing flows of data. Integrating observations across disciplines and over spatial scales from regional to global presents challenges. Running ocean models and managing the results is computationally demanding. The rise of cloud computing presents an opportunity to rethink traditional approaches, including developing shared data-processing workflows that use common, adaptable software to handle data ingest and storage, with an associated framework to manage and execute downstream modeling. Working in the cloud presents challenges: migration of legacy technologies and processes, cloud-to-cloud interoperability, and the translation of legislative and bureaucratic requirements for “on-premises” systems to the cloud. To respond to the scientific and societal needs of a fit-for-purpose ocean observing system, and to maximize the benefits of more integrated observing, research on utilizing cloud infrastructures for sharing data and models is underway. Cloud platforms and the services/APIs they provide offer new ways for scientists to observe and predict the ocean’s state. High-performance mass storage of observational data, coupled with on-demand computing to run model simulations in close proximity to the data, tools to manage workflows, and a framework to share and collaborate, enables a more flexible and adaptable observation and prediction computing architecture.
Model outputs are stored in the cloud, and researchers either download subsets for their area of interest or feed them into their own simulations without leaving the cloud. Expanded storage and computing capabilities make it easier to create, analyze, and distribute products derived from long-term datasets. In this paper, we provide an introduction to cloud computing, describe current uses of the cloud for management and analysis of observational data and model results, and describe workflows for running models and streaming observational data. We discuss topics that must be considered when moving to the cloud: costs, security, and organizational limitations on cloud use. Future uses of the cloud via computational sandboxes, and the practicalities and considerations of using the cloud to archive data, are explored. We also consider the ways in which the human elements of ocean observations are changing – the rise of a generation of researchers whose observations are likely to be made remotely rather than hands-on – and how their expectations and needs drive research towards the cloud. In conclusion, visions of a future where cloud computing is ubiquitous are discussed. This is PMEL contribution 4873.
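
The "download subsets for their area of interest" workflow can be sketched in a few lines. Everything here is an illustrative assumption rather than the paper's infrastructure: a synthetic gridded sea-surface-temperature field stands in for cloud-hosted model output, and the subset is a simple latitude/longitude bounding box.

```python
import numpy as np

# Hypothetical gridded model output: sea-surface temperature on a 1-degree grid.
lats = np.arange(-90, 91)          # 181 latitudes
lons = np.arange(0, 360)           # 360 longitudes
sst = np.random.default_rng(0).uniform(-2, 32, size=(lats.size, lons.size))

def subset(field, lats, lons, lat_range, lon_range):
    """Return the region of `field` inside a lat/lon bounding box (inclusive)."""
    li = (lats >= lat_range[0]) & (lats <= lat_range[1])
    lj = (lons >= lon_range[0]) & (lons <= lon_range[1])
    # np.ix_ turns the two 1-D masks into an open mesh selecting the sub-grid.
    return field[np.ix_(li, lj)], lats[li], lons[lj]

# e.g. a North Atlantic box: 20-60 N, 280-340 E
region, rlats, rlons = subset(sst, lats, lons, (20, 60), (280, 340))
print(region.shape)  # (41, 61)
```

In a cloud deployment the same selection would run server-side against object storage, so only the small `region` array, not the full global field, crosses the network.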

    Using Distributed Ledger Technologies in VANETs to Achieve Trusted Intelligent Transportation Systems

    With recent advancements in computer networking and real-time communication between devices over the Internet, Internet of Things (IoT) devices have been on the rise: collecting, sharing, and exchanging data with other connected devices or online databases, enabling all sorts of communications and operations without the need for human intervention, oversight, or control. This has caused more computer-based systems to become integrated into the physical world, inching us closer towards developing smart cities. The automotive industry, alongside other software developers and technology companies, has been at the forefront of this advancement towards smart cities. Transportation networks need to be revamped to utilize the massive amounts of data generated by the on-board devices in the public’s vehicles, as well as by other integrated sensors on public transit systems, local roads, and highways. This will create an interconnected ecosystem that can be leveraged to improve traffic efficiency and reliability. Currently, Vehicular Ad-hoc Networks (VANETs), including vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-grid (V2G) communications, play a major role in supporting road safety, traffic efficiency, and energy savings. To protect these devices, and the networks they form, from becoming targets of cyber attacks, this paper presents ideas on how to leverage distributed ledger technologies (DLT) to establish secure communication between vehicles that is decentralized, trustless, and immutable. By incorporating IOTA’s protocols, utilizing Ethereum’s smart-contract functionality and application concepts with VANETs, and interoperating with Hyperledger’s Fabric framework, several novel ideas can be implemented to improve traffic safety and efficiency.
Such a modular design also opens up the possibility of further investigating use cases of blockchain and distributed ledger technologies in creating a decentralized intelligent transportation system (ITS).
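
The immutability property that DLTs bring to V2V messaging can be illustrated with a minimal hash-chained log. This is a toy sketch, not IOTA, Ethereum, or Hyperledger Fabric: each entry embeds the hash of its predecessor, so altering any recorded message invalidates every later link on verification.

```python
import hashlib
import json

def make_entry(prev_hash, payload):
    """Append-only ledger entry: payload chained to its predecessor by hash."""
    body = {"prev": prev_hash, "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain):
    """Recompute every link; any tampered entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {"prev": entry["prev"], "payload": entry["payload"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

# Record two hypothetical V2V safety messages.
chain, prev = [], "0" * 64
for msg in ({"vehicle": "A", "event": "braking"},
            {"vehicle": "B", "event": "lane-change"}):
    entry = make_entry(prev, msg)
    chain.append(entry)
    prev = entry["hash"]

print(verify(chain))                      # True
chain[0]["payload"]["event"] = "nothing"  # tamper with the first record
print(verify(chain))                      # False
```

A real deployment adds digital signatures and distributed consensus on top of this chaining, which is where the trustless and decentralized properties come from.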

    Network Neutrality and the Need for a Technological Turn in Internet Scholarship

    To most social scientists, the technical details of how the Internet actually works remain arcane and inaccessible. At the same time, convergence is forcing scholars to grapple with how to apply regulatory regimes developed for traditional media to a world in which all services are provided via an Internet-based platform. This chapter explores the problems caused by this lack of familiarity with the underlying technology, using as its focus the network neutrality debate that has dominated Internet policy for the past several years. The analysis underscores a surprising lack of sophistication in the current debate. Unfamiliarity with the Internet’s architecture has allowed some advocates to characterize prioritization of network traffic as an aberration, when in fact it is a central feature designed into the network since its inception. This lack of knowledge has also allowed advocates to recast pragmatic engineering concepts as supposedly inviolable architectural principles, effectively imbuing certain types of political advocacy with a false sense of scientific legitimacy. As the technologies comprising the network continue to change, and the demands of end users create pressure on the network to evolve further, the absence of technical grounding risks making the status quo seem like a natural construct that cannot or should not be changed.