3,295 research outputs found

    Integrated quality and enhancement review: Summative review: Wyggeston & Queen Elizabeth I College


    The effect of a memory-centric radio network on data-access speed

    Future 5G-based mobile networks will be largely defined by virtualized network functions (VNFs). The related computing is moving to the cloud, where a set of servers is provided to run all the software components of the VNFs. Such a software component can run on any server in the mobile network cloud infrastructure. The servers conventionally communicate over a TCP/IP network. To realize the planned low-latency use cases in 5G, some servers are placed in data centers near the end users (edge clouds). Many of these use cases involve data accesses from one VNF to another, or to other network elements. These accesses should take as little time as possible to stay within the stringent latency requirements of the new use cases. As a possible approach to achieving this, a novel memory-centric platform was studied. The main ideas of the memory-centric platform are to collapse the hierarchy between volatile and persistent memory by utilizing non-volatile memory (NVM), and to use memory-semantic communication between computer components. In this work, a surrogate memory-centric platform was set up as storage for VNFs, and the latency of data accesses from a VNF application was measured in different experiments. Measurements against a current platform showed that the memory-centric platform was significantly faster to access than the current platform, which uses TCP/IP. Measurements of RAM accesses with different memory bandwidths within the memory-centric platform showed that the order of latency was roughly independent of the available memory bandwidth. These results suggest that a memory-centric platform is a promising alternative storage system for edge clouds. However, more research is needed on how other service qualities, such as low latency variation, are fulfilled by a memory-centric platform in a VNF environment.
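    The latency gap described in this abstract can be illustrated with a minimal sketch. The snippet below is not the thesis's actual benchmark; it simply times an in-process memory copy (standing in for memory-semantic, load/store-style access) against a localhost TCP round trip (standing in for the conventional TCP/IP path between servers). The payload size and iteration count are arbitrary illustrative choices.

    ```python
    # Hedged sketch: compare per-operation latency of direct memory access
    # versus a TCP round trip, both entirely within one host.
    import socket
    import threading
    import time

    PAYLOAD = b"x" * 64  # hypothetical small VNF state record

    def tcp_echo_server(listener):
        # Accept one client and echo everything it sends, standing in for
        # a remote storage service reached over TCP/IP.
        conn, _ = listener.accept()
        with conn:
            while True:
                data = conn.recv(64)
                if not data:
                    break
                conn.sendall(data)

    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    threading.Thread(target=tcp_echo_server, args=(listener,), daemon=True).start()

    client = socket.create_connection(listener.getsockname())
    store = bytearray(PAYLOAD)  # stands in for memory-semantic access

    N = 10_000

    # Time N direct memory copies.
    t0 = time.perf_counter_ns()
    for _ in range(N):
        _ = bytes(store)
    mem_ns = (time.perf_counter_ns() - t0) / N

    # Time N TCP request/reply round trips.
    t0 = time.perf_counter_ns()
    for _ in range(N):
        client.sendall(PAYLOAD)
        client.recv(64)
    tcp_ns = (time.perf_counter_ns() - t0) / N

    print(f"memory access: {mem_ns:.0f} ns/op, TCP round trip: {tcp_ns:.0f} ns/op")
    client.close()
    ```

    Even on loopback, where no physical network is traversed, the TCP path pays for system calls and protocol processing on every access, which is the overhead a memory-semantic fabric aims to remove.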

    Storage Area Networks

    This tutorial compares Storage Area Network (SAN) technology with previous storage management solutions, with particular attention to the promised benefits of scalability, interoperability, and high-speed LAN-free backups. The paper provides an overview of what SANs are, why organizations invest in them, and how SANs can be managed. It also discusses a primary management concern: the interoperability of vendor-specific SAN solutions. Bluefin, a storage management interface and interoperability solution, is also explained. The paper concludes with a discussion of SAN-related trends and implications for practice and research.

    AI-native Interconnect Framework for Integration of Large Language Model Technologies in 6G Systems

    The evolution towards 6G architecture promises a transformative shift in communication networks, with artificial intelligence (AI) playing a pivotal role. This paper delves deep into the seamless integration of Large Language Models (LLMs) and Generalized Pretrained Transformers (GPT) within 6G systems. Their ability to grasp intent, strategize, and execute intricate commands will be pivotal in redefining network functionalities and interactions. Central to this is the AI Interconnect framework, intricately woven to facilitate AI-centric operations within the network. Building on the continuously evolving state of the art, we present a new architectural perspective for the upcoming generation of mobile networks. Here, LLMs and GPTs will collaboratively take center stage alongside traditional pre-generative AI and machine learning (ML) algorithms. This union promises a novel confluence of the old and new, melding tried-and-tested methods with transformative AI technologies. Along with providing a conceptual overview of this evolution, we delve into the nuances of practical applications arising from such an integration. Through this paper, we envisage a symbiotic integration where AI becomes the cornerstone of the next-generation communication paradigm, offering insights into the structural and functional facets of an AI-native 6G network.

    Level-1 Milestone 350 Definitions v1

    This milestone is the direct result of work that started seven years ago with the planning for a 100-teraFLOP platform, and it will be satisfied when 100 teraFLOPS is placed in operation and readied for Stockpile Stewardship Program simulations. The end product of this milestone will be a production-level, high-performance computing system, code-named Purple, designed to solve the most demanding stockpile stewardship problems, that is, the large-scale application problems at the edge of our understanding of weapon physics. This fully functional 100-teraFLOPS system must be able to serve a diverse scientific and engineering workload. It must also have a robust code development and production environment, both of which facilitate the workload requirements. This multi-year effort includes major activities in contract management, facilities, infrastructure, system software, and user environment and support. Led by LLNL, the tri-labs defined the statement of work for a 100-teraFLOP system that resulted in a contract with IBM known as the Purple contract. LLNL worked with IBM throughout the contract period to resolve issues and collaborated with the Program to resolve contractual issues, ensuring delivery of a platform that best serves the Program at a reasonable cost. The Purple system represents a substantial increase in the classified compute resources at LLNL for NNSA. The center computer environment must be designed to accept the Purple system and to scale with the increase of compute resources to achieve the required end-to-end services. Networking, archival storage, visualization servers, global file systems, and system software will all be enhanced to support Purple's size and architecture. IBM and LLNL are sharing responsibility for Purple's system software. LLNL is responsible for the scheduler, resource manager, and some code development tools.
    Through the Purple contract, IBM is responsible for the remainder of the system software, including the operating system, parallel file system, and runtime environment. LLNL, LANL, and SNL share responsibility for the Purple user environment. Since LLNL is the host for Purple, LLNL has the greatest responsibility. LLNL will provide customer support for Purple to the tri-labs and as such has the lead for user documentation, negotiating the Purple usage model, mapping the ASC computational environment requirements to the Purple environment, and demonstrating that those requirements have been met. In addition, LLNL will demonstrate important capabilities of the computing environment, including full functionality of visualization tools, file transport between Purple and remote-site file systems, and the build environment for principal ASC codes. LANL and SNL are responsible for delivering unique capabilities in support of their users, porting important applications and libraries, and demonstrating remote capabilities. The key capabilities that LANL and SNL will test are user authorization and authentication, data transfer, file system, data management, and visualization. SNL and LANL should port and run in production mode a few key applications on a substantial number of Purple nodes.