4 research outputs found

    2022 roadmap on neuromorphic computing and engineering

    Full text link
    Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation computer technology is expected to solve problems at the exascale with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.
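The exascale figures quoted in the abstract (10^18 operations per second at 20-30 MW) imply a per-operation energy budget that is easy to check with back-of-envelope arithmetic; the sketch below uses only the numbers given in the text.

```python
# Back-of-envelope check of the abstract's exascale figures:
# 10^18 operations per second drawing 20 MW (lower end of the
# quoted 20-30 MW range). Values come from the abstract, not
# from independent measurements.

ops_per_second = 1e18
power_watts = 20e6  # 20 MW

joules_per_op = power_watts / ops_per_second
picojoules_per_op = joules_per_op * 1e12

print(picojoules_per_op)  # 20 pJ per operation at the 20 MW end
```

At the 30 MW end of the range the same arithmetic gives 30 pJ per operation, which is the scale of budget that neuromorphic, in-memory approaches aim to undercut by avoiding the processor-memory data transfer.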

    Unsupervised Monitoring of Network and Service Behaviour Using Self-Organizing Maps

    No full text

    Transforming Large-Scale Virtualized Networks: Advancements in Latency Reduction, Availability Enhancement, and Security Fortification

    Get PDF
    In today’s digital age, the increasing demand for networks, driven by the proliferation of connected devices, data-intensive applications, and transformative technologies, necessitates robust and efficient network infrastructure. This thesis addresses the challenges posed by virtualization in 5G networking and focuses on enhancing next-generation Radio Access Networks (RANs), particularly Open-RAN (O-RAN). The objective is to transform virtualized networks into highly reliable, secure, and latency-aware systems. To achieve this, the thesis proposes novel strategies for virtual function placement, traffic steering, and virtual function security within O-RAN. These solutions utilize optimization techniques such as binary integer programming, mixed integer binary programming, column generation, and machine learning algorithms, including supervised learning and deep reinforcement learning. By implementing these contributions, network service providers can deploy O-RAN with enhanced reliability, speed, and security, specifically tailored for Ultra-Reliable and Low Latency Communications (URLLC) use cases. The optimized RAN virtualization achieved through this research unlocks a new era in network architecture that can confidently support URLLC applications, including Autonomous Vehicles, Industrial Automation and Robotics, Public Safety and Emergency Services, and Smart Grids.
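The virtual function placement problem named in the abstract is, at its core, a binary assignment model: each virtual network function (VNF) is assigned to exactly one node, subject to node capacity, while minimizing a latency objective. The toy sketch below illustrates that structure with entirely hypothetical loads, capacities, and latencies; real O-RAN instances are far larger and are solved with the integer-programming and column-generation methods the thesis describes, not by enumeration.

```python
from itertools import product

# Hypothetical toy instance: place 3 VNFs on 2 candidate nodes,
# minimizing total latency subject to node CPU capacity.
vnf_load = [2, 3, 1]              # CPU demand of each VNF
node_cap = [4, 4]                 # CPU capacity of each node
latency = [[5, 9],                # latency[f][n]: cost of VNF f on node n
           [4, 3],
           [7, 2]]

best_cost, best_assign = None, None
# Enumerate every assignment (VNF -> node); tractable only at toy scale.
for assign in product(range(len(node_cap)), repeat=len(vnf_load)):
    used = [0] * len(node_cap)
    for f, n in enumerate(assign):
        used[n] += vnf_load[f]
    if any(u > c for u, c in zip(used, node_cap)):
        continue  # capacity constraint violated, skip
    cost = sum(latency[f][n] for f, n in enumerate(assign))
    if best_cost is None or cost < best_cost:
        best_cost, best_assign = cost, assign

print(best_cost, best_assign)  # 10 (0, 1, 1)
```

Here the optimum puts VNF 0 on node 0 and VNFs 1 and 2 on node 1, exactly filling node 1's capacity; a solver-based formulation would express the same model with binary decision variables x[f][n] and linear constraints.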