Machine Learning for Multi-Layer Open and Disaggregated Optical Networks
The abstract is provided in the attachment.
Diluting the Scalability Boundaries: Exploring the Use of Disaggregated Architectures for High-Level Network Data Analysis
Traditional data centers are designed with a rigid architecture of
fit-for-purpose servers that provision resources beyond the average workload in
order to deal with occasional peaks of data. Heterogeneous data centers are
pushing towards more cost-efficient architectures with better resource
provisioning. In this paper we study the feasibility of using disaggregated
architectures for intensive data applications, in contrast to the monolithic
approach of server-oriented architectures. In particular, we have tested a proactive network analysis system in which the workload demands are highly variable. In the context of the dReDBox disaggregated architecture, the results show that the overhead caused by using remote memory resources is significant, between 66% and 80%, but we have also observed that memory usage is one
order of magnitude higher for the stress case with respect to average
workloads. Therefore, dimensioning memory for the worst case in conventional
systems will result in a notable waste of resources. Finally, we found that, for the selected use case, parallelism is limited by memory; a disaggregated architecture therefore allows for increased parallelism, which in turn mitigates the overhead caused by remote memory.

Comment: 8 pages, 6 figures, 2 tables, 32 references. Pre-print. The paper will be presented at the IEEE International Conference on High Performance Computing and Communications in Bangkok, Thailand, 18-20 December 2017, and will be published in the conference proceedings.
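A back-of-envelope reading of those figures (a sketch in Python using only the numbers quoted in the abstract; the 32 GB average working set is an invented placeholder):

# Illustrative trade-off implied by the abstract's figures (not from the paper's artifacts).
avg_mem_gb = 32                    # hypothetical average working set
peak_mem_gb = 10 * avg_mem_gb      # "one order of magnitude higher" under stress
remote_overhead = (0.66, 0.80)     # reported slowdown range for remote memory

# Conventional server: memory must be dimensioned for the worst case.
wasted_fraction = 1 - avg_mem_gb / peak_mem_gb
print(f"memory idle at average load (conventional): {wasted_fraction:.0%}")   # 90%

# Disaggregated rack: provision for the average locally, borrow the rest remotely,
# accepting the measured runtime penalty only when the peak actually occurs.
for ov in remote_overhead:
    print(f"runtime penalty when spilling to remote memory: {ov:.0%}")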
Understanding O-RAN: Architecture, Interfaces, Algorithms, Security, and Research Challenges
The Open Radio Access Network (RAN) and its embodiment through the O-RAN
Alliance specifications are poised to revolutionize the telecom ecosystem.
O-RAN promotes virtualized RANs where disaggregated components are connected
via open interfaces and optimized by intelligent controllers. The result is a
new paradigm for the RAN design, deployment, and operations: O-RAN networks can
be built with multi-vendor, interoperable components, and can be
programmatically optimized through a centralized abstraction layer and
data-driven closed-loop control. Therefore, understanding O-RAN, its
architecture, its interfaces, and workflows is key for researchers and
practitioners in the wireless community. In this article, we present the first
detailed tutorial on O-RAN. We also discuss the main research challenges and review early research results. We provide a deep dive into the O-RAN specifications, describing the architecture, design principles, and interfaces. We then describe how the O-RAN RAN Intelligent Controllers (RICs) can be used to effectively control and manage 3GPP-defined RANs. Based on this, we discuss innovations and challenges of O-RAN networks, including the Artificial Intelligence (AI) and Machine Learning (ML) workflows that the architecture and interfaces enable, as well as security and standardization issues.
Finally, we review experimental research platforms that can be used to design
and test O-RAN networks, along with recent research results, and we outline
future directions for O-RAN development.

Comment: 33 pages, 16 figures, 3 tables. Submitted for publication to the IEEE.
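To make the closed-loop control idea concrete, a schematic near-real-time RIC xApp loop in Python (every class and method name below is a hypothetical placeholder, not part of any O-RAN SDK or specification):

import time

class XAppSketch:
    """Schematic xApp: subscribe to RAN telemetry over E2, run a data-driven
    policy, and push control actions back. All interfaces are stand-ins."""

    def __init__(self, e2_client, policy_model):
        self.e2 = e2_client          # hypothetical E2 subscription client
        self.policy = policy_model   # e.g., a trained scheduling/slicing model

    def step(self):
        kpis = self.e2.poll_metrics()      # per-cell KPIs: throughput, PRB usage, ...
        action = self.policy.decide(kpis)  # ML-driven decision
        self.e2.send_control(action)       # closed-loop control message

    def run(self, period_s=1.0):
        # Near-real-time RIC loops operate on 10 ms to 1 s timescales.
        while True:
            self.step()
            time.sleep(period_s)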
dReDBox: Materializing a full-stack rack-scale system prototype of a next-generation disaggregated datacenter
Current datacenters are based on server machines, whose mainboard and hardware components form the baseline, monolithic building block that the rest of the system software, middleware, and application stack are built upon. This leads to the following limitations: (a) resource proportionality of a multi-tray system is bounded by the basic building block (mainboard), (b) resource allocation to processes or virtual machines (VMs) is bounded by the resources available within the boundary of the mainboard, leading to spare-resource fragmentation and inefficiencies, and (c) upgrades must be applied to each and every server even when only a specific component needs to be upgraded. The dReDBox project (Disaggregated Recursive Datacentre-in-a-Box) addresses the above limitations and proposes next-generation, low-power, across-form-factor datacenters, departing from the paradigm of the mainboard-as-a-unit and enabling the creation of the function-block-as-a-unit. Hardware-level disaggregation and software-defined wiring of resources are supported by a full-fledged Type-1 hypervisor that can execute commodity virtual machines, which communicate over a low-latency, high-throughput software-defined optical network. To evaluate its novel approach, dReDBox will demonstrate application execution in the domains of network functions virtualization, infrastructure analytics, and real-time video surveillance. This work has been supported in part by the EU H2020 ICT project dReDBox, contract #687632.
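A toy illustration of limitation (b), spare-resource fragmentation (the numbers are invented; only the mainboard-as-boundary constraint comes from the abstract):

# Four servers each have 16 GB of spare memory, yet none can host a 32 GB VM alone.
free_per_server = [16, 16, 16, 16]   # GB of spare memory, per mainboard
vm_request = 32                      # GB requested by one VM

fits_monolithic = any(free >= vm_request for free in free_per_server)
fits_disaggregated = sum(free_per_server) >= vm_request  # pooled via software-defined wiring

print(fits_monolithic, fits_disaggregated)   # False True: 64 GB spare, unusable per-server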
Robust energy disaggregation using appliance-specific temporal contextual information
An extension of the baseline non-intrusive load monitoring approach for energy disaggregation using temporal contextual information is presented in this paper. In detail, the proposed approach uses a two-stage disaggregation methodology with appliance-specific temporal contextual information in order to capture time-varying power consumption patterns in low-frequency datasets. The proposed methodology was evaluated using datasets of different sampling frequencies and different numbers and types of appliances. When employing appliance-specific temporal contextual information, an improvement of between 1.5% and 7.3% was observed. With the two-stage disaggregation architecture and appliance-specific temporal contextual information, the overall energy disaggregation accuracy was further improved across all evaluated datasets, with the maximum observed improvement, in terms of absolute increase in accuracy, being 6.8%, resulting in a maximum total energy disaggregation accuracy improvement of 10.0%.
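A minimal sketch of what appliance-specific temporal contextual information can look like in practice, appending cyclic time-of-day context to the power features before classification (the feature layout is an assumption, not the paper's exact pipeline):

import numpy as np

def add_temporal_context(power_windows, timestamps):
    """Augment low-frequency power features with time-of-day context.
    power_windows: (n_samples, window_len) aggregate power readings
    timestamps:    (n_samples,) Unix epoch seconds for each window start."""
    hours = (timestamps % 86400) / 3600.0
    # Encode hour-of-day cyclically so that 23:00 and 00:00 end up close in feature space.
    hour_sin = np.sin(2 * np.pi * hours / 24.0)
    hour_cos = np.cos(2 * np.pi * hours / 24.0)
    return np.column_stack([power_windows, hour_sin, hour_cos])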
Energy Disaggregation for Real-Time Building Flexibility Detection
Energy is a limited resource which has to be managed wisely, taking into
account both supply-demand matching and capacity constraints in the
distribution grid. One aspect of the smart energy management at the building
level is the problem of real-time detection of the available flexible demand. In this paper we propose the use of energy disaggregation techniques
to perform this task. Firstly, we investigate the use of existing
classification methods to perform energy disaggregation. A comparison is
performed between four classifiers, namely Naive Bayes, k-Nearest Neighbors,
Support Vector Machine, and AdaBoost. Secondly, we propose the use of a Restricted Boltzmann Machine (RBM) to automatically perform feature extraction. The extracted
features are then used as inputs to the four classifiers and consequently shown
to improve their accuracy. The efficiency of our approach is demonstrated on a
real database consisting of detailed appliance-level measurements with high
temporal resolution, which has been used for energy disaggregation in previous
studies, namely the REDD. The results show robustness and good generalization
capabilities to newly presented buildings with at least 96% accuracy.

Comment: To appear in IEEE PES General Meeting, 2016, Boston, USA.
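A minimal sketch of the proposed pipeline with off-the-shelf components (using scikit-learn's BernoulliRBM as the feature extractor is an assumption; the paper does not specify an implementation):

from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

classifiers = {
    "NaiveBayes": GaussianNB(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(),
    "AdaBoost": AdaBoostClassifier(),
}

# The RBM learns features from raw aggregate-power windows; inputs are scaled
# to [0, 1], which BernoulliRBM expects.
pipelines = {
    name: Pipeline([
        ("scale", MinMaxScaler()),
        ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
        ("clf", clf),
    ])
    for name, clf in classifiers.items()
}
# Each pipeline is then fit on labeled windows, e.g. pipelines["SVM"].fit(X, y).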
GNPy model of the physical layer for open and disaggregated optical networking [Invited]
Networking technologies are fast evolving to support the demand for ubiquitous Internet access, which is becoming a fundamental need of the modern and inclusive society, with a dramatic speed-up caused by the COVID-19 emergency. Such evolution requires the development of networks into disaggregated and programmable systems according to the software-defined networking (SDN) paradigm. Wavelength-division multiplexed (WDM) optical transmission and networking is expanding as the physical-layer technology from core and metro networks to 5G x-hauling and inter- and intra-data-center connections, requiring the application of the SDN paradigm at the optical layer based on the virtualization of WDM optical data transport. We present the fundamental principles of the open-source project Gaussian Noise in Python (GNPy) for optical transport virtualization in modeling WDM optical transmission for open and disaggregated networking. GNPy approximates transparent lightpaths as additive white Gaussian noise (AWGN) channels and can be used as a vendor-agnostic digital twin for open network planning and management. The quality-of-transmission degradation of each network element is independently modeled to allow disaggregated network management. We describe the GNPy models for fiber propagation, optical amplifiers, and reconfigurable add/drop multiplexers, together with the modeling of coherent transceivers from back-to-back characterization. We address the use of GNPy as a vendor-agnostic design and planning tool and as physical-layer virtualization in software-defined optical networking. © 2022 Optica Publishing Group.
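The AWGN abstraction reduces quality-of-transmission estimation to a per-lightpath generalized SNR budget; in standard GN-model notation (symbols follow the common literature convention, not necessarily GNPy's internal naming):

\mathrm{GSNR} = \frac{P_{\mathrm{ch}}}{P_{\mathrm{ASE}} + P_{\mathrm{NLI}}}
\qquad\Longleftrightarrow\qquad
\frac{1}{\mathrm{GSNR}} = \frac{1}{\mathrm{OSNR}} + \frac{1}{\mathrm{SNR}_{\mathrm{NLI}}}

where P_ch is the channel power, P_ASE the accumulated amplified-spontaneous-emission noise, and P_NLI the Gaussian-distributed nonlinear-interference noise. Because each network element contributes its degradation independently, the budget composes element by element, which is what enables the disaggregated modeling described above.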