
    Autonomous and reliable operation of multilayer optical networks

    This Ph.D. thesis focuses on the reliable autonomous operation of multilayer optical networks. The first objective focuses on the reliability of the optical network and proposes methods for health analysis related to Quality of Transmission (QoT) degradation. Such degradation is produced by soft failures in optical devices and fibers in the core and metro segments of operators' transport networks. Here, we compare the QoT estimated by a GNPy-based QoT tool with the QoT measured at the optical transponder. We show that changes in the values of the input parameters of the QoT model representing optical devices can explain the deviations and performance degradation of those devices. We use reverse engineering to estimate the parameter values that explain the observed QoT. We show by simulation that soft failures can be detected, localized, and identified well before they affect the network. Finally, to validate our approach, we experimentally observe the high accuracy of the estimation of the modeling parameters.

    The second objective focuses on multilayer optical networks, where lightpaths are used to connect packet nodes, thus creating virtual links (vLinks). Specifically, we study how lightpaths can be managed to provide enough capacity to the packet layer without detrimental effects on its Quality of Service (QoS), such as added delay or packet loss, while at the same time minimizing energy consumption. Such management must be as autonomous as possible to minimize human intervention. We study the autonomous operation of optical connections based on digital subcarrier multiplexing (DSCM) and propose several solutions for the autonomous operation of DSCM systems. In particular, two cooperating modules, one running in the optical node and one in the optical transponder, activate and deactivate subcarriers to adapt the capacity of the optical connection to the upper-layer packet traffic. The module running in the optical node is part of our Intent-Based Networking (IBN) solution and implements prediction to anticipate traffic changes. Our comprehensive study demonstrates the feasibility of autonomous DSCM operation and shows large cost savings in terms of energy consumption. In addition, our study provides a guideline to help vendors and operators adopt the proposed solutions.

    The final objective targets the automation of packet layer connections (PkCs). Automating the capacity required by PkCs can bring further cost reductions to network operators, as it can limit the resources used at the optical layer. However, such automation requires careful design to avoid any QoS degradation, which would impact the Service Level Agreement (SLA) when the packet flow is related to a customer connection. We study autonomous packet flow capacity management. We apply Reinforcement Learning (RL) techniques and propose a management lifecycle consisting of three phases: 1) a self-tuned threshold-based approach that operates the connection until enough data has been collected to understand the traffic characteristics; 2) RL operation based on models pre-trained with generic traffic profiles; and 3) RL operation based on models trained on the observed traffic. We show that RL algorithms perform poorly until they learn the optimal policies, as well as when the traffic characteristics change over time. The proposed lifecycle provides remarkable performance from the start of the connection and shows robustness in the face of traffic changes.
    The contribution is twofold: 1) we propose an RL-based solution that shows superior performance with respect to the prediction-based solution; and 2) because vLinks support packet connections, we propose coordinating the intents of both layers, where the actions taken by the individual PkCs are used by the vLink intent. The results show a noticeable performance improvement compared to independent vLink operation.
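    To make the subcarrier adaptation concrete, here is a minimal Python sketch of how a node-side module might activate and deactivate DSCM subcarriers against predicted traffic, with hysteresis to avoid oscillation. The class name, per-subcarrier capacity, and margins are illustrative assumptions, not the thesis's actual algorithm.

```python
# Illustrative sketch only: hysteresis-based subcarrier activation for a
# DSCM connection. Capacities, margins, and interfaces are assumptions.

class SubcarrierController:
    def __init__(self, num_subcarriers=8, capacity_per_sc_gbps=25.0,
                 activate_margin=0.8, deactivate_margin=0.5):
        self.num_subcarriers = num_subcarriers
        self.capacity_per_sc = capacity_per_sc_gbps
        self.active = 1                               # keep at least one subcarrier on
        self.activate_margin = activate_margin        # add capacity above this load
        self.deactivate_margin = deactivate_margin    # shed capacity below this load

    def update(self, predicted_traffic_gbps: float) -> int:
        """Return the number of active subcarriers for the next interval."""
        load = predicted_traffic_gbps / (self.active * self.capacity_per_sc)
        if load > self.activate_margin and self.active < self.num_subcarriers:
            self.active += 1                          # anticipate growth
        elif (load < self.deactivate_margin and self.active > 1
              and predicted_traffic_gbps
                  < (self.active - 1) * self.capacity_per_sc * self.activate_margin):
            self.active -= 1                          # save energy
        return self.active

controller = SubcarrierController()
for demand in [10, 22, 38, 55, 41, 18]:               # predicted Gb/s per interval
    print(demand, "->", controller.update(demand), "subcarriers")
```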

    Control and Communication Protocols that Enable Smart Building Microgrids

    Recent communication, computation, and technology advances, coupled with climate change concerns, have transformed the near-future prospects of electricity transmission and, more notably, distribution systems and microgrids. Distributed resources (wind and solar generation, combined heat and power) and flexible loads (storage, computing, EV, HVAC) make it imperative to increase investment and improve operational efficiency. Commercial and residential buildings, being the largest energy consumption group among flexible loads in microgrids, have the largest potential and flexibility to provide demand-side management. Recent advances in networked systems and the anticipated breakthroughs of the Internet of Things will enable significant advances in the demand response capabilities of intelligent networks of power-consuming devices such as HVAC components, water heaters, and buildings. In this paper, a new operating framework, called packetized direct load control (PDLC), is proposed based on the notion of quantization of energy demand. This control protocol is built on top of two communication protocols that carry either complete or binary information regarding the operation status of the appliances. We discuss the optimal demand-side operation for both protocols and analytically derive the performance differences between them. We propose an optimal reservation strategy for traditional and renewable energy for the PDLC in both day-ahead and real-time markets. Finally, we discuss the fundamental trade-off between achieving controllability and endowing flexibility.
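    To illustrate the quantization idea, the following is a minimal Python sketch of a PDLC-style allocation step in which demand is quantized into fixed-size energy packets granted per time slot. All identifiers, the packet size, and the FIFO policy are hypothetical, not the paper's protocol.

```python
# Illustrative sketch of packetized direct load control (PDLC): demand is
# quantized into fixed-size "energy packets" granted per time slot.
# Packet size, capacity, and the FIFO policy are assumptions for clarity.

from collections import deque

PACKET_KWH = 0.5          # energy quantum granted per slot (assumed)
CAPACITY_PACKETS = 3      # aggregate packets available this slot (assumed)

def allocate(requests: deque, capacity: int) -> list:
    """Grant packets FIFO until the slot's capacity is exhausted."""
    granted = []
    while requests and capacity > 0:
        granted.append(requests.popleft())
        capacity -= 1
    return granted         # ungranted appliances retry next slot

requests = deque(["hvac-1", "water-heater", "ev-charger", "hvac-2"])
print("granted:", allocate(requests, CAPACITY_PACKETS))
print("deferred:", list(requests))
```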

    Exploring the adoption of a conceptual data analytics framework for subsurface energy production systems: a study of predictive maintenance, multi-phase flow estimation, and production optimization

    As technology continues to advance and become more integrated in the oil and gas industry, a vast amount of data is now available across various scientific disciplines, providing new opportunities to gain insightful and actionable information. The convergence of digital transformation with the physics of fluid flow through porous media and pipelines has driven the advancement and application of machine learning (ML) techniques to extract further value from these data. As a result, digital transformation and its associated machine-learning applications have become a new area of scientific investigation. The transformation of brownfields into digital oilfields can aid energy production by accomplishing various objectives, including increased operational efficiency, production optimization, collaboration, data integration, decision support, and workflow automation. This work aims to present a framework for these applications, specifically through the implementation of virtual sensing, predictive analytics using predictive maintenance on production hydraulic systems (with a focus on electrical submersible pumps), and prescriptive analytics for production optimization in steam- and waterflooding projects.

    In terms of virtual sensing, the accurate estimation of multi-phase flow rates is crucial for monitoring and improving production processes. This study presents a data-driven approach for calculating multi-phase flow rates from sensor measurements in electrical submersible pumped wells. An exhaustive exploratory data analysis is conducted, including a univariate study of the target outputs (liquid rate and water cut), a multivariate study of the relationships between inputs and outputs, and data grouping based on principal component projections and clustering algorithms. Feature prioritization experiments are performed to identify the most influential parameters in the prediction of flow rates. Models are compared using the mean absolute error, the mean squared error, and the coefficient of determination. The results indicate that a CNN-LSTM network architecture is particularly effective for time series analysis of ESP sensor data, as the 1D-CNN layers automatically extract features and generate informative representations of the time series.
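    As a rough illustration of such an architecture, the following Keras sketch stacks a 1D-CNN feature extractor in front of an LSTM. The window length, sensor count, and layer sizes are assumptions, not the study's actual model.

```python
# Illustrative sketch of a CNN-LSTM regressor for ESP sensor time series.
# Window length, channel count, and layer sizes are assumptions.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW, N_SENSORS = 60, 8      # 60 time steps of 8 sensor channels (assumed)

model = keras.Sequential([
    layers.Input(shape=(WINDOW, N_SENSORS)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),  # local feature extraction
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                      # temporal dependencies
    layers.Dense(2),             # outputs: liquid rate and water cut
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Dummy data just to show the expected tensor shapes.
X = np.random.rand(256, WINDOW, N_SENSORS).astype("float32")
y = np.random.rand(256, 2).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```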
    Subsequently, the study presents a methodology for implementing predictive maintenance on artificial lift systems, specifically the maintenance of Electrical Submersible Pumps (ESPs). Conventional maintenance practices for ESPs require extensive resources and manpower, and are often initiated through reactive monitoring of multivariate sensor data. To address this issue, the study employs principal component analysis (PCA) and extreme gradient boosting trees (XGBoost) to analyze real-time sensor data and predict potential failures in ESPs. PCA is used as an unsupervised technique, and its output is further processed by the XGBoost model to predict the system status. The resulting predictive model has been shown to provide signals of potential failures up to seven days in advance, with an F1 score greater than 0.71 on the test set.

    In addition to the data-driven modeling approach, the present study also incorporates model-free reinforcement learning (RL) algorithms to aid decision-making in production optimization. The task of determining the optimal injection strategy poses challenges due to the complexity of the underlying dynamics, including nonlinear formulation, temporal variations, and reservoir heterogeneity. To tackle these challenges, the problem was reformulated as a Markov decision process, and RL algorithms were employed to determine actions that maximize production yield. The results demonstrate that the RL agent was able to significantly enhance the net present value (NPV) by continuously interacting with the environment and iteratively refining the dynamic process over multiple episodes. This showcases the potential of RL algorithms to provide effective and efficient solutions for complex optimization problems in the production domain.

    In conclusion, this study represents an original contribution to the field of data-driven applications in subsurface energy systems. It proposes a data-driven method for determining multi-phase flow rates in electrical submersible pumped (ESP) wells from sensor measurements. The methodology includes exploratory data analysis, feature prioritization experiments, and model evaluation based on the mean absolute error, the mean squared error, and the coefficient of determination. The findings indicate that a convolutional neural network-long short-term memory (CNN-LSTM) network is an effective approach for time series analysis in ESPs. In addition, the study implements principal component analysis (PCA) and extreme gradient boosting trees (XGBoost) to perform predictive maintenance on ESPs and anticipate potential failures up to a seven-day horizon. Furthermore, the study applies model-free reinforcement learning (RL) algorithms to aid decision-making in production optimization and enhance net present value (NPV).
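    A minimal sketch of the PCA-then-XGBoost pipeline described above, using scikit-learn and xgboost with dummy data; the component count, hyperparameters, and labeling scheme are assumptions.

```python
# Illustrative sketch of the PCA -> XGBoost failure-prediction pipeline.
# Feature counts, label horizon, and hyperparameters are assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

# Dummy stand-in for multivariate ESP sensor windows and binary labels
# ("failure within the next 7 days" vs "healthy").
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

pca = PCA(n_components=5).fit(X_train)        # unsupervised compression
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(pca.transform(X_train), y_train)       # supervised status prediction

pred = clf.predict(pca.transform(X_test))
print("F1:", f1_score(y_test, pred))
```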

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments---including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers---is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade.
    Comment: Major revision, to appear in SIAM Review

    Progressive Analytics: A Computation Paradigm for Exploratory Data Analysis

    Exploring data requires a fast feedback loop from the analyst to the system, with a latency below about 10 seconds because of human cognitive limitations. When data becomes large or analysis becomes complex, sequential computations can no longer be completed in a few seconds and data exploration is severely hampered. This article describes a novel computation paradigm called Progressive Computation for Data Analysis or, more concisely, Progressive Analytics, that provides a low-latency guarantee at the programming-language level by performing computations in a progressive fashion. Moving progressive computation to the language level relieves programmers of exploratory data analysis systems from implementing the whole analytics pipeline in a progressive way from scratch, streamlining the implementation of scalable exploratory data analysis systems. This article describes the new paradigm through a prototype implementation called ProgressiVis and explains the requirements it implies through examples.
    Comment: 10 pages
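    As a toy illustration of the paradigm (not the ProgressiVis API), the following Python generator computes a mean progressively, yielding a refined partial estimate after each chunk so that a usable result is always available within a latency budget.

```python
# Illustrative sketch of progressive computation (not the ProgressiVis API):
# a mean over a large array computed in chunks, yielding a usable partial
# result after every chunk so the analyst never waits for the full pass.

import numpy as np

def progressive_mean(data: np.ndarray, chunk_size: int = 100_000):
    total, count = 0.0, 0
    for start in range(0, len(data), chunk_size):
        chunk = data[start:start + chunk_size]
        total += chunk.sum()
        count += len(chunk)
        yield total / count          # partial estimate, refined each step

data = np.random.rand(1_000_000)
for i, estimate in enumerate(progressive_mean(data)):
    print(f"after chunk {i}: mean ~ {estimate:.6f}")
```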

    Knowledge-defined networking: a machine learning based approach for network and traffic modeling

    The research community has considered in the past the application of Machine Learning (ML) techniques to control and operate networks. A notable example is the Knowledge Plane proposed by D. Clark et al. However, such techniques have not been extensively prototyped or deployed in the field yet. In this thesis, we explore the reasons for this lack of adoption and posit that the rise of two recent paradigms, Software-Defined Networking (SDN) and Network Analytics (NA), will facilitate the adoption of ML techniques in the context of network operation and control. We describe a new paradigm that accommodates and exploits SDN, NA, and ML, and provide use cases that illustrate its applicability and benefits, as well as other relevant use cases in which ML tools can be useful. We refer to this new paradigm as Knowledge-Defined Networking (KDN).

    In this context, ML can be used as a network modeling technique to build models that estimate network performance. Network modeling is a technique central to many networking functions, for instance in the field of optimization. One of the objectives of this thesis is to answer the following question: can neural networks accurately model the performance of a computer network as a function of the input traffic? We focus mainly on modeling the average delay, but also on estimating the jitter and the packet loss. For this, we treat the network as a black box that takes a traffic matrix as input and produces the desired performance matrix as output. We then train different regressors, including deep neural networks, and evaluate their accuracy under different fundamental network characteristics: topology, size, traffic intensity, and routing. Moreover, we also study the impact of having multiple traffic flows between each pair of nodes.

    We also explore the use of ML techniques in other network-related fields. One relevant application is traffic forecasting: accurate forecasting enables scaling resources up or down to efficiently accommodate the traffic load. Such forecasts are typically based on traditional ARMA or ARIMA time-series models. We propose a new methodology that combines an ARIMA model with an Artificial Neural Network (ANN). The neural network greatly improves the ARIMA estimation by modeling complex and nonlinear dependencies, particularly for outliers. To train the neural network and improve the estimation of outliers, we use external information: weather, events, holidays, etc. The main hypothesis is that network traffic depends on the behavior of the end users, which in turn depends on external factors. We evaluate the accuracy of our methodology using real-world data from an egress Internet link of a campus network. The analysis shows that the model works remarkably well, outperforming standard ARIMA models.

    Another relevant application is Network Function Virtualization (NFV). The NFV paradigm makes networks more flexible by using Virtual Network Functions (VNFs) instead of dedicated hardware. The main advantage is the flexibility offered by these virtual elements; however, the use of virtual nodes increases the difficulty of modeling such networks. This problem may be addressed with ML techniques, both to model and to control such networks. As a first step, we focus on modeling the performance of single VNFs as a function of the input traffic.
    In this thesis, we demonstrate that the CPU consumption of a VNF can be estimated solely from the characteristics of the input traffic.
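    As a rough illustration of the black-box modeling question posed above, the sketch below trains a multi-output regressor from flattened traffic matrices to per-pair delays. The network size, synthetic data, and layer widths are assumptions for demonstration only.

```python
# Illustrative sketch of the black-box modeling question: learn a mapping
# from a traffic matrix to a per-pair average-delay matrix. Network size,
# data, and layer widths are assumptions for demonstration only.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

N = 10                               # nodes; N*(N-1) origin-destination pairs
PAIRS = N * (N - 1)

# Dummy samples: each row is a flattened traffic matrix and its delay matrix.
X = np.random.rand(2000, PAIRS)                      # offered traffic per OD pair
y = 0.1 + (X @ np.random.rand(PAIRS, PAIRS)) * 0.01  # synthetic delays

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
model.fit(X_train, y_train)
print("R^2 on held-out traffic matrices:", model.score(X_test, y_test))
```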
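    A second sketch illustrates the hybrid forecasting idea: ARIMA captures the linear time-series structure while a small neural network corrects its residuals from external features. The data, model orders, and the holiday feature are dummies, not the thesis's setup.

```python
# Illustrative sketch of a hybrid ARIMA + ANN forecaster: ARIMA models the
# linear structure and an ANN corrects its residuals using external features.
# Data, orders, and the holiday feature are dummies.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
traffic = 100 + 10 * np.sin(np.arange(500) * 2 * np.pi / 24) + rng.normal(0, 2, 500)
holiday = (np.arange(500) % 168 < 24).astype(float)   # assumed external factor

arima = ARIMA(traffic, order=(2, 0, 1)).fit()
residuals = traffic - arima.fittedvalues

# The ANN learns the part of the signal ARIMA misses, from external features.
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000)
ann.fit(holiday.reshape(-1, 1), residuals)

linear_part = arima.forecast(steps=24)
correction = ann.predict(np.ones((24, 1)))            # next 24 h are a holiday
hybrid_forecast = linear_part + correction
print(hybrid_forecast[:5])
```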