
    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in these proceedings were carefully reviewed and selected from 141 submissions. The volumes also contain 7 tool papers, 6 tool demo papers, and 9 SV-COMP competition papers. The papers are organized in the following topical sections. Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-COMP Tool Competition Papers.

    A platform for aggregate computing over LoRaWAN network

    Recent technological developments have increased the computational and networking capabilities of everyday objects, leading to a growing number of devices embedded in cyber-physical systems. To simplify the design and management of such pervasive and heterogeneous systems, new high-level paradigms are needed that capture concerns like device heterogeneity and location. Aggregate computing is one of these: it describes the global behaviour of a system by manipulating global spatio-temporal data structures, abstracting away details of the physical network such as topology and communication technology. A related problem in the design of complex pervasive systems is verifying their behaviour in a real scenario, which is generally expensive, complicated, and not always possible in practice. A partial solution is to test this kind of system in simulation. Even though a simulation executes only an abstraction of the system, it can still provide reliable insights into the system's behaviour and performance. In the Internet-of-Things context, an emerging enabling communication technology for situated devices is LoRaWAN, a network protocol that allows long-range communication with low energy consumption, at the cost of a limited data rate. There are currently no platforms for aggregate programming languages that support execution over LoRaWAN networks, nor are there simulators that support simulating aggregate systems over LoRaWAN networks, although simulators exist for aggregate applications or for LoRaWAN networks separately. The contribution of this thesis is a platform that supports the LoRaWAN abstractions as the backend of an aggregate computing system and integrates it with the existing DingNet simulator, yielding a platform that allows simulating aggregate applications over realistic LoRaWAN networks.
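
    As a purely illustrative sketch of the aggregate-computing idea described above (not code from the thesis platform), the snippet below computes a "gradient" field, the canonical aggregate building block, over an invented neighbourhood graph; the node names, link cost, and fixed number of rounds are all assumptions made for the example.

```python
# Illustrative sketch (not the thesis platform): a round-based "gradient" field,
# the canonical aggregate-computing building block, computed over a static
# neighbourhood graph. Node names, link cost, and round count are made up.
import math

# Hypothetical network: node -> set of neighbours reachable over one hop.
neighbours = {
    "a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"},
}
link_cost = 1.0          # assume a unit cost per hop for simplicity
source = {"a"}           # devices where the gradient is zero

# Each round, every device recomputes its value from its neighbours' last
# values, mimicking the local broadcast/receive cycle of an aggregate round.
field = {n: (0.0 if n in source else math.inf) for n in neighbours}
for _ in range(10):                      # fixed number of rounds for the sketch
    field = {
        n: 0.0 if n in source
        else min((field[m] + link_cost for m in neighbours[n]), default=math.inf)
        for n in neighbours
    }

print(field)   # e.g. {'a': 0.0, 'b': 1.0, 'c': 2.0, 'd': 3.0}
```

    In a real deployment, the per-round neighbour exchange would happen over LoRaWAN uplinks and downlinks rather than through a shared dictionary.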

    A survey of machine learning techniques applied to self organizing cellular networks

    In this paper, a survey of the literature from the past fifteen years on Machine Learning (ML) algorithms applied to self-organizing cellular networks is performed. For future networks to overcome current limitations and address the issues of existing cellular systems, it is clear that more intelligence needs to be deployed so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self-Organizing Network (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also a classification of each surveyed paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, a comparison of the most commonly found ML algorithms in terms of certain SON metrics is performed, and general guidelines are proposed on when to choose each ML algorithm for each SON function. Lastly, this work provides future research directions and new paradigms that the use of more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.
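
    As a purely illustrative sketch (not drawn from any surveyed paper), the snippet below shows tabular Q-learning, one of the ML techniques commonly discussed for SON, applied to a toy load-balancing task; the state discretisation, action set, reward, and toy dynamics are invented for the example.

```python
# Minimal sketch, not from the survey: tabular Q-learning applied to a toy
# SON-style control task (tuning a single handover offset). The environment,
# reward, and state discretisation are invented for illustration only.
import random

offsets = [-2, 0, 2]            # hypothetical offset adjustments (dB)
states = range(5)               # discretised load-imbalance levels
q = {(s, a): 0.0 for s in states for a in offsets}
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(state, action):
    """Toy dynamics: the action nudges the imbalance by one level."""
    nxt = max(0, min(4, state + (1 if action > 0 else -1 if action < 0 else 0)))
    reward = -abs(nxt - 2)      # best reward when load is balanced (state 2)
    return nxt, reward

state = 4
for _ in range(2000):
    # Epsilon-greedy action selection, then a standard Q-learning update.
    action = random.choice(offsets) if random.random() < eps else \
        max(offsets, key=lambda a: q[(state, a)])
    nxt, reward = step(state, action)
    best_next = max(q[(nxt, a)] for a in offsets)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = nxt
```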

    Knowledge-defined networking: a machine learning based approach for network and traffic modeling

    The research community has previously considered applying Machine Learning (ML) techniques to control and operate networks. A notable example is the Knowledge Plane proposed by D. Clark et al. However, such techniques have not been extensively prototyped or deployed in the field yet. In this thesis, we explore the reasons for this lack of adoption and posit that the rise of two recent paradigms, Software-Defined Networking (SDN) and Network Analytics (NA), will facilitate the adoption of ML techniques in the context of network operation and control. We describe a new paradigm that accommodates and exploits SDN, NA, and ML, and present relevant use cases that illustrate its applicability and benefits. We refer to this new paradigm as Knowledge-Defined Networking (KDN). In this context, ML can be used as a network modeling technique to build models that estimate network performance. Network modeling is central to many networking functions, for instance in the field of optimization. One of the objectives of this thesis is to answer the following question: can neural networks accurately model the performance of a computer network as a function of the input traffic? We focus mainly on modeling the average delay, but also on estimating jitter and packet loss. For this, we treat the network as a black box whose input is a traffic matrix and whose output is the desired performance matrix. We then train different regressors, including deep neural networks, and evaluate their accuracy under different fundamental network characteristics: topology, size, traffic intensity, and routing. Moreover, we also study the impact of having multiple traffic flows between each pair of nodes. We also explore the use of ML techniques in other network-related fields. One relevant application is traffic forecasting: accurate forecasting enables scaling resources up or down to efficiently accommodate the traffic load. Such models are typically based on traditional time-series ARMA or ARIMA models. We propose a new methodology that combines an ARIMA model with an Artificial Neural Network (ANN). The neural network greatly improves the ARIMA estimation by modeling complex and nonlinear dependencies, particularly for outliers. To train the neural network and improve the estimation of outliers, we use external information: weather, events, holidays, etc. The main hypothesis is that network traffic depends on the behavior of the end users, which in turn depends on external factors. We evaluate the accuracy of our methodology using real-world data from an egress Internet link of a campus network. The analysis shows that the model works remarkably well, outperforming standard ARIMA models. Another relevant application is Network Function Virtualization (NFV). The NFV paradigm makes networks more flexible by using Virtual Network Functions (VNFs) instead of dedicated hardware. The main advantage is the flexibility offered by these virtual elements; however, the use of virtual nodes increases the difficulty of modeling such networks. This problem may be addressed by using ML techniques to model or control such networks. As a first step, we focus on modeling the performance of single VNFs as a function of the input traffic, and we demonstrate that the CPU consumption of a VNF can be estimated from the characteristics of the input traffic alone.
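
    The following sketch illustrates the "network as a black box" modeling step described above under simplifying assumptions: a synthetic data generator stands in for the simulator or testbed producing (traffic matrix, delay matrix) pairs, and an off-the-shelf neural-network regressor is trained on them. Neither the numbers nor the synthetic delay model come from the thesis.

```python
# Hedged sketch of the black-box modeling idea: learn a mapping from a traffic
# matrix to a per-pair mean-delay matrix. The synthetic "ground truth" below is
# an invented stand-in for simulator or testbed data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_nodes, n_samples = 5, 2000
X = rng.uniform(0.0, 1.0, size=(n_samples, n_nodes * n_nodes))   # traffic matrices

# Invented ground truth: delay grows non-linearly as the offered load rises.
load = X.sum(axis=1, keepdims=True) / (n_nodes * n_nodes)
y = X / (1.05 - load)             # per origin-destination pair "delay"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=1000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out traffic matrices:", round(model.score(X_te, y_te), 3))
```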

    Rule-based multi-level modeling of cell biological systems

    Background: Proteins, individual cells, and cell populations denote different levels of an organizational hierarchy, each with its own dynamics. Multi-level modeling is concerned with describing a system at these different levels and relating their dynamics. Rule-based modeling has increasingly attracted attention because it enables a concise and compact description of biochemical systems. In addition, it allows different methods of model analysis, since more than one semantics can be defined for the same syntax.
    Results: Multi-level modeling implies the hierarchical nesting of model entities and explicit support for downward and upward causation between different levels. Concepts to support multi-level modeling in a rule-based language are identified. Among these are rule schemata, hierarchical nesting of species, assigning attributes and solutions to species at each level, and preserving the content of nested species while applying rules. Further necessities are the ability to apply rules to, and to flexibly define reaction rate kinetics and constraints on, nested species as well as species that are nested within others. An example model is presented that analyses the interplay of an intracellular control circuit with states at the cell level, its relation to cell division, and connections to intercellular communication within a population of cells. The example is described in ML-Rules, a rule-based multi-level approach that has been realized within the plug-in-based modeling and simulation framework JAMES II.
    Conclusions: Rule-based languages are a suitable starting point for developing a concise and compact language for multi-level modeling of cell biological systems. The combination of nesting species, assigning attributes, and constraining reactions according to these attributes is crucial to achieving the desired expressiveness. Rule schemata allow a concise and compact description of complex models. As a result, the presented approach facilitates developing and maintaining multi-level models that, for instance, interrelate intracellular and intercellular dynamics.
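
    The snippet below is a hedged Python illustration of the concepts named above (nested species, attributes, and a rule that changes a higher-level attribute while preserving nested content); it is not ML-Rules syntax, and the species names and threshold rule are invented for the example.

```python
# Illustrative Python sketch (not ML-Rules syntax): species with attributes,
# hierarchical nesting, and a rule schema that updates a cell-level attribute
# while leaving the cell's nested solution untouched.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Species:
    name: str
    attributes: dict = field(default_factory=dict)
    content: List["Species"] = field(default_factory=list)   # nested species

def phase_switch_rule(cell: Species, threshold: int) -> None:
    """Rule schema: if enough Protein copies are nested inside the cell, switch
    its 'phase' attribute (upward causation); nested content is preserved."""
    proteins = sum(1 for s in cell.content if s.name == "Protein")
    if cell.name == "Cell" and proteins >= threshold:
        cell.attributes["phase"] = "dividing"

cell = Species("Cell", {"phase": "resting"},
               [Species("Protein") for _ in range(12)])
phase_switch_rule(cell, threshold=10)
print(cell.attributes, len(cell.content))   # {'phase': 'dividing'} 12
```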

    Queueing-Theoretic End-to-End Latency Modeling of Future Wireless Networks

    The fifth generation (5G) of mobile communication networks is envisioned to enable a variety of novel applications. These applications place diverse and challenging requirements on the network. Consequently, the mobile network must not only be capable of meeting the demands of any one of these applications, but also be flexible enough to be tailored to the differing needs of various services. Among these new applications are use cases that require low latency as well as ultra-high reliability, e.g., to ensure unobstructed production in factory automation or road safety for (autonomous) transportation. In these domains, the requirements are crucial, since violating them may lead to financial or even human damage. Hence, an ultra-low probability of failure is necessary. From this, two major questions arise that motivate this thesis. First, how can ultra-low failure probabilities be evaluated, given that experiments or simulations would require a tremendous number of runs and thus turn out to be infeasible? Second, given a network that can be configured differently for different applications through the concept of network slicing, what performance can be expected for different parameter settings, and what is their optimal choice, particularly in the presence of other applications? In this thesis, both questions are answered by appropriate mathematical modeling of the radio interface and the radio access network. The aim is to find the distribution of the (end-to-end) latency, which allows extracting stochastic measures such as the mean, the variance, and also ultra-high percentiles at the tail of the distribution. The percentile analysis eventually leads to the desired evaluation of worst-case scenarios at ultra-low probabilities. To this end, the mathematical tool of queueing theory is used to study video streaming performance alongside one or multiple (low-latency) applications. One of the key contributions is the development of a numerical algorithm to obtain the latency of general queueing systems for homogeneous as well as for prioritized heterogeneous traffic. This provides the foundation for analyzing and improving end-to-end latency for applications with known traffic distributions in arbitrary network topologies consisting of one or multiple network slices.
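
    As a much-simplified illustration of extracting tail percentiles from a latency distribution (the thesis targets far more general queueing systems), the sketch below uses the closed-form sojourn-time distribution of an M/M/1 queue; the arrival and service rates are arbitrary example values.

```python
# Simplified illustration of the tail-percentile idea: for an M/M/1 queue the
# sojourn time (waiting + service) is exponential with rate mu - lam, so even
# ultra-high percentiles have a closed form. Rates are arbitrary examples.
import math

lam, mu = 800.0, 1000.0        # packets/s arrival and service rates (lam < mu)

def sojourn_percentile(p: float) -> float:
    """Return the latency t such that P(T <= t) = p for an M/M/1 queue."""
    return -math.log(1.0 - p) / (mu - lam)

for p in (0.5, 0.99, 0.99999):
    print(f"{p:>8}: {1e3 * sojourn_percentile(p):.3f} ms")
```

    The same percentile question for prioritized heterogeneous traffic and multi-hop topologies no longer has such a closed form, which is where a numerical algorithm like the one developed in the thesis becomes necessary.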

    An Artificial Neural Network-based Decision-Support System for Integrated Network Security

    As large-scale cyber attacks become more sophisticated, local network defenders should employ strength in numbers to achieve mission success. Group collaboration reduces the individual effort needed to analyze and assess network traffic. Network defenders must evolve from an isolated defense-in-sector policy toward a collaborative strength-in-numbers defense policy that rethinks traditional network boundaries. Such a policy incorporates a network-watch approach to global threat defense, in which local defenders share the occurrence of local threats in real time across network security boundaries, increasing Cyber Situation Awareness (CSA) and providing localized decision support. A single-layer feed-forward artificial neural network (ANN) is employed as a global threat event recommender system (GTERS) that learns expert-based threat mitigation decisions. The system combines the occurrence of local threat events into a unified global event situation, forming a global policy that allows the flexibility of various local policy interpretations of the global event. Such flexibility enables a Linux-based network defender to ignore Windows-specific threats while focusing on Linux threats in real time. In this thesis, the GTERS is shown to effectively encode an arbitrary policy with 99.7% accuracy based on five threat-severity levels, and it achieves a generalization accuracy of 96.35% using four distinct participants and 9-fold cross-validation.
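
    A hedged sketch of this kind of set-up is shown below: a single-hidden-layer feed-forward network classifying synthetic global threat events into five severity levels, evaluated with 9-fold cross-validation. The feature construction and labelling policy are invented stand-ins for the expert-labelled data used in the thesis.

```python
# Illustrative sketch (not the GTERS data or policy): a single-hidden-layer
# feed-forward classifier over synthetic threat-event features, scored with
# 9-fold cross-validation across five severity levels.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_events, n_features = 900, 20            # e.g. counts of local threat reports
X = rng.poisson(lam=3.0, size=(n_events, n_features)).astype(float)

# Invented labelling policy: severity grows with the total number of reports,
# binned into five levels by quantiles.
totals = X.sum(axis=1)
y = np.digitize(totals, bins=np.quantile(totals, [0.2, 0.4, 0.6, 0.8]))

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=1)
scores = cross_val_score(clf, X, y, cv=9)
print("9-fold cross-validation accuracy:", round(scores.mean(), 3))
```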

    Sensor based real-time process monitoring for ultra-precision manufacturing processes with non-linearity and non-stationarity

    This dissertation investigates methodologies for real-time process monitoring in ultra-precision manufacturing processes, specifically chemical mechanical planarization (CMP) and ultra-precision machining (UPM). The three main components of this research are: (1) developing predictive modeling approaches for early detection of process anomalies and change points, (2) devising approaches that can capture the non-Gaussian and non-stationary characteristics of CMP and UPM processes, and (3) integrating data from multiple sensors to make more reliable process-related decisions in real time. In the first part, we establish a quantitative relationship between CMP process performance, such as material removal rate (MRR), and data acquired from wireless vibration sensors. Subsequently, a non-linear sequential Bayesian analysis is integrated with decision-theoretic concepts to detect the CMP process end-point for blanket copper wafers. Using this approach, the CMP polishing end-point was detected within a 5% error rate. Next, a non-parametric Bayesian analytical approach is used to capture the inherently complex, non-Gaussian, and non-stationary sensor signal patterns observed in the CMP process. An evolutionary clustering analysis, called the Recurrent Nested Dirichlet Process (RNDP) approach, is developed for monitoring CMP process changes using MEMS vibration signals. Using this novel signal analysis approach, process drifts are detected within 20 milliseconds, which is assessed to be 3-7 times faster than traditional SPC charts. This is very beneficial to the industry from an application standpoint, because wafer yield losses can be mitigated to a great extent if the onset of CMP process drifts is detected in a timely and accurate manner. Lastly, a non-parametric Bayesian modeling approach, termed the Dirichlet Process (DP), is combined with a multi-level hierarchical information fusion technique for monitoring surface finish in the UPM process. Using this approach, signal patterns from six different sensors (three-axis vibration and force) are integrated based on information fusion theory. Using experimental UPM sensor data, it was observed that process decisions based on the multi-sensor information fusion approach were 15%-30% more accurate than decisions from individual sensors. This will enable more accurate and reliable estimation of process conditions in ultra-precision manufacturing applications.
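
    As a simplified stand-in for the non-parametric Bayesian drift monitoring described above (not the RNDP method itself), the sketch below fits scikit-learn's Dirichlet-process-style mixture to windowed features of a synthetic vibration signal and flags the first window assigned to a new cluster; the signal, window size, and features are assumptions made for illustration.

```python
# Simplified sketch (not RNDP): cluster windowed vibration features with a
# Dirichlet-process-style mixture; a change in the dominant cluster hints at a
# process drift. Signal, window size, and features are synthetic.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)
pre  = rng.normal(0.0, 1.0, 5000)            # vibration before the drift
post = rng.normal(0.8, 1.6, 5000)            # vibration after the drift
signal, win = np.concatenate([pre, post]), 100

# Per-window features: RMS amplitude and mean absolute first difference.
windows = signal.reshape(-1, win)
feats = np.column_stack([np.sqrt((windows ** 2).mean(axis=1)),
                         np.abs(np.diff(windows, axis=1)).mean(axis=1)])

dpgmm = BayesianGaussianMixture(n_components=5, random_state=2,
                                weight_concentration_prior_type="dirichlet_process")
labels = dpgmm.fit_predict(feats)
drift_window = np.argmax(labels != labels[0])   # first window in a new cluster
print("suspected drift near sample", drift_window * win)
```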