18 research outputs found

    Efficient Loop Detection in Forwarding Networks and Representing Atoms in a Field of Sets

    Get PDF
    The problem of detecting loops in a forwarding network is known to be NP-complete when general rules such as wildcard expressions are used. Yet, network analyzer tools such as Netplumber (Kazemian et al., NSDI'13) or Veriflow (Khurshid et al., NSDI'13) efficiently solve this problem in networks with thousands of forwarding rules. In this paper, we complement such experimental validation of practical heuristics with the first provably efficient algorithm in the context of general rules. Our main tool is a canonical representation of the atoms (i.e. the minimal non-empty sets) of the field of sets generated by a collection of sets. This tool is particularly suited when the intersection of two sets can be efficiently computed and represented. In the case of forwarding networks, each forwarding rule is associated with the set of packet headers it matches. The atoms then correspond to classes of headers with the same behavior in the network. We propose an algorithm for atom computation and provide the first loop-detection algorithm that runs in time polynomial in the number of header classes (which can be exponential in general). This contrasts with previous methods, which can be exponential even in simple cases with a linear number of classes. Second, we introduce a notion of network dimension captured by the overlapping degree of forwarding rules. The values of this measure appear to be very low in practice, and a constant overlapping degree ensures a polynomial number of header classes. Forwarding loop detection is thus polynomial in forwarding networks with constant overlapping degree.
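
    As a minimal illustration of the atom construction (on an explicit finite universe of headers rather than the compact wildcard representation the paper works with), the sketch below groups headers by the set of rules that match them; each non-empty group is an atom of the generated field of sets. All names and the toy rules are illustrative, not taken from the paper.

        # Illustrative sketch: atoms of the field of sets generated by rule match-sets,
        # computed on an explicit finite header universe (hypothetical toy rules).
        from collections import defaultdict

        def atoms(universe, rule_sets):
            """Group elements of `universe` by their membership signature across
            `rule_sets`; each non-empty group is an atom of the generated field."""
            groups = defaultdict(set)
            for x in universe:
                signature = frozenset(i for i, s in enumerate(rule_sets) if x in s)
                groups[signature].add(x)
            return list(groups.values())

        # Toy example: 4-bit headers, three overlapping "rules" given as explicit sets.
        universe = range(16)
        r1 = {h for h in universe if h & 0b1000}                 # like "1***"
        r2 = {h for h in universe if h & 0b0001}                 # like "***1"
        r3 = {h for h in universe if (h & 0b1001) == 0b1001}     # like "1**1"
        for atom in atoms(universe, [r1, r2, r3]):
            print(sorted(atom))                                  # header classes with same behavior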

    OptSample: A Resilient Buffer Management Policy for Robotic Systems based on Optimal Message Sampling

    Full text link
    Modern robotic systems have become an alternative to humans for risky or exhausting tasks. In such application scenarios, communication between robots and the control center has become one of the major problems. Buffering is a commonly used solution to relieve temporary network disruption. But the assumption that newer messages are more valuable than older ones does not hold for many application scenarios such as exploration, rescue operations, and surveillance. In this paper, we propose a novel resilient buffer management policy named OptSample. It uniformly samples messages and dynamically adjusts the sample rate based on the run-time network situation. We define an evaluation function to estimate the profit of a message sequence. Based on this function, our analysis and simulations show that the OptSample policy can effectively prevent the loss of long segments of continuous messages and improve the overall profit of the received messages. We implement the proposed policy in ROS. The implementation is transparent to users, and no user code needs to be changed. Experimental results on several application scenarios show that the OptSample policy can help robotic systems be more resilient against network disruption.
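
    One plausible reading of "uniformly sampling messages while dynamically adjusting the sample rate" is a decimating buffer: when it fills, it keeps every other stored message and doubles its sampling stride, so the retained messages stay roughly evenly spread over the whole disruption. The sketch below implements that reading; it is not the paper's actual OptSample algorithm, and the class name is hypothetical.

        # Hedged sketch of a uniform-decimation buffer in the spirit of the abstract
        # (not the published OptSample policy itself).
        class DecimatingBuffer:
            def __init__(self, capacity):
                self.capacity = capacity
                self.stride = 1          # admit every `stride`-th incoming message
                self.count = 0           # messages seen since the last admitted one
                self.items = []

            def push(self, msg):
                self.count += 1
                if self.count < self.stride:
                    return               # skipped by the current sample rate
                self.count = 0
                if len(self.items) >= self.capacity:
                    self.items = self.items[::2]   # drop every other kept message
                    self.stride *= 2               # halve the future sample rate
                self.items.append(msg)

            def drain(self):
                out, self.items = self.items, []
                self.stride, self.count = 1, 0
                return out

        buf = DecimatingBuffer(capacity=8)
        for t in range(100):             # simulate a long network disruption
            buf.push(f"msg-{t}")
        print(buf.drain())               # kept messages span the whole interval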

    On the Complexity of Local Graph Transformations

    Get PDF
    We consider the problem of transforming a given graph G_s into a desired graph G_t by applying a minimum number of primitives from a particular set of local graph transformation primitives. These primitives are local in the sense that each node can apply them based on local knowledge and by affecting only its 1-neighborhood. Although the specific set of primitives we consider makes it possible to transform any (weakly) connected graph into any other (weakly) connected graph consisting of the same nodes, they cannot disconnect the graph or introduce new nodes into the graph, making them ideal in the context of supervised overlay network transformations. We prove that computing a minimum sequence of primitive applications (even centrally) for arbitrary G_s and G_t is NP-hard, and we conjecture that this holds for any set of local graph transformation primitives satisfying the aforementioned properties. On the other hand, we show that this problem admits a polynomial-time algorithm with a constant approximation ratio.
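
    The abstract does not name the concrete primitives, so the sketch below shows a hypothetical local primitive with the stated properties: an edge delegation in which node u hands its edge {u, w} to a neighbour v. It touches only u's 1-neighborhood and cannot disconnect the graph, since u still reaches w via v; it is an illustration of the setting, not the paper's primitive set.

        # Hypothetical connectivity-preserving local primitive (illustration only).
        from collections import defaultdict

        def delegate(adj, u, v, w):
            """Replace edge {u, w} by {v, w}, where v and w are both neighbours of u."""
            assert v in adj[u] and w in adj[u] and v != w, "delegation must stay local to u"
            adj[u].discard(w); adj[w].discard(u)
            adj[v].add(w);     adj[w].add(v)

        # Toy undirected graph as an adjacency map of sets.
        adj = defaultdict(set)
        for a, b in [(1, 2), (1, 3), (1, 4)]:
            adj[a].add(b); adj[b].add(a)

        delegate(adj, 1, 2, 3)   # the star centred at node 1 starts turning into a path
        print({k: sorted(v) for k, v in adj.items()})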

    IoTBeholder: A Privacy Snooping Attack on User Habitual Behaviors from Smart Home Wi-Fi Traffic

    Get PDF
    With the deployment of a growing number of smart home IoT devices, privacy leakage has become a growing concern. Prior work on privacy-invasive device localization, classification, and activity identification has demonstrated various privacy leakage risks in smart home environments. However, it only poses limited threats in the real world due to many impractical assumptions, such as having privileged access to the user's home network. In this paper, we identify a new end-to-end attack surface using IoTBeholder, a system that performs device localization, classification, and user activity identification. IoTBeholder can be easily run and replicated on commercial off-the-shelf (COTS) devices such as mobile phones or personal computers, enabling attackers to infer users' habitual behaviors from smart home Wi-Fi traffic alone. We set up a testbed with 23 IoT devices for evaluation in the real world. The results show that IoTBeholder achieves good device classification and device activity identification performance. In addition, IoTBeholder can infer users' habitual behaviors and automation rules with high accuracy and interpretability. It can even accurately predict users' future actions, highlighting a significant threat to user privacy that IoT vendors and users should be highly concerned about.
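
    As a generic illustration of traffic-based device classification (not IoTBeholder's actual pipeline), the sketch below summarises each captured burst of Wi-Fi frames by simple size and timing statistics and trains an off-the-shelf classifier. The feature choices, labels, and sample bursts are all hypothetical.

        # Generic traffic-classification sketch; features and labels are made up for illustration.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def burst_features(sizes, times):
            """Feature vector for one burst: size statistics plus mean inter-arrival gap."""
            gaps = np.diff(times) if len(times) > 1 else np.array([0.0])
            return [np.mean(sizes), np.std(sizes), np.max(sizes), len(sizes), np.mean(gaps)]

        # Hypothetical labelled bursts: (frame sizes in bytes, timestamps in s, device label).
        bursts = [
            ([96, 128, 96],      [0.00, 0.02, 0.05], "camera"),
            ([1400, 1400, 1400], [0.00, 0.01, 0.02], "camera"),
            ([60, 60],           [0.0, 1.0],         "plug"),
            ([64, 60, 64],       [0.0, 0.9, 2.1],    "plug"),
        ]
        X = [burst_features(s, t) for s, t, _ in bursts]
        y = [label for _, _, label in bursts]

        clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
        print(clf.predict([burst_features([1400, 1380, 1400], [0.0, 0.01, 0.03])]))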

    Using Genetic Programming to Build Self-Adaptivity into Software-Defined Networks

    Full text link
    Self-adaptation solutions need to periodically monitor, reason about, and adapt a running system. The adaptation step involves generating an adaptation strategy and applying it to the running system whenever an anomaly arises. In this article, we argue that, rather than generating individual adaptation strategies, the goal should be to adapt the control logic of the running system in such a way that the system itself learns how to steer clear of future anomalies, without triggering self-adaptation too frequently. While the need for adaptation is never eliminated, especially given the uncertain and evolving environment of complex systems, reducing the frequency of adaptation interventions is advantageous for various reasons, e.g., to increase performance and to make a running system more robust. We instantiate and empirically examine the above idea for software-defined networking -- a key enabling technology for modern data centres and Internet of Things applications. Using genetic programming (GP), we propose a self-adaptation solution that continuously learns and updates the control constructs in the data-forwarding logic of a software-defined network. Our evaluation, performed using open-source synthetic and industrial data, indicates that, compared to a baseline adaptation technique that attempts to generate individual adaptations, our GP-based approach is more effective in resolving network congestion and, further, reduces the frequency of adaptation interventions over time. In addition, we show that, for networks with the same topology, reusing on larger networks the knowledge learned on smaller networks leads to significant improvements in the performance of our GP-based adaptation approach. Finally, we compare our approach against a standard data-forwarding algorithm from the network literature, demonstrating that our approach significantly reduces packet loss.
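
    To make the GP idea concrete, the sketch below evolves a small expression tree that scores candidate links from two hypothetical metrics, `util` and `queue`, so that congested links are avoided. It is a generic, mutation-only evolutionary sketch under made-up scenarios and fitness, not the paper's tool, fitness function, or control constructs.

        # Generic GP-style sketch (mutation only, for brevity); metrics and fitness are hypothetical.
        import random, operator

        OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
        TERMS = ["util", "queue", 0.5, 1.0, 2.0]

        def rand_tree(depth=3):
            if depth == 0 or random.random() < 0.3:
                return random.choice(TERMS)
            return (random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1))

        def evaluate(tree, env):
            if isinstance(tree, tuple):
                op, a, b = tree
                return OPS[op](evaluate(a, env), evaluate(b, env))
            return env[tree] if isinstance(tree, str) else tree

        def fitness(tree, scenarios):
            # Lower is better: penalise choosing links that are congested.
            cost = 0.0
            for links in scenarios:
                chosen = min(links, key=lambda link: evaluate(tree, link))
                cost += chosen["util"] + chosen["queue"]
            return cost

        def mutate(tree):
            if not isinstance(tree, tuple) or random.random() < 0.3:
                return rand_tree(2)              # replace this subtree with a fresh one
            op, a, b = tree
            return (op, mutate(a), mutate(b))

        random.seed(0)
        scenarios = [[{"util": random.random(), "queue": random.random()} for _ in range(4)]
                     for _ in range(20)]
        pop = [rand_tree() for _ in range(40)]
        for _ in range(30):                      # simple (mu + lambda)-style loop
            pop.sort(key=lambda t: fitness(t, scenarios))
            pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]
        best = min(pop, key=lambda t: fitness(t, scenarios))
        print(best, fitness(best, scenarios))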

    Deux défis des Réseaux Logiciels : Relayage par le Nom et Vérification des Tables

    Get PDF
    The Internet has changed the lives of network users: not only does it affect users' habits, but it is also increasingly shaped by network users' behavior. Several new services have been introduced during the past decades (e.g., file sharing, video streaming, cloud computing) to meet users' expectations. As a consequence, although the Internet infrastructure provides a good best-effort service for exchanging information in a point-to-point fashion, this is no longer the principal need of today's users. Current networks require some major architectural changes in order to meet the upcoming requirements, but the experience of the past decades shows that bringing new features to the existing infrastructure may be slow. In this thesis work, we identify two main aspects of the Internet evolution: a "behavioral" aspect, which refers to a change in the way users interact with the network, and a "structural" aspect, related to the evolution problem from an architectural point of view. The behavioral perspective states that there is a mismatch between the usage of the network and the actual functions it provides. While network devices implement the simple primitives of sending and receiving generic packets, users are really interested in different primitives, such as retrieving or consuming content. The structural perspective suggests that the problem of the slow evolution of the Internet infrastructure lies in its architectural design, which has been shown to be hardly upgradeable. On the one hand, to address the new network usage, the research community proposed the Named-Data Networking (NDN) paradigm, which brings content-based functionalities to network devices. On the other hand, Software-Defined Networking (SDN) can be adopted to simplify the architectural evolution and shorten the upgrade time thanks to its centralized software control plane, at the cost of a higher network complexity that can easily introduce bugs. SDN verification is a novel research direction aiming to check the consistency and safety of network configurations by providing formal or empirical validation. The thesis consists of two parts. In the first part, we focus on the behavioral aspect by presenting the design and evaluation of "Caesar", a content router that advances the state of the art by implementing content-based functionalities that can coexist with real network environments. In the second part, we target network misconfiguration diagnosis, and we present a framework for the analysis of the network topology and forwarding tables, which can be used to detect the presence of a loop in real time and in real network environments.

    [Translated from French] This thesis addresses problems related to two major aspects of the evolution of the Internet: the "behavioral" aspect, which corresponds to the new interactions between users and the network, and the "structural" aspect, related to the changes of the Internet from an architectural point of view. The manuscript is composed of an introductory chapter that outlines the main research directions of this thesis work, followed by a chapter devoted to the state of the art on the two aspects mentioned above. Among the solutions proposed by the scientific community to adapt to the evolution of the Internet, two new network paradigms are described in particular: Information-Centric Networking (ICN) and Software-Defined Networking (SDN). The thesis continues with the proposal of "Caesar", a network device inspired by ICN that manages content distribution using routing primitives based on the names of data rather than the addresses of servers. Caesar is presented in two chapters, which describe the architecture and two of its main modules: forwarding and request traceability management. The rest of the manuscript describes a mathematical tool for the efficient detection of loops in an SDN network from a theoretical point of view, and discusses the improvements of the proposed algorithm with respect to the state of the art. The thesis concludes with a summary of the main results obtained and a presentation of ongoing and future work.
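
    As a hedged sketch of the kind of per-class loop check discussed in the second part (not the thesis' actual algorithm), the example below follows next hops through a toy forwarding table, class by class, and reports a loop when a node repeats. Table entries, class names, and node names are hypothetical.

        # Hedged sketch of per-class forwarding-loop detection (illustration only).
        def find_loops(forwarding, classes, start_nodes):
            """forwarding[node][cls] -> next node, or None if delivered/dropped."""
            loops = []
            for cls in classes:
                for start in start_nodes:
                    node, seen = start, set()
                    while node is not None and node not in seen:
                        seen.add(node)
                        node = forwarding.get(node, {}).get(cls)
                    if node is not None:                  # revisited a node: loop found
                        loops.append((cls, start, node))
            return loops

        # Toy topology: class "web" loops between B and C, class "dns" is delivered at B.
        forwarding = {
            "A": {"web": "B", "dns": "B"},
            "B": {"web": "C", "dns": None},
            "C": {"web": "B"},
        }
        print(find_loops(forwarding, ["web", "dns"], ["A"]))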

    Recent Advances in Indoor Localization Systems and Technologies

    Get PDF
    Despite the enormous technical progress seen in the past few years, the maturity of indoor localization technologies has not yet reached the level of GNSS solutions. The 23 selected papers in this book present recent advances and new developments in indoor localization systems and technologies, propose novel or improved methods with increased performance, provide insight into various aspects of quality control, and also introduce some unorthodox positioning methods.

    Tätigkeitsbericht 2017-2019/20

    Get PDF