13 research outputs found

    QoS based aggregation in high speed IEEE802.11 wireless networks

    We propose a novel frame aggregation algorithm with a statistical delay guarantee for high-speed IEEE 802.11 networks that accounts for link quality fluctuations. We use the concept of effective capacity to formulate frame aggregation with QoS guarantees as an optimization problem, where the QoS guarantee takes the form of a target delay bound and a violation probability. We apply suitable approximations to derive a simple formulation, which is solved using a Proportional-Integral-Derivative (PID) controller. The proposed PID aggregation algorithm independently adapts the time allowance for each link, and it needs to be implemented only at the Access Point (AP), without requiring any change to the 802.11 Medium Access Control (MAC). More importantly, the aggregator does not rely on any physical layer or channel information: it uses only queue-level metrics, such as average queue length and link utilization, to tune the time allowance. NS-3 simulations show that our proposed scheme outperforms Earliest Deadline First (EDF) scheduling with maximum aggregation size and pure deadline-based aggregation, both in terms of the maximum number of supported stations and channel efficiency.
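    The core idea above, a PID controller driven purely by queue-level metrics that tunes each link's time allowance, can be sketched as follows. The gains, the queue-length setpoint, and the error signal are illustrative assumptions, not the paper's tuned values.

    ```python
    class PIDTimeAllowance:
        """Per-link PID controller; assumed gains and setpoint for illustration."""

        def __init__(self, kp=0.5, ki=0.1, kd=0.05, target_queue=10.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.target_queue = target_queue   # desired average queue length (frames)
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, avg_queue_len, allowance):
            """Return the new time allowance (ms) for one link."""
            error = avg_queue_len - self.target_queue
            self.integral += error
            derivative = error - self.prev_error
            self.prev_error = error
            # Longer-than-target queues earn more airtime; shorter queues, less.
            delta = self.kp * error + self.ki * self.integral + self.kd * derivative
            return max(0.0, allowance + delta)

    pid = PIDTimeAllowance()
    allowance = 2.0                            # hypothetical initial allowance in ms
    allowance = pid.update(avg_queue_len=14.0, allowance=allowance)
    ```

    Because the controller sees only the queue, the same loop works unchanged whether the channel degrades or the traffic load grows: both manifest as a queue-length error.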

    Anomaly detection in microservice environments using distributed tracing data analysis and NLP

    In recent years, DevOps and agile approaches such as microservice architectures and Continuous Integration have become extremely popular, given the growing need for flexible and scalable solutions. However, several factors, such as their distribution across the network, the use of heterogeneous technologies, and their short lifetimes, make microservices prone to anomalous system behaviour. In addition, owing to the high degree of complexity of these small services, it is difficult to adequately monitor the security and behaviour of microservice environments. In this work, we propose an NLP (natural language processing) based approach to detect performance anomalies in the spans of a given trace and to locate release-over-release regressions. Notably, the system needs no prior knowledge, which facilitates the collection of training data. Our approach uses distributed tracing data to collect the sequences of events that occur during spans. Extensive experiments on real datasets demonstrate that the proposed method achieves an F-score of 0.9759. The results also show that, in addition to detecting anomalies and release-over-release regressions, our approach speeds up root cause analysis through visualization tools implemented in Trace Compass.
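    The essence of the approach, treating the events inside a span as a token sequence and flagging spans that are improbable under a model trained on normal traces, can be illustrated with a toy stand-in. A bigram count model replaces the paper's NLP model here, and the event names are hypothetical.

    ```python
    from collections import Counter

    def train_bigrams(spans):
        """Count event-to-event transitions over normal (anomaly-free) spans."""
        counts, totals = Counter(), Counter()
        for span in spans:
            for a, b in zip(span, span[1:]):
                counts[(a, b)] += 1
                totals[a] += 1
        return counts, totals

    def span_score(span, counts, totals):
        """Mean transition probability; low scores suggest an anomalous span."""
        probs = [counts[(a, b)] / totals[a] if totals[a] else 0.0
                 for a, b in zip(span, span[1:])]
        return sum(probs) / max(len(probs), 1)

    # Hypothetical training data: 50 identical, well-behaved spans.
    normal = [["recv", "auth", "query", "reply"]] * 50
    counts, totals = train_bigrams(normal)

    ok = span_score(["recv", "auth", "query", "reply"], counts, totals)
    odd = span_score(["recv", "query", "auth", "reply"], counts, totals)
    assert ok > odd   # the never-seen event ordering scores lower
    ```

    A real system would use a learned sequence model rather than raw counts, but the scoring principle, "how likely is this ordering of events under normal behaviour", is the same.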

    On improving deep learning trace analysis with system call arguments

    Kernel traces are sequences of low-level events, each comprising a name and multiple arguments, including a timestamp, a process id, and, depending on the event, a return value. Their analysis helps uncover intrusions, identify bugs, and find latency causes. However, its effectiveness is hindered when the event arguments are omitted. To remedy this limitation, we introduce a general approach to learning a representation of event names together with their arguments, using both embedding and encoding. The proposed method is readily applicable to most neural networks and is task-agnostic. The benefit is quantified through an ablation study on three groups of arguments: call-related, process-related, and time-related. Experiments were conducted on a novel web request dataset and validated on a second dataset collected on pre-production servers by Ciena, our partnering company. By leveraging this additional information, we increased the performance of two widely used neural networks, an LSTM and a Transformer, by up to 11.3% on two unsupervised language modelling tasks. Such tasks may be used to detect anomalies, pre-train neural networks to improve their performance, and extract a contextual representation of the events. Comment: 11 pages, 11 figures, IEEE/ACM MSR 202
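    The representation described, an embedding of the event name concatenated with encoded argument features, can be sketched as below. The embedding stand-in, the feature transforms, and the dimensions are assumptions for illustration; the paper learns these jointly with the downstream network.

    ```python
    import math
    import random

    random.seed(0)
    EMB_DIM = 4
    _name_table = {}

    def name_embedding(name):
        # Stand-in for a learned embedding: one fixed random vector per event name.
        if name not in _name_table:
            _name_table[name] = [random.uniform(-1, 1) for _ in range(EMB_DIM)]
        return _name_table[name]

    def encode_event(name, timestamp, pid, ret):
        """One vector per event: name embedding + encoded argument features."""
        args = [
            math.log1p(timestamp),       # time-related: compress the timestamp scale
            (pid % 100) / 100.0,         # process-related: bounded process-id feature
            1.0 if ret >= 0 else -1.0,   # call-related: success/failure of the call
        ]
        return name_embedding(name) + args

    vec = encode_event("sys_read", timestamp=1024.0, pid=4321, ret=0)
    assert len(vec) == EMB_DIM + 3
    ```

    Feeding such vectors, instead of event names alone, to an LSTM or Transformer is what lets the model exploit the call-, process-, and time-related argument groups studied in the ablation.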

    Delay sensitive resource allocation over high speed IEEE802.11 wireless LANs

    We present a novel resource allocation framework based on frame aggregation for providing a statistical Quality of Service (QoS) guarantee in high-speed IEEE 802.11 Wireless Local Area Networks. Accounting for link quality fluctuations through the concept of effective capacity, we formulate an optimization problem for resource allocation with QoS guarantees, expressed in terms of a target delay bound and a delay violation probability. Our objective is for the access point to schedule downlinks at minimum resource usage, i.e., total time allowance, while their QoS is satisfied. For implementation simplicity, we then consider a surrogate optimization problem based on a few accurate queuing model approximations. We propose a novel metric that qualitatively captures the surplus resource provisioning for a particular statistical delay guarantee, and using this metric, we devise a simple-to-implement Proportional-Integral-Derivative (PID) controller that achieves the optimal frame aggregation size according to the time allowance. The proposed PID algorithm independently adapts the time allowance for each link, and it is implemented only at the Access Point, without requiring any changes to the IEEE 802.11 Medium Access Control layer. More importantly, our resource allocation algorithm does not rely on any channel state information, as it uses only queue-level information, such as the average queue length and link utilization. Via NS-3 simulations as well as real test-bed experiments, in which the algorithm runs on commodity IEEE 802.11 devices, we demonstrate that the proposed scheme outperforms Earliest Deadline First (EDF) scheduling with maximum aggregation size and pure deadline-based schemes by 10–30%, both in terms of the maximum number of stations and channel efficiency. These results are also verified against analytical results obtained from a queuing-model-based approximation of the system. Applying actual video traffic from HD MPEG4 streams in both simulations and real test-bed experiments, we also show that our proposed algorithm improves the quality of video streaming over a wireless LAN and outperforms EDF and deadline-based schemes in terms of the video quality metric, Peak Signal-to-Noise Ratio (PSNR).
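    The video results are reported in PSNR. For reference, PSNR for 8-bit samples is 10·log10(MAX²/MSE); the tiny "frames" below are illustrative flattened pixel arrays, not data from the paper's experiments.

    ```python
    import math

    def psnr(reference, received, max_val=255):
        """Peak Signal-to-Noise Ratio in dB between two equal-length pixel arrays."""
        mse = sum((a - b) ** 2 for a, b in zip(reference, received)) / len(reference)
        if mse == 0:
            return float("inf")        # identical frames: no distortion
        return 10 * math.log10(max_val ** 2 / mse)

    ref = [52, 55, 61, 59]             # hypothetical transmitted pixels
    rx  = [50, 55, 63, 59]             # two pixels corrupted in transit
    quality = psnr(ref, rx)
    ```

    Higher PSNR means the received video is closer to the transmitted stream, which is why a scheduler that loses fewer frames to deadline violations scores better on this metric.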