441 research outputs found

    A Study over Registration Server System Simulation

    This paper continues my study of the registration server system, using the real-time simulation application created previously for my working product, T-Mobile Digits' registration server system: an enterprise-level solution that resembles Skype for Business but with a sizable testing user pool. A standard system design includes the hardware infrastructure, the computational logic, and its assigned rules and configurations, and, as in any complex system, a well-designed server structure is the kernel for both testing and commercial purposes. The challenges are real and crucial to business success, alongside concerns about access capacity and security. The paper begins with a discussion of the server-side architecture and the current functional workflows. The project currently faces stalling issues in the registration system whenever automation tests are deployed or pressure tests are run. The project baseline builds on my previous study, the current study after the architecture refactor, and the enterprise server reporting tool Splunk. I will formulate a new hypothesis in the form of a mathematical model for the new architecture and reuse most of the simulation skeleton developed last semester, introducing new variables and a new model for performance comparison. This project finalizes last semester's study and evaluates server performance under the new architecture. I will also compare performance before and after the structure-level refactor of the server architecture, aiming to provide system architects and other stakeholders with a comparison that helps them explore possible improvements to the current registration server system. The ultimate goal of the study remains the same: to analyze the current problematic flows and improve the product, and to make theoretical suggestions for bettering the workflow and logic structure of the registration server system so that the server is more resilient to automation tests and malicious attacks.
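    As a minimal illustration of the kind of simulation this abstract describes, the sketch below models a registration server as a single queue served by a fixed worker pool and measures registration latency under a configurable arrival rate (for example, a burst produced by automation or pressure tests). All parameters (arrival rate, service rate, pool size) are hypothetical placeholders, not values from the actual T-Mobile Digits system.

import heapq
import random

def simulate_registration(arrival_rate, service_rate, workers, n_requests, seed=1):
    """Toy queueing simulation of a registration server.

    Returns the mean and 95th-percentile registration latency in seconds.
    Rates are requests per second; all numbers are illustrative only.
    """
    rng = random.Random(seed)
    # Poisson arrivals.
    t, arrivals = 0.0, []
    for _ in range(n_requests):
        t += rng.expovariate(arrival_rate)
        arrivals.append(t)
    # Each worker becomes free at the time stored in this min-heap.
    free_at = [0.0] * workers
    heapq.heapify(free_at)
    latencies = []
    for arrival in arrivals:
        worker_free = heapq.heappop(free_at)
        start = max(arrival, worker_free)            # wait if all workers are busy
        finish = start + rng.expovariate(service_rate)
        heapq.heappush(free_at, finish)
        latencies.append(finish - arrival)
    latencies.sort()
    return (sum(latencies) / len(latencies),
            latencies[int(0.95 * len(latencies))])

if __name__ == "__main__":
    # Compare a nominal load against a pressure-test burst (hypothetical numbers).
    for rate in (50.0, 200.0):
        mean, p95 = simulate_registration(rate, service_rate=10.0, workers=8,
                                          n_requests=20000)
        print(f"arrival rate {rate:6.1f}/s  mean latency {mean:.3f}s  p95 {p95:.3f}s")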

    Packet level measurement over wireless access

    PhD thesis. Performance measurement of IP packet networks mainly comprises monitoring network performance in terms of packet losses and delays. Used appropriately, these network parameters (delay, loss, bandwidth, etc.) indicate the performance status of the network and can be used for fault and performance monitoring, network provisioning, and traffic engineering. Globally, there is a growing need for accurate network measurement to support the commercial use of IP networks. In wireless networks, transmission losses and communication delays strongly affect performance. Compared to wired networks, wireless networks experience higher levels of data dropout and corruption due to channel fading, noise, interference and mobility. Performance monitoring is a vital element in the commercial future of broadband packet networking, and the ability to guarantee quality of service in such networks is implicit in Service Level Agreements. Active measurements are performed by injecting probes and are widely used to determine end-to-end performance. End-to-end delay in wired networks has been extensively investigated, and in this thesis we report on the accuracy achieved by probing for end-to-end delay over a wireless scenario. We have compared two probing techniques, Periodic and Poisson probing, and estimated the absolute error for both. The simulations have been performed for single-hop and multi-hop wireless networks. In addition to end-to-end latency, active measurements have also been performed for the packet loss rate. The simulation-based analysis has been carried out under different traffic scenarios using Poisson traffic models. We have sampled the user traffic using Periodic probing at different rates for single-hop and multi-hop wireless scenarios. Active probing becomes critical at higher values of load, forcing the network into saturation much earlier. We have evaluated the impact of monitoring overheads on the user traffic, and show that even a small amount of probing overhead in a wireless medium can cause large degradation in network performance. Although probing at a high rate provides a good estimate of the delay distribution of user traffic with large variance, there is a critical trade-off between the accuracy of measurement and the packet probing overhead. Our results suggest that active probing is strongly affected by probe size, rate and pattern, traffic load, the nature of the shared medium, available bandwidth, and the burstiness of the traffic.
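    As a hedged sketch of the two probing schedules this abstract compares, the snippet below generates Periodic and Poisson probe send times, samples a hypothetical time-varying delay process at those instants, and reports the error of each probe-based mean-delay estimate against a dense-sampling ground truth. The delay model and all rates are illustrative assumptions, not the thesis's simulation setup.

import random

def periodic_probes(rate, duration):
    """Probe instants spaced exactly 1/rate seconds apart."""
    step, t, times = 1.0 / rate, 0.0, []
    while t < duration:
        times.append(t)
        t += step
    return times

def poisson_probes(rate, duration, rng):
    """Probe instants with exponentially distributed inter-probe gaps."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t >= duration:
            return times
        times.append(t)

def true_delay(t):
    """Hypothetical time-varying one-way delay in seconds (toy model)."""
    base = 0.020
    burst = 0.080 if int(t) % 10 < 2 else 0.0   # recurring congestion bursts
    return base + burst

def estimate(probe_times):
    samples = [true_delay(t) for t in probe_times]
    return sum(samples) / len(samples)

if __name__ == "__main__":
    rng = random.Random(42)
    duration, rate = 600.0, 2.0          # 10 minutes, 2 probes per second
    # Ground-truth mean delay from dense sampling of the toy process.
    dense = [true_delay(i * 0.01) for i in range(int(duration / 0.01))]
    truth = sum(dense) / len(dense)
    for name, times in (("periodic", periodic_probes(rate, duration)),
                        ("poisson", poisson_probes(rate, duration, rng))):
        est = estimate(times)
        print(f"{name:8s} estimate {est*1e3:6.2f} ms  "
              f"absolute error {abs(est - truth)*1e3:.3f} ms")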

    CWI Self-evaluation 1999-2004


    Non-Intrusive Measurement in Packet Networks and its Applications

    PhD thesis. Network measurement is becoming increasingly important as a means to assess the performance of packet networks. Network performance involves different aspects such as availability and link-failure detection, but in this thesis we focus on Quality of Service (QoS). Among the metrics used to define QoS, we are particularly interested in end-to-end delay performance. Recently, the adoption of Service Level Agreements (SLAs) between network operators and their customers has become a major driving force behind QoS measurement: measurement is necessary to produce evidence of fulfilment of the requirements specified in the SLA. Many attempts at QoS-based packet-level measurement have relied on Active Measurement, in which the properties of the end-to-end path are tested by adding test packets generated from the sending end. The main drawback of active probing is its intrusive nature, which places an extra burden on the network and has been shown to distort the measured condition of the network. The other category of network measurement is known as Passive Measurement. In contrast to Active Measurement, no test packets are injected into the network, so no intrusion is caused. The proposed applications of Passive Measurement are currently quite limited, but it may offer the potential for an entirely different perspective compared with Active Measurement. In this thesis, the objective is to develop a measurement methodology for end-to-end delay performance based on Passive Measurement. We assume that the nodes in a network domain are accessible, for example a network domain operated by a single network operator. The novel idea is to estimate the local per-hop delay distribution using a hybrid (model- and measurement-based) approach. With this approach, the measurement-data storage requirement can be greatly reduced and the overhead placed on each local node can be minimised, maintaining fast switching operation in a local switch or router. Per-hop delay distributions have been widely used to infer QoS at a single local node, but the end-to-end delay distribution is more appropriate when quantifying delays across an end-to-end path. Our approach is to capture every local node's delay distribution; the end-to-end delay distribution can then be obtained by convolving the estimated per-hop distributions. In this thesis, our algorithm is examined by comparing the proximity of the actual end-to-end delay distribution with the one estimated by our measurement method under various conditions, e.g. in the presence of Markovian or power-law traffic. Furthermore, the comparison between Active Measurement and our scheme is also studied. Network operators may find our scheme useful when measuring end-to-end delay performance. As stated earlier, our scheme has no intrusive effect. Furthermore, the measurement results at a local node can be reused to deduce the end-to-end delay behaviour of other paths, as long as that node is included in the path. Thus our scheme is more scalable than active probing.
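    A minimal sketch of the convolution step this abstract describes: given estimated per-hop delay histograms on a common bin width, the end-to-end delay distribution along a path is obtained by convolving them (assuming independence between hops). The per-hop distributions below are hypothetical placeholders, not data from the thesis.

def convolve(p, q):
    """Convolve two discrete delay distributions defined on the same bin width."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def end_to_end(hop_dists):
    """Convolve all per-hop delay distributions along a path."""
    dist = hop_dists[0]
    for hop in hop_dists[1:]:
        dist = convolve(dist, hop)
    return dist

if __name__ == "__main__":
    bin_ms = 1.0  # each bin represents 1 ms of delay
    # Hypothetical per-hop delay histograms (probability mass per 1 ms bin).
    hop1 = [0.6, 0.3, 0.1]            # mostly 0-1 ms
    hop2 = [0.2, 0.5, 0.2, 0.1]       # centred around 1-2 ms
    hop3 = [0.7, 0.2, 0.1]
    path = end_to_end([hop1, hop2, hop3])
    mean = sum(i * bin_ms * p for i, p in enumerate(path))
    print("end-to-end distribution:", [round(p, 4) for p in path])
    print(f"mean end-to-end delay ~ {mean:.2f} ms, total mass {sum(path):.4f}")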

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential in terms of supporting a broad range of complex, compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks.
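    To make one branch of the surveyed methodology concrete, the sketch below is a hedged toy example of reinforcement learning in a wireless setting: an epsilon-greedy bandit learner that picks a transmission channel and updates its quality estimates from ACK feedback. The channel qualities and parameters are invented for illustration and do not come from the article.

import random

def select_channel(rng, q_values, epsilon):
    """Epsilon-greedy action selection over candidate channels."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                         # explore
    return max(range(len(q_values)), key=q_values.__getitem__)      # exploit

if __name__ == "__main__":
    rng = random.Random(0)
    # Hypothetical per-channel success probabilities (unknown to the learner).
    true_quality = [0.35, 0.60, 0.80, 0.50]
    q_values = [0.0] * len(true_quality)   # running reward estimates
    counts = [0] * len(true_quality)
    for step in range(5000):
        ch = select_channel(rng, q_values, epsilon=0.1)
        reward = 1.0 if rng.random() < true_quality[ch] else 0.0    # ACK / no ACK
        counts[ch] += 1
        q_values[ch] += (reward - q_values[ch]) / counts[ch]        # incremental mean
    best = max(range(len(q_values)), key=q_values.__getitem__)
    print("estimated channel qualities:", [round(q, 3) for q in q_values])
    print("learner converged on channel", best)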

    Networking Architecture and Key Technologies for Human Digital Twin in Personalized Healthcare: A Comprehensive Survey

    Digital twin (DT) refers to a promising technique for digitally and accurately representing actual physical entities. One typical advantage of DT is that it can be used not only to virtually replicate a system's detailed operations but also to analyze its current condition, predict future behaviour, and refine control optimization. Although DT has been widely implemented in various fields, such as smart manufacturing and transportation, its conventional paradigm is limited to embodying non-living entities, e.g., robots and vehicles. When adopted in human-centric systems, a novel concept, called the human digital twin (HDT), has thus been proposed. In particular, HDT allows an in silico representation of an individual human body with the ability to dynamically reflect molecular status, physiological status, emotional and psychological status, as well as lifestyle evolution. This motivates the expected application of HDT in personalized healthcare (PH), where it can facilitate remote monitoring, diagnosis, prescription, surgery and rehabilitation. However, despite its large potential, HDT faces substantial research challenges in different aspects and has recently become an increasingly popular topic. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and conventional DTs, followed by the universal framework and essential functions of HDT. We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, comprising a data acquisition layer, a data communication layer, a computation layer, a data management layer, and a data analysis and decision-making layer. Besides reviewing the key technologies for implementing such a networking architecture in detail, we conclude this survey by presenting future research directions for HDT.
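    As a hedged illustration of the five-layer networking architecture this survey outlines, the sketch below wires toy placeholder stages for data acquisition, communication, computation, management, and analysis into a single pipeline. Class and function names are invented for illustration and are not part of the survey.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Sample:
    """One physiological reading acquired from a wearable sensor (toy record)."""
    sensor: str
    value: float

@dataclass
class HumanDigitalTwinPipeline:
    store: List[Sample] = field(default_factory=list)

    def acquire(self) -> List[Sample]:
        # Data acquisition layer: stand-in for wearable / implantable sensors.
        return [Sample("heart_rate", 72.0), Sample("spo2", 0.97)]

    def communicate(self, samples: List[Sample]) -> List[Sample]:
        # Data communication layer: in practice, a wireless body-area network uplink.
        return samples  # assume lossless delivery in this toy model

    def compute(self, samples: List[Sample]) -> Dict[str, float]:
        # Computation layer: derive features / update the twin's state.
        return {s.sensor: s.value for s in samples}

    def manage(self, state: Dict[str, float]) -> None:
        # Data management layer: persist the twin's state history.
        self.store.extend(Sample(k, v) for k, v in state.items())

    def analyze(self, state: Dict[str, float]) -> str:
        # Analysis and decision-making layer: trivial threshold rule as a placeholder.
        return "alert: tachycardia" if state.get("heart_rate", 0) > 100 else "status: normal"

    def run_once(self) -> str:
        state = self.compute(self.communicate(self.acquire()))
        self.manage(state)
        return self.analyze(state)

if __name__ == "__main__":
    print(HumanDigitalTwinPipeline().run_once())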