8 research outputs found

    Optimization of Handover, Survivability, Multi-Connectivity and Secure Slicing in 5G Cellular Networks using Matrix Exponential Models and Machine Learning

    Title from PDF of title page, viewed January 31, 2023. Dissertation advisor: Cory Beard. Vita. Includes bibliographical references (pages 173-194). Dissertation (Ph.D.)--Department of Computer Science and Electrical Engineering, University of Missouri--Kansas City, 2022.
    This work proposes optimization of cellular handovers, cellular network survivability modeling, multi-connectivity, and secure network slicing using matrix exponentials and machine learning techniques. We propose matrix exponential (ME) modeling of handover arrivals, with the potential to characterize arrivals much more accurately and to prioritize resource allocation for handovers, especially handovers for emergency or public safety needs. With the use of a 'B' matrix to represent a handover arrival, we have a rich set of dimensions for modeling system handover behavior. We can study multiple parameters and the interactions between system events, along with the user mobility that would trigger a handoff in any given scenario. Additionally, unlike traditional handover improvement schemes, we develop a 'Deep-Mobility' model by implementing a deep learning neural network (DLNN) to manage network mobility, utilizing in-network deep learning and prediction. We use radio and network key performance indicators (KPIs) to train our model to analyze network traffic and handover requirements. Cellular network design must incorporate disaster response, recovery, and repair scenarios. Requirements for high reliability and low latency often fail to incorporate network survivability for mission-critical and emergency services. Our matrix exponential (ME) model shows how survivable networks can be designed by controlling the number of repair crews, the times taken for individual repair stages, and the balance between fast and slow repairs. Transient and steady-state representations of system repair models, namely fast and slow repairs for networks with multiple repair crews, have been analyzed.
    Failures are modeled exponentially, as is common practice, but ME distributions describe the more complex recovery processes. In some mission-critical communications, the availability requirements may exceed five or even six nines (99.9999%). To meet such a critical requirement and minimize the impact of mobility during handover, a Fade Duration Outage Probability (FDOP) based multiple-radio-link connectivity handover method has been proposed. By applying such a method, a high degree of availability can be achieved by utilizing two or more uncorrelated links selected for minimum FDOP values. Packet duplication (PD) via multi-connectivity is a method of compensating for lost packets on a wireless channel; utilizing two or more uncorrelated links, a high degree of availability can be attained with this strategy. However, complete packet duplication is inefficient and frequently unnecessary. We provide a novel adaptive fractional packet duplication (A-FPD) mechanism for enabling and disabling packet duplication based on a variety of parameters. We have developed a 'DeepSlice' model by implementing a deep learning (DL) neural network to manage network load efficiency and network availability, utilizing in-network deep learning and prediction. Our neural network based 'Secure5G' network slicing model will proactively detect and eliminate threats from incoming connections before they infiltrate the 5G core network elements.
    These will enable network operators to sell network slicing as a service, serving diverse services efficiently over a single infrastructure with a higher level of security and reliability.
    Contents: Introduction -- Matrix exponential and deep learning neural network modeling of cellular handovers -- Survivability modeling in cellular networks -- Multi-connectivity based handover enhancement and adaptive fractional packet duplication in 5G cellular networks -- DeepSlice and Secure5G: a deep learning framework towards an efficient, reliable and secure network slicing in 5G networks -- Conclusion and future scope
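    The (alpha, B) representation of an ME distribution mentioned in the abstract can be made concrete with a minimal sketch. This is our own illustration with assumed parameters, not the dissertation's code: for a diagonal B the matrix exponential is elementwise, which gives the hyperexponential special case of an ME distribution and is enough to show how the (alpha, B) pair parameterizes handover inter-arrival times.

```python
import math

# Minimal sketch (assumed values, diagonal-B special case): an ME
# distribution with representation (alpha, B) has survival function
# S(t) = alpha * expm(B t) * 1.  With B = diag(-r1, -r2) the matrix
# exponential is elementwise, so no matrix library is needed.

alpha = [0.4, 0.6]   # initial probability vector (assumed)
rates = [2.0, 0.5]   # -diag(B): per-phase exit rates (assumed)

def survival(t):
    """S(t) = alpha * exp(B t) * 1, elementwise for diagonal B."""
    return sum(a * math.exp(-r * t) for a, r in zip(alpha, rates))

def mean():
    """E[T] = alpha * (-B)^(-1) * 1 = sum(alpha_i / rate_i) for diagonal B."""
    return sum(a / r for a, r in zip(alpha, rates))

print(round(survival(0.0), 6))   # 1.0 by construction
print(round(mean(), 6))          # 0.4/2.0 + 0.6/0.5 = 1.4
```

    A full (non-diagonal) B adds the extra modeling dimensions the dissertation exploits, at the cost of needing a general matrix exponential.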

    Healthcare seeking behaviour as a link between tuberculosis and socioeconomic factors

    Socioeconomic barriers to tuberculosis care-seeking, and the costs incurred by care-seeking, lead to unfavourable treatment, epidemiological, and economic outcomes. Especially in the post-2015 era, socioeconomic interventions for tuberculosis control are receiving increasing attention. In Taiwan, the National Health Insurance programme minimises out-of-pocket expenses for patients, but important delays to tuberculosis treatment still exist. Based on the population and tuberculosis epidemiology in Taiwan, I develop an analysis profiling the efficacy of tuberculosis care provision and patients' care-seeking pathways. The results highlight that interrupted tuberculosis evaluation processes and low diagnostic capacity in small local hospitals stand as key causes of extended delays to treatment, unfavourable outcomes, and costs. I analyse socioeconomic status (SES) in terms of employment, vulnerability, and residential context to identify risk factors for different aspects of care-seeking. To link the care-seeking pathways to the nationwide tuberculosis epidemiology, I develop a data-driven hybrid simulation model. The model integrates the advantages of agent-based approaches in representing detail with those of equation-based approaches in simplicity and low computational cost. This approach makes Monte-Carlo experiments feasible for robust inference without over-simplifying the care-seeking details of interest. By comparing the hybrid model simulations with a corresponding equation-based comparator, I confirm its validity. I consider interventions to improve universal health coverage by decentralising tuberculosis diagnostic capacity, and model specific interventions that increase the coverage of tuberculosis diagnostic capacity under various SES-targeted scale-up strategies. These show potential benefits in reducing dropouts and the tuberculosis burden, without significant increases in the inequality of care-seeking costs.
    I suggest considering additional SES variables, such as education, health illiteracy, and social segregation, to find other care-seeking barriers. Further investigation of SES-related interventions against tuberculosis, including formal impact and health economic evaluation, should be pursued in collaboration with policymakers able to advise on feasibility and patients able to advise on acceptability.
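    The agent-level care-seeking detail that an agent-based layer preserves can be hinted at with a toy Monte-Carlo sketch. This is a hypothetical illustration with assumed parameters, not the thesis's model: each provider visit diagnoses with probability p (the facility's diagnostic capacity), otherwise the evaluation is interrupted and the patient re-seeks care; an equation-based layer would track only the aggregate counts.

```python
import random

# Hypothetical sketch (assumed parameters): a per-patient care-seeking
# loop, the kind of detail an agent-based model keeps explicitly.

def visits_until_diagnosis(p, rng):
    """Number of provider visits until diagnosis succeeds."""
    visits = 1
    while rng.random() >= p:   # evaluation interrupted; patient re-seeks care
        visits += 1
    return visits

rng = random.Random(42)
p = 0.5                        # assumed per-visit diagnostic capacity
samples = [visits_until_diagnosis(p, rng) for _ in range(10_000)]
mean_visits = sum(samples) / len(samples)

# Validity check analogous to comparing the hybrid model against its
# equation-based comparator: the analytic (geometric) mean is 1/p.
print(round(mean_visits, 2))
```

    Raising p at small local facilities directly shortens this loop, which is the mechanism behind the decentralisation interventions the abstract describes.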

    Workload Modeling for Computer Systems Performance Evaluation


    Efficient duration and hierarchical modeling for human activity recognition

    A challenge in building pervasive and smart spaces is to learn and recognize human activities of daily living (ADLs). In this paper, we address this problem and argue that in dealing with ADLs, it is beneficial to exploit both their typical duration patterns and their inherent hierarchical structures. We exploit efficient duration modeling using the novel Coxian distribution to form the Coxian hidden semi-Markov model (CxHSMM) and apply it to the problem of learning and recognizing ADLs with complex temporal dependencies. The Coxian duration model has several advantages over existing duration parameterizations using multinomial or exponential-family distributions, including its denseness in the space of non-negative distributions, low number of parameters, computational efficiency, and the existence of closed-form estimation solutions. Further, we combine both the hierarchical and duration extensions of the hidden Markov model (HMM) to form the novel switching hidden semi-Markov model (SHSMM), and empirically compare its performance with existing models. The model can learn what an occupant normally does during the day from unsegmented training data and then perform online activity classification, segmentation, and abnormality detection. Experimental results show that Coxian modeling outperforms a range of baseline models for the task of activity segmentation. We also achieve a recognition accuracy competitive with the current state-of-the-art multinomial duration model, while gaining a significant reduction in computation. Furthermore, cross-validation model selection on the number of phases K in the Coxian indicates that only a small K is required to achieve optimal performance. Finally, our models are further tested in a more challenging setting in which tracking is often lost and the activities considerably overlap.
    With a small amount of labels supplied during training in a partially supervised learning mode, our models again deliver reliable performance, again with a small number of phases, making our proposed framework an attractive choice for activity modeling.
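    As a hedged illustration of why the Coxian is parameter-light, the following sketch (assumed rates and absorption probabilities, not the paper's fitted values) samples durations from a K-phase Coxian and checks the sample mean against its closed form. A K-phase Coxian needs only 2K-1 parameters, versus one per time bin for a multinomial duration model.

```python
import random

# Illustrative sketch (assumed parameters): phase i holds for an
# Exp(rates[i]) time, then absorbs with probability absorb[i] or
# moves on to phase i+1.

rates = [1.0, 2.0, 4.0]    # per-phase exit rates (assumed)
absorb = [0.3, 0.5, 1.0]   # absorption probabilities; last phase must absorb

def sample_coxian(rng):
    """Draw one duration from the 3-phase Coxian above."""
    t = 0.0
    for rate, p in zip(rates, absorb):
        t += rng.expovariate(rate)
        if rng.random() < p:
            break
    return t

def coxian_mean():
    """Closed-form mean: sum over phases of P(reach phase i) / rates[i]."""
    reach, mean = 1.0, 0.0
    for rate, p in zip(rates, absorb):
        mean += reach / rate
        reach *= (1.0 - p)
    return mean

rng = random.Random(0)
est = sum(sample_coxian(rng) for _ in range(20_000)) / 20_000
print(round(coxian_mean(), 4))   # 1/1 + 0.7/2 + 0.35/4 = 1.4375
```

    The existence of such closed forms is one of the estimation advantages the abstract attributes to the Coxian parameterization.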

    Performance Evaluation for Hybrid Architectures

    In this dissertation we discuss methodologies for estimating the performance of applications on hybrid architectures, systems that include various types of computing resources (e.g. traditional general-purpose processors, chip multiprocessors, reconfigurable hardware). A common use of hybrid architectures is to deploy coarse pipeline stages of an application on suitable compute units, with communication paths for transferring data between them. The first problem we focus on relates to sizing the data queues between the different processing elements of a hybrid system. Much of the discussion centers on our analytical models, which can be used to derive performance metrics of interest, such as throughput and stalling probability, for networks of processing elements with finite data buffering between them. We then turn to the reliability of performance models. We start by presenting scenarios where our analytical model is reliable, and introduce tests that can detect its inapplicability. Continuing the question of reliability, we assess the accuracy and applicability of various evaluation methods. We present results from our experiments to show the need for measuring and accounting for operating system effects in architectural modeling and estimation.
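    The throughput and stalling-probability metrics mentioned above can be illustrated with the textbook M/M/1/K queue standing in for a single pipeline stage with finite buffering. This is our own hedged sketch, not the dissertation's analytical model: P_block is the probability an arriving datum finds the stage full (so the upstream element stalls), and effective throughput is the unblocked arrival rate.

```python
# Hedged sketch (assumed rates): classical M/M/1/K results for one
# pipeline stage, where K is the total capacity (queue + in service),
# lam the arrival rate, and mu the service rate.

def mm1k_metrics(lam, mu, K):
    """Return (blocking probability, effective throughput) for M/M/1/K."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p_block = 1.0 / (K + 1)                       # balanced-load limit
    else:
        p_block = (1 - rho) * rho**K / (1 - rho**(K + 1))
    return p_block, lam * (1 - p_block)

p_block, throughput = mm1k_metrics(lam=0.8, mu=1.0, K=4)
print(round(p_block, 4), round(throughput, 4))
```

    Sweeping K in such a model is one simple way to reason about buffer sizing: enlarging the buffer drives the stalling probability down, with diminishing returns.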

    Effective task assignment strategies for distributed systems under highly variable workloads

    Heavy-tailed workload distributions are commonly experienced in many areas of distributed computing. Such workloads are highly variable: a small number of very large tasks make up a large proportion of the workload, making the load very hard to distribute effectively. Traditional task assignment policies are ineffective under these conditions, as they were formulated on the assumption of an exponentially distributed workload. Size-based task assignment policies have been proposed to handle heavy-tailed workloads, but their applications are limited by their static nature and their assumption of prior knowledge of a task's service requirement. This thesis analyses existing approaches to load distribution under heavy-tailed workloads, and presents a new generalised task assignment policy that significantly improves performance for many distributed applications by intelligently addressing the negative effects that highly variable workloads cause. Many problems associated with the modelling and optimisation of systems under highly variable workloads are then addressed by a novel technique that approximates these workloads with simpler mathematical representations, without losing any of their pertinent original properties. Finally, we obtain advanced queueing metrics (such as the variance of key measures like waiting time and slowdown, which are difficult to obtain analytically) through rigorous simulation.
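    The heavy-tailed property driving this work can be demonstrated with a short sketch (assumed Pareto shape and size cutoff, purely illustrative and not the thesis's policy): a small fraction of tasks carries a disproportionate share of the load, and a size-based cutoff isolates those rare huge tasks from the many small ones.

```python
import random

# Illustrative sketch (assumed parameters): Pareto-distributed task
# sizes with shape 1.2 are highly variable -- the largest few tasks
# dominate the total work.

rng = random.Random(1)
sizes = [rng.paretovariate(1.2) for _ in range(100_000)]

sizes.sort(reverse=True)
total = sum(sizes)
top_1pct_share = sum(sizes[:1000]) / total   # load carried by largest 1%
print(round(top_1pct_share, 2))

# A size-based (SITA-style) policy routes tasks by a size cutoff so
# small tasks never queue behind the rare huge ones.
cutoff = 10.0                                # assumed cutoff
short_host = [s for s in sizes if s <= cutoff]
long_host = [s for s in sizes if s > cutoff]
```

    Under an exponential workload the top 1% of tasks would carry only a few percent of the load, which is why policies tuned to that assumption break down here.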