
    Stochastic modeling and time-to-event analysis of VoIP traffic

    Voice over IP (VoIP) systems are gaining popularity due to their cost effectiveness, ease of management, and enhanced features and capabilities. Both enterprises and carriers are deploying VoIP systems to replace their TDM-based legacy voice networks. However, many researchers have noted the lack of engineering models for VoIP systems, especially for large-scale networks. The purpose of traffic engineering is to minimize call blocking probability and maximize resource utilization. Current traffic engineering models are inherited from the legacy PSTN world and fall short of capturing the characteristics of modern traffic patterns. The objective of this research is to develop a traffic engineering model for modern VoIP networks. We studied the traffic on a large-scale VoIP network and collected information on several billion calls. Our analysis shows that the traditional traffic engineering approach, based on a Poisson call arrival process and exponentially distributed holding times, fails to accurately capture the behavior of modern telecommunication systems. We developed a new framework that models call arrivals as a non-homogeneous Poisson process (NHPP), and we further enhanced the model with a Gaussian approximation for heavy-traffic conditions on large-scale networks. In the second phase of the research, we followed a time-to-event survival analysis approach, modeling call holding times with a generalized gamma distribution, and introduced a Call Cease Rate function to model call durations. The Call Arrival model and the Call Holding Time model were constructed, verified, and validated using hundreds of millions of real call records collected from an operational VoIP carrier network. The traffic data is a mixture of residential, business, and wireless traffic; therefore, our proposed models can be applied to any modern telecommunication system. We also conducted sensitivity analysis of the model parameters and performed statistical tests on the robustness of the models' assumptions. We implemented the models in a new simulation-based traffic engineering system called the VoIP Traffic Engineering Simulator (VSIM), built using advanced statistical and stochastic techniques. The core of VSIM is a simulation system with two engines: an NHPP parametric simulation engine and a non-parametric simulation engine. In addition, VSIM provides several subsystems for traffic data collection, processing, statistical modeling, model parameter estimation, graph generation, and traffic prediction. VSIM can extract traffic data from a live VoIP network, process and store the extracted information, and feed it into one of the simulation engines, which in turn produces resource optimization and quality of service reports.
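    The two traffic components named in the abstract can be sketched in a few lines; the rate function and distribution parameters below are illustrative assumptions, not the paper's fitted values:

```python
"""Minimal sketch of NHPP call arrivals (via Lewis-Shedler thinning) and
generalized gamma holding times. All parameters are illustrative
assumptions, not the paper's fitted values."""
import numpy as np
from scipy.stats import gengamma

rng = np.random.default_rng(42)

def nhpp_arrivals(rate_fn, rate_max, horizon):
    """Arrival times on [0, horizon] via thinning: generate candidates at
    the constant bounding rate, keep each with prob rate_fn(t)/rate_max."""
    t, arrivals = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)   # candidate inter-arrival
        if t > horizon:
            return np.array(arrivals)
        if rng.uniform() < rate_fn(t) / rate_max:
            arrivals.append(t)

# Hypothetical diurnal rate: calls/minute peaking mid-day over 24 hours.
rate = lambda t: 50 + 40 * np.sin(np.pi * t / (24 * 60)) ** 2
arrivals = nhpp_arrivals(rate, rate_max=90.0, horizon=24 * 60)

# Holding times (minutes) from a generalized gamma; a and c are SciPy's
# shape parameters, scale is in minutes.
holding = gengamma.rvs(a=1.5, c=0.8, scale=3.0, size=arrivals.size,
                       random_state=rng)

print(f"{arrivals.size} calls, mean holding time {holding.mean():.2f} min")
```

    In heavy traffic, the count of arrivals in a window is approximately Gaussian with mean and variance equal to the integrated rate over that window, which is the basis for the Gaussian approximation the abstract mentions.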

    DeepTSP: Deep traffic state prediction model based on large-scale empirical data

    Real-time traffic state (e.g., speed) prediction is an essential component of traffic control and management in an urban road network. Building an effective large-scale traffic state prediction system is a challenging but highly valuable problem. This study focuses on constructing an effective solution for spatio-temporal data to predict the traffic state of large-scale traffic systems. We first summarize the three challenges faced by large-scale traffic state prediction: scale, granularity, and sparsity. Based on domain knowledge of traffic engineering, the propagation of traffic states along the road network is theoretically analyzed and elaborated in terms of the temporal and spatial propagation of traffic state, traffic state experience replay, and multi-source data fusion. A deep learning architecture, termed Deep Traffic State Prediction (DeepTSP), is then proposed to address these challenges. Experiments demonstrate that the proposed DeepTSP model can effectively predict large-scale traffic states.
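    The abstract does not specify the DeepTSP architecture, but the two propagation mechanisms it names can be illustrated with a generic spatio-temporal sketch; the layer sizes and overall design below are assumptions, not the paper's model:

```python
"""Generic spatio-temporal predictor sketch in PyTorch: spatial
propagation along the road graph (one-hop graph convolution with a
normalized adjacency matrix) followed by temporal propagation (a GRU per
node). An assumed illustration, not the paper's DeepTSP design."""
import torch
import torch.nn as nn

class SpatioTemporalSketch(nn.Module):
    def __init__(self, n_nodes, adj, hidden=32):
        super().__init__()
        # Row-normalized adjacency with self-loops: A_hat = D^-1 (A + I).
        a = adj + torch.eye(n_nodes)
        self.register_buffer("a_hat", a / a.sum(dim=1, keepdim=True))
        self.spatial = nn.Linear(1, hidden)      # per-node feature lift
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)         # next-step speed

    def forward(self, x):                        # x: (batch, time, nodes)
        b, t, n = x.shape
        h = self.spatial(x.unsqueeze(-1))        # (b, t, n, hidden)
        h = torch.einsum("ij,btjh->btih", self.a_hat, h)  # spatial hop
        h = h.permute(0, 2, 1, 3).reshape(b * n, t, -1)   # fold nodes
        _, last = self.temporal(h)               # last: (1, b*n, hidden)
        return self.head(last.squeeze(0)).view(b, n)      # (b, nodes)

# Toy usage: 4-node ring graph, 12 past speed observations per node.
adj = torch.tensor([[0, 1, 0, 1], [1, 0, 1, 0],
                    [0, 1, 0, 1], [1, 0, 1, 0]], dtype=torch.float)
model = SpatioTemporalSketch(4, adj)
pred = model(torch.rand(8, 12, 4))               # -> (8, 4) next speeds
```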

    The MegaM@Rt2 ECSEL project: MegaModelling at runtime – scalable model-based framework for continuous development and runtime validation of complex systems

    A major challenge for the European electronic components and systems (ECS) industry is to increase productivity and reduce costs while ensuring safety and quality. Model-Driven Engineering (MDE) principles have already shown valuable capabilities for the development of ECSs, but they still need to scale to support the real-world scenarios implied by the full deployment and use of complex electronic systems, such as cyber-physical systems and real-time systems. Moreover, maintaining efficient traceability, integration, and communication between fundamental stages of the development lifecycle (i.e., design time and runtime) is another challenge to the scalability of MDE tools and techniques. This paper presents "MegaModelling at runtime – Scalable model-based framework for continuous development and runtime validation of complex systems" (MegaM@Rt2), an ECSEL-JU project whose main goal is to address the above-mentioned challenges. Driven by both large and small industrial enterprises, with the support of research partners and technology providers, MegaM@Rt2 aims to deliver a framework of tools and methods for: (i) system engineering/design and continuous development, (ii) related runtime analysis, and (iii) global model and traceability management.

    This project has received funding from the Electronic Component Systems for European Leadership Joint Undertaking under grant agreement No. 737494. This Joint Undertaking receives support from the European Union's Horizon 2020 research and innovation programme and from Sweden, France, Spain, Italy, Finland, and the Czech Republic.

    Scaling Sparse Matrices for Optimization Algorithms

    To iteratively solve large-scale optimization problems in various contexts such as planning, operations, and design, we need to generate descent directions that are based on linear system solutions. Irrespective of the optimization algorithm or the solution method employed for the linear systems, ill conditioning introduced by problem characteristics, by the algorithm, or by both needs to be addressed. In [GL01] we used an intuitive heuristic approach to scaling linear systems that significantly improved the performance of a large-scale interior point algorithm; we saw improvements of a factor of 10^3 in condition number estimates. In this paper, given our experience with optimization problems from a variety of application backgrounds such as economics, finance, engineering, and planning, we examine the theoretical basis for scaling while solving the linear systems. Our goal is to develop reasonably "good" scaling schemes with a sound theoretical basis. We introduce concepts and define "good" scaling schemes in section (1), and review related work in this area. Scaling has been studied extensively, and though there is broad agreement on its importance, the same cannot be said about what constitutes good scaling. A theoretical framework to scale an m x n real matrix is established in section (2). We use the first-order conditions associated with the Euclidean metric to develop iterative schemes in section (2.3) that approximate the solution in O(mn) time for real matrices. We discuss symmetry-preserving scale factors for an n x n symmetric matrix in section (3). The importance of symmetry preservation is discussed in section (3.1). An algorithm to directly compute symmetry-preserving scale factors in O(n^2) time, based on the Euclidean metric, is presented in section (3.4). We also suggest scaling schemes based on the rectilinear norm in section (2.4). Though all p-norms are theoretically equivalent, the importance of outliers increases as p increases. For barrier methods, due to large diagonal corrections, we believe that the taxicab metric (p = 1) may be more appropriate. We develop a linear programming model for it and look at a "reduced" dual that can be formulated as a minimum-cost flow problem on networks. We are investigating algorithms to solve it in the O(mn) time that we require for an efficient scaling procedure. We hope that in the future the special structure of the "reduced" dual can be exploited to solve it quickly; the dual information can then be used to compute the required scale factors. We discuss the Manhattan metric for symmetric matrices in section (3.5), and as in the case of real matrices, we are unable to propose an efficient computational scheme for this metric. We look at a linearized ideal penalty function that uses only deviations outside the desired range in section (2.5). If we could use such a metric to generate an efficient solution, we would like to see the impact of changing the range on the numerical behavior.
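    The Euclidean-metric idea can be sketched concretely: choose row and column log-scales that drive the scaled log-magnitudes toward zero in a least-squares sense, where the first-order conditions reduce to simple row and column means. The following is a generic Curtis-Reid-style iteration under that assumption, not the paper's exact O(mn) algorithm:

```python
"""Alternating Euclidean-metric scaling sketch: pick row scales
r_i = 2^rho_i and column scales c_j = 2^gamma_j minimizing
sum over nonzeros of (log2|a_ij| + rho_i + gamma_j)^2. Each sweep is
O(mn) on a dense matrix. Powers of 2 keep the scaling exact in
floating point."""
import numpy as np

def euclidean_scale(A, sweeps=10):
    mask = A != 0
    logs = np.zeros_like(A, dtype=float)
    logs[mask] = np.log2(np.abs(A[mask]))
    rho = np.zeros(A.shape[0])            # row log-scales
    gamma = np.zeros(A.shape[1])          # column log-scales
    nr, nc = mask.sum(1), mask.sum(0)     # nonzeros per row / column
    for _ in range(sweeps):
        # First-order condition for rows with columns fixed, then vice versa.
        rho = -((logs + gamma) * mask).sum(1) / np.maximum(nr, 1)
        gamma = -((logs + rho[:, None]) * mask).sum(0) / np.maximum(nc, 1)
    r, c = 2.0 ** rho, 2.0 ** gamma
    return r[:, None] * A * c[None, :], r, c

A = np.array([[1e6, 2.0], [3.0, 1e-6]])
S, r, c = euclidean_scale(A)
print(np.linalg.cond(A), "->", np.linalg.cond(S))  # conditioning improves
```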

    Towards Highly Scalable Runtime Models with History

    Advanced systems such as IoT systems comprise many heterogeneous, interconnected, and autonomous entities operating in often highly dynamic environments. Due to their large scale and complexity, large volumes of monitoring data are generated and need to be stored, retrieved, and mined in a time- and resource-efficient manner. Architectural self-adaptation automates the control, orchestration, and operation of such systems. This can only be achieved via sophisticated decision-making schemes supported by monitoring data that fully captures the system behavior and its history. Employing model-driven engineering techniques, we propose a highly scalable, history-aware approach to store and retrieve monitoring data in the form of enriched runtime models. We take advantage of rule-based adaptation, where change events in the system trigger adaptation rules. We first present a scheme to incrementally check model queries, in the form of temporal logic formulas that represent the conditions of adaptation rules, against a runtime model with history. Then we enhance the model to retain only information that is temporally relevant to the queries, thereby reducing the accumulation of information to the required minimum. Finally, we demonstrate the feasibility and scalability of our approach via experiments on a simulated smart healthcare system employing a real-world medical guideline.

    Comment: 8 pages, 4 figures, 15th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2020)
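    The pruning idea generalizes well: if every adaptation rule is a bounded temporal query over change events, the runtime model only needs history younger than the largest query window. A toy sketch under that assumption (the event format and rule below are illustrative, not the paper's query language):

```python
"""History-aware runtime model sketch: bounded temporal queries such as
'alarm raised within the last 60 s' let us discard events older than the
largest query bound. Illustrative assumption, not the paper's formalism."""
from collections import deque

class HistoryAwareModel:
    def __init__(self, window_s):
        self.window_s = window_s          # largest bound over all queries
        self.events = deque()             # (timestamp, kind) pairs

    def observe(self, t, kind):
        self.events.append((t, kind))
        # Prune everything older than any query can reference.
        while self.events and self.events[0][0] < t - self.window_s:
            self.events.popleft()

    def within(self, t, kind, bound_s):
        """Check 'kind occurred within the last bound_s seconds'."""
        return any(ts >= t - bound_s and k == kind for ts, k in self.events)

model = HistoryAwareModel(window_s=60)
model.observe(10, "alarm")
model.observe(65, "heartbeat")
print(model.within(65, "alarm", 60))      # True: alarm was 55 s ago
model.observe(75, "heartbeat")            # alarm at t=10 is pruned here
print(model.within(75, "alarm", 60))      # False: outside every window
```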

    A global two-layer meta-model for response statistics in robust design optimization

    Robust design optimization (RDO) of large-scale engineering systems is computationally intensive and requires significant CPU time. Considerable computational effort is still required within conventional meta-model-assisted RDO frameworks. The primary objective of this article is to further reduce the computational requirements of meta-model-assisted RDO by developing a global two-layered approximation-based RDO technique. The meta-model in the inner layer approximates the response quantity, and the meta-model in the outer layer approximates the response statistics computed from the response meta-model. This approach eliminates both model building and Monte Carlo simulation from the optimization cycle and requires considerably fewer actual response evaluations than a single-layered approximation. To demonstrate the approach, two recently developed compressive-sensing-enabled, globally refined Kriging models are utilized. The proposed framework is applied to one test example and two real-life applications to clearly illustrate its potential to yield robust optimal solutions with minimal computational cost.
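    The two-layer structure can be sketched with off-the-shelf Gaussian processes standing in for the paper's refined Kriging models (an assumption); the point is that the RDO loop only ever queries the cheap outer meta-model:

```python
"""Two-layer surrogate sketch: the inner GP learns the response
f(design, random_var) from a few expensive evaluations; the outer GP
learns a response statistic (here the mean over the random variable,
estimated by Monte Carlo on the inner surrogate) as a function of the
design variable alone. Toy stand-in for the paper's Kriging models."""
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
f = lambda x, z: np.sin(3 * x) + 0.5 * z * x      # toy expensive response

# Inner layer: GP over (design x, random z) from 40 true evaluations.
X = rng.uniform([-1, -1], [1, 1], size=(40, 2))
inner = GaussianProcessRegressor().fit(X, f(X[:, 0], X[:, 1]))

# Outer layer: on a small design grid, estimate the mean response over z
# by Monte Carlo on the *inner surrogate*, then fit a GP to those stats.
xd = np.linspace(-1, 1, 15)
z = rng.normal(0, 0.3, size=200)
stats = [inner.predict(np.column_stack([np.full(z.size, x), z])).mean()
         for x in xd]
outer = GaussianProcessRegressor().fit(xd[:, None], stats)

# An RDO objective would now call only the cheap outer meta-model.
mean_resp, std = outer.predict(np.array([[0.25]]), return_std=True)
print(f"mean response at x=0.25: {mean_resp[0]:.3f} +/- {std[0]:.3f}")
```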

    Online calibration for simulation-based dynamic traffic assignment: towards large-scale and real-time performance

    Thesis: Ph.D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 149-152).

    The severity of traffic congestion is increasing each year in the US, resulting in higher travel times and increased energy consumption and emissions. This has led to an increasing emphasis on the development of tools for traffic management, which aims to alleviate congestion by more efficiently utilizing the existing infrastructure. Effective traffic management necessitates the generation of accurate short-term predictions of traffic states, and in this context simulation-based Dynamic Traffic Assignment (DTA) systems have gained prominence over the years. However, a key challenge that remains to be addressed with real-time DTA systems is their scalability and accuracy for applications to large-scale urban networks. A key component of real-time DTA systems that impacts scalability and accuracy is online calibration, which adjusts simulation parameters in real time to match simulated measurements as closely as possible with real-time surveillance data. This thesis contributes to the existing literature on online calibration of DTA systems in three respects: (1) explicitly modeling the stochasticity in simulators and thereby improving accuracy; (2) augmenting the State Space Model (SSM) to capture the delayed measurements on large-scale and congested networks; (3) presenting a gradient estimation procedure called partitioned simultaneous perturbation (PSP) that exploits an assumed sparse gradient structure to facilitate real-time performance. The results demonstrate that, first, the proposed approach to address stochasticity improves the accuracy of supply calibration on a synthetic network. Second, the augmented SSM improves both estimation and prediction accuracy on a congested synthetic network and on the large-scale Singapore expressway network. Finally, compared with the traditional finite difference method, PSP reduces the number of computations by 90% and achieves the same calibration accuracy on the Singapore expressway network. The proposed methodologies have important applications in the deployment of real-time DTA systems for large-scale urban networks.

    by Haizheng Zhang. Ph.D. in Transportation.
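    The PSP idea, as described at this level, can be sketched generically: parameters are split into partitions assumed not to interact, and each partition's gradient is estimated with one SPSA-style simultaneous perturbation, i.e., two loss evaluations per partition rather than two per parameter as in finite differences. A toy sketch under that assumption (the loss and partitions are illustrative, not the thesis's calibration objective):

```python
"""Partitioned simultaneous perturbation sketch: blockwise SPSA under an
assumed sparse (block-separable) gradient structure. Toy illustration,
not the thesis's exact procedure."""
import numpy as np

rng = np.random.default_rng(1)

def psp_gradient(loss, theta, partitions, c=1e-2):
    grad = np.zeros_like(theta)
    for idx in partitions:                      # one +/- pair per block
        delta = np.zeros_like(theta)
        delta[idx] = rng.choice([-1.0, 1.0], size=len(idx))  # Rademacher
        g = (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c)
        grad[idx] = g / delta[idx]              # SPSA estimator per block
    return grad

# Toy separable loss: blocks (0,1) and (2,3) do not interact.
loss = lambda th: (th[0] - 1)**2 + th[1]**2 + (th[2] + 2)**2 + th[3]**2
theta = np.zeros(4)
partitions = [np.array([0, 1]), np.array([2, 3])]
for _ in range(200):                            # plain SGD on PSP gradients
    theta -= 0.05 * psp_gradient(loss, theta, partitions)
print(theta.round(2))                           # -> approx [1, 0, -2, 0]
```

    With p parameters in k equal partitions, one gradient estimate costs 2k loss evaluations instead of the 2p needed by central finite differences, which is where the reported ~90% reduction in computations comes from when k is much smaller than p.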

    Dielectrophoresis of micro/nano particles using curved microelectrodes

    Dielectrophoresis, the induced motion of polarisable particles in a non-uniform electric field, has proven to be a versatile mechanism to transport, immobilise, sort, and characterise micro/nano-scale particles in microfluidic platforms. The performance of dielectrophoretic (DEP) systems depends on two parameters: the configuration of the microelectrodes designed to produce the DEP force, and the operating strategies devised to employ this force in such processes. This work summarises the unique features of curved microelectrodes for the DEP manipulation of target particles in microfluidic systems. The curved microelectrodes demonstrate exceptional capabilities, including (i) creating strong electric fields over a large portion of their structure, (ii) minimising electro-thermal vortices and undesired disturbances at their tips, (iii) covering the entire width of the microchannel and thus influencing all passing particles, and (iv) providing a large trapping area at their entrance region, as evidenced by extensive numerical and experimental analyses. These microelectrodes have been successfully applied to a variety of engineering and biomedical applications, including (i) sorting and trapping model polystyrene particles based on their dimensions, (ii) patterning carbon nanotubes to trap low-conductivity particles, (iii) sorting live and dead cells based on their dielectric properties, (iv) real-time analysis of drug-induced cell death, and (v) interfacing tumour cells with environmental scanning electron microscopy to study their morphological properties. DEP systems based on curved microelectrodes have great potential to be integrated with future lab-on-a-chip systems.
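    For context, the standard time-averaged DEP force on a spherical particle is F = 2*pi*eps_m*r^3 * Re[K(w)] * grad|E_rms|^2, where K is the Clausius-Mossotti factor built from complex permittivities eps* = eps - j*sigma/w; its sign determines positive versus negative DEP. A small calculation with illustrative property values (the paper's analyses are numerical and experimental, not this closed form):

```python
"""Textbook spherical-particle DEP force model; all property values
below are illustrative assumptions."""
import numpy as np

EPS0 = 8.854e-12                               # vacuum permittivity, F/m

def clausius_mossotti(w, eps_p, sig_p, eps_m, sig_m):
    cp = eps_p * EPS0 - 1j * sig_p / w         # particle complex permittivity
    cm = eps_m * EPS0 - 1j * sig_m / w         # medium complex permittivity
    return (cp - cm) / (cp + 2 * cm)

def dep_force(w, radius, grad_E2, eps_p, sig_p, eps_m, sig_m):
    K = clausius_mossotti(w, eps_p, sig_p, eps_m, sig_m)
    return 2 * np.pi * eps_m * EPS0 * radius**3 * K.real * grad_E2

# Live-cell-like particle in a low-conductivity medium at 1 MHz,
# 5 um radius, |grad|E|^2| ~ 1e13 V^2/m^3 near a curved electrode tip.
F = dep_force(w=2 * np.pi * 1e6, radius=5e-6, grad_E2=1e13,
              eps_p=60, sig_p=0.5, eps_m=78, sig_m=1e-3)
print(f"DEP force: {F * 1e12:.2f} pN "
      f"({'positive' if F > 0 else 'negative'} DEP)")
```

    The sign flip of Re[K] with frequency and medium conductivity is what enables sorting live from dead cells by their dielectric properties, as in application (iii) above.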