
    A hybrid queueing model for fast broadband networking simulation

    This research focuses on the investigation of a fast simulation method for broadband telecommunication networks, such as ATM networks and IP networks. As a result of this research, a hybrid simulation model is proposed which combines analytical modelling and event-driven simulation modelling to speed up the overall simulation. The major contribution reported in this thesis is the division of traffic into foreground and background components, and the way these two types of traffic are treated to reduce simulation time. Background traffic is present to ensure that proper buffering behaviour is included during the course of the simulation experiments, but, unlike in traditional simulation techniques, only the foreground traffic of interest is simulated. To avoid the extra events on the event list, and the processing overhead, associated with the background traffic, the novel technique investigated in this research is to remove the background traffic completely and adjust the service time of the queues to compensate (in most cases, the service time for the foreground traffic will increase). Removing the background traffic from the event-driven simulator drastically reduces the number of cell processing events. Validation of this approach shows that, overall, the method works well, but simulations using it do show some differences compared with experimental results on a testbed, mainly because of the assumptions behind the analytical model that make the modelling tractable. The analytical model therefore needs to be adjusted. This is done by training a neural network to learn the relationship between the input traffic parameters and the difference between the proposed model and the testbed.
    Following this training, simulations can be run using the output of the neural network to adjust the analytical model for those particular traffic conditions. The approach is applied to cell scale and burst scale queueing to simulate an ATM switch, and it is also used to simulate an IP router. In all these applications, the method delivers fast simulation as well as accurate results.
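    The service-time compensation idea can be sketched on a single M/M/1 queue: with Poisson foreground (rate lam_fg) and background (rate lam_bg) streams sharing a FIFO server of rate mu, removing the background stream and reducing the service rate to mu - lam_bg leaves the foreground mean sojourn time unchanged at 1/(mu - lam_bg - lam_fg). A minimal sketch with illustrative rates (not parameters from the thesis):

```python
import random

def mm1_sojourn_sim(lam, mu, n=50000, seed=1):
    """Simulate FIFO M/M/1 mean sojourn time via the Lindley recursion."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n):
        service = rng.expovariate(mu)
        total += wait + service           # sojourn = wait + service
        inter = rng.expovariate(lam)      # time until the next arrival
        wait = max(0.0, wait + service - inter)
    return total / n

lam_fg, lam_bg, mu = 0.3, 0.3, 1.0
# Full model: foreground + background share the server at rate mu.
analytic_full = 1.0 / (mu - lam_fg - lam_bg)
# Reduced model: background removed, service rate compensated downward.
sim_reduced = mm1_sojourn_sim(lam_fg, mu - lam_bg)
print(analytic_full, sim_reduced)
```

The reduced model processes only foreground arrival events, yet its simulated sojourn time matches the analytic value of the full two-stream system.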

    Some aspects of traffic control and performance evaluation of ATM networks

    The emerging high-speed Asynchronous Transfer Mode (ATM) networks are expected to integrate, through statistical multiplexing, large numbers of traffic sources having a broad range of statistical characteristics and different Quality of Service (QOS) requirements. To achieve high utilisation of network resources while maintaining the QOS, efficient traffic management strategies have to be developed. This thesis considers the problem of traffic control for ATM networks. It studies the application of neural networks to various ATM traffic control issues such as feedback congestion control, traffic characterization, bandwidth estimation, and Call Admission Control (CAC). A novel adaptive congestion control approach based on a neural network that uses reinforcement learning is developed, and the neural controller is shown to be very effective in providing general QOS control. A Finite Impulse Response (FIR) neural network is proposed to adaptively predict the traffic arrival process by learning the relationship between past and future traffic variations. On the basis of this prediction, a feedback flow control scheme at the input access nodes of the network is presented. Simulation results demonstrate significant performance improvement over conventional control mechanisms. In addition, an accurate yet computationally efficient approach to effective bandwidth estimation for multiplexed connections is investigated. In this method, a feedforward neural network is employed to model the nonlinear relationship between the effective bandwidth, the traffic situation, and a QOS measure. Applications of this approach to admission control, bandwidth allocation and dynamic routing are also discussed. A detailed investigation indicates that CAC schemes based on effective bandwidth approximation can be very conservative and prevent optimal use of network resources.
    A modified effective bandwidth CAC approach is therefore proposed to overcome this drawback. Taking statistical multiplexing between traffic sources into account, we directly calculate the effective bandwidth of the aggregate traffic, which is modelled by a two-state Markov modulated Poisson process matched on four important statistics. We use the theory of large deviations to provide a unified description of effective bandwidths for various traffic sources and the associated ATM multiplexer queueing performance approximations, illustrating their strengths and limitations. In addition, a more accurate estimation method for ATM QOS parameters based on the Bahadur-Rao theorem is proposed, which refines the original effective bandwidth approximation and can lead to higher link utilisation.
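    For the simplest two-state source, an on-off Markov fluid with peak rate r, on-to-off rate a and off-to-on rate b, the large-deviations effective bandwidth at space parameter theta has the well-known closed form (r*theta - a - b + sqrt((r*theta - a + b)^2 + 4ab)) / (2*theta), which interpolates between the mean rate r*b/(a+b) as theta -> 0 and the peak rate r as theta -> infinity. A small sketch with illustrative parameters:

```python
import math

def effective_bandwidth(r, a, b, theta):
    """Effective bandwidth of an on-off Markov fluid source:
    peak rate r, on->off rate a, off->on rate b, space parameter theta."""
    d = math.sqrt((r * theta - a + b) ** 2 + 4 * a * b)
    return (r * theta - a - b + d) / (2 * theta)

r, a, b = 10.0, 1.0, 0.5        # illustrative source parameters
mean_rate = r * b / (a + b)     # long-run average rate = 10/3
for theta in (0.01, 0.1, 1.0, 10.0):
    # grows monotonically from the mean rate towards the peak rate
    print(theta, effective_bandwidth(r, a, b, theta))
```

Larger theta corresponds to a stricter QOS target, so the source must be allocated a bandwidth closer to its peak; this is exactly the conservatism the modified CAC approach aims to reduce.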

    An analytic finite capacity queueing network model capturing blocking, congestion and spillbacks

    Analytic queueing network models often assume infinite capacity for all queues. For real systems this infinite capacity assumption does not hold, but it is often maintained because of the difficulty of grasping the between-queue correlation structure present in finite capacity networks. This correlation structure helps explain bottleneck effects and spillbacks, the latter being of special interest in networks containing loops because they are a source of potential deadlock. We present an analytic queueing network model that acknowledges the finite capacity of the different queues. By explicitly modeling the blocking phase, the model yields a description of the congestion effects. The model is adapted for multiple-server finite capacity queueing networks with an arbitrary topology and blocking-after-service. A decomposition method allowing the evaluation of the model is described and validated by comparison with both pre-existing methods and simulation results. A real application to the study of patient flow in a network of operative and post-operative units of the Geneva University Hospital is also presented.
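    The effect of finite capacity can be seen on even a single M/M/1/K queue, where the blocking probability has the closed form P_K = (1-rho)*rho^K / (1-rho^(K+1)) for rho != 1 and 1/(K+1) for rho = 1; in a network, a blocked customer would instead keep occupying (and spill back through) the upstream server. A minimal sketch of the isolated-queue formula, not of the paper's decomposition method:

```python
def mm1k_blocking(rho, K):
    """Blocking probability of an M/M/1/K queue (K = total capacity)."""
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1)
    return (1.0 - rho) * rho ** K / (1.0 - rho ** (K + 1))

# Larger buffers reduce blocking; heavier load increases it.
for K in (1, 2, 5, 10):
    print(K, mm1k_blocking(0.5, K), mm1k_blocking(0.9, K))
```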

    Modeling Patient Flow in a Network of Intensive Care Units (ICUs)

    Beginning in 2012, the Department of Health and Human Services (HHS) started adjusting payment for specific conditions by 30% for hospitals with 30-day patient readmission rates higher than the 75th percentile (HHS.gov, 2011). Furthermore, starting in 2013, HHS requires hospitals to publish their readmission rates (HHS.gov, 2011). It is also estimated that by 2013, healthcare expenditures in the United States will account for 18.7% of the Gross Domestic Product (GDP) (Centers for Medicare and Medicaid Services and US Bureau of Census, 2004). Yet the US healthcare system still suffers from rising costs and congestion, of which hospital congestion is a prime example. One way to reduce congestion and improve patient flow in the hospital is to model patient flow. Using queueing theory, we determined the steady state solution of an open queueing network while accounting for instantaneous and delayed feedback. We also built a discrete event simulation model of patient flow in a network of Intensive Care Units (ICUs), considering instantaneous and delayed readmissions, and validated the model using real patient flow data collected over four years. In addition, we compared several statistical and data mining techniques for classifying patient status at discharge from the ICU (highly imbalanced data) and identified the methods that perform best. Our work has several contributions. Modeling patient flow while accounting for instantaneous and delayed feedback is a major contribution, as we are unaware of any patient flow study that has done so. Validating the discrete event simulation model allows the model to be implemented and applied in the real world by unit managers and administrators.
    The simulation model could be used to test different patient flow scenarios and to identify optimal resource allocation strategies, in terms of number of beds and/or staff schedules, to maximize patient throughput, reduce patient wait time and improve patient outcomes. Moreover, identifying high-risk patients who are more likely to die in the ICU ensures that those patients receive appropriate and timely care, so that their risk of death is reduced.
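    The steady-state analysis of an open network with feedback starts from the traffic equations lambda = gamma + P^T lambda, where gamma holds the external arrival rates and P[i][j] is the probability that a patient leaving unit i is routed (or readmitted) to unit j. A toy two-unit sketch with made-up rates, not the study's data:

```python
import numpy as np

# External arrival rates (patients/day) and routing/readmission probabilities.
gamma = np.array([2.0, 1.0])
P = np.array([[0.0, 0.3],    # 30% of unit-0 discharges go to unit 1
              [0.2, 0.0]])   # 20% of unit-1 discharges return to unit 0

# Solve lambda = gamma + P^T lambda, i.e. (I - P^T) lambda = gamma.
lam = np.linalg.solve(np.eye(2) - P.T, gamma)

mu = np.array([4.0, 3.0])    # service (discharge) rates per unit
rho = lam / mu               # utilisation of each unit
print(lam, rho)
```

The feedback terms inflate the effective arrival rates above the external rates, which is why ignoring readmissions understates ICU congestion.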

    Online Modeling and Tuning of Parallel Stream Processing Systems

    Writing performant computer programs is hard. Code for high performance applications is profiled, tweaked, and re-factored for months, specifically for the hardware on which it is to run. Consumer application code doesn't get the endless massaging that benefits high performance code, even though heterogeneous processor environments are beginning to resemble those in more performance-oriented arenas. This thesis offers a path to performant, parallel code (through stream processing) which is tuned online and automatically adapts to the environment it is given. This approach has the potential to reduce the tuning costs associated with high performance code and brings the benefit of performance tuning to consumer applications where it would otherwise be cost prohibitive. This thesis introduces a stream processing library and multiple techniques to enable its online modeling and tuning. Stream processing (also termed data-flow programming) is a compute paradigm that views an application as a set of logical kernels connected via communications links or streams. Stream processing is increasingly used by computational-x and x-informatics fields (e.g., biology, astrophysics) where the focus is on safe and fast parallelization of specific big-data applications. A major advantage of stream processing is that it enables parallelization without requiring manual end-user management of the non-deterministic behavior often characteristic of more traditional parallel processing methods. Many big-data and high performance applications involve high throughput processing, necessitating the use of many parallel compute kernels on several compute cores. Optimizing the orchestration of kernels has been the focus of much theoretical and empirical modeling work. Purely theoretical parallel programming models can fail when the assumptions implicit within the model are mismatched with reality (i.e., the model is incorrectly applied).
    Often it is unclear whether the assumptions are actually being met, even when verified under controlled conditions. Full empirical optimization solves this problem by extensively searching the range of likely configurations under native operating conditions. This, however, is expensive in both time and energy. For large, massively parallel systems, even deciding which modeling paradigm to use is often prohibitively expensive, and the answer is unfortunately transient (varying with workload and hardware). In an ideal world, a parallel run-time would re-optimize an application continuously to match its environment, with little additional overhead. This work presents methods aimed at doing just that, through low overhead instrumentation, modeling, and optimization. Online optimization provides a good trade-off between static optimization and online heuristics. To enable online optimization, modeling decisions must be fast and relatively accurate. Online modeling and optimization of a stream processing system first requires a stream processing framework that is amenable to the intended type of dynamic manipulation. To fill this void, we developed the RaftLib C++ template library, which enables use of the stream processing paradigm in C++ applications (it is the run-time that is the basis of almost all the work within this dissertation). The application topology is specified by the user; however, almost everything else is optimizable by the run-time. RaftLib takes advantage of the knowledge gained during the design of several prior streaming languages (notably Auto-Pipe). The resulting framework enables online migration of tasks, auto-parallelization, online buffer reallocation, and other useful dynamic behaviors that were not available in many previous stream processing systems. Several benchmark applications have been designed to assess the performance gains of our approaches and to compare performance with other leading stream processing frameworks.
    Information is essential to any modeling task; to that end, a low-overhead instrumentation framework has been developed which is both dynamic and adaptive. Discovering a fast and relatively optimal configuration for a stream processing application often necessitates solving for buffer sizes within a finite capacity queueing network. We show that a generalized gain/loss network flow model can bootstrap this process under certain conditions. Any modeling effort requires that a model be selected, often a highly manual task involving many expensive operations. This dissertation demonstrates that machine learning methods (such as a support vector machine) can successfully select models at run-time for a streaming application. The full set of approaches is incorporated into the open source RaftLib framework.
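    The kernels-connected-by-streams idea can be sketched in a few lines of Python (a toy illustration of the paradigm, not the RaftLib C++ API): each kernel reads from an input stream, applies its function, and writes to an output stream, so a runtime is free to place kernels on threads and resize the bounded buffers between them.

```python
from queue import Queue
from threading import Thread

def kernel(fn, inq, outq):
    """Run one compute kernel: stream items from inq through fn to outq."""
    while True:
        item = inq.get()
        if item is None:          # end-of-stream marker
            outq.put(None)
            return
        outq.put(fn(item))

# Build a 3-stage pipeline: source -> square -> add_one -> sink.
a, b, c = Queue(maxsize=4), Queue(maxsize=4), Queue(maxsize=4)
threads = [Thread(target=kernel, args=(lambda x: x * x, a, b)),
           Thread(target=kernel, args=(lambda x: x + 1, b, c))]
for t in threads:
    t.start()
for x in range(5):
    a.put(x)                      # the source feeds the first stream
a.put(None)

results = []
while (item := c.get()) is not None:
    results.append(item)
for t in threads:
    t.join()
print(results)                    # [1, 2, 5, 10, 17]
```

Because each kernel only touches its own streams, the non-determinism of thread scheduling never reaches the user: the bounded queues preserve FIFO order and apply backpressure, which is the property that makes buffer sizing an optimization target.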

    4. generációs mobil rendszerek kutatása = Research on 4th Generation Mobile Systems

    The standardisation of 3G mobile systems is nearing completion, at least as far as the defining capabilities are concerned. It is therefore vital to investigate the techniques and methods that will play a defining role in the next generation of 4G systems. Several such research directions exist; in this project we concentrated on the most important ones. The areas investigated and the results achieved are summarised below. Spread spectrum systems: We developed a new call admission control method applicable on the radio interface and demonstrated its efficiency through simulation studies. Project researcher Gábor Jeney successfully defended his Ph.D. dissertation on neural-network-based multiuser detection techniques, and the results were also incorporated into project leader Sándor Imre's DSc (MTA doctoral) dissertation. Application of IP in mobile systems: We further developed, tested and generalised the ring-topology-based IP access concept created within the project, which offers higher reliability than current solutions, together with the corresponding protocols. Máté Szalay's Ph.D. dissertation on this topic has reached the stage of public defence. Quantum computing based solutions in 3G/4G detection: We developed a new multiuser detection method based on quantum computing principles, for which new quantum algorithms were also devised. Besides international journal papers, the results were published in a book at Wiley entitled 'Quantum Computing and Communications - An Engineering Approach'.

    EARLY PERFORMANCE PREDICTION METHODOLOGY FOR MANY-CORES ON CHIP BASED APPLICATIONS

    Modern high performance computing applications, such as personal computing, gaming, and numerical simulations, require application-specific integrated circuits (ASICs) that comprise many cores. Performance for these applications depends mainly on the latency of the interconnects that transfer data between the cores, which implement applications by distributing tasks. Time-to-market is a critical consideration when designing ASICs for these applications. Therefore, to reduce design cycle time, predicting system performance accurately at an early stage of design is essential. With process technology in the nanometer era, physical phenomena such as crosstalk and reflection on the propagating signal have a direct impact on performance, and incorporating these effects provides a better performance estimate at an early stage. This work presents a methodology for better performance prediction at an early stage of design, achieved by mapping a system specification to a circuit-level netlist description. At the system level, SystemVerilog descriptions are employed to simplify description and enable efficient simulation. For modeling system performance at this abstraction, queueing-theory-based bounded queue models are applied. At the circuit level, behavioral Input/Output Buffer Information Specification (IBIS) models can be used for analyzing the effects of these physical phenomena on on-chip signal integrity, and hence on performance. For behavioral circuit-level performance simulation with IBIS models, a netlist must be described consisting of the interacting cores and a communication link. Two new netlists, IBIS-ISS and IBIS-AMI-ISS, are introduced for this purpose. The cores are represented by a macromodel automatically generated from the IBIS models by a tool developed in this work, and the generated macromodels are employed in the new netlists. The early performance prediction methodology maps a system specification to an instance of these netlists to provide a better performance estimate at an early stage of design.
    The methodology is scalable with nanometer process technology and can be reused across different designs.
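    At system level, the bounded-queue view of an interconnect can be sketched with a tiny discrete-time model (the numbers are purely illustrative): a producer core pushes a flit every 2 cycles, a consumer core drains one every 3 cycles, and the link buffer holds at most 4 flits, so throughput is consumer-bound and the full buffer backpressures the producer.

```python
def simulate_link(ticks=600, cap=4, prod_period=2, cons_period=3):
    """Discrete-time bounded FIFO link between a producer and a consumer core."""
    buf, produced, consumed, stalls = [], 0, 0, 0
    for t in range(ticks):
        if t % prod_period == 0:          # producer tries to push a flit
            if len(buf) < cap:
                buf.append(t)
                produced += 1
            else:
                stalls += 1               # backpressure: producer blocked
        if t % cons_period == 0 and buf:  # consumer drains one flit
            buf.pop(0)
            consumed += 1
    return produced, consumed, stalls, len(buf)

produced, consumed, stalls, left = simulate_link()
print(produced, consumed, stalls, left)   # throughput is consumer-bound
```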

    Manufacturing flow line systems: a review of models and analytical results

    The most important models and results of the manufacturing flow line literature are described. These include the major classes of models (asynchronous, synchronous, and continuous); the major features (blocking, processing times, failures and repairs); the major properties (conservation of flow, flow rate-idle time, reversibility, and others); and the relationships among different models. Exact and approximate methods for obtaining quantitative measures of performance are also reviewed. The exact methods are appropriate for small systems. The approximate methods, which are the only means available for large systems, are generally based on decomposition and make use of the exact methods for small systems. Extensions are briefly discussed, and directions for future research are suggested. This work was supported by the National Science Foundation (U.S.) (Grant DDM-8914277).
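    The conservation-of-flow property shows up in even the simplest asynchronous two-machine line with an intermediate buffer: the two cumulative production counts can differ only by the current buffer content, so the long-run flow rates through both machines are equal. A discrete-time sketch with illustrative completion probabilities, not a specific model from the review:

```python
import random

def flow_line(ticks=10000, cap=5, q1=0.7, q2=0.6, seed=42):
    """Two machines separated by a buffer of size cap.
    M1 is blocked when the buffer is full; M2 is starved when it is empty."""
    rng = random.Random(seed)
    buf, out1, out2 = 0, 0, 0
    for _ in range(ticks):
        if buf < cap and rng.random() < q1:   # M1 completes a part
            buf += 1
            out1 += 1
        if buf > 0 and rng.random() < q2:     # M2 completes a part
            buf -= 1
            out2 += 1
    return out1, out2, buf

out1, out2, buf = flow_line()
# Conservation of flow: counts differ only by what is left in the buffer.
print(out1, out2, out1 - out2 == buf)
```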

    The power-series algorithm:A numerical approach to Markov processes

    The development of computer and communication networks and flexible manufacturing systems has led to new and interesting multidimensional queueing models. The Power-Series Algorithm is a numerical method for analyzing and optimizing the performance of such models. In this thesis, the applicability of the algorithm is extended; this is illustrated by introducing and analyzing a wide class of queueing networks with very general dependencies between the different queues. The theoretical basis of the algorithm is strengthened by proving analyticity of the steady-state distribution in light traffic and by finding remedies for previous imperfections of the method. Applying similar ideas to the transient distribution yields new analyticity results. Various aspects of Markov processes, analytic functions and extrapolation methods, necessary for a thorough understanding and efficient implementation of the Power-Series Algorithm, are also reviewed.
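    The flavour of the algorithm can be shown on the simplest possible case, the M/M/1 queue, where the steady-state probabilities are expanded as power series in the load rho: the balance equation p_n = rho * p_{n-1} fixes each coefficient from the previous level, and normalisation determines the coefficients of p_0 order by order. A toy sketch (the real algorithm targets multidimensional models where no closed form exists):

```python
def psa_mm1(K):
    """Power-series coefficients c[n][k] of p_n(rho) = sum_k c[n][k] * rho**k
    for the M/M/1 queue, truncated at order K."""
    c = [[0.0] * (K + 1) for _ in range(K + 1)]
    for k in range(K + 1):
        for n in range(1, k + 1):
            c[n][k] = c[n - 1][k - 1]           # balance: p_n = rho * p_{n-1}
        total = sum(c[n][k] for n in range(1, K + 1))
        c[0][k] = (1.0 if k == 0 else 0.0) - total   # normalisation, order k
    return c

c = psa_mm1(6)
rho = 0.4
p0 = sum(c[0][k] * rho ** k for k in range(7))
print(p0)            # matches the exact M/M/1 value 1 - rho
```

The recursion reproduces the known expansion p_n(rho) = rho**n - rho**(n+1); in genuinely multidimensional models the same order-by-order scheme still applies even though no closed-form solution is available.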
