
    Decomposition-based analysis of queueing networks

    Model-based numerical analysis is an important branch of model-based performance evaluation. State-oriented formalisms and methods based on Markovian processes in particular, such as stochastic Petri nets and Markov chains, have been adopted successfully because they are mathematically well understood and allow intuitive modeling of many real-world processes. However, these methods are sensitive to the well-known phenomenon of state space explosion. One way to handle this problem is the decomposition approach. In this thesis, we present a decomposition framework for the analysis of a fairly general class of open and closed queueing networks. The decomposition is done at the queueing station level, i.e., the queueing stations are analyzed independently. During the analysis, traffic descriptors are exchanged between the stations, representing the streams of jobs flowing between them. Networks with feedback are analyzed using a fixed-point iteration.
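
    To make the scheme concrete, here is a minimal sketch of station-level decomposition with a fixed-point loop, assuming the simplest possible traffic descriptor (mean rate only); the function name and the example network are illustrative, not taken from the thesis, which exchanges richer descriptors:

    import numpy as np

    def decompose(lam_ext, P, mu, tol=1e-9, max_iter=1000):
        """Fixed-point iteration on per-station arrival rates.

        lam_ext: external arrival rates, P: routing matrix, mu: service rates.
        """
        lam = lam_ext.copy()                  # initial guess: external traffic only
        for _ in range(max_iter):
            lam_new = lam_ext + lam @ P       # departures re-routed as arrivals
            if np.max(np.abs(lam_new - lam)) < tol:
                return lam_new, lam_new / mu  # rates and utilizations
            lam = lam_new
        return lam, lam / mu

    # Two stations with feedback: station 0 feeds station 1, which sends
    # half of its departures back to station 0.
    lam_ext = np.array([1.0, 0.0])
    P = np.array([[0.0, 1.0],
                  [0.5, 0.0]])
    mu = np.array([4.0, 3.0])
    lam, rho = decompose(lam_ext, P, mu)
    print(lam, rho)                           # lam = [2, 2], rho = [0.5, 0.667]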

    Report of the Third Workshop on the Usage of NetFlow/IPFIX in Network Management

    The Network Management Research Group (NMRG) organized the Third Workshop on the Usage of NetFlow/IPFIX in Network Management in 2010, as part of the 78th IETF Meeting in Maastricht. Organized yearly since 2007, the workshop is an opportunity for people from both academia and industry to discuss the latest developments of the protocol, possibilities for new applications, and practical experiences. This report summarizes the presentations and the main conclusions of the workshop.

    A Fixed-Point Algorithm for Closed Queueing Networks

    In this paper we propose a new, efficient iterative scheme for solving closed queueing networks with phase-type service time distributions. The method is especially efficient and accurate in the case of large numbers of nodes and large customer populations. We present the method, put it in perspective, and validate it through a large number of test scenarios. In most cases, the method provides accuracies within 5% relative error (in comparison to discrete-event simulation).
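
    For flavor, a well-known fixed-point scheme of this general kind for closed networks is Schweitzer's approximate MVA, sketched below for exponential servers; the paper's method targets phase-type services and differs in detail, so the visit ratios and rates here are made-up illustration only:

    import numpy as np

    def approx_mva(N, visits, mu, tol=1e-9):
        """Schweitzer approximate MVA: fixed point on mean queue lengths."""
        Q = np.full(len(mu), N / len(mu))             # even population split
        while True:
            R = (1.0 / mu) * (1.0 + Q * (N - 1) / N)  # per-visit residence times
            X = N / np.sum(visits * R)                # network throughput
            Q_new = X * visits * R                    # Little's law per station
            if np.max(np.abs(Q_new - Q)) < tol:
                return X, Q_new
            Q = Q_new

    # Three stations, 20 circulating customers (illustrative values).
    X, Q = approx_mva(N=20, visits=np.array([1.0, 0.6, 0.4]),
                      mu=np.array([2.0, 1.5, 1.0]))
    print(X, Q)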

    Mathematical models for HIV-1 viral capsid structure and assembly

    HIV-1 (human immunodeficiency virus type 1) is a retrovirus that causes the acquired immunodeficiency syndrome (AIDS). This infectious disease has high mortality rates and has therefore received extensive research interest from scientists of multiple disciplines. The group-specific antigen (Gag) polyprotein precursor is the major structural component of HIV. This protein has four major domains, one of which is the capsid (CA). CA proteins join together to create the peculiar structure of HIV-1 virions. Retrovirus capsid arrangements are known to have a fullerene-like structure: these caged polyhedral arrangements are built entirely from hexamers (6 joined proteins) and, by Euler's theorem, exactly 12 pentamers (5 proteins). Different distributions of these 12 pentamers result in icosahedral, tubular, or the unique HIV-1 conical-shaped capsids. In order to gain insight into the distinctive structure of the HIV capsid, we develop and analyze mathematical models to help understand the underlying biological mechanisms in the formation of viral capsids.

    The pentamer clusters introduce disclinations, and hence curvature, on the capsid. The HIV-1 capsid structure follows a (5,7)-cone pattern, with 5 pentamers in the narrow end and 7 in the broad end. We show that the curvature concentration at the narrow end is about five times higher than that at the broad end. This leads to the conclusion that the narrow end is the weakest part of the HIV-1 capsid, and to the conjecture that “the narrow end closes last during maturation but opens first during entry into a host cell.”

    Models for icosahedral capsids are established and well received, but models for tubular and conical capsids need further investigation. We propose new models for the tubular and conical capsid based on an extension of the Caspar-Klug quasi-equivalence theory. In particular, two and three generating vectors are used to characterize the lattice structures of tubular and conical capsids, respectively. Comparison with published HIV-1 data demonstrates good agreement of our modeling results with experimental data.

    There are two stages in viral capsid assembly: nucleation (formation of nuclei, i.e., hexamers) and elongation (building the closed shell). We develop a kinetic model of HIV-1 viral capsid nucleation using a 6-species dynamical system. Numerical simulations of capsid protein (CA) multimer concentrations closely match experimental data. Sensitivity and elasticity analysis of CA multimer concentrations with respect to the association and disassociation rates further reveals the importance of CA dimers in the nucleation stage of viral capsid self-assembly.
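
    The “exactly 12 pentamers” count can be reconstructed from Euler's polyhedron formula; the following derivation is standard combinatorics, assuming a trivalent lattice in which three capsomers meet at every vertex. With p pentagonal and h hexagonal faces:

    \[
    F = p + h, \qquad E = \frac{5p + 6h}{2}, \qquad V = \frac{5p + 6h}{3},
    \]
    \[
    V - E + F = 2 \;\Longrightarrow\; \frac{5p + 6h}{3} - \frac{5p + 6h}{2} + (p + h) = 2 \;\Longrightarrow\; p = 12,
    \]

    independent of the number of hexamers h, which is why icosahedral, tubular, and conical shapes all carry exactly 12 pentamers.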

    The pseudo-self-similar traffic model: application and validation

    Since the early 1990s, a variety of studies have shown that network traffic, both for local- and wide-area networks, has self-similar properties. This led to new approaches in network traffic modelling, because most traditional traffic modelling approaches result in the underestimation of performance measures of interest. Instead of developing completely new traffic models, a number of researchers have proposed to adapt traditional traffic modelling approaches to incorporate aspects of self-similarity. The motivation for doing so is the hope of reusing techniques and tools that have been developed in the past and with which experience has been gained. One such approach is the so-called pseudo self-similar traffic model. This model is appealing, as it is easy to understand and easily embedded in Markovian performance evaluation studies. In applying this model in a number of cases, we perceived various problems which we initially thought were particular to those specific cases. However, we have recently been able to show that these problems are fundamental to the pseudo self-similar traffic model. In this paper we review the pseudo self-similar traffic model and discuss its fundamental shortcomings. As far as we know, this is the first paper that discusses these shortcomings formally. We also report on ongoing work to overcome some of these problems.
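
    As background on what “self-similar properties” means operationally, below is the standard variance-time estimate of the Hurst parameter H, the statistic such traffic models are asked to reproduce; this is generic methodology, not the pseudo self-similar model from the paper:

    import numpy as np

    def hurst_variance_time(x, agg_levels):
        """Estimate H from the slope of the variance-time plot."""
        logs_m, logs_v = [], []
        for m in agg_levels:
            n = (len(x) // m) * m
            xm = x[:n].reshape(-1, m).mean(axis=1)   # aggregate in blocks of m
            logs_m.append(np.log(m))
            logs_v.append(np.log(xm.var()))
        beta = np.polyfit(logs_m, logs_v, 1)[0]      # slope of the log-log plot
        return 1.0 + beta / 2.0                      # Var ~ m^(2H-2)  =>  H

    rng = np.random.default_rng(1)
    x = rng.poisson(10, 1_000_000).astype(float)     # short-range traffic: H ~ 0.5
    print(hurst_variance_time(x, [10, 30, 100, 300, 1000, 3000]))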

    Evaluating Third-Party Bad Neighborhood Blacklists for Spam Detection

    The distribution of malicious hosts over the IP address space is far from uniform. In fact, malicious hosts tend to be concentrated in certain portions of the IP address space, forming so-called Bad Neighborhoods. This phenomenon has previously been exploited to filter Spam by means of Bad Neighborhood blacklists. In this paper, we evaluate how much a network administrator can rely upon different Bad Neighborhood blacklists generated by third-party sources to fight Spam. One could expect that Bad Neighborhood blacklists generated from different sources contain, to a varying degree, disjoint sets of entries. Therefore, we investigate (i) how specific a blacklist is to its source, and (ii) whether different blacklists can be used interchangeably to protect a target from Spam. We analyze five Bad Neighborhood blacklists generated from real-world measurements and study their effectiveness in protecting three production mail servers from Spam. Our findings lead to several operational considerations on how a network administrator could best benefit from Bad Neighborhood-based Spam filtering.
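
    A minimal sketch of the underlying idea, aggregating observed spam sources into /24 “neighborhoods” and comparing two resulting blacklists; the granularity, threshold, and addresses (IETF documentation ranges) are illustrative choices, not the paper's:

    import ipaddress
    from collections import Counter

    def bad_neighborhoods(spam_ips, min_hosts=3):
        """Blacklist every /24 that contributed at least min_hosts spammers."""
        nets = Counter(ipaddress.ip_network(f"{ip}/24", strict=False)
                       for ip in spam_ips)
        return {net for net, n in nets.items() if n >= min_hosts}

    def jaccard(a, b):
        """Overlap between two blacklists; low values suggest source-specific lists."""
        return len(a & b) / len(a | b) if a | b else 0.0

    bl_a = bad_neighborhoods(["198.51.100.7", "198.51.100.9", "198.51.100.23"])
    bl_b = bad_neighborhoods(["198.51.100.5", "198.51.100.77", "198.51.100.200",
                              "203.0.113.4", "203.0.113.8", "203.0.113.12"])
    print(jaccard(bl_a, bl_b))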

    Autonomic Parameter Tuning of Anomaly-Based IDSs: an SSH Case Study

    Anomaly-based intrusion detection systems classify network traffic instances by comparing them with a model of normal network behavior. To be effective, such systems are expected to precisely detect intrusions (high true positive rate) while limiting the number of false alarms (low false positive rate). However, there exists a natural trade-off between detecting all anomalies (at the expense of raising alarms too often) and missing anomalies (but not issuing any false alarms). The parameters of a detection system play a central role in this trade-off, since they determine how responsive the system is to an intrusion attempt. Despite the importance of properly tuning the system parameters, the literature has put little emphasis on the topic, and the task of adjusting such parameters is usually left to the expertise of the system manager or expert IT personnel. In this paper, we present an autonomic approach for tuning the parameters of anomaly-based intrusion detection systems in the case of SSH traffic. We propose a procedure that aims to automatically tune the system parameters and, by doing so, to optimize the system performance. We validate our approach by testing it on a flow-based probabilistic detection system for the detection of SSH attacks.
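
    To illustrate the trade-off being optimized, here is a sketch that sweeps a detection threshold and keeps the one balancing true positives against false alarms; the scores, labels, and utility function are synthetic assumptions, and the paper's autonomic procedure is more elaborate than this exhaustive sweep:

    import numpy as np

    def tune_threshold(scores, labels, fp_cost=5.0):
        """Pick the threshold maximizing TPR minus a penalty on FPR."""
        best_t, best_u = None, -np.inf
        for t in np.unique(scores):
            pred = scores >= t                 # flag instance as anomalous
            tpr = np.mean(pred[labels == 1])   # fraction of intrusions detected
            fpr = np.mean(pred[labels == 0])   # fraction of benign flows flagged
            u = tpr - fp_cost * fpr            # utility: false alarms are costly
            if u > best_u:
                best_t, best_u = t, u
        return best_t

    rng = np.random.default_rng(0)
    labels = (rng.random(5000) < 0.05).astype(int)   # 5% of flows are attacks
    scores = rng.normal(labels * 2.0, 1.0)           # synthetic anomaly scores
    print(tune_threshold(scores, labels))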

    Simpleweb/University of Twente Traffic Traces Data Repository

    The computer networks research community lacks shared measurement data. As a consequence, most researchers need to spend a considerable part of their time planning and executing measurements before being able to perform their studies. The lack of shared data also makes it hard to compare and validate results. This report describes our efforts to distribute a portion of our network data through the Simpleweb/University of Twente Traffic Traces Data Repository.

    Inside Dropbox: Understanding Personal Cloud Storage Services

    Personal cloud storage services are gaining popularity. With a rush of providers to enter the market and an increasing offer of cheap storage space, it is to be expected that cloud storage will soon generate a high amount of Internet traffic. Very little is known about the architecture and the performance of such systems, and the workload they have to face. This understanding is essential for designing efficient cloud storage systems and predicting their impact on the network. This paper presents a characterization of Dropbox, the leading solution for personal cloud storage in our datasets. By means of passive measurements, we analyze data from four vantage points in Europe, collected during 42 consecutive days. Our contributions are threefold: Firstly, we are the first to study Dropbox, which we show to be the most widely-used cloud storage system, already accounting for a volume equivalent to around one third of the YouTube traffic at campus networks on some days. Secondly, we characterize the workload that typical users in different environments generate to the system, highlighting how this reflects on network traffic. Lastly, our results show possible performance bottlenecks caused by both the current system architecture and the storage protocol. These are exacerbated for users connected far from the control and storage data-centers.