    Approximation of Message Inter-Arrival and Inter-Departure Time Distributions in IMS/NGN Architecture Using Phase-Type Distributions, Journal of Telecommunications and Information Technology, 2013, nr 3

    Currently it is assumed that the requirements of the information society for delivering multimedia services will be satisfied by the Next Generation Network (NGN) architecture, which includes elements of the IP Multimedia Subsystem (IMS) solution. In order to guarantee Quality of Service (QoS), NGN has to be appropriately designed and dimensioned, and proper traffic models should therefore be proposed and applied. This requires determining queuing models adequate to the message inter-arrival and inter-departure time distributions in the network. In this paper, these distributions are investigated at different points of a single NGN domain, using a simulation model developed according to the latest standards and research. Relations between network parameters and the obtained message inter-arrival and inter-departure time distributions are indicated. Moreover, the possibility of approximating these distributions using phase-type distributions is investigated, which can help identify proper queuing models and construct an analytical model suitable for NGN.
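    The abstract does not describe the fitting procedure itself, so the following is a minimal sketch of the standard two-moment recipe for phase-type approximation: match the sample mean and squared coefficient of variation (scv) of the inter-arrival or inter-departure times with an Erlang-k distribution when scv <= 1, or a balanced-means two-phase hyperexponential when scv > 1. The function name and the choice of sub-families are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fit_phase_type(samples):
    """Two-moment phase-type fit for inter-arrival/inter-departure times.

    Returns ("erlang", k, rate) when the squared coefficient of
    variation (scv) is <= 1, or ("h2", p1, lam1, p2, lam2) for a
    balanced-means two-phase hyperexponential when scv > 1.
    (Textbook recipe, not the paper's exact procedure.)
    """
    samples = np.asarray(samples, dtype=float)
    m = samples.mean()
    scv = samples.var() / m ** 2

    if scv <= 1.0:                       # less variable than exponential
        k = max(1, round(1.0 / scv))     # Erlang-k has scv = 1/k
        return ("erlang", k, k / m)      # each of the k phases has rate k/m
    # More variable than exponential: H2 with balanced means.
    p1 = 0.5 * (1.0 + np.sqrt((scv - 1.0) / (scv + 1.0)))
    p2 = 1.0 - p1
    return ("h2", p1, 2.0 * p1 / m, p2, 2.0 * p2 / m)

# Example: Erlang-4 samples (mean 2, scv 0.25) are recovered as such.
rng = np.random.default_rng(1)
print(fit_phase_type(rng.gamma(4.0, 0.5, 100_000)))  # ~ ("erlang", 4, 2.0)
```

    Matching the first two moments is usually enough to select a PH-type queuing model; richer fitters (e.g., EM-based) would also match the distribution's shape.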

    Journal of Telecommunications and Information Technology, 2008, nr 3

    A quarterly journal.

    Telecommunications Networks

    This book guides readers from the basics of rapidly emerging networks to more advanced concepts and future expectations of telecommunications networks. It identifies and examines the most pressing research issues in telecommunications, and it contains chapters written by leading researchers, academics, and industry professionals. Telecommunications Networks - Current Status and Future Trends covers surveys of recent publications that investigate key areas of interest such as IMS, eTOM, 3G/4G, optimization problems, modeling, simulation, and quality of service. The book, which is suitable for both PhD and master's students, is organized into six sections: New Generation Networks, Quality of Services, Sensor Networks, Telecommunications, Traffic Engineering, and Routing.

    Runtime Prediction for Scale-Out Data Analytics

    Many analytics applications generate mixed workloads, i.e., workloads comprised of analytical tasks with different processing characteristics, including data pre-processing, SQL, and iterative machine learning algorithms. Examples of such mixed workloads can be found in web data analysis, social media analysis, and graph analytics, where they are executed repetitively on large input datasets (e.g., "Find the average user time spent on the top 10 most popular web pages on the UK domain web graph."). Scale-out processing engines satisfy the needs of these applications by distributing the data and the processing task efficiently among multiple workers that are first reserved and then used to execute the task in parallel on a cluster of machines. Finding a resource allocation that can complete the workload execution within a given time constraint, and optimizing cluster resource allocation among multiple analytical workloads, motivates the need to estimate the runtime of a workload before its actual execution.

    Predicting the runtime of analytical workloads is a challenging problem, as runtime depends on a large number of factors that are hard to model prior to execution. These factors can be summarized as the workload characteristics (i.e., data statistics and processing costs), the execution configuration (i.e., deployment, resource allocation, and software settings), and the cost model that captures the interplay among all of the above parameters. While conventional cost models proposed in the context of query optimization can assess the relative order among alternative SQL query plans, they are not aimed at estimating absolute runtime. Additionally, conventional models are ill-equipped to estimate the runtime of iterative analytics that are executed repetitively until convergence, and that of user-defined data pre-processing operators which are not "owned" by the underlying data management system.

    This thesis demonstrates that the runtime of data analytics can be predicted accurately by breaking the analytical tasks into multiple processing phases, collecting key input features during a reference execution on a sample of the dataset, and then using the features to build per-phase cost models. We develop prediction models for three categories of data analytics produced by social media applications: iterative machine learning, data pre-processing, and reporting SQL. The prediction framework for iterative analytics, PREDIcT, addresses the challenging problem of estimating the number of iterations, and the per-iteration runtime, for a class of iterative machine learning algorithms that run repetitively until convergence. The hybrid prediction models we develop for data pre-processing tasks and for reporting SQL combine the benefits of analytical modeling with those of machine learning-based models. Through a training methodology and a pruning algorithm, we reduce the cost of running training queries to a minimum while maintaining a good level of accuracy for the models.
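    As an illustration of the per-phase modeling idea, here is a toy sketch: train one regression-based cost model per processing phase on features collected from reference runs, then sum the per-phase predictions, scaling the iterative phase by a separately estimated iteration count. The phase names, the single rows-per-worker feature, and the synthetic training data are all invented for the example; the thesis' actual features and models are far richer.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

def reference_runs(cost_per_row, n=50):
    """Synthetic 'reference executions on a sample of the dataset':
    runtime grows with the number of input rows per worker."""
    rows = rng.uniform(1e4, 1e6, n)
    workers = rng.integers(4, 64, n).astype(float)
    X = (rows / workers).reshape(-1, 1)   # single feature: rows per worker
    y = cost_per_row * X[:, 0] + 2.0      # hidden "true" per-phase cost
    return X, y

# One cost model per processing phase (phase names are hypothetical).
models = {}
for phase, cost in [("pre-process", 4e-5), ("sql", 1e-4), ("ml-iter", 2e-5)]:
    models[phase] = LinearRegression().fit(*reference_runs(cost))

def predict_runtime(rows, workers, n_iterations):
    """Total runtime = sum of per-phase predictions; the iterative phase
    is scaled by the (separately estimated) number of iterations."""
    x = [[rows / workers]]
    total = sum(models[p].predict(x)[0] for p in ("pre-process", "sql"))
    return total + n_iterations * models["ml-iter"].predict(x)[0]

print(f"{predict_runtime(rows=8e5, workers=32, n_iterations=25):.1f} s")
```

    Separating the iteration-count estimate from the per-iteration cost model mirrors the split the abstract describes for PREDIcT.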

    Virtual Machine Flow Analysis Using Host Kernel Tracing

    Cloud computing has gained popularity as it offers services at lower cost with a pay-per-use model, unlimited storage through distributed storage systems, and flexible computational power through direct hardware access. Virtualization technology allows a physical server to be shared between several isolated virtualized environments by deploying a hypervisor layer on top of the hardware. As a result, each isolated environment can run its own OS and applications without mutual interference. With the growth of cloud usage and the spread of virtualization, performance understanding and debugging are becoming a serious challenge for cloud providers. Offering good QoS and high availability are expected to be salient features of cloud computing. Nonetheless, the possible reasons behind performance degradation in VMs are numerous: a) heavy load of an application inside the VM; b) contention with other applications inside the same VM; c) contention with other co-located VMs; d) cloud platform failures. The first two cases can be managed by the VM owner, while the other cases need to be solved by the infrastructure provider. One key requirement for such a complex environment, with different virtualization layers, is a precise, low-overhead analysis tool. In this thesis, we present a host-based, precise method to recover the execution flow of virtualized environments, regardless of the level of nested virtualization. To avoid security issues, ease deployment, and reduce execution overhead, our method limits its data collection to the hypervisor level. In order to analyse the behavior of each VM, we use a lightweight tracing tool called the Linux Trace Toolkit Next Generation (LTTng) [1]. LTTng is optimised for high-throughput tracing with low overhead, thanks to the lock-free synchronization mechanisms used to update the trace buffer content.
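    To make the recovery idea concrete, here is a minimal sketch (not the thesis' algorithm): pair each hypervisor-level entry event with the next exit event on the same host CPU to measure how long the VM actually ran in guest mode. The kvm_entry/kvm_exit names follow the KVM kernel tracepoints, but exact event names and payloads vary with kernel version and architecture, and the real analysis must also handle exit reasons, vCPU migration, and nested layers.

```python
# Hypothetical host-kernel trace: (timestamp_ns, event_name, host_cpu).
events = [
    (100, "kvm_entry", 0), (180, "kvm_exit", 0),
    (200, "kvm_entry", 0), (350, "kvm_exit", 0),
    (120, "kvm_entry", 1), (400, "kvm_exit", 1),
]

def guest_time_per_cpu(events):
    """Pair each kvm_entry with the next kvm_exit on the same host CPU
    to recover the time the VM spent executing in guest mode there."""
    entry_ts, guest_ns = {}, {}
    for ts, name, cpu in sorted(events):
        if name == "kvm_entry":
            entry_ts[cpu] = ts
        elif name == "kvm_exit" and cpu in entry_ts:
            guest_ns[cpu] = guest_ns.get(cpu, 0) + ts - entry_ts.pop(cpu)
    return guest_ns

print(guest_time_per_cpu(events))  # {0: 230, 1: 280}
```

    Because only hypervisor-level events are consumed, nothing needs to be installed inside the guest, which matches the deployment and security constraints stated in the abstract.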

    Design Data Collection with Skylab Microwave Radiometer-Scatterometer S-193, Volume 1

    The author has identified the following significant results. Observations with S-193 have provided radar design information for systems to be flown on spacecraft, but only at 13.9 GHz and for land areas over the United States and Brazil, plus a few other areas of the world for which this kind of analysis was not made. Observations extended only to about a 50 deg angle of incidence. The value of a sensor with such gross resolution for most overland resource and status monitoring systems seems marginal, with the possible exception of monitoring soil moisture and major vegetation variations. The complementary nature of the scatterometer and radiometer systems was demonstrated by the correlation analysis. Although radiometers must have spatial resolutions dictated by antenna size, radars can use synthetic-aperture techniques to achieve much finer resolutions. The multiplicity of modes in the S-193 sensors complicated both the system's development and its employment. An attempt was made in the design of the S-193 to arrange optimum integration times for each angle and type of measurement. This unnecessarily complicated the design of the instrument, since the gains in precision achieved in this way were marginal. Either a software-controllable integration time or a set of only two or three integration times would have been better.