2,262 research outputs found
Traffic Dynamics of Computer Networks
Two important aspects of the Internet, namely the properties of its topology
and the characteristics of its data traffic, have attracted growing attention
from the physics community. My thesis considers problems from both aspects.
First I studied the stochastic behavior of TCP, the primary algorithm governing
traffic in the current Internet, in an elementary network scenario consisting
of a standalone infinite-sized buffer and an access link. The effect of the
fast recovery and fast retransmission (FR/FR) algorithms is also considered. I
showed that my model can be extended further to incorporate the effect of link
propagation delay, characteristic of WANs. I continued my thesis with the
investigation of finite-sized semi-bottleneck buffers, where packets can be
dropped not only at the link, but also at the buffer. I demonstrated that the
behavior of the system depends only on a certain combination of the parameters.
Moreover, an analytic formula was derived that gives the ratio of packet loss
rate at the buffer to the total packet loss rate. This formula makes it
possible to treat buffer-losses as if they were link-losses. Finally, I studied
computer networks from a structural perspective. I demonstrated through fluid
simulations that the distribution of resources, specifically the link
bandwidth, has a serious impact on the global performance of the network. Then
I analyzed the distribution of edge betweenness in a growing scale-free tree
under the condition that a local property, the in-degree of the "younger" node
of an arbitrary edge, is known, in order to find an optimal distribution of link
capacity. The derived formula is exact even for finite-sized networks. I also
calculated the conditional expectation of edge betweenness, rescaled for
infinite networks.
Comment: PhD thesis (135 pages, 62 figures)
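In a tree, the edge betweenness mentioned above has a simple closed form: removing an edge splits the tree into components of sizes s and n - s, and the betweenness is their product. A minimal sketch of this idea on a preferential-attachment tree (the growth model and all parameters here are illustrative, not the thesis's exact model):

```python
import random

def grow_pa_tree(n, seed=0):
    """Grow a scale-free tree by preferential attachment (m = 1):
    each new node links to an existing node chosen with probability
    proportional to its degree."""
    random.seed(seed)
    parent = {1: 0}        # child -> parent; the child is the "younger" node
    targets = [0, 1]       # node multiset weighted by degree
    for t in range(2, n):
        p = random.choice(targets)
        parent[t] = p
        targets.extend([p, t])
    return parent

def edge_betweenness(parent, n):
    """In a tree, the betweenness of an edge is s * (n - s), where s is
    the size of the subtree hanging below that edge."""
    children = {}
    for c, p in parent.items():
        children.setdefault(p, []).append(c)
    size = {}
    def subtree(v):
        size[v] = 1 + sum(subtree(c) for c in children.get(v, []))
        return size[v]
    subtree(0)
    return {(parent[c], c): size[c] * (n - size[c]) for c in parent}

n = 200
bc = edge_betweenness(grow_pa_tree(n), n)
# leaf edges attain the minimum betweenness, n - 1
```

Conditioning this distribution on the in-degree of the younger endpoint, as the thesis does analytically, amounts to grouping the values of `bc` by the child's degree.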
Traffic measurement and analysis
Measurement and analysis of real traffic is important to gain knowledge
about the characteristics of the traffic. Without measurement, it is
impossible to build realistic traffic models. Only recently was data
traffic found to have self-similar properties. In this thesis work,
traffic captured on the network at SICS and on the Supernet is shown to
have this fractal-like behaviour. The traffic is also examined with
respect to which protocols and packet sizes are present and in what
proportions. In the SICS trace most packets are small, TCP is shown to be
the predominant transport protocol, and NNTP the most common application.
In contrast, the Supernet traffic is dominated by large UDP packets sent
between ports outside the well-known range. Finally, characteristics of the client
side of the WWW traffic are examined more closely. In order to extract
useful information from the packet trace, web browsers use of TCP and HTTP
is investigated including new features in HTTP/1.1 such as persistent
connections and pipelining. Empirical probability distributions are
derived describing session lengths, time between user clicks and the
amount of data transferred due to a single user click. These probability
distributions make up a simple model of WWW-sessions
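The self-similarity claim above is usually checked by estimating the Hurst parameter H; traffic is asymptotically self-similar when 0.5 < H < 1. One classic estimator is the aggregated-variance method, sketched here on synthetic data (illustrative only, not the SICS or Supernet traces):

```python
import math
import random
import statistics

def hurst_aggvar(series, scales=(1, 2, 4, 8, 16, 32)):
    """Aggregated-variance estimate of the Hurst parameter H: for a
    self-similar series, the variance of the m-aggregated series decays
    like m^(2H - 2), so the log-log slope gives H = 1 + slope / 2."""
    xs, ys = [], []
    for m in scales:
        agg = [statistics.mean(series[i:i + m])
               for i in range(0, len(series) - m + 1, m)]
        xs.append(math.log(m))
        ys.append(math.log(statistics.variance(agg)))
    n = len(xs)
    slope = ((n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys))
             / (n * sum(x * x for x in xs) - sum(xs) ** 2))
    return 1 + slope / 2

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(4096)]
h = hurst_aggvar(noise)   # uncorrelated traffic gives H close to 0.5
```

Real packet-count series from self-similar traffic would instead yield H noticeably above 0.5.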
The Dark Side(-Channel) of Mobile Devices: A Survey on Network Traffic Analysis
In recent years, mobile devices (e.g., smartphones and tablets) have met
with increasing commercial success and have become a fundamental element of
everyday life for billions of people all around the world. Mobile devices are
used not only for traditional communication activities (e.g., voice calls and
messages) but also for more advanced tasks made possible by an enormous amount
of multi-purpose applications (e.g., finance, gaming, and shopping). As a
result, those devices generate significant network traffic (a considerable part
of the overall Internet traffic). For this reason, the research community has
been investigating security and privacy issues that are related to the network
traffic generated by mobile devices, which could be analyzed to obtain
information useful for a variety of goals (ranging from device security and
network optimization, to fine-grained user profiling).
In this paper, we review the works that contributed to the state of the art
of network traffic analysis targeting mobile devices. In particular, we present
a systematic classification of the works in the literature according to three
criteria: (i) the goal of the analysis; (ii) the point where the network
traffic is captured; and (iii) the targeted mobile platforms. In this survey,
we consider capture points such as Wi-Fi access points, software
simulators, and real mobile devices or emulators. For the surveyed
works, we review and compare analysis techniques, validation methods, and
achieved results. We also discuss possible countermeasures, challenges, and
possible directions for future research on mobile traffic analysis and other
emerging domains (e.g., the Internet of Things). We believe our survey will be a
reference work for researchers and practitioners in this research field.
Comment: 55 pages
Comparative Analysis of Cloud Simulators and Authentication Techniques in Cloud Computing
Cloud computing is the provisioning of computer hardware and software resources over the Internet, so that anyone who is connected to the Internet can access them as a service in a seamless way. As we move more and more towards the application of this newly emerging technology, it is essential to study, evaluate, and analyze the performance, security, and other related problems that might be encountered in cloud computing. Since it is not practicable to directly examine the behavior of the cloud on such problems using real hardware and software resources, due to the high costs involved, modeling and simulation have become essential tools for dealing with these issues. In this paper, we review, analyse, and compare features of the existing cloud computing simulators and various location-based authentication and simulation tools.
A unifying perspective on protocol mediation: interoperability in the Future Internet
Given the highly dynamic and extremely heterogeneous software systems composing the Future Internet, automatically achieving interoperability between software components —without modifying them— is more than simply desirable, it is quickly becoming a necessity. Although much work has been carried out on interoperability, existing solutions have not fully succeeded in keeping pace with the increasing complexity and heterogeneity of modern software, and meeting the demands of runtime support. On the one hand, solutions at the application layer target higher automation and loose coupling through the synthesis of intermediary entities, mediators, to compensate for the differences between the interfaces of components and coordinate their behaviours, while assuming the use of the same middleware solution. On the other hand, solutions to interoperability across heterogeneous middleware technologies do not reconcile the differences between components at the application layer. In this paper we propose a unified approach for achieving interoperability between heterogeneous software components with compatible functionalities across the application and middleware layers. First, we provide a solution to automatically generate cross-layer parsers and composers that abstract network messages into a uniform representation independent of the middleware used. Second, these generated parsers and composers are integrated within a mediation framework to support the deployment of the mediators synthesised at the application layer. More specifically, the generated parser analyses the network messages received from one component and transforms them into a representation that can be understood by the application-level mediator. Then, the application-level mediator performs the necessary data conversion and behavioural coordination. Finally, the composer transforms the representation produced by the application-level mediator into network messages that can be sent to the other component. 
The resulting unified mediation framework reconciles the differences between software components from the application down to the middleware layers. We validate our approach through a case study in the area of conference management.
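The parse, mediate, compose pipeline described above can be sketched in a few lines. Everything here is hypothetical (toy wire formats and field names invented for illustration); the actual framework generates such parsers and composers automatically:

```python
import json
import struct

# Toy wire formats: component A emits JSON; component B expects a
# length-prefixed "key=value" text format.
def parse_a(msg: bytes) -> dict:
    """Parser: abstract A's network message into a uniform representation."""
    return json.loads(msg.decode("utf-8"))

def mediate(uniform: dict) -> dict:
    """Application-level mediator: convert data and rename fields so
    that B's interface can understand them."""
    return {"op": uniform["action"], "id": str(uniform["paper_id"])}

def compose_b(uniform: dict) -> bytes:
    """Composer: serialise the uniform representation into B's wire format."""
    body = ";".join(f"{k}={v}" for k, v in sorted(uniform.items())).encode("utf-8")
    return struct.pack(">I", len(body)) + body

wire = compose_b(mediate(parse_a(b'{"action": "submit", "paper_id": 42}')))
```

The key design point is that the mediator only ever sees the uniform representation, so the same application-level mediator works regardless of which middleware produced the message.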
PERFORMANCE CHARACTERISATION OF IP NETWORKS
The initial rapid expansion of the Internet, in terms of complexity and number of hosts, was
followed by an increased interest in its overall parameters and the quality of service the network offers.
This growth has led, in the first instance, to extensive research in the area of network monitoring,
in order to better understand the characteristics of the current Internet. In parallel, studies were
made in the area of protocol performance modelling, aiming to estimate the performance of
various Internet applications.
A key goal of this research project was the analysis of current Internet traffic performance from a
dual perspective: monitoring and prediction. In order to achieve this, the study has three main
phases. It starts by describing the relationship between data transfer performance and network
conditions, a relationship that proves to be critical when studying application performance. The
next phase proposes a novel architecture of inferring network conditions and transfer parameters
using captured traffic analysis. The final phase describes a novel alternative to current TCP
(Transmission Control Protocol) models, which provides the relationship between network, data
transfer, and client characteristics on one side, and the resulting TCP performance on the other,
while accounting for the features of current Internet transfers.
The proposed inference analysis method for network and transfer parameters uses online, non-intrusive
monitoring of captured traffic from a single point. This technique overcomes
limitations of prior approaches that are typically geared towards intrusive and/or dual-point
offline analysis. The method includes several novel aspects, such as TCP timestamp analysis,
which allows bottleneck bandwidth inference and more accurate receiver-based parameter
measurement, neither of which is possible using traditional acknowledgment-based inference.
The results of the traffic analysis determine the location of any degradations in network
conditions relative to the position of the monitoring point. The proposed monitoring framework
infers the performance parameters of the network paths traversed by the analysed traffic,
subject to the position of the monitoring point, and it can be used as a starting point in pro-active
network management.
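The bottleneck-bandwidth inference mentioned above is related to a classic single-point technique, packet-pair dispersion: two packets sent back-to-back leave the bottleneck spaced by size/bandwidth. A simplified sketch with invented timestamps (the thesis's actual method relies on TCP timestamp analysis rather than this basic estimator):

```python
import statistics

def bottleneck_bw(arrival_times, size_bytes=1500):
    """Packet-pair estimate of bottleneck bandwidth in bit/s: each gap
    between back-to-back packets yields size/gap; taking the median
    resists distortion from cross traffic."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:]) if b > a]
    return statistics.median(size_bytes * 8 / g for g in gaps)

# 1500-byte packets arriving 1.2 ms apart suggest a 10 Mbit/s bottleneck
bw = bottleneck_bw([0.0000, 0.0012, 0.0024, 0.0036])
```

In practice the gaps come from a passive capture at a single monitoring point, which is what makes the approach non-intrusive.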
The TCP performance prediction model is based on the observation that current, potentially
unknown, TCP implementations, as well as connection characteristics, are too complex for a
mathematical model. The model proposed in this thesis uses an artificial intelligence-based
analysis method to establish the relationship between the parameters that influence the evolution
of the TCP transfers and the resulting performance of those transfers. Based on preliminary tests
of classification and function approximation algorithms, a neural network analysis approach was
preferred due to its prediction accuracy.
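For context, the closed-form baseline that such learning-based models aim to improve on is the macroscopic TCP response function of Mathis et al., which ties steady-state throughput to MSS, RTT, and loss rate (this is standard background, not the thesis's own model):

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Macroscopic TCP model: throughput ~ (MSS / RTT) * sqrt(3/2) / sqrt(p),
    returned here in bit/s. Assumes a long-lived flow and moderate loss."""
    return (mss_bytes * 8 / rtt_s) * math.sqrt(1.5 / loss_rate)

# e.g. 1460-byte MSS, 100 ms RTT, 1% loss -> roughly 1.4 Mbit/s
bw = mathis_throughput(1460, 0.100, 0.01)
```

The thesis's argument is precisely that such formulas ignore implementation variants and connection characteristics, which is why a trained function approximator can predict real transfers more accurately.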
Both the monitoring method and the prediction model are validated using a combination of
traffic traces, ranging from synthetic transfers / environments, produced using a network
simulator/emulator, to traces produced using a script-based, controlled client and uncontrolled
traces, both using real Internet traffic. The validation tests indicate that the proposed approaches
provide better accuracy in terms of inferring network conditions and predicting transfer
performance in comparison with previous methods. The non-intrusive analysis of the real
network traces provides comprehensive information on the current Internet characteristics,
indicating low-loss, low-delay, and high bottleneck-bandwidth conditions for the majority of the
studied paths.
Overall, this study provides a method for inferring the characteristics of Internet paths based on
traffic analysis, an efficient methodology for predicting TCP transfer performance, and a firm
basis for future research in the areas of traffic analysis and performance modelling.
Drone-Assisted Wireless Communications
In order to address the increased demand for any-time/any-where wireless connectivity, both academic and industrial researchers are actively engaged in the design of the fifth generation (5G) wireless communication networks. In contrast to the traditional bottom-up or horizontal design approaches, 5G wireless networks are being co-created with various stakeholders to address connectivity requirements across various verticals (i.e., employing a top-to-bottom approach). From a communication networks perspective, this requires resilience under various failures. In the context of cellular networks, base station (BS) failures can be caused by either natural or man-made phenomena. Natural phenomena such as earthquakes or flooding can result in either the destruction of communication hardware or the disruption of energy supply to BSs. In such cases, there is a dire need for a mechanism through which a capacity shortfall can be met in a rapid manner. Drone-empowered small cellular networks, or so-called "flying cellular networks", present an attractive solution as they can be swiftly deployed for provisioning public safety (PS) networks.
While drone-empowered self-organising networks (SONs) and drone small cell networks (DSCNs) have received some attention in the recent past, the design space of such networks has not been extensively traversed. Thus, the purpose of this thesis is to study the optimal deployment of drone-empowered networks in different scenarios and for different applications (i.e., in cellular post-disaster scenarios and briefly in assisting the backscatter internet of things (IoT)). To this end, we borrow well-known tools from stochastic geometry to study the performance of multiple network deployments, as stochastic geometry provides a very powerful theoretical framework that accommodates network scalability and different spatial distributions. We then investigate the design space of flying wireless networks and explore the co-existence properties of an overlaid DSCN with the operational part of the existing networks. We define and study design parameters such as the optimal altitude and number of drone BSs as a function of destroyed BSs, propagation conditions, etc. Next, due to capacity and back-hauling limitations on drone small cells (DSCs), we assume that each coverage hole requires a multitude of DSCs to meet the shortfall coverage at a desired quality-of-service (QoS). Hence, we consider the clustered deployment of DSCs around the site of the destroyed BS. Accordingly, joint consideration of partially operating BSs and deployed DSCs yields a unique topology for such PS networks. Hence, we propose a clustering mechanism that extends the traditional Matérn and Thomas cluster processes to a more general case where cluster size is dependent upon the size of the coverage hole. As a result, it is demonstrated that by intelligently selecting operational network parameters such as drone altitude, density, number, transmit power, and the spatial distribution of the deployment, ground user coverage can be significantly enhanced.
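The hole-size-dependent clustering described above can be illustrated with a toy sampler: each destroyed-BS site receives a number of drone small cells proportional to its coverage-hole area, scattered uniformly in a disc around the site. This is a simplified, deterministic-count variant; a true Matérn cluster process would draw Poisson-distributed counts per cluster:

```python
import math
import random

def clustered_dscs(holes, density_per_km2, cluster_radius_km, seed=0):
    """Place drone small cells around coverage holes: the count per hole
    scales with the hole's area; positions are uniform in a disc of the
    given radius centred on the hole (cluster size depends on hole size)."""
    random.seed(seed)
    points = []
    for cx, cy, hole_r in holes:
        count = max(1, round(density_per_km2 * math.pi * hole_r ** 2))
        for _ in range(count):
            r = cluster_radius_km * math.sqrt(random.random())  # uniform in disc
            theta = random.uniform(0.0, 2.0 * math.pi)
            points.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
    return points

# two holes: a 1 km-radius hole at the origin and a 2 km-radius hole at (5, 5)
dscs = clustered_dscs([(0.0, 0.0, 1.0), (5.0, 5.0, 2.0)], 1.0, 0.5)
```

The larger hole receives roughly four times as many DSCs, which is the qualitative behaviour the generalised cluster process captures.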
As another contribution of this thesis, we also present a detailed analysis of the coverage and spectral efficiency of a downlink cellular network. Rather than relying on the first-order statistics of the received signal-to-interference ratio (SIR), such as coverage probability, we focus on characterizing its meta-distribution. As a result, our new design framework reveals that the traditional results which advocate lowering of BS heights, or even optimal selection of BS height, do not yield a consistent service experience across users. Finally, for drone-assisted IoT sensor networks, we develop a comprehensive framework to characterize the performance of a drone-assisted backscatter communication-based IoT sensor network. A statistical framework is developed to quantify the coverage probability that explicitly accommodates a dyadic backscatter channel, which experiences deeper fades than the one-way Rayleigh channel. We practically implement the proposed system using software-defined radio (SDR) and a custom-designed sensor node (SN) tag. Measurements of parameters such as the noise figure, tag reflection coefficient, etc., are used to parameterize the developed framework.
The Pivotal Role of Causality in Local Quantum Physics
In this article an attempt is made to present very recent conceptual and
computational developments in QFT as new manifestations of old and
well-established physical principles. The vehicle for converting the
quantum-algebraic aspects of local quantum physics into more classical
geometric structures is the modular theory of Tomita. As the above-named
laureate, to whom this article is dedicated, has shown together with his
collaborator for the first time in sufficient generality, its use in physics goes through
Einstein causality. This line of research recently gained momentum when it was
realized that it is not only of structural and conceptual innovative power (see
section 4), but also promises to be a new computational road into
nonperturbative QFT (section 5) which, picturesquely speaking, enters the
subject on the extreme opposite (noncommutative) side.Comment: This is a updated version which has been submitted to Journal of
Physics A, tcilatex 62 pages. Adress: Institut fuer Theoretische Physik
FU-Berlin, Arnimallee 14, 14195 Berlin presently CBPF, Rua Dr. Xavier Sigaud
150, 22290-180 Rio de Janeiro, Brazi