The Dynamics of Internet Traffic: Self-Similarity, Self-Organization, and Complex Phenomena
The Internet is the most complex system ever created in human history.
Therefore, its dynamics and traffic unsurprisingly exhibit a rich variety of
complex behavior, self-organization, and other phenomena that have been
researched for years. This paper reviews the complex dynamics of
Internet traffic. Departing from conventional treatments, we take a view from
both the network-engineering and physics perspectives, showing the strengths and
weaknesses as well as the insights of both. In addition, we cover many less-studied
phenomena, such as traffic oscillations, the large-scale effects of worm traffic,
and comparisons between the Internet and biological models.
Comment: 63 pages, 7 figures, 7 tables, submitted to Advances in Complex Systems
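Self-similar (long-range-dependent) traffic of the kind this review surveys is commonly characterized by the Hurst exponent H. As a hedged illustration (not taken from the paper), the aggregated-variance method estimates H from how the variance of block-averaged traffic decays with block size, since for self-similar traffic Var(X^(m)) ~ m^(2H-2):

```python
# Aggregated-variance estimate of the Hurst exponent H for a time series.
# A log-log fit of the variance of the m-aggregated series against m has
# slope 2H - 2, so H = 1 + slope/2. Purely illustrative sketch.
import math
import random

def aggregate(series, m):
    """Average the series over non-overlapping blocks of size m."""
    n = len(series) // m
    return [sum(series[i * m:(i + 1) * m]) / m for i in range(n)]

def variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

def hurst_aggvar(series, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Least-squares slope of log Var(X^(m)) vs log m, then H = 1 + slope/2."""
    xs = [math.log(m) for m in block_sizes]
    ys = [math.log(variance(aggregate(series, m))) for m in block_sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return 1 + slope / 2

random.seed(0)
iid = [random.random() for _ in range(4096)]
# For i.i.d. noise the estimate should come out near H = 0.5;
# heavy long-range-dependent traffic yields H closer to 1.
print(round(hurst_aggvar(iid), 2))
```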
Graph Annotations in Modeling Complex Network Topologies
The coarsest approximation of the structure of a complex network, such as the
Internet, is a simple undirected unweighted graph. This approximation, however,
loses too much detail. In reality, objects represented by vertices and edges in
such a graph possess some non-trivial internal structure that varies across and
differentiates among distinct types of links or nodes. In this work, we
abstract such additional information as network annotations. We introduce a
network topology modeling framework that treats annotations as an extended
correlation profile of a network. Assuming we have this profile measured for a
given network, we present an algorithm to rescale it in order to construct
networks of varying size that still reproduce the original measured annotation
profile.
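As an illustrative sketch only (the bucketing scheme and all names below are invented, not the paper's algorithm), an annotation correlation profile can be represented as the empirical distribution of edge annotations conditioned on coarse degree classes of each edge's endpoints:

```python
# Hypothetical annotation correlation profile: for each annotated edge,
# record its annotation together with coarse (logarithmic) degree classes
# of its endpoints, and normalize the counts into a distribution that a
# generator could then impose on a larger synthetic graph.
from collections import Counter

def degree_class(k):
    """Coarse logarithmic degree bucket (illustrative choice)."""
    b = 0
    while k > 1:
        k //= 2
        b += 1
    return b

def annotation_profile(edges, degrees):
    """Empirical distribution of (class(u), class(v), annotation) triples."""
    counts = Counter()
    for u, v, ann in edges:
        cu, cv = sorted((degree_class(degrees[u]), degree_class(degrees[v])))
        counts[(cu, cv, ann)] += 1
    total = sum(counts.values())
    return {key: c / total for key, c in counts.items()}

# Toy AS-like graph: edges carry business-relationship annotations.
degrees = {0: 4, 1: 2, 2: 1, 3: 1}
edges = [(0, 1, "peer"), (0, 2, "customer"),
         (0, 3, "customer"), (1, 3, "peer")]
profile = annotation_profile(edges, degrees)
print(profile)
```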
Using this methodology, we accurately capture the network properties
essential for realistic simulations of network applications and protocols, or
any other simulations involving complex network topologies, including modeling
and simulation of network evolution. We apply our approach to the Autonomous
System (AS) topology of the Internet annotated with business relationships
between ASs. This topology captures the large-scale structure of the Internet.
An in-depth understanding of this structure and tools to model it are cornerstones
of research on future Internet architectures and designs. We find that our
techniques are able to accurately capture the structure of annotation
correlations within this topology, thus reproducing a number of its important
properties in synthetically generated random graphs.
Self-similarity of complex networks
Complex networks have been studied extensively due to their relevance to many
real systems as diverse as the World-Wide-Web (WWW), the Internet, energy
landscapes, biological and social networks
\cite{ab-review,mendes,vespignani,newman,amaral}. A large number of real
networks are called ``scale-free'' because they show a power-law distribution
of the number of links per node \cite{ab-review,barabasi1999,faloutsos}.
However, it is widely believed that complex networks are not {\it length-scale}
invariant or self-similar. This conclusion originates from the ``small-world''
property of these networks, which implies that the number of nodes increases
exponentially with the ``diameter'' of the network
\cite{erdos,bollobas,milgram,watts}, rather than the power-law relation
expected for a self-similar structure. Nevertheless, here we present a novel
approach to the analysis of such networks, revealing that their structure is
indeed self-similar. This result is achieved by the application of a
renormalization procedure which coarse-grains the system into boxes containing
nodes within a given ``size''. Concurrently, we identify a power-law relation
between the number of boxes needed to cover the network and the size of the box
defining a finite self-similar exponent. These fundamental properties, which
are shown for the WWW, social, cellular and protein-protein interaction
networks, help to understand the emergence of the scale-free property in
complex networks. They suggest a common self-organization dynamics of diverse
networks at different scales into a critical state and in turn bring together
previously unrelated fields: the statistical physics of complex networks with
renormalization group, fractals and critical phenomena.
Comment: 28 pages, 12 figures, more information at http://www.jamlab.or
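The box-covering renormalization described above can be sketched in a few lines. This is a hedged illustration, not the authors' exact procedure: it greedily covers a toy network with boxes of bounded size and counts how many boxes N_B each box size l_B requires; a power-law decay of N_B with l_B defines the self-similar exponent:

```python
# Greedy box-covering sketch on a toy graph (a path of 8 nodes).
# For each box size l_b, repeatedly grow a box of BFS depth < l_b from an
# uncovered node until the whole network is covered, counting the boxes.
from collections import deque

def bfs_ball(adj, src, limit):
    """Nodes within BFS distance < limit of src."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] + 1 >= limit:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return set(dist)

def box_count(adj, l_b):
    """Number of boxes of size l_b needed to cover every node."""
    uncovered = set(adj)
    boxes = 0
    while uncovered:
        seed = next(iter(uncovered))
        uncovered -= bfs_ball(adj, seed, l_b) & uncovered
        boxes += 1
    return boxes

adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
counts = [box_count(adj, l) for l in (1, 2, 4, 8)]
print(counts)  # N_B shrinks as the box size grows
```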
Scaling Causality Analysis for Production Systems.
Causality analysis reveals how program values influence each other.
It is important for debugging, optimizing, and understanding the execution of
programs. This thesis scales causality analysis to production systems
consisting of desktop and server applications as well as large-scale Internet
services. This enables developers to employ causality analysis to debug and
optimize complex, modern software systems. This thesis shows that it is
possible to scale causality analysis to both fine-grained instruction level
analysis and analysis of Internet scale distributed systems with thousands of
discrete software components by developing and employing automated methods to
observe and reason about causality.
First, we observe causality at a fine-grained instruction level by developing
the first taint tracking framework to support tracking millions of input
sources. We also introduce flexible taint tracking, which allows scoping of
different queries and dynamic filtering of inputs, outputs, and
relationships.
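As a hedged, much-simplified illustration of the idea (not the thesis's framework), taint labels for a very large number of input sources can be kept as an integer bitmask that computations union as they combine values, with a predicate standing in for query scoping:

```python
# Minimal taint-propagation sketch: each value carries a set of input-source
# labels encoded as an integer bitmask, so unions over millions of sources
# stay cheap. The `keep` predicate is an invented stand-in for query scoping.
class Tainted:
    def __init__(self, value, taint=0):
        self.value = value
        self.taint = taint  # bit i set => influenced by input source i

def source(value, source_id):
    """Introduce a value from a fresh input source."""
    return Tainted(value, 1 << source_id)

def add(a, b):
    """A computation propagates the union of its operands' taints."""
    return Tainted(a.value + b.value, a.taint | b.taint)

def sources_of(t, keep=lambda i: True):
    """Recover source ids from the bitmask, optionally filtered."""
    return {i for i in range(t.taint.bit_length())
            if (t.taint >> i) & 1 and keep(i)}

x = source(3, 0)
y = source(4, 5)
z = add(x, y)
print(z.value, sorted(sources_of(z)))  # 7 [0, 5]
```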
Next, we introduce the Mystery Machine, which uses a ``big data'' approach to
discover causal relationships between software components in a large-scale
Internet service. We leverage the fact that large-scale Internet services
receive a large number of requests in order to observe counterexamples to
hypothesized causal relationships. Using discovered causal relationships, we
identify the critical path for request execution and use the critical path
analysis to explore potential scheduling optimizations.
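The counterexample-driven idea can be sketched as follows. This is an illustrative reconstruction, not the Mystery Machine's implementation: hypothesize a happens-before relationship for every ordered pair of events, then discard any hypothesis that some observed trace contradicts:

```python
# Counterexample-driven causal inference sketch: start with every possible
# ordering hypothesis and eliminate those falsified by observed traces.
# Event names and traces are illustrative.
from itertools import permutations

def infer_happens_before(traces):
    events = {e for trace in traces for e in trace}
    # Hypothesize "a happens before b" for every ordered pair up front.
    hypotheses = set(permutations(events, 2))
    for trace in traces:
        pos = {e: i for i, e in enumerate(trace)}
        for a, b in list(hypotheses):
            # A trace where b precedes a is a counterexample to "a before b".
            if a in pos and b in pos and pos[b] < pos[a]:
                hypotheses.discard((a, b))
    return hypotheses

traces = [
    ["recv", "cache", "render", "send"],
    ["recv", "render", "cache", "send"],  # cache/render order varies
]
hb = infer_happens_before(traces)
print(sorted(hb))  # only orderings consistent with every trace survive
```

With enough traces, the surviving edges approximate the true dependency graph, from which a critical path for request execution can be extracted.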
Finally, we explore using causality to make data-quality tradeoffs in
Internet services. A data-quality tradeoff is an explicit decision by a software
component to return lower-fidelity data in order to improve response time or
minimize resource usage. We perform a study of data-quality tradeoffs in a
large-scale Internet service to show the pervasiveness of these
tradeoffs. We develop DQBarge, a system that enables better data-quality
tradeoffs by propagating critical information along the causal path of request
processing. Our evaluation shows that DQBarge helps Internet services mitigate
load spikes, improve utilization of spare resources, and implement dynamic
capacity planning.
PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/135888/1/mcchow_1.pd
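A data-quality tradeoff of the kind DQBarge enables might look like the following hedged sketch, where the load signal propagated along the request's causal path is a stand-in for the system's real mechanism and all names are invented:

```python
# Illustrative data-quality tradeoff: a component consults load information
# carried with the request and returns lower-fidelity data under pressure.
# DQBarge's actual interfaces differ; these names are invented.
def cheap_lookup(query):
    """Fast, unranked candidate retrieval (toy)."""
    return [f"{query}-{i}" for i in range(20)]

def ranked_lookup(query):
    """Expensive ranking pass over the candidates (toy)."""
    return sorted(cheap_lookup(query))

def handle_request(query, context):
    """context carries provenance/load info propagated with the request."""
    if context.get("load", 0.0) > 0.8:
        # Degrade gracefully: fewer results, skip the ranking pass.
        return {"results": cheap_lookup(query)[:5], "fidelity": "reduced"}
    return {"results": ranked_lookup(query), "fidelity": "full"}

print(handle_request("q", {"load": 0.95})["fidelity"])  # reduced
print(handle_request("q", {"load": 0.2})["fidelity"])   # full
```

Making the load signal explicit in the request context is what lets the tradeoff be taken proactively at the component that owns the fidelity decision, rather than reactively after a timeout.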
Fog Computing in Medical Internet-of-Things: Architecture, Implementation, and Applications
In an era when the market segment of the Internet of Things (IoT) tops the chart
in various business reports, the field of medicine is widely envisioned to gain
a large benefit from the explosion of wearables and internet-connected sensors
that surround us, acquiring and communicating unprecedented data on symptoms,
medication, food intake, and daily-life activities impacting one's health and
wellness. However, IoT-driven healthcare
would have to overcome many barriers, such as: 1) There is an increasing demand
for data storage on cloud servers where the analysis of the medical big data
becomes increasingly complex, 2) The data, when communicated, are vulnerable to
security and privacy issues, 3) The communication of the continuously collected
data is not only costly but also energy-hungry, and 4) Operating and maintaining
the sensors directly from the cloud servers are non-trivial tasks. This book
chapter defines Fog Computing in the context of medical IoT. Conceptually, Fog
Computing is a service-oriented intermediate layer in IoT, providing the
interfaces between the sensors and cloud servers for facilitating connectivity,
data transfer, and queryable local database. The centerpiece of Fog computing
is a low-power, intelligent, wireless, embedded computing node that carries out
signal conditioning and data analytics on raw data collected from wearables or
other medical sensors and offers efficient means to serve telehealth
interventions. We implemented and tested a fog computing system using the
Intel Edison and Raspberry Pi that allows acquisition, computing, storage and
communication of the various medical data such as pathological speech data of
individuals with speech disorders, Phonocardiogram (PCG) signal for heart rate
estimation, and Electrocardiogram (ECG)-based Q, R, S detection.
Comment: 29 pages, 30 figures, 5 tables. Keywords: Big Data, Body Area
Network, Body Sensor Network, Edge Computing, Fog Computing, Medical
Cyberphysical Systems, Medical Internet-of-Things, Telecare, Tele-treatment,
Wearable Devices. Chapter in Handbook of Large-Scale Distributed Computing in
Smart Healthcare (2017), Springer
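For the ECG use case above, the kind of on-node analytics a fog node performs can be illustrated with a toy R-peak detector. This is a deliberate simplification; real deployments use robust detectors such as Pan-Tompkins:

```python
# Toy R-peak detection and heart-rate estimation, sketching the on-node
# signal analytics a fog computing node might run on raw ECG samples.
def detect_r_peaks(signal, threshold):
    """Indices of local maxima above the threshold (naive R-peak picker)."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]):
            peaks.append(i)
    return peaks

def heart_rate_bpm(peaks, fs):
    """Mean R-R interval (samples at fs Hz) converted to beats per minute."""
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))

fs = 100  # Hz, synthetic signal
signal = [0.0] * 400
for p in (50, 150, 250, 350):  # one spike per second => 60 bpm
    signal[p] = 1.0

peaks = detect_r_peaks(signal, 0.5)
print(peaks, round(heart_rate_bpm(peaks, fs)))  # [50, 150, 250, 350] 60
```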
JXTA-Overlay: a P2P platform for distributed, collaborative, and ubiquitous computing
With the fast growth of the Internet infrastructure and the use of large-scale complex applications in industries, transport, logistics, government, health, and businesses, there is an increasing need to design and deploy multifeatured networking applications. Important features of such applications include the capability to be self-organized, be decentralized, integrate different types of resources (personal computers, laptops, and mobile and sensor devices), and provide global, transparent, and secure access to resources. Moreover, such applications should support not only traditional forms of reliable distributed computing and optimization of resources but also various forms of collaborative activities, such as business, online learning, and social networks, in an intelligent and secure environment. In this paper, we present Juxtapose (JXTA)-Overlay, a JXTA-based peer-to-peer (P2P) platform designed to leverage the capabilities of Java, JXTA, and P2P technologies to support distributed and collaborative systems. The platform can be used not only for efficient and reliable distributed computing but also for collaborative activities and ubiquitous computing, by integrating end devices into the platform. The design of a user interface as well as security issues are also tackled. We evaluate the proposed system through an experimental study and show its usefulness for massive processing computations and e-learning applications.