From Packet to Power Switching: Digital Direct Load Scheduling
At present, the power grid has tight control over its dispatchable generation
capacity but only very coarse control over demand. Energy consumers are shielded
from making price-aware decisions, which degrades the efficiency of the market.
This state of affairs tends to favor fossil fuel generation over renewable
sources. Because of the technological difficulties of storing electric energy,
the quest for mechanisms that would make the demand for electricity
controllable on a day-to-day basis is gaining prominence. The goal of this
paper is to provide one such mechanism, which we call Digital Direct Load
Scheduling (DDLS). DDLS is a direct load control mechanism in which we unbundle
individual requests for energy and digitize them so that they can be
automatically scheduled in a cellular architecture. Specifically, rather than
storing energy or interrupting the job of appliances, we choose to hold
requests for energy in queues and optimize the service time of individual
appliances belonging to a broad class which we refer to as "deferrable loads".
The function of each neighborhood scheduler is to optimize the time at which
these appliances start to function. This process is intended to shape the
aggregate load profile of the neighborhood so as to optimize an objective
function which incorporates the spot price of energy, and also allows
distributed energy resources to supply part of the generation dynamically.Comment: Accepted by the IEEE journal of Selected Areas in Communications
(JSAC): Smart Grid Communications series, to appea
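As a rough illustration only (the abstract does not give the scheduler's exact formulation), a minimal sketch of the core idea, holding deferrable-load requests in a queue and choosing start times against a forecast spot price, might look as follows; the price vector, job durations, deadlines, and greedy per-job policy are all assumptions for illustration.

```python
# Minimal sketch (illustrative only): a neighborhood scheduler that holds
# requests from deferrable appliances in a queue and picks the start slot
# that minimizes the forecast spot-price cost of each job.
# The hourly price vector, job durations, and greedy policy are assumptions,
# not the formulation used in the paper.
from collections import deque

spot_price = [42, 38, 35, 30, 28, 27, 30, 45,   # forecast price per slot
              60, 55, 50, 48, 47, 50, 55, 58,
              65, 70, 72, 68, 60, 52, 46, 44]

def cheapest_start(duration, deadline, price):
    """Return the start slot minimizing total price for a job that must
    finish no later than `deadline` (slots are 0-indexed)."""
    best_start, best_cost = None, float("inf")
    for start in range(0, deadline - duration + 1):
        cost = sum(price[start:start + duration])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# Queue of (appliance, duration_in_slots, deadline_slot) requests.
requests = deque([("dishwasher", 2, 12), ("ev_charger", 6, 24), ("dryer", 3, 18)])

while requests:
    name, duration, deadline = requests.popleft()
    start, cost = cheapest_start(duration, deadline, spot_price)
    print(f"{name}: start at slot {start}, expected cost {cost}")
```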
TOWARDS RELIABLE CIRCUMVENTION OF INTERNET CENSORSHIP
The Internet plays a crucial role in today's social and political movements by facilitating the free circulation of speech, information, and ideas; democracy and human rights throughout the world critically depend on preserving and bolstering the Internet's openness. Consequently, repressive regimes, totalitarian governments, and corrupt corporations regulate, monitor, and restrict access to the Internet, which is broadly known as Internet censorship. Most countries are improving their Internet infrastructure and, as a result, can deploy more advanced censoring techniques. Advances in the application of machine learning to network traffic analysis have also enabled more sophisticated Internet censorship. In this thesis, we take a close look at the main pillars of Internet censorship and introduce new defenses and attacks in the Internet censorship literature.
Internet censorship techniques inspect users' communications and can decide to interrupt a connection to prevent a user from communicating with a specific entity. Traffic analysis is one of the main techniques used to infer information from Internet communications. One of the major challenges for traffic analysis mechanisms is scaling to today's exploding volumes of network traffic: they impose high storage, communication, and computation overheads. We address this scalability issue by introducing a new direction for traffic analysis, which we call compressive traffic analysis. Moreover, we show that, unfortunately, traffic analysis attacks can be conducted on anonymity systems with drastically higher accuracy than before by leveraging emerging learning mechanisms. In particular, we design a system, called DeepCorr, that outperforms the state of the art by significant margins in correlating network connections. DeepCorr leverages an advanced deep learning architecture to learn a flow correlation function tailored to complex networks. To analyze the weaknesses of such approaches, we also show that an adversary can defeat deep-neural-network-based traffic analysis techniques by applying statistically undetectable adversarial perturbations to the patterns of live network traffic.
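The abstract only names the techniques; as a hedged illustration of the compressive idea (not the dissertation's actual design), one can represent each flow by a feature vector, compress it with a random projection, and correlate flows in the much smaller compressed domain. Feature choice, projection, and scoring below are assumptions for illustration.

```python
# Hedged sketch of the compressive idea only: represent each flow by a
# feature vector (e.g., inter-packet delays), compress it with a random
# projection, and correlate flows in the low-dimensional domain.
# The feature choice, projection, and cosine score are illustrative
# assumptions, not the dissertation's actual algorithms.
import numpy as np

rng = np.random.default_rng(0)
FEATURES, COMPRESSED = 1000, 50          # raw vs. compressed dimensionality
projection = rng.normal(size=(COMPRESSED, FEATURES)) / np.sqrt(COMPRESSED)

def compress(flow_features):
    return projection @ flow_features    # store only 50 numbers per flow

def correlation_score(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

entry_flow = rng.exponential(size=FEATURES)                       # e.g., inter-packet delays
exit_flow = entry_flow + rng.normal(scale=0.05, size=FEATURES)    # same flow, perturbed
unrelated = rng.exponential(size=FEATURES)

print(correlation_score(compress(entry_flow), compress(exit_flow)))   # high
print(correlation_score(compress(entry_flow), compress(unrelated)))   # low
```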
We also design techniques to circumvent Internet censorship. Decoy routing is an emerging approach to censorship circumvention in which circumvention is implemented with help from a number of volunteer Internet autonomous systems, called decoy ASes. We propose a new architecture for decoy routing that, by design, is significantly more resistant to rerouting attacks than all previous designs. Unlike previous designs, our new architecture operates decoy routers only on the downstream traffic of the censored users; therefore we call it downstream-only decoy routing. As we demonstrate through Internet-scale BGP simulations, downstream-only decoy routing offers significantly stronger resistance to rerouting attacks, intuitively because a (censoring) ISP has much less control over the downstream BGP routes of its traffic. We then propose game-theoretic approaches to model the arms race between censors and censorship circumvention tools. This allows us to analyze the effect of different parameters and censoring behaviors on the performance of censorship circumvention tools. We apply our methods to two fundamental problems in Internet censorship.
Finally, to bring our ideas to practice, we design a new censorship circumvention tool called \name. \name aims at increasing the collateral damage of censorship by employing a "mass" of normal Internet users, from both censored and uncensored areas, to serve as circumvention proxies.
FAST Copper for Broadband Access
FAST Copper is a multi-year, U.S. NSF-funded project that started in 2004 and is jointly pursued by the research groups of Mung Chiang at Princeton University, John Cioffi at Stanford University, and Alexander Fraser at Fraser Research Lab, in collaboration with several industrial partners including AT&T. The goal of the FAST Copper Project is to provide ubiquitous, 100 Mbps, fiber/DSL broadband access to everyone in the US with a phone line. This goal will be achieved through two threads of research: dynamic and joint optimization of resources in Frequency, Amplitude, Space, and Time (thus the name 'FAST') to overcome the attenuation and crosstalk bottlenecks, and the integration of communication, networking, computation, modeling, and distributed information management and control for the multi-user twisted-pair network.
Optimizing the delivery of multimedia over mobile networks
The consumption of multimedia content is moving from a residential environment to mobile
phones. Mobile data traffic, driven mostly by video demand, is increasing rapidly and wireless
spectrum is becoming an increasingly scarce resource. This makes it highly important to operate
mobile networks efficiently. To tackle this, recent developments in anticipatory networking
schemes make it possible to predict the future capacity of mobile devices and optimize the allocation of the limited wireless resources. Further, optimizing Quality of Experience (smooth, quick, and high-quality playback) is more difficult in the mobile setting due to the highly dynamic nature of wireless links. A key requirement for both anticipatory networking schemes and QoE optimization is estimating the available bandwidth of mobile devices. Ideally,
this should be done quickly and with low overhead.
In summary, we propose a series of improvements to the delivery of multimedia over mobile networks. We do so by identifying inefficiencies in the interconnection of mobile operators with the servers hosting content, proposing an algorithm that opportunistically produces frequent capacity estimates suitable for use in resource optimization solutions, and finally proposing another algorithm that estimates the bandwidth class of a device from minimal traffic, in order to identify the ideal streaming quality its connection may support before playback begins.
The main body of this thesis proposes two lightweight algorithms designed to provide bandwidth estimates under the tight constraints of the mobile environment, most notably the usually very limited traffic quota. To do so, we begin by providing a thorough overview of the communication path between a content server and a mobile device. We continue by analysing how accurate smartphone measurements can be and by identifying in depth the various artifacts that add noise to on-device measurements. We then propose a novel lightweight measurement technique that can be used as a basis for advanced resource optimization algorithms running on mobile phones. Our main idea leverages an original packet-dispersion-based technique to estimate per-user capacity. This allows passive measurements by simply sampling the existing mobile traffic. Our technique is able to efficiently filter outliers introduced by mobile network schedulers and phone hardware. In order to assess and verify our
measurement technique, we apply it to a diverse dataset generated by both extensive simulations
and a week-long measurement campaign spanning two cities in two countries, different radio
technologies, and covering all times of the day. The results demonstrate that our technique is effective even if it is provided only with a small fraction of the exchanged packets of a flow. The
only requirement for the input data is that it should consist of a few consecutive packets that are
gathered periodically. This makes the measurement algorithm a good candidate for inclusion in
OS libraries to allow for advanced resource optimization and application-level traffic scheduling,
based on current and predicted future user capacity.
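A minimal, hedged sketch of a passive packet-dispersion estimator follows; the robust median filtering below is an illustrative stand-in for the thesis's actual outlier-filtering procedure, and the trace is invented.

```python
# Hedged sketch of passive packet-dispersion capacity estimation.
# Input: (arrival_time_seconds, size_bytes) for a few consecutive packets
# sampled from existing traffic. The median filtering is an illustrative
# stand-in for the thesis's outlier-filtering procedure.
from statistics import median

def capacity_estimate_bps(packets, min_gap=1e-6):
    samples = []
    for (t_prev, _), (t_cur, size) in zip(packets, packets[1:]):
        gap = t_cur - t_prev
        if gap > min_gap:                      # ignore packets merged into one burst
            samples.append(size * 8 / gap)     # bits per second for this packet pair
    return median(samples) if samples else None

# Example: five 1500-byte packets arriving ~1 ms apart (~12 Mbit/s),
# with one scheduler-induced stall in the middle.
trace = [(0.000, 1500), (0.001, 1500), (0.002, 1500), (0.012, 1500), (0.013, 1500)]
print(capacity_estimate_bps(trace))
```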
We proceed with another algorithm that takes advantage of the traffic generated by short-lived TCP connections, which form the majority of mobile connections, to passively estimate the currently available bandwidth class. Our algorithm is able to extract useful information even if the TCP connection never exits the slow-start phase. To the best of our knowledge, no other solution can operate with such constrained input. Our estimation method achieves good precision despite artifacts introduced by the slow-start behavior of TCP, the mobile scheduler, and phone hardware. We evaluate our solution against traces collected in four European countries. Furthermore, the small footprint of our algorithm allows its deployment on resource-limited devices.
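The thesis's trained estimator is not reproduced here; a rough, hedged sketch of the underlying idea is to look at how many bytes a flow delivers per RTT-sized round during slow start and map the peak to a coarse class. The round grouping, thresholds, and labels below are assumptions for illustration.

```python
# Hedged sketch only: infer the bandwidth class of a short TCP flow from the
# bytes it delivers in its first few slow-start rounds. The round grouping,
# thresholds, and class labels are illustrative assumptions, not the thesis's
# actual estimator.
def bytes_per_round(packet_times, packet_sizes, rtt_s):
    """Group downlink packet arrivals into RTT-sized rounds and sum bytes per round."""
    rounds = {}
    for t, size in zip(packet_times, packet_sizes):
        r = int(t // rtt_s)
        rounds[r] = rounds.get(r, 0) + size
    return [rounds[k] for k in sorted(rounds)]

def bandwidth_class(round_bytes, rtt_s):
    """Map the peak per-round throughput to a coarse class."""
    peak_bps = max(b * 8 / rtt_s for b in round_bytes)
    if peak_bps < 1e6:
        return "low (<1 Mbit/s)"
    if peak_bps < 10e6:
        return "medium (1-10 Mbit/s)"
    return "high (>10 Mbit/s)"

# Example: a flow that never leaves slow start, with an RTT of 50 ms.
times = [0.00, 0.05, 0.05, 0.10, 0.10, 0.10, 0.10]
sizes = [1500] * len(times)
rounds = bytes_per_round(times, sizes, rtt_s=0.05)
print(rounds, bandwidth_class(rounds, rtt_s=0.05))
```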
Finally, to cope with the rapid traffic increase, mobile application developers outsource
their cloud infrastructure deployment and content delivery to cloud computing services
and content delivery networks. Studying how these services, which we collectively denote Cloud
Service Providers (CSPs), perform over Mobile Network Operators (MNOs) is crucial to understanding
some of the performance limitations of today’s mobile apps. To that end, we perform
the first empirical study of the complex dynamics between applications, MNOs and CSPs. First,
we use real mobile app traffic traces that we gathered through a global crowdsourcing campaign
to identify the most prevalent CSPs supporting today’s mobile Internet. Then, we investigate how
well these services interconnect with major European MNOs at a topological level, and measure
their performance over European MNO networks through a month-long measurement campaign
on the MONROE mobile broadband testbed. We discover that the top 6 most prevalent CSPs
are used by 85% of apps, and observe significant differences in their performance across different
MNOs due to the nature of their services, peering relationships with MNOs, and deployment
strategies. We also find that CSP performance in MNOs is affected by inflated path length, roaming,
and the presence of middleboxes, but not by the choice of DNS resolver. We also observe that the choice of the operator's Point of Presence (PoP) may inflate the delay
towards popular websites by at least 20%. This work has been supported by IMDEA Networks Institute.
Inferring hidden features in the Internet (PhD thesis)
The Internet is a large-scale decentralized system that is composed of thousands of independent networks. In this system, there are two main components, interdomain routing and traffic, that are vital inputs for many tasks such as traffic engineering, security, and business intelligence. However, due to the decentralized structure of the Internet, global knowledge of both interdomain routing and traffic is hard to come by. In this dissertation, we address a set of statistical inference problems with the goal of extending the knowledge of the interdomain-level Internet.
In the first part of this dissertation we investigate the relationship between the interdomain topology and an individual network’s inference ability. We first frame the questions through abstract analysis of idealized topologies, and then use actual routing measurements and topologies to study the ability of real networks to infer traffic flows.
In the second part, we study the ability of networks to identify which paths flow through their network. We first show that answering this question is surprisingly hard due to the design of the interdomain routing system, in which each network can learn only a limited set of routes. Therefore, network operators have to rely on observed traffic. However, observed traffic can only confirm that a particular route passes through the network, not that a route does not pass through it. In order to solve this routing inference problem, we propose a nonparametric inference technique that works quite accurately. The key idea behind our technique is measuring the distances between destinations. To accomplish that, we define a metric called Routing State Distance (RSD) to measure distances in terms of routing similarity.
Finally, in the third part, we study our new metric, RSD, in detail. Using RSD we address the important and difficult problem of characterizing the set of paths between networks. The collection of paths across networks is a valuable source for understanding important phenomena in the Internet, as path selection is driven by the economic and performance considerations of networks. We show that RSD has a number of appealing properties that can uncover these hidden phenomena.
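The dissertation's formal definition of RSD is not reproduced in the abstract; one hedged way to read "distance in terms of routing similarity" is to count, over a set of vantage points, how many route two destinations differently. The toy routing tables below are invented for illustration.

```python
# Hedged sketch of a routing-similarity distance in the spirit of RSD.
# Assumption for illustration: the distance between two destination prefixes
# is the number of vantage points whose next hop toward them differs.
# This is not necessarily the dissertation's exact definition.

# next_hop[vantage_point][destination] = next-hop AS (toy data)
next_hop = {
    "AS1": {"d1": "AS7", "d2": "AS7", "d3": "AS9"},
    "AS2": {"d1": "AS5", "d2": "AS5", "d3": "AS5"},
    "AS3": {"d1": "AS8", "d2": "AS4", "d3": "AS4"},
}

def rsd(dest_a, dest_b, tables=next_hop):
    """Count vantage points that route dest_a and dest_b via different next hops."""
    return sum(1 for vp in tables if tables[vp][dest_a] != tables[vp][dest_b])

print(rsd("d1", "d2"))  # 1: only AS3 routes them differently
print(rsd("d1", "d3"))  # 2: AS1 and AS3 route them differently
```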
The P-ART framework for placement of virtual network services in a multi-cloud environment
Carriers' network services are distributed, dynamic, and investment intensive. Deploying them as virtual network services (VNS) brings the promise of low-cost, agile deployments, which reduce the time to market for new services. If these virtual services are hosted dynamically over multiple clouds, greater flexibility in optimizing performance and cost can be achieved. On the flip side, when orchestrated over multiple clouds, the stringent performance norms for carrier services become difficult to meet, necessitating novel and innovative placement strategies. In selecting the appropriate combination of clouds for placement, it is important to look ahead and visualize the environment that will exist at the time a virtual network service is actually activated. This serves multiple purposes: clouds can be selected to optimize cost, the chosen performance parameters can be kept within the defined limits, and the speed of placement can be increased. In this paper, we propose the P-ART (Predictive-Adaptive Real Time) framework, which relies on predictive-deductive features to achieve these objectives. With so much riding on predictions, we include in our framework a novel concept-drift compensation technique that brings the predictions closer to reality by accounting for long-term traffic variations. At the same time, near-real-time updates of the prediction models take care of sudden short-term variations. These predictions are then used by a new randomized placement heuristic that carries out a fast cloud selection using a least-cost, latency-constrained policy. An empirical analysis, carried out using datasets from a queuing-theoretic model and also through implementation on CloudLab, proves the effectiveness of the P-ART framework. The placement system works fast, placing thousands of functions in a sub-minute time frame with a high acceptance ratio, making it suitable for dynamic placement. We expect the framework to be an important step in making the deployment of carrier-grade VNS on multi-cloud systems, using network function virtualization (NFV), a reality.
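The abstract does not spell out the heuristic itself; a minimal, hedged sketch of a randomized least-cost, latency-constrained cloud selection might look like the following, where the cloud names, predicted latencies, and costs are invented for illustration.

```python
# Hedged sketch of a randomized least-cost, latency-constrained cloud selection.
# Cloud names, predicted latencies, and costs are invented for illustration;
# this is not the P-ART heuristic itself.
import random

clouds = {
    "cloud-a": {"predicted_latency_ms": 12, "cost": 9.0},
    "cloud-b": {"predicted_latency_ms": 25, "cost": 4.5},
    "cloud-c": {"predicted_latency_ms": 18, "cost": 6.0},
    "cloud-d": {"predicted_latency_ms": 40, "cost": 3.0},
}

def place(latency_budget_ms, sample_size=2, rng=random):
    """Sample a few feasible clouds at random and return the cheapest of them."""
    feasible = [c for c, p in clouds.items()
                if p["predicted_latency_ms"] <= latency_budget_ms]
    if not feasible:
        return None                      # reject: no cloud meets the latency norm
    candidates = rng.sample(feasible, min(sample_size, len(feasible)))
    return min(candidates, key=lambda c: clouds[c]["cost"])

print(place(latency_budget_ms=20))       # cheapest of the sampled feasible clouds
```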
The Role of Caching in Future Communication Systems and Networks
This paper has the following ambitious goal: to convince the reader that content caching is an exciting research topic for future communication systems and networks. Caching has been studied for more than 40 years and has recently received increased attention from industry and academia. Novel caching techniques promise to push network performance to unprecedented limits, but also pose significant technical challenges. This tutorial provides a brief overview of existing caching solutions, discusses seminal papers that open new directions in caching, and presents the contributions of this special issue. We analyze the challenges that caching needs to address today, also considering an industry perspective, and identify bottleneck issues that must be resolved to unleash the full potential of this promising technique.