Why (and How) Networks Should Run Themselves
The proliferation of networked devices, systems, and applications that we
depend on every day makes managing networks more important than ever. The
increasing security, availability, and performance demands of these
applications suggest that these increasingly difficult network management
problems be solved in real time, across a complex web of interacting protocols
and systems. Alas, just as the importance of network management has increased,
the network has grown so complex that it is seemingly unmanageable. In this new
era, network management requires a fundamentally new approach. Instead of
optimizations based on closed-form analysis of individual protocols, network
operators need data-driven, machine-learning-based models of end-to-end and
application performance based on high-level policy goals and a holistic view of
the underlying components. Instead of anomaly detection algorithms that operate
on offline analysis of network traces, operators need classification and
detection algorithms that can make real-time, closed-loop decisions. Networks
should learn to drive themselves. This paper explores this concept, discussing
how we might attain this ambitious goal by more closely coupling measurement
with real-time control and by relying on learning for inference and prediction
about a networked application or system, as opposed to closed-form analysis of
individual protocols.
Transparent and scalable client-side server selection using netlets
Replication of web content on the Internet has been found to improve the response time, performance, and reliability of web services. In such distributed server systems, the location of servers relative to client nodes affects the service response time perceived by clients, in addition to server load conditions. This is due to the characteristics of the network path segments through which client requests get routed. Hence, a number of researchers have advocated making server selection decisions at the client side of the network. In this paper, we present a transparent approach for client-side server selection in the Internet using Netlet services. Netlets are autonomous, nomadic mobile software components which persist and roam in the network independently, providing predefined network services. In this application, Netlet-based services embedded with intelligence to support server selection are deployed by servers close to potential client communities to set up dynamic service decision points within the network. An anycast address is used to identify available distributed decision points in the network. Each service decision point transparently directs client requests to the best-performing server based on its in-built intelligence, supported by real-time measurements from probes sent by the Netlet to each server. It is shown that the resulting system provides a client-side server selection solution which is server-customisable, scalable and fault transparent.
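The core decision each Netlet decision point makes can be sketched as follows. This is a minimal, hypothetical illustration of probe-driven server selection, not the paper's actual implementation; the replica names, the load-penalty term, and the scoring function are all assumptions for the sake of the example.

```python
# Hypothetical sketch: a decision point picks the best-performing server
# from its most recent probe measurements. The optional load factor is an
# assumed extension, not part of the original Netlet design.

def select_server(probe_rtts_ms, server_load=None):
    """Return the server with the lowest (optionally load-weighted) RTT.

    probe_rtts_ms: mapping of server name -> most recent probe RTT in ms
    server_load:   optional mapping of server name -> load factor (>= 1.0)
                   used to penalise heavily loaded servers
    """
    def score(server):
        penalty = (server_load or {}).get(server, 1.0)
        return probe_rtts_ms[server] * penalty

    return min(probe_rtts_ms, key=score)

# Illustrative probe results (milliseconds) and load factors:
rtts = {"replica-eu": 48.0, "replica-us": 105.0, "replica-ap": 210.0}
load = {"replica-eu": 2.5, "replica-us": 1.0, "replica-ap": 1.0}

print(select_server(rtts))        # lowest raw RTT wins
print(select_server(rtts, load))  # load weighting can change the choice
```

In the second call, the nearby but busy replica is passed over in favour of a slightly more distant, lightly loaded one, which mirrors the abstract's point that both path characteristics and server load conditions should inform the decision.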
Rusty Clusters? Dusting an IPv6 Research Foundation
The long-running IPv6 Hitlist service is an important foundation for IPv6
measurement studies. It helps to overcome infeasible, complete address space
scans by collecting valuable, unbiased IPv6 address candidates and regularly
testing their responsiveness. However, the Internet itself is a quickly
changing ecosystem that can affect long-running services, potentially
introducing biases and blind spots into ongoing data collection. Frequent
analyses as well as updates are necessary to keep the service valuable to the
community.
In this paper, we show that the existing hitlist is highly impacted by the
Great Firewall of China, and we offer a cleaned view of the development of
responsive addresses. While the accumulated input shows an increasing bias
towards some networks, the cleaned set of responsive addresses is well
distributed and shows a steady increase.
Although it is a best practice to remove aliased prefixes from IPv6 hitlists,
we show that this also removes major content delivery networks. More than 98%
of all IPv6 addresses announced by Fastly were labeled as aliased and
Cloudflare prefixes hosting more than 10M domains were excluded. Depending on
the hitlist usage, e.g., higher layer protocol scans, inclusion of addresses
from these providers can be valuable.
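The aliased-prefix removal described above can be sketched as a simple filtering step. This is an illustrative reconstruction, not the IPv6 Hitlist service's actual pipeline; the prefixes and addresses are documentation examples, and the optional keep-list reflects the abstract's observation that blanket removal also discards valuable CDN addresses.

```python
# Hypothetical sketch of aliased-prefix filtering for an IPv6 hitlist.
# Prefixes and candidate addresses below are illustrative only.
import ipaddress

def remove_aliased(candidates, aliased_prefixes, keep_prefixes=()):
    """Drop candidates inside aliased prefixes, unless a keep-list
    (e.g. for CDN prefixes relevant to higher-layer scans) retains them."""
    aliased = [ipaddress.ip_network(p) for p in aliased_prefixes]
    kept = [ipaddress.ip_network(p) for p in keep_prefixes]

    def retain(addr):
        ip = ipaddress.ip_address(addr)
        if any(ip in net for net in kept):
            return True
        return not any(ip in net for net in aliased)

    return [addr for addr in candidates if retain(addr)]

aliased = ["2001:db8:a::/48"]
candidates = ["2001:db8:a::1", "2001:db8:b::1"]

print(remove_aliased(candidates, aliased))
print(remove_aliased(candidates, aliased, keep_prefixes=["2001:db8:a::/48"]))
```

The keep-list parameter shows one way a hitlist user could opt back in to aliased CDN prefixes when, as the abstract notes, their inclusion is valuable for higher-layer protocol scans.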
Lastly, we evaluate different new address candidate sources, including target
generation algorithms to improve the coverage of the current IPv6 Hitlist. We
show that a combination of different methodologies is able to identify 5.6M
new, responsive addresses. This accounts for an increase of 174%, and combined
with the current IPv6 Hitlist, we identify 8.8M responsive addresses.
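The reported figures are internally consistent, which a quick back-of-the-envelope check makes explicit: the baseline hitlist size is not stated directly, but it is implied by the combined total and the number of newly found addresses.

```python
# Consistency check of the abstract's figures: 5.6M new responsive
# addresses, a ~174% increase, and 8.8M combined. The baseline size
# below is derived, not stated in the abstract.
new = 5.6e6
combined = 8.8e6

baseline = combined - new                 # implied current hitlist size
increase_pct = new / baseline * 100       # relative growth from new sources

print(f"implied baseline: {baseline/1e6:.1f}M")
print(f"relative increase: {increase_pct:.0f}%")
```

The derived baseline of about 3.2M responsive addresses yields a relative increase of roughly 175%, matching the ~174% stated in the abstract (the small gap is presumably rounding in the reported millions).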
- …