An active, ontology-driven network service for Internet collaboration
Web portals have emerged as an important means of collaboration on the WWW, and the integration of ontologies promises to make them more accurate in how they serve users' collaboration and information location requirements. However, web portals are essentially centralised architectures, which makes it difficult to support seamless roaming between portals and collaboration between groups supported on different portals. This paper proposes an alternative, de-centralised approach to collaboration over the web that uses ontologies and exploits content-based networking. We argue that this approach promises a user-centric, timely, secure and location-independent mechanism, which is potentially more scalable and universal than existing centralised portals.
From Linked Data to Relevant Data -- Time is the Essence
The Semantic Web initiative puts emphasis not primarily on putting data on the Web, but rather on creating links in a way that both humans and machines can explore the Web of data. When such users access the Web, they leave a trail as Web servers maintain a history of requests. Web usage mining approaches have been studied since the beginning of the Web, given the log's huge potential for purposes such as resource annotation, personalization, forecasting, etc. However, the impact of any such efforts has not really gone beyond generating statistics detailing who, when, and how Web pages maintained by a Web server were visited.
Comment: 1st International Workshop on Usage Analysis and the Web of Data (USEWOD2011) at the 20th International World Wide Web Conference (WWW2011), Hyderabad, India, March 28th, 2011
On the Intrinsic Locality Properties of Web Reference Streams
There has been considerable work done in the study of Web reference streams: sequences of requests for Web objects. In particular, many studies have looked at the locality properties of such streams, because of the impact of locality on the design and performance of caching and prefetching systems. However, a general framework for understanding why reference streams exhibit given locality properties has not yet emerged.
In this work we take a first step in this direction, based on viewing the Web as a set of reference streams that are transformed by Web components (clients, servers, and intermediaries). We propose a graph-based framework for describing this collection of streams and components. We identify three basic stream transformations that occur at nodes of the graph: aggregation, disaggregation and filtering, and we show how these transformations can be used to abstract the effects of different Web components on their associated reference streams. This view allows a structured approach to the analysis of why reference streams show given properties at different points in the Web.
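To make these three transformations concrete, here is a minimal Python sketch; the (timestamp, client, object) stream representation, the LRU cache standing in for the filtering component, and the function names are illustrative assumptions rather than the paper's formalism.

```python
from collections import defaultdict
from heapq import merge

# A reference stream is modelled as a time-ordered list of
# (timestamp, client_id, object_id) tuples.

def aggregate(*streams):
    """Aggregation: interleave several streams into one, ordered by timestamp
    (e.g. many clients feeding a shared proxy)."""
    return list(merge(*streams, key=lambda ref: ref[0]))

def disaggregate(stream, key=lambda ref: ref[2]):
    """Disaggregation: split one stream into sub-streams, e.g. a proxy
    forwarding each object's requests to the origin server that owns it."""
    parts = defaultdict(list)
    for ref in stream:
        parts[key(ref)].append(ref)
    return dict(parts)

def filter_through_cache(stream, cache_size):
    """Filtering: an LRU cache absorbs hits, so only misses appear in the
    downstream reference stream."""
    lru, misses = [], []
    for ref in stream:
        obj = ref[2]
        if obj in lru:
            lru.remove(obj)            # hit: served locally
        else:
            misses.append(ref)         # miss: forwarded downstream
            if len(lru) >= cache_size:
                lru.pop(0)             # evict least recently used
        lru.append(obj)
    return misses
```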
Applying this approach to the study of locality requires good metrics for locality. These metrics must meet three criteria: 1) they must accurately capture temporal locality; 2) they must be independent of trace artifacts such as trace length; and 3) they must not involve manual procedures or model-based assumptions. We describe two metrics meeting these criteria that each capture a different kind of temporal locality in reference streams. The popularity component of temporal locality is captured by entropy, while the correlation component is captured by interreference coefficient of variation. We argue that these metrics are more natural and more useful than previously proposed metrics for temporal locality.
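A rough sketch of how these two metrics might be computed over a reference stream follows; the simple list-of-object-IDs trace format and the per-object averaging are assumptions for illustration, and the paper's exact definitions and normalisation may differ.

```python
import math
from collections import Counter, defaultdict

def popularity_entropy(stream):
    """Entropy of the object popularity distribution: higher values mean
    references are spread more evenly, i.e. weaker popularity skew."""
    counts, n = Counter(stream), len(stream)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def interreference_cv(stream):
    """Coefficient of variation of inter-reference gaps, averaged over objects
    referenced at least twice; values well above 1 suggest bursty
    re-references beyond what popularity alone would produce."""
    last_seen, gaps = {}, defaultdict(list)
    for i, obj in enumerate(stream):
        if obj in last_seen:
            gaps[obj].append(i - last_seen[obj])
        last_seen[obj] = i
    cvs = []
    for g in gaps.values():
        mean = sum(g) / len(g)
        var = sum((x - mean) ** 2 for x in g) / len(g)
        cvs.append(math.sqrt(var) / mean)
    return sum(cvs) / len(cvs) if cvs else 0.0

trace = list("AABBBCADAABB")           # toy trace with bursty reuse of A and B
print(popularity_entropy(trace))
print(interreference_cv(trace))
```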
We use this framework to analyze a diverse set of Web reference traces. We find that this framework can shed light on how and why locality properties vary across different locations in the Web topology. For example, we find that filtering and aggregation have opposing effects on the popularity component of the temporal locality, which helps to explain why multilevel caching can be effective in the Web. Furthermore, we find that all transformations tend to diminish the correlation component of temporal locality, which has implications for the utility of different cache replacement policies at different points in the Web.
National Science Foundation (ANI-9986397, ANI-0095988); CNPq-Brazil
Beyond Counting: New Perspectives on the Active IPv4 Address Space
In this study, we report on techniques and analyses that enable us to capture Internet-wide activity at individual IP address-level granularity by relying on server logs of a large commercial content delivery network (CDN) that serves close to 3 trillion HTTP requests on a daily basis. Across the whole of 2015, these logs recorded client activity involving 1.2 billion unique IPv4 addresses, the highest ever measured, in agreement with recent estimates. Monthly client IPv4 address counts showed constant growth for years prior, but since 2014, the IPv4 count has stagnated while IPv6 counts have grown. Thus, it seems we have entered an era marked by increased complexity, one in which the sole enumeration of active IPv4 addresses is of little use to characterize recent growth of the Internet as a whole.
With this observation in mind, we consider new points of view in the study of global IPv4 address activity. Our analysis shows significant churn in active IPv4 addresses: the set of active IPv4 addresses varies by as much as 25% over the course of a year. Second, by looking across the active addresses in a prefix, we are able to identify and attribute activity patterns to network restructurings, user behaviors, and, in particular, various address assignment practices. Third, by combining spatio-temporal measures of address utilization with measures of traffic volume, and sampling-based estimates of relative host counts, we present novel perspectives on worldwide IPv4 address activity, including empirical observation of under-utilization in some areas, and complete utilization, or exhaustion, in others.
Comment: in Proceedings of ACM IMC 2016
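The 25% churn figure can be read as simple set arithmetic over the active address sets of two observation periods; the sketch below uses a toy definition (symmetric difference over the union) and invented example addresses, which may not match the paper's exact metric.

```python
def address_churn(active_old, active_new):
    """Fraction of the combined active address set that changed between two
    periods: addresses that disappeared plus addresses that newly appeared,
    relative to the union of both sets."""
    old, new = set(active_old), set(active_new)
    changed = (old - new) | (new - old)
    return len(changed) / len(old | new)

# Toy example standing in for a year of CDN-observed client addresses
year_1 = {"192.0.2.1", "192.0.2.2", "198.51.100.7", "203.0.113.9"}
year_2 = {"192.0.2.1", "192.0.2.2", "203.0.113.9", "203.0.113.10"}
print(f"churn: {address_churn(year_1, year_2):.0%}")   # 40% in this toy case
```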
An Immune Inspired Approach to Anomaly Detection
The immune system provides a rich metaphor for computer security: anomaly detection that works in nature should work for machines. However, early artificial immune system approaches for computer security had only limited success. Arguably, this was due to these artificial systems being based on too simplistic a view of the immune system. We present here a second generation artificial immune system for process anomaly detection. It improves on earlier systems by having different artificial cell types that process information. Following detailed information about how to build such second generation systems, we find that communication between cell types is key to performance. Through realistic testing and validation we show that second generation artificial immune systems are capable of anomaly detection beyond generic system policies. The paper concludes with a discussion and outline of the next steps in this exciting area of computer security.
Comment: 19 pages, 4 tables, 2 figures, Handbook of Research on Information Security and Assurance
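The abstract does not spell out the cell types or their protocol, so the toy sketch below only illustrates the general idea of one artificial cell type turning raw process observations into signals and a second type integrating communicated signals into an anomaly decision; all class names, baselines, and thresholds are invented.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class SignalCell:
    """First cell type: watches one process and emits a danger signal when its
    observed syscall rate drifts away from a learned baseline."""
    baseline_rate: float

    def signal(self, observed_rate: float) -> float:
        if self.baseline_rate == 0:
            return 1.0 if observed_rate > 0 else 0.0
        return min(abs(observed_rate - self.baseline_rate) / self.baseline_rate, 1.0)

@dataclass
class DecisionCell:
    """Second cell type: integrates the signals communicated by many signal
    cells and flags the host once their average crosses a threshold."""
    threshold: float = 0.5
    inbox: list = field(default_factory=list)

    def receive(self, signal: float) -> None:
        self.inbox.append(signal)

    def anomalous(self) -> bool:
        return bool(self.inbox) and mean(self.inbox) > self.threshold

# One monitoring round: the middle process misbehaves
cells = [SignalCell(baseline_rate=r) for r in (20.0, 5.0, 50.0)]
observed = [22.0, 45.0, 51.0]
decider = DecisionCell(threshold=0.3)
for cell, rate in zip(cells, observed):
    decider.receive(cell.signal(rate))
print(decider.anomalous())   # True: the decision hinges on inter-cell communication
```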
Agents, Bookmarks and Clicks: A topical model of Web traffic
Analysis of aggregate and individual Web traffic has shown that PageRank is a poor model of how people navigate the Web. Using the empirical traffic patterns generated by a thousand users, we characterize several properties of Web traffic that cannot be reproduced by Markovian models. We examine both aggregate statistics capturing collective behavior, such as page and link traffic, and individual statistics, such as entropy and session size. No model currently explains all of these empirical observations simultaneously. We show that all of these traffic patterns can be explained by an agent-based model that takes into account several realistic browsing behaviors. First, agents maintain individual lists of bookmarks (a non-Markovian memory mechanism) that are used as teleportation targets. Second, agents can retreat along visited links, a branching mechanism that also allows us to reproduce behaviors such as the use of a back button and tabbed browsing. Finally, agents are sustained by visiting novel pages of topical interest, with adjacent pages being more topically related to each other than distant ones. This modulates the probability that an agent continues to browse or starts a new session, allowing us to recreate heterogeneous session lengths. The resulting model is capable of reproducing the collective and individual behaviors we observe in the empirical data, reconciling the narrowly focused browsing patterns of individual users with the extreme heterogeneity of aggregate traffic measurements. This result allows us to identify a few salient features that are necessary and sufficient to interpret the browsing patterns observed in our data. In addition to the descriptive and explanatory power of such a model, our results may lead the way to more sophisticated, realistic, and effective ranking and crawling algorithms.
Comment: 10 pages, 16 figures, 1 table - Long version of paper to appear in Proceedings of the 21st ACM Conference on Hypertext and Hypermedia
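To make the three mechanisms concrete, here is a deliberately simplified, self-contained simulation sketch; the toy link graph, the block-based topic assignment, and every parameter value are invented for illustration and are not the authors' calibrated model.

```python
import random

random.seed(1)

# Toy Web: each page links to a few others, and nearby pages share a topic.
N_PAGES = 200
links = {p: [(p + d) % N_PAGES for d in (1, 2, 3, 7)] for p in range(N_PAGES)}
topic = {p: p // 20 for p in range(N_PAGES)}

def relevance(current, nxt):
    """Topical interest: adjacent (same-topic) pages are more satisfying."""
    return 0.9 if topic[current] == topic[nxt] else 0.3

def browse(steps=2000, p_back=0.2):
    bookmarks = [random.choice(range(N_PAGES))]   # non-Markovian memory
    history, sessions, session_len = [], [], 0
    current = bookmarks[0]
    for _ in range(steps):
        session_len += 1
        nxt = random.choice(links[current])
        if random.random() > relevance(current, nxt):
            # Interest not sustained: end the session and teleport to a bookmark.
            sessions.append(session_len)
            session_len = 0
            bookmarks.append(current)
            current = random.choice(bookmarks)
            history.clear()
        elif history and random.random() < p_back:
            current = history.pop()               # back button / tab switch
        else:
            history.append(current)
            current = nxt
    return sessions

sessions = browse()
print(f"sessions: {len(sessions)}, mean length: {sum(sessions) / len(sessions):.1f}")
```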
The THREAT-ARREST Cyber-Security Training Platform
Cyber security is always a main concern for critical infrastructures and nation-wide safety and sustainability. Thus, advanced cyber ranges and security training are becoming imperative for the involved organizations. This paper presents a cyber security training platform, called THREAT-ARREST. The various platform modules can analyze an organization's system, identify the most critical threats, and tailor a training programme to its personnel needs. Then, different training programmes are created based on the trainee types (e.g. administrator, simple operator), providing several teaching procedures and accomplishing diverse learning goals. One of the main novelties of THREAT-ARREST is the modelling of these programmes along with the runtime monitoring, management, and evaluation operations. The platform is generic; nevertheless, its applicability is detailed in a smart energy case study.
Building an Emulation Environment for Cyber Security Analyses of Complex Networked Systems
Computer networks are undergoing phenomenal growth, driven by the rapidly increasing number of nodes constituting the networks. At the same time, the number of security threats on Internet and intranet networks is constantly growing, and the testing and experimentation of cyber defense solutions require the availability of separate test environments that best emulate the complexity of a real system. Such environments support the deployment and monitoring of complex mission-driven network scenarios, thus enabling the study of cyber defense strategies under real and controllable traffic and attack scenarios. In this paper, we propose a methodology that makes use of a combination of techniques of network and security assessment, and the use of cloud technologies, to build an emulation environment with an adjustable degree of affinity with respect to actual reference networks or planned systems. As a byproduct, starting from a specific study case, we collected a dataset consisting of complete network traces comprising benign and malicious traffic, which is feature-rich and publicly available.