Graph Annotations in Modeling Complex Network Topologies
The coarsest approximation of the structure of a complex network, such as the
Internet, is a simple undirected unweighted graph. This approximation, however,
loses too much detail. In reality, objects represented by vertices and edges in
such a graph possess some non-trivial internal structure that varies across and
differentiates among distinct types of links or nodes. In this work, we
abstract such additional information as network annotations. We introduce a
network topology modeling framework that treats annotations as an extended
correlation profile of a network. Assuming we have this profile measured for a
given network, we present an algorithm to rescale it in order to construct
networks of varying size that still reproduce the original measured annotation
profile.
Using this methodology, we accurately capture the network properties
essential for realistic simulations of network applications and protocols, or
any other simulations involving complex network topologies, including modeling
and simulation of network evolution. We apply our approach to the Autonomous
System (AS) topology of the Internet annotated with business relationships
between ASs. This topology captures the large-scale structure of the Internet.
In-depth understanding of this structure, and tools to model it, are cornerstones
of research on future Internet architectures and designs. We find that our
techniques are able to accurately capture the structure of annotation
correlations within this topology, thus reproducing a number of its important
properties in synthetically-generated random graphs.
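The core idea of the annotation profile can be illustrated with a minimal sketch. The edge labels, sample graph, and `rescale` routine below are hypothetical toy constructs (a uniform random graph with labels drawn i.i.d. from the measured label distribution), not the paper's actual rescaling algorithm, which preserves a full correlation profile:

```python
import random
from collections import Counter

# Hypothetical measured network: edges annotated with link types,
# e.g. AS business relationships (customer-to-provider, peer-to-peer).
measured_edges = [
    (0, 1, "c2p"), (0, 2, "c2p"), (1, 2, "p2p"),
    (2, 3, "c2p"), (3, 4, "p2p"), (1, 4, "c2p"),
]

# Coarsest annotation profile: the empirical distribution of edge labels.
counts = Counter(label for _, _, label in measured_edges)
total = sum(counts.values())
profile = {label: n / total for label, n in counts.items()}

def rescale(n_nodes, n_edges, profile, seed=42):
    """Generate a random graph of a different size whose edge annotations
    follow the measured label distribution (a simplification of the paper's
    correlation-profile rescaling)."""
    rng = random.Random(seed)
    labels = list(profile)
    weights = [profile[l] for l in labels]
    seen = set()
    annotated = []
    while len(annotated) < n_edges:
        u, v = rng.sample(range(n_nodes), 2)
        if (u, v) in seen or (v, u) in seen:
            continue  # keep the graph simple (no multi-edges)
        seen.add((u, v))
        annotated.append((u, v, rng.choices(labels, weights=weights)[0]))
    return annotated

# A larger synthetic graph reproducing the measured label frequencies.
synthetic = rescale(n_nodes=20, n_edges=40, profile=profile)
synth_profile = Counter(l for _, _, l in synthetic)
print(profile, dict(synth_profile))
```

In the full framework, the profile also records how labels correlate with node degrees and with each other, which this sketch deliberately omits.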
Learning Correlations between Linguistic Indicators and Semantic Constraints: Reuse of Context-Dependent Descriptions of Entities
This paper presents the results of a study on the semantic constraints
imposed on lexical choice by certain contextual indicators. We show how such
indicators are computed and how correlations between them and the choice of a
noun phrase description of a named entity can be automatically established
using supervised learning. Based on this correlation, we have developed a
technique for automatic lexical choice of descriptions of entities in text
generation. We discuss the underlying relationship between the pragmatics of
choosing an appropriate description that serves a specific purpose in the
automatically generated text and the semantics of the description itself. We
present our work in the framework of the more general concept of reuse of
linguistic structures that are automatically extracted from large corpora. We
present a formal evaluation of our approach and we conclude with some thoughts
on potential applications of our method.
Comment: 7 pages, uses colacl.sty and acl.bst, uses epsfig. To appear in the
Proceedings of the Joint 17th International Conference on Computational
Linguistics and 36th Annual Meeting of the Association for Computational
Linguistics (COLING-ACL'98).
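The learned correlation between contextual indicators and lexical choice can be sketched as a simple count-based model. The training pairs, indicator names, and `choose_description` helper below are invented toy examples, not the paper's actual features or learner:

```python
from collections import defaultdict, Counter

# Hypothetical corpus-extracted pairs: (contextual indicators observed
# near an entity mention, the noun-phrase description chosen for it).
training = [
    (("sports", "team"), "the midfielder"),
    (("sports", "injury"), "the midfielder"),
    (("politics", "vote"), "the senator"),
    (("politics", "debate"), "the senator"),
]

# Supervised step: tabulate indicator -> description co-occurrence counts.
table = defaultdict(Counter)
for indicators, desc in training:
    for ind in indicators:
        table[ind][desc] += 1

def choose_description(indicators):
    """Lexical choice: pick the description most correlated with the
    indicators present in the current context."""
    scores = Counter()
    for ind in indicators:
        scores.update(table.get(ind, Counter()))
    return scores.most_common(1)[0][0] if scores else None

print(choose_description(("sports",)))
```

A generator would call such a model at each point where an entity must be re-described, trading off the pragmatics of the context against the semantics of the candidate descriptions.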
Remote real-time monitoring of subsurface landfill gas migration
The cost of monitoring greenhouse gas emissions from landfill sites is of major concern for regulatory authorities. The current monitoring procedure is recognised as labour intensive, requiring agency inspectors to physically travel to perimeter borehole wells in rough terrain and manually measure gas concentration levels with expensive hand-held instrumentation. In this article we present a cost-effective and efficient system for remotely monitoring landfill subsurface migration of methane and carbon dioxide concentration levels. Based purely on an autonomous sensing architecture, the proposed sensing platform was capable of performing complex analytical measurements in situ and successfully communicating the data remotely to a cloud database. A web tool was developed to present the sensed data to relevant stakeholders. We report our experiences in deploying such an approach in the field over a period of approximately 16 months.
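A node in such an architecture measures in situ and pushes readings upstream. The field names, well identifier, and payload schema below are illustrative assumptions only; the deployed platform's actual data format is not described in the abstract:

```python
import json
from datetime import datetime, timezone

def build_payload(well_id, ch4_ppm, co2_ppm):
    """Package one in-situ gas measurement for upload to a cloud database.
    Hypothetical schema: the real deployment's fields are not public."""
    return json.dumps({
        "well": well_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ch4_ppm": ch4_ppm,   # methane concentration
        "co2_ppm": co2_ppm,   # carbon dioxide concentration
    })

# One reading from a perimeter borehole well, ready to transmit.
payload = build_payload("borehole-07", ch4_ppm=1250.0, co2_ppm=2100.0)
print(payload)
```

The web tool then queries the cloud database and renders such records for inspectors and other stakeholders, removing the need for site visits.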
CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines
Based on the information provided by European projects and national initiatives related to multimedia search, as well as domain experts who participated in the CHORUS think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective.
The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines.
From a socio-economic perspective, we survey the impact and legal consequences of these technical advances and point out future directions of research.
Crawling Facebook for Social Network Analysis Purposes
We describe our work on the collection and analysis of massive data describing the connections between participants in online social networks. Alternative approaches to social network data collection are defined and evaluated in practice against the popular Facebook Web site. Thanks to our ad-hoc, privacy-compliant crawlers, two large samples, comprising millions of connections, have been collected; the data is anonymous and organized as an undirected graph. We describe a set of tools that we developed to analyze specific properties of such social-network graphs, including degree distribution, centrality measures, scaling laws, and the distribution of friendship.
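The most basic of the listed properties, the degree distribution, can be computed directly from an anonymous undirected edge list. The toy graph below is an invented stand-in for the crawled samples, which comprise millions of connections:

```python
from collections import Counter

# Toy anonymous undirected friendship graph, as an edge list.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (0, 3)]

# Degree of each node: count both endpoints of every undirected edge.
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Degree distribution P(k): fraction of nodes having degree k.
n = len(degree)
distribution = {k: c / n for k, c in sorted(Counter(degree.values()).items())}
print(dict(degree), distribution)
```

On real crawled samples one would plot P(k) on log-log axes to test for the scaling laws the tools are designed to detect; centrality measures require traversals of the full graph rather than a single pass over the edge list.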
DoWitcher: Effective Worm Detection and Containment in the Internet Core
Enterprise networks are increasingly offloading the responsibility for worm detection and containment to carrier networks. However, current approaches to the zero-day worm detection problem, such as those based on content similarity of packet payloads, are not scalable to carrier link speeds (OC-48 and upwards). In this paper, we introduce a new system, DoWitcher, which in contrast to previous approaches is scalable as well as able to detect the stealthiest worms that employ low propagation rates or polymorphism to evade detection. DoWitcher uses an incremental approach to worm detection: first, it examines layer-4 traffic features to discern the presence of a worm anomaly; next, it determines a flow-filter mask that can be applied to isolate the suspect worm flows; finally, it enables full-packet capture of only those flows that match the mask, which are then processed by a longest common subsequence algorithm to extract the worm content signature. Via a proof-of-concept implementation on a commercially available network analyzer processing raw packets from an OC-48 link, we demonstrate the capability of DoWitcher to detect low-rate worms and extract signatures even for polymorphic worms.
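The final signature-extraction step relies on a longest common subsequence computation over captured payloads. The sketch below is the standard textbook dynamic-programming LCS, not DoWitcher's implementation, and the two payloads are invented examples of polymorphic variants sharing an invariant core:

```python
def lcs(a: bytes, b: bytes) -> bytes:
    """Longest common subsequence of two payloads via dynamic programming.
    An invariant byte subsequence shared by worm variants can serve as a
    content signature even when the surrounding bytes are polymorphic."""
    m, n = len(a), len(b)
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    # Backtrack through the table to recover one LCS.
    out = bytearray()
    i, j = m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return bytes(reversed(out))

# Two hypothetical polymorphic variants: differing padding, shared core.
sig = lcs(b"xxGET /exploit?payload=AAyy", b"zzGET /exploit?payload=BB")
print(sig)
```

The O(mn) cost of LCS is why the system applies it only to the small set of flows matching the flow-filter mask, after the cheaper layer-4 anomaly check has already narrowed the candidates.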