Effects of time window size and placement on the structure of aggregated networks
Complex networks are often constructed by aggregating empirical data over
time, such that a link represents the existence of interactions between the
endpoint nodes and the link weight represents the intensity of such
interactions within the aggregation time window. The resulting networks are
then often considered static. More often than not, the aggregation time window
is dictated by the availability of data, and the effects of its length on the
resulting networks are rarely considered. Here, we address this question by
studying the structural features of networks emerging from aggregating
empirical data over different time intervals, focussing on networks derived
from time-stamped, anonymized mobile telephone call records. Our results show
that short aggregation intervals yield networks where strong links associated
with dense clusters dominate; the seeds of such clusters or communities already become visible for intervals of around one week. The degree and weight
distributions are seen to become stationary around a few days and a few weeks,
respectively. An aggregation interval of around 30 days results in the most stable networks when consecutive windows are compared. For longer intervals,
the effects of weak or random links become increasingly stronger, and the
average degree of the network keeps growing even for intervals up to 180 days.
The placement of the time window is also seen to affect the outcome: for short
windows, different behavioural patterns play a role during weekends and
weekdays, and for longer windows it is seen that networks aggregated during
holiday periods are significantly different. Comment: 19 pages, 11 figures
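The aggregation step this abstract describes can be sketched roughly as follows. The record format, field names, and toy data below are illustrative assumptions, not the paper's actual schema: each call record is reduced to a timestamp and two endpoints, and a link's weight counts the interactions that fall inside the chosen time window.

```python
# Hypothetical sketch of window-based aggregation of time-stamped interaction
# records into a weighted, undirected network. The (timestamp, caller, callee)
# tuple format and the toy records are assumptions for illustration only.
from collections import Counter

def aggregate_window(records, t0, length):
    """Aggregate records falling in [t0, t0 + length) into link weights."""
    weights = Counter()
    for t, a, b in records:
        if t0 <= t < t0 + length:
            weights[frozenset((a, b))] += 1  # undirected link, weight = call count
    return weights

records = [(1, "A", "B"), (2, "B", "C"), (5, "A", "B"), (9, "C", "D")]
net = aggregate_window(records, t0=0, length=7)  # a short window misses the late C-D call
print(sorted((tuple(sorted(k)), v) for k, v in net.items()))
# prints: [(('A', 'B'), 2), (('B', 'C'), 1)]
```

Varying `t0` (window placement) and `length` (window size) over the same records is exactly the experiment whose structural effects the abstract summarizes.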
A Survey on IT-Techniques for a Dynamic Emergency Management in Large Infrastructures
This deliverable is a survey of the IT techniques that are relevant to the three use cases of the project EMILI. It describes the state of the art in four complementary IT areas: data cleansing, supervisory control and data acquisition, wireless sensor networks, and complex event processing. Even though the deliverable's authors have tried to avoid overly technical language and to explain every concept referred to, the deliverable might still seem rather technical to readers not yet familiar with the techniques it describes.
From buildings to cities: techniques for the multi-scale analysis of urban form and function
The built environment is a significant factor in many urban processes, yet direct measures of built form are
seldom used in geographical studies. Representation and analysis of urban form and function could provide
new insights and improve the evidence base for research. So far progress has been slow due to limited data
availability, computational demands, and a lack of methods to integrate built environment data with
aggregate geographical analysis. Spatial data and computational improvements are overcoming some of
these problems, but there remains a need for techniques to process and aggregate urban form data. Here we
develop a Built Environment Model of urban function and dwelling type classifications for Greater
London, based on detailed topographic and address-based data (sourced from Ordnance Survey
MasterMap). The multi-scale approach allows the Built Environment Model to be viewed at fine-scales for
local planning contexts, and at city-wide scales for aggregate geographical analysis, allowing an improved
understanding of urban processes. This flexibility is illustrated with two examples, urban function and residential type analysis, in which both local-scale urban clustering and city-wide trends in density and
agglomeration are shown. While we demonstrate the multi-scale Built Environment Model to be a viable
approach, a number of accuracy issues are identified, including the limitations of 2D data, inaccuracies in
commercial function data and problems with temporal attribution. These limitations currently restrict the
more advanced applications of the Built Environment Model.
On the Potential of Generic Modeling for VANET Data Aggregation Protocols
In-network data aggregation is a promising communication mechanism for reducing the bandwidth requirements of applications in vehicular ad-hoc networks (VANETs). Many aggregation schemes have been proposed, often with varying features. Most aggregation schemes are tailored to specific application scenarios and specific aggregation operations, which makes comparative evaluation of different schemes difficult. An application-centric view of aggregation also fails to tap the potential of cross-application aggregation. Generic modeling may help to unlock this potential. We outline a generic modeling approach to enable improved comparability of aggregation schemes and to facilitate joint optimization across different applications of aggregation in VANETs. This work outlines the requirements and general concept of a generic modeling approach and identifies open challenges.
Testing the spatial scale and the dynamic structure in regional models (a contribution to spatial econometric specification analysis)
This article addresses the problem of specification uncertainty in modeling spatial economic theories in stochastic form. It is argued that the traditional approach to spatial econometric modeling does not adequately deal with the type and extent of specification uncertainty commonly encountered in spatial economic analyses. Two alternative spatial econometric modeling procedures proposed in the literature are reviewed and shown to be suitable for systematically analyzing two sources of specification uncertainty, viz., the level of aggregation and the spatio-temporal dynamic structure in multiregional econometric models. The usefulness of one of these specification procedures is illustrated by the construction of a simple multiregional model for The Netherlands.
Reconstructing the world trade multiplex: the role of intensive and extensive biases
In economic and financial networks, the strength of each node always has an important economic meaning, such as the size of supply and demand, import and
export, or financial exposure. Constructing null models of networks matching
the observed strengths of all nodes is crucial in order to either detect
interesting deviations of an empirical network from economically meaningful
benchmarks or reconstruct the most likely structure of an economic network when
the latter is unknown. However, several studies have proved that real economic
networks and multiplexes are topologically very different from configurations
inferred only from node strengths. Here we provide a detailed analysis of the
World Trade Multiplex by comparing it to an enhanced null model that
simultaneously reproduces the strength and the degree of each node. We study
several temporal snapshots and almost one hundred layers (commodity classes) of
the multiplex and find that the observed properties are systematically well
reproduced by our model. Our formalism allows us to introduce the (static)
concept of extensive and intensive bias, defined as a measurable tendency of
the network to prefer either the formation of extra links or the reinforcement
of link weights, with respect to a reference case where only strengths are
enforced. Our findings complement the existing economic literature on (dynamic)
intensive and extensive trade margins. More generally, they show that real-world multiplexes can be strongly shaped by layer-specific local constraints.
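The mismatch this abstract starts from, that networks inferred only from node strengths differ topologically from real ones, can be illustrated with a minimal sketch. This is not the authors' enhanced null model (which also constrains degrees); it is the simplest strength-only null, in which the expected weight between nodes i and j is s_i * s_j / (2W). The toy network and all names are assumptions for illustration.

```python
# Illustrative sketch: compare an observed sparse weighted network with the
# simplest strength-only null model, w_ij = s_i * s_j / (2W). The strength-only
# null spreads weight over nearly all pairs, implying a far denser topology
# than observed -- the discrepancy addressed by also constraining degrees.

def strength_only_expected_weights(strengths):
    """Expected weight matrix under a strength-only null model."""
    total = sum(strengths)  # equals 2W: each link weight counted at both endpoints
    n = len(strengths)
    return [[strengths[i] * strengths[j] / total if i != j else 0.0
             for j in range(n)] for i in range(n)]

# Toy observed network: a sparse chain topology with sizeable link weights.
observed = [
    [0, 6, 0, 0],
    [6, 0, 4, 0],
    [0, 4, 0, 2],
    [0, 0, 2, 0],
]
strengths = [sum(row) for row in observed]  # node strengths s_i = [6, 10, 6, 2]
expected = strength_only_expected_weights(strengths)

observed_links = sum(1 for i in range(4) for j in range(i + 1, 4) if observed[i][j] > 0)
expected_links = sum(1 for i in range(4) for j in range(i + 1, 4) if expected[i][j] > 0)
print(observed_links, expected_links)  # prints: 3 6
```

The observed network has 3 links, while the strength-only null assigns positive expected weight to all 6 node pairs, overestimating density. Enforcing degrees alongside strengths, as the enhanced model in the abstract does, removes exactly this bias.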