Self-Organizing Flows in Social Networks
Social networks offer users new means of accessing information, essentially
relying on "social filtering", i.e. propagation and filtering of information by
social contacts. The sheer amount of data flowing in these networks, combined
with the limited budget of attention of each user, makes it difficult to ensure
that social filtering brings relevant content to the interested users. Our
motivation in this paper is to measure to what extent self-organization of the
social network results in efficient social filtering. To this end we introduce
flow games, a simple abstraction that models network formation under selfish
user dynamics, featuring user-specific interests and budget of attention. In
the context of homogeneous user interests, we show that selfish dynamics
converge to a stable network structure (namely a pure Nash equilibrium) with
close-to-optimal information dissemination. We show in contrast, for the more
realistic case of heterogeneous interests, that convergence, if it occurs, may
lead to information dissemination that can be arbitrarily inefficient, as
captured by an unbounded "price of anarchy". Nevertheless the situation differs
when users' interests exhibit a particular structure, captured by a metric
space with low doubling dimension. In that case, natural autonomous dynamics
converge to a stable configuration. Moreover, users obtain all the information
of interest to them in the corresponding dissemination, provided their budget
of attention is logarithmic in the size of their interest set.
Algorithms for the Analysis of Spatio-Temporal Data from Team Sports
Modern object tracking systems are able to simultaneously record trajectories (sequences of time-stamped location points) for large numbers of objects with high frequency and accuracy. The availability of trajectory datasets has resulted in a consequent demand for algorithms and tools to extract information from these data. In this thesis, we present several contributions intended to do this, and in particular, to extract information from trajectories tracking football (soccer) players during matches. Football player trajectories have particular properties that both facilitate and present challenges for algorithmic approaches to information extraction. The key property that we look to exploit is that the movement of the players reveals information about their objectives through cooperative and adversarial coordinated behaviour, and this, in turn, reveals the tactics and strategies employed to achieve those objectives. While the approaches presented here naturally deal with the application-specific properties of football player trajectories, they also apply to other domains where objects are tracked, for example behavioural ecology, traffic and urban planning.
Diffusion of shared goods in consumer coalitions. An agent-based model
This paper focuses on the process of coalition formation conditioning the common decision to adopt a shared good, which cannot be afforded by an average single consumer and whose use cannot be exhausted by any single consumer. An agent-based model is developed to study the interplay between two processes: coalition formation and diffusion of shared goods. Coalition formation is modelled in an evolutionary game-theoretic setting, while adoption uses elements from both the Bass and the threshold models. Coalition formation sets the conditions for adoption, while diffusion influences the subsequent formation of coalitions. Results show that both coalitions and diffusion are subject to network effects and have an impact on the information flow through the population of consumers. Large coalitions are preferred over small ones since individual cost is lower, although it increases if higher quantities are purchased collectively. The paper concludes by connecting the model conceptualisation to the ongoing discussion of the diffusion of sustainable goods, discussing related policy implications.
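The combination of Bass-style spontaneous adoption and threshold-style imitation that the abstract mentions can be illustrated with a minimal sketch. This is not the authors' model: the network, thresholds and spontaneous-adoption rate below are invented for illustration.

```python
import random

def threshold_diffusion(neighbors, thresholds, seeds, p_spontaneous=0.01,
                        steps=50, rng=None):
    """Discrete-time adoption dynamics on a social network.

    An agent adopts spontaneously with a small Bass-style "innovation"
    probability, or once the fraction of its neighbours that have adopted
    reaches its personal threshold (Granovetter-style imitation)."""
    rng = rng or random.Random(0)
    adopted = set(seeds)
    for _ in range(steps):
        newly = set()
        for agent, nbrs in neighbors.items():
            if agent in adopted:
                continue
            frac = sum(n in adopted for n in nbrs) / len(nbrs) if nbrs else 0.0
            if frac >= thresholds[agent] or rng.random() < p_spontaneous:
                newly.add(agent)
        if not newly:
            break
        adopted |= newly
    return adopted

# Toy ring of 6 consumers; agent 0 seeds the diffusion and, with thresholds
# of 0.5, adoption spreads around the ring step by step.
nbrs = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
thr = {i: 0.5 for i in range(6)}
print(sorted(threshold_diffusion(nbrs, thr, seeds={0}, p_spontaneous=0.0)))
# -> [0, 1, 2, 3, 4, 5]
```

In the actual paper the adoption decision is coupled to coalition formation; this sketch only shows the diffusion half of that interplay.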
On a Bounded Budget Network Creation Game
We consider a network creation game in which each player (vertex) has a fixed
budget to establish links to other players. In our model, each link has unit
price and each agent tries to minimize its cost, which is either its local
diameter or its total distance to other players in the (undirected) underlying
graph of the created network. Two versions of the game are studied: in the MAX
version, the cost incurred to a vertex is the maximum distance between the
vertex and other vertices, and in the SUM version, the cost incurred to a
vertex is the sum of distances between the vertex and other vertices. We prove
that in both versions pure Nash equilibria exist, but the problem of finding
the best response of a vertex is NP-hard. We take the social cost of the
created network to be its diameter, and next we study the maximum possible
diameter of an equilibrium graph with n vertices in various cases. When the sum
of players' budgets is n-1, the equilibrium graphs are always trees, and we
prove that their maximum diameter is Theta(n) and Theta(log n) in MAX and SUM
versions, respectively. When each vertex has unit budget (i.e. can establish a
link to just one vertex), the diameter of any equilibrium graph in either
version is Theta(1). We give examples of equilibrium graphs in the MAX version,
such that all vertices have positive budgets and yet the diameter is
Omega(sqrt(log n)). This interesting (and perhaps counter-intuitive) result
shows that increasing the budgets may increase the diameter of equilibrium
graphs and hence deteriorate the network structure. Then we prove that every
equilibrium graph in the SUM version has diameter 2^O(sqrt(log n)). Finally, we
show that if the budget of each player is at least k, then every equilibrium
graph in the SUM version is k-connected or has diameter smaller than 4.
Comment: 28 pages, 3 figures, preliminary version appeared in SPAA'1
The flow of money and interests in policymaking
This dissertation comprises three papers that analyze the relationship between political money, elite interests and policies. The individual papers are connected through this overarching theme and through the methodology that is used. Each paper employs statistical methods on large-scale datasets with an emphasis on network analysis. The first paper investigates the relationship between the strength of elite connections and the success of renewable energy and emission reduction policies. Based on an original dataset created from social media accounts of the ministers in 34 countries, this analysis uses a stochastic block model and modularity analysis to compare the strength of connections between different types of elites. The quantitative analysis is complemented by in-depth interviews conducted in seven European countries. The second paper explores the relationship between the socio-political capital of state-level American politicians and their agenda-holding power in legislation. Using an extensive dataset of campaign contribution records and state-level bill proposals in the United States, this paper employs survival analysis to explore the aforementioned connection. The third paper is a quantitative description of the large datasets of federal- and state-level campaign contribution records and state-level bill proposals. Using visualization, network analysis, and clustering, the last part of the dissertation uncovers some of the connections between big political donors, parties, the private sector, and legislation. The last paper also contains a typological identification section for donors and lawmakers. The goal of the dissertation is to expand the literature on elites, to explore what new stories can be told about political money in the United States, and to make use of large-scale datasets for more conclusive arguments in the American politics and policy literature.
Sustaining Glasgow's Urban Networks: the Link Communities of Complex Urban Systems
As cities grow in population size and become more crowded (UN DESA, 2018), the main future challenge around the world will remain accommodating the growing urban population while drastically reducing environmental pressure. Contemporary urban agglomerations (large or small) constantly impose a burden on the natural environment by conveying ecosystem services to close and distant places through coupled human-nature [infrastructure] systems (CHANS). Tobler's first law of geography (1970), which states that "everything is related to everything else, but near things are more related than distant things", is now challenged by globalization.
When this law was first established, the hypothesis referred to geological processes (Campbell and Shin, 2012, p.194) that were predominantly observed in a pre-globalized economy, where freight was costly and mainly localized (Zhang et al., 2018). With the recent advances and modernisation in transport technologies, most of them in sea and air transportation (Zhang et al., 2018), and the growth of cities in population, natural resources and by-products now travel great distances to infiltrate cities (Neuman, 2006) and satisfy human demands. In the last thirty years alone, technical modernisation and the global hyperconnectivity of human interactions and trading have resulted in a staggering 94 per cent growth in resource extraction and consumption (Giljum et al., 2015). Local geographies (Kennedy, Cuddihy and Engel-Yan, 2007) will remain affected by global urbanisation (Giljum et al., 2015), and, as a corollary, the operational inefficiencies of their local infrastructure networks will contribute even more to the issues of environmental unsustainability on a global scale.
Another challenge for future city-regions is the equity of public infrastructure services and the creation of policy that promotes it (Neuman and Hull, 2009). Public infrastructure services refer to services provisioned by networked infrastructure, which are subject to both public obligation and market rules. Therefore, their accessibility to all citizens needs to be safeguarded. The disparity of growth between networked infrastructure and socio-economic dynamics affects the sustainable assimilation of, and equal access to, infrastructure in various districts in cities, rendering it a privilege. Yet, empirical evidence of whether the place of residence acts as a disadvantage to public service access and use remains rather scarce (Clifton et al., 2016). The European Union has recognized (EU, 2011) the issue of equality in accessibility (i.e. equity) as critical for territorial cohesion and sustainable development across districts, municipalities and regions with diverse economic performance.
Territorial cohesion, formally incorporated into the Treaty of Lisbon, now steers the policy frameworks of territorial development within the Union. Subsequently, the European Union developed a policy paradigm guided by equal access (Clifton et al., 2016) to public infrastructure services, considering their accessibility an instrumental aspect of achieving territorial cohesion across and within its member states.
A corollary of increasing equity in public infrastructure services among a growing global population is the potential increase in the environmental pressure they can impose, especially if this pressure is not decentralised and surges at an unsustainable rate (Neuman, 2006). This danger varies across countries and continents, and is directly linked to the increase in urban population due to: [1] improved quality of life and increased life expectancy, and/or [2] urban in-migration of the rural population, and/or [3] global political or economic immigration.
These three rising urban trends demand new approaches that reimagine planning and design practices to foster infrastructure equity whilst delivering environmental justice. Therefore, this research explores in depth the nature of the growth of networked infrastructure (Graham and Marvin, 2001) as a complex system and its disparity from the socio-economic growth (or decline) of the Glasgow and Clyde Valley city-region. The results of this research offer new understanding of the potential of emerging tools from network science for developing an optimization strategy that supports a more decentralized, efficient, fair and (as an outcome) sustainable enlargement of urban infrastructure, to accommodate new residents and empower current residents of the city.
Applying the novel link-clustering community detection algorithm (Ahn et al., 2010), in this thesis I have presented the potential for better understanding the complexity behind the urban system of networked infrastructure by discovering its overlapping communities. As I will show in the literature review (Chapter 2), the long-standing tradition of centralised planning practice, relying on zoning and infiltrating infrastructure, has left us with urban settlements that are failing to respond to environmental pressure and socio-economic inequalities. Building on the wealth of knowledge from planners, geographers, sociologists and computer scientists, I developed a new element (i.e. link communities) within the theory of urban studies that defines cities as complex systems. I then applied a method borrowed from the study of complex networks to unpack their basic elements. Knowing the link (i.e. functional, or overlapping) communities of metropolitan Glasgow enabled me to evaluate the current level of interconnectedness of the communities and to reveal the gaps, as well as the potential for improving the studied system's performance.
The complex urban system of metropolitan Glasgow was represented by its networked infrastructure, which was essentially a system of distinct sub-systems, one mapped by a physical graph and the other by a social graph. The conceptual framework for this methodological approach was formalised from the extensively reviewed literature and from methods utilising network science tools to detect community structure in complex networks. The literature review led to a hypothesis that the efficiency of the physical network's topology is achieved by optimizing the number of nodes with high betweenness centrality, while the efficiency of the logical network's topology is achieved by optimizing the number of links with high edge betweenness.
The conclusion from the literature review, presented through the discussion of the primal problem in 7.4.1, led to modelling the two network topologies as separate graphs. The bipartite graph of their primal syntax was mirrored to be symmetrical and converted to its dual. From the dual syntax I measured the complete accessibility (i.e. betweenness centrality) of the entire area, and not only of the streets.
The betweenness centrality of a node measures the number of shortest paths between pairs of other nodes that pass through it. Betweenness centrality corresponds to the integration of streets in space syntax, where the streets are analysed in their dual-syntax representation. Street integration is the number of intersections a street shares with other streets, and a high value means high accessibility.
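This definition can be made concrete with a small sketch, not taken from the thesis: the standard Brandes algorithm computes node betweenness on a toy graph whose node names and layout are invented for illustration.

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for node betweenness on an unweighted,
    undirected graph given as an adjacency dict."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors
        stack = []
        preds = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}
        sigma[s] = 1
        dist = {v: -1 for v in adj}
        dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate pair dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected path was counted from both endpoints
    return {v: c / 2.0 for v, c in bc.items()}

# Toy "street" graph: two triangles joined through a bridge node "x";
# the bridge carries every shortest path between the two sides.
g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "x"],
     "x": ["c", "d"], "d": ["x", "e", "f"], "e": ["d", "f"], "f": ["d", "e"]}
scores = betweenness(g)
print(max(scores, key=scores.get))  # -> x
```

The bridge node scores highest because every shortest path between the two triangles must pass through it, which mirrors the role that high-betweenness nodes play in the physical street network discussed above.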
Edges with high betweenness are shared between strong communities. Based on the theoretical underpinnings of network modularity and community structure analysed herein, it can be concluded that a complex network that is both robust and efficient (in urban planning terminology, 'sustainable') consists of numerous strong communities connected to each other by an optimal number of links with high edge betweenness. To reach this insight, the study detected the edge cut-set and the vertex cut-set of the complex network. The outcome was a statistical model developed in the open-source software R (Ihaka and Gentleman, 1996). The model empirically detects the network's overlapping communities, determining the current sustainability of its physical and logical topologies.
Initially, the assumption was that the number of communities within the infrastructure (physical) network layer differed from that in the logical layer. Communities were detected using the Louvain method, which performs graph partitioning on the hierarchical street structure. Further, the number of communities in the relational network layer (i.e. accessibility to locations) was detected from the OD accessibility matrix established from the functional dependency between household locations and predefined points of interest. The communities in the graph of the 'relational layer' were discovered with the single-link hierarchical clustering algorithm. The number of communities observed in the physical and the logical topologies of the eight shires deviated significantly.
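The single-link step mentioned above can be sketched in a few lines. This is an illustrative toy, not the thesis code: the points, distance function and cut threshold are invented, and a real OD accessibility matrix would replace the 1-D scores used here.

```python
from itertools import combinations

def single_link_clusters(points, dist, threshold):
    """Agglomerative single-link clustering: repeatedly merge the two
    clusters whose closest pair of members is nearest, until the nearest
    pair of clusters is farther apart than the cut threshold."""
    clusters = [[p] for p in points]

    def link(c1, c2):
        # single-link distance = closest pair across the two clusters
        return min(dist(a, b) for a in c1 for b in c2)

    while len(clusters) > 1:
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: link(clusters[ij[0]], clusters[ij[1]]))
        if link(clusters[i], clusters[j]) > threshold:
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]  # j > i, so index i stays valid
    return clusters

# Toy 1-D "accessibility" scores with two obvious groups
pts = [0.0, 0.1, 0.2, 5.0, 5.1]
out = single_link_clusters(pts, lambda a, b: abs(a - b), threshold=1.0)
print(sorted(sorted(c) for c in out))  # -> [[0.0, 0.1, 0.2], [5.0, 5.1]]
```

Cutting the merge process at a distance threshold is what turns the hierarchical dendrogram into a flat set of communities, analogous to choosing the number of communities in the relational layer.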
Institutional Vulnerability and Governance of Disaster Risk Reduction: Macro, Meso and Micro Scale Assessment: With Case Studies from Indonesia
This PhD research addresses two central questions: How should institutional vulnerability that shapes disaster risks and disaster reduction policy be assessed? How does the quality of institutions and governance influence the level of disaster risk and disaster reduction policy? In this dissertation, institutional vulnerability at the global and local levels is analyzed and an answer to these questions is pursued. General vulnerability assessment frameworks on the global and local scales are limited in their ability to measure how, and to what extent, institutions in all countries can reduce risks. This PhD dissertation is pioneering in that it assesses global institutional vulnerability using an index-based approach on a national/local scale, employing mixed methods such as social network analysis complemented by qualitative approaches (e.g. participant observation and literature reviews) and quantitative approaches (simple regression, scatter plots and simple descriptive statistics). It is hypothesized that countries with greater institutional quality tend to have better governance of disaster risks, which leads to a higher level of disaster risk resilience. Risk assessors have often overlooked institutions. In fact, when one assesses vulnerability, for example social/human vulnerability (using health, education, or human development indices), physical vulnerability (quality of physical housing and infrastructure), economic vulnerability (income, economic production), and environmental vulnerability (land degradation, environmental quality indicators), the assessor essentially measures the "outcomes" of the institutions rather than the institutions directly.
Institutional vulnerability to disaster risk is defined here as both the context and the process by which formal institutions (regulations, rule of law, constitutions, codes, bureaucracy, etc.), informal institutions (culture, norms, traditions, etc.), and governance are either too weak to provide protection against disaster risk or are ignorant of their duty to provide safety and human security. Central to this argument is the concept that institutions are designed, among other things, to reduce risks. In this research, the focus is on disaster risks. This suggests the hypothesis that nations will fail to reduce risks owing to institutional and governance factors that modify their vulnerabilities and resilience. The findings show that both qualitative and quantitative methods at different scales of governance can assess institutional vulnerability and the governance of disaster risk reduction. At the global level, a quantitative approach to measuring institutional quality and the governance of disaster risk reduction is possible thanks to recent global data on countries' implementation of the Hyogo Framework for Action; however, more efforts are required in the future. At the meso- and microlevels, this work describes the history of institutions for disaster risk management in Indonesia from the colonial period to the present challenges of decentralized governance. The main message is as follows: without considering institutions, institutional quality, and the specific governance of disaster reduction at the macro-, meso-, and microscales, disaster risk reduction will not be sustainably implemented.