
    Organic Design of Massively Distributed Systems: A Complex Networks Perspective

    The vision of Organic Computing addresses challenges that arise in the design of future information systems composed of numerous, heterogeneous, resource-constrained and error-prone components or devices. Here, the notion of "organic" particularly highlights the idea that, in order to be manageable, such systems should exhibit self-organization, self-adaptation and self-healing characteristics similar to those of biological systems. In recent years, the principles underlying many of the interesting characteristics of natural systems have been investigated from the perspective of complex systems science, particularly using the conceptual framework of statistical physics and statistical mechanics. In this article, we review some of the interesting relations between statistical physics and networked systems and discuss applications in the engineering of organic networked computing systems with predictable, quantifiable and controllable self-* properties. Comment: 17 pages, 14 figures, preprint of a submission to Informatik-Spektrum published by Springer

    Special Session on Industry 4.0

    No abstract available

    Sustaining Glasgow's Urban Networks: the Link Communities of Complex Urban Systems

    As cities grow in population size and become more crowded (UN DESA, 2018), the main challenge around the world will remain to accommodate the growing urban population while drastically reducing environmental pressure. Contemporary urban agglomerations (large or small) constantly impose a burden on the natural environment by conveying ecosystem services to close and distant places through coupled human-nature [infrastructure] systems (CHANS). Tobler’s first law of geography (1970), which states that “everything is related to everything else, but near things are more related than distant things”, is now challenged by globalization. When this law was first established, the hypothesis referred to geological processes (Campbell and Shin, 2012, p.194) that were predominantly observed in a pre-globalized economy, where freight was costly and mainly localized (Zhang et al., 2018). With recent advances and modernisation in transport technologies, most of them in sea and air transportation (Zhang et al., 2018), and the growth of cities in population, natural resources and by-products now travel great distances to infiltrate cities (Neuman, 2006) and satisfy human demands. Technical modernisation and the global hyperconnectivity of human interaction and trade have, in the last thirty years alone, resulted in a staggering 94 per cent growth in resource extraction and consumption (Giljum et al., 2015). Local geographies (Kennedy, Cuddihy and Engel-Yan, 2007) will remain affected by global urbanisation (Giljum et al., 2015) and, as a corollary, the operational inefficiencies of their local infrastructure networks will contribute even more to the issues of environmental unsustainability on a global scale. Another challenge for future city-regions is the equity of public infrastructure services and the creation of policy that promotes it (Neuman and Hull, 2009). Public infrastructure services refer to services provisioned by networked infrastructure, which are subject to both public obligation and market rules. Therefore, their accessibility to all citizens needs to be safeguarded. The disparity of growth between networked infrastructure and socio-economic dynamics affects the sustainable assimilation of and equal access to infrastructure in various districts in cities, rendering it a privilege. Yet empirical evidence on whether the place of residence acts as a disadvantage to public service access and use remains rather scarce (Clifton et al., 2016). The European Union recognized (EU, 2011) the issue of equality in accessibility (i.e. equity) as critical for territorial cohesion and sustainable development across districts, municipalities and regions with diverse economic performance. Territorial cohesion, formally incorporated into the Treaty of Lisbon, now steers the policy frameworks of territorial development within the Union. Subsequently, the European Union developed a policy paradigm guided by equal access (Clifton et al., 2016) to public infrastructure services, considering their accessibility an instrumental aspect of achieving territorial cohesion across and within its member states. A corollary of increasing the equity of public infrastructure services among a growing global population is the potential increase in environmental pressure they can impose, especially if this pressure is not decentralised and surges at an unsustainable rate (Neuman, 2006). 
This danger varies across countries and continents, and is directly linked to the increase of urban population due to: [1] improved quality of life and increased life expectancy and/or [2] urban in-migration of rural population and/or [3] global political or economic immigration. These three rising urban trends demand new approaches that reimagine planning and design practices so that they foster infrastructure equity whilst delivering environmental justice. Therefore, this research explores in depth the nature of the growth of networked infrastructure (Graham and Marvin, 2001) as a complex system and its disparity from the socio-economic growth (or decline) of the Glasgow and Clyde Valley city-region. The results of this research offer new understanding of the potential of emerging tools from network science for developing an optimization strategy that supports more decentralized, efficient, fair and (as an outcome) sustainable enlargement of urban infrastructure, to accommodate new residents and empower current residents of the city. Applying the novel link clustering community detection algorithm (Ahn et al., 2010), in this thesis I present the potential for better understanding the complexity behind the urban system of networked infrastructure by discovering its overlapping communities. As I will show in the literature review (Chapter 2), the long-standing tradition of centralised planning practice, relying on zoning and infiltrating infrastructure, has left us with urban settlements that are failing to respond to environmental pressure and socio-economic inequalities. Building on the wealth of knowledge from planners, geographers, sociologists and computer scientists, I developed a new element (i.e. link communities) within the theory of urban studies that defines cities as complex systems. I then applied a method borrowed from the study of complex networks to unpack their basic elements. Knowing the link (i.e. functional, or overlapping) communities of metropolitan Glasgow enabled me to evaluate the current level of community interconnectedness and to reveal the gaps as well as the potential for improving the studied system’s performance. The complex urban system in metropolitan Glasgow was represented by its networked infrastructure, which was essentially a system of distinct sub-systems, one mapped by a physical graph and the other by a social graph. The conceptual framework for this methodological approach was formalised from the extensively reviewed literature and from methods utilising network science tools to detect community structure in complex networks. The literature review led to a hypothesis claiming that the efficiency of the physical network’s topology is achieved by optimizing the number of nodes with high betweenness centrality, while the efficiency of the logical network’s topology is achieved by optimizing the number of links with high edge betweenness. The conclusion from the literature review, presented through the discourse on the primal problem in 7.4.1, led to modelling the two network topologies as separate graphs. The bipartite graph of their primal syntax was mirrored to be symmetrical and converted to its dual. From the dual syntax I measured the complete accessibility (i.e. betweenness centrality) of the entire area and not only of the streets. The betweenness centrality of a node measures the number of shortest paths between pairs of nodes that pass through that node. 
Betweenness centrality is the same as the integration of streets in space syntax, where the streets are analysed in their dual syntax representation. Street integration is the number of intersections a street shares with other streets, and a high value means high accessibility. Edges with high betweenness are shared between strong communities. Based on the theoretical underpinnings of the network’s modularity and community structure analysed herein, it can be concluded that a complex network that is both robust and efficient (and, in urban planning terminology, ‘sustainable’) consists of numerous strong communities connected to each other by an optimal number of links with high edge betweenness. To gain this insight, the study detected the edge cut-set and the vertex cut-set of the complex network. The outcome was a statistical model developed in the open-source software R (Ihaka and Gentleman, 1996). The model empirically detects the network’s overlapping communities, determining the current sustainability of its physical and logical topologies. Initially, the assumption was that the number of communities within the infrastructure (physical) network layer was different from that in the logical layer. The former were detected using the Louvain method, which performs graph partitioning on the hierarchical street structure. Further, the number of communities in the relational network layer (i.e. accessibility to locations) was detected from the OD accessibility matrix established from the functional dependency between the household locations and predefined points of interest. The communities of the graph of the ‘relational layer’ were discovered with the single-link hierarchical clustering algorithm. The number of communities observed in the physical and the logical topologies of the eight shires significantly deviated
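
To make the network measures referenced in this abstract concrete, the following is a minimal sketch (not the thesis's R model; the graph, library and parameters are illustrative assumptions) of computing node betweenness, edge betweenness and Louvain communities with the Python networkx library:

```python
import networkx as nx

# Toy graph standing in for a dual-syntax street network: nodes would be
# streets and edges shared intersections (illustrative stand-in only).
G = nx.karate_club_graph()

# Node betweenness centrality: fraction of shortest paths passing through a
# node -- the accessibility/"integration" measure referred to above.
node_bc = nx.betweenness_centrality(G)

# Edge betweenness: edges with high values tend to bridge strong communities.
edge_bc = nx.edge_betweenness_centrality(G)

# Community structure via the Louvain method (used for the physical layer).
communities = nx.community.louvain_communities(G, seed=42)

print("most central node:", max(node_bc, key=node_bc.get))
print("highest-betweenness edge:", max(edge_bc, key=edge_bc.get))
print("number of Louvain communities:", len(communities))
```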

    Generalized Network Dismantling

    Finding the set of nodes which, when removed or (de)activated, can stop the spread of (dis)information, contain an epidemic or disrupt the functioning of a corrupt/criminal organization is still one of the key challenges in network science. In this paper, we introduce the generalized network dismantling problem, which aims to find the set of nodes that, when removed from a network, results in its fragmentation into subcritical network components at minimum cost. For unit costs, our formulation becomes equivalent to the standard network dismantling problem. Our non-unit-cost generalization allows for the inclusion of topological cost functions related to node centrality and of non-topological features such as the price, protection level or even social value of a node. In order to solve this optimization problem, we propose a method based on the spectral properties of a novel node-weighted Laplacian operator. The proposed method is applicable to large-scale networks with millions of nodes. It outperforms current state-of-the-art methods and opens new directions in understanding the vulnerability and robustness of complex systems. Comment: 6 pages, 5 figures
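
As an illustration of the cost-sensitive dismantling objective described above only (not the authors' spectral method based on the node-weighted Laplacian), the sketch below greedily removes nodes by degree per unit cost until the largest component is below a chosen subcritical size; the graph, costs and threshold are assumptions made for the example:

```python
import networkx as nx

def greedy_dismantle(G, cost, target_size):
    """Remove nodes until the largest connected component has at most
    target_size nodes; returns the removal set.  cost maps node -> removal
    cost, so unit costs recover the standard dismantling problem."""
    H = G.copy()
    removed = []
    while len(max(nx.connected_components(H), key=len)) > target_size:
        # Cheap proxy for impact per unit cost: degree divided by cost.
        node = max(H.nodes, key=lambda v: H.degree(v) / cost[v])
        H.remove_node(node)
        removed.append(node)
    return removed

G = nx.erdos_renyi_graph(200, 0.03, seed=1)   # toy network for illustration
unit_cost = {v: 1.0 for v in G}               # unit costs
print(len(greedy_dismantle(G, unit_cost, target_size=20)), "nodes removed")
```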

    On the genericity properties in networked estimation: Topology design and sensor placement

    In this paper, we consider networked estimation of linear, discrete-time dynamical systems monitored by a network of agents. In order to minimize the power requirement at the (possibly battery-operated) agents, we require that the agents exchange information with their neighbors only once per dynamical-system time-step, in contrast to consensus-based estimation, where the agents exchange information until they reach a consensus. It can be verified that, with this restriction on information exchange, measurement fusion alone results in an unbounded estimation error at every agent that does not have an observable set of measurements in its neighborhood. To overcome this challenge, state-estimate fusion has been proposed to recover system observability. However, we show that adding state-estimate fusion may not recover observability when the system matrix is structural-rank (S-rank) deficient. In this context, we characterize state-estimate fusion and measurement fusion under both full S-rank and S-rank-deficient system matrices. Comment: submitted for IEEE journal publication
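
A small, hedged illustration of the observability issue discussed in this abstract: the Kalman rank test below checks whether an agent's own measurements, or the measurements available in its neighborhood, form an observable pair with the system matrix. The matrices are toy examples, not taken from the paper:

```python
import numpy as np

def is_observable(A, C, tol=1e-9):
    """Kalman rank test: (A, C) is observable iff the observability matrix
    [C; CA; ...; C A^(n-1)] has full column rank n."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O, tol=tol) == n

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])                    # toy 2-state system (position, velocity)
C_agent = np.array([[0.0, 1.0]])              # this agent measures velocity only
C_nbhd = np.vstack([C_agent, [[1.0, 0.0]]])   # a neighbor contributes a position measurement

print(is_observable(A, C_agent))   # False: measurement fusion alone cannot bound the error
print(is_observable(A, C_nbhd))    # True: the neighborhood measurement set is observable
```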

    Contrasting Views of Complexity and Their Implications For Network-Centric Infrastructures

    There exists a widely recognized need to better understand and manage complex “systems of systems,” ranging from biology, ecology, and medicine to network-centric technologies. This is motivating the search for universal laws of highly evolved systems and driving demand for new mathematics and methods that are consistent, integrative, and predictive. However, the theoretical frameworks available today are not merely fragmented but sometimes contradictory and incompatible. We argue that complexity arises in highly evolved biological and technological systems primarily to provide mechanisms that create robustness. However, this complexity itself can be a source of new fragility, leading to “robust yet fragile” tradeoffs in system design. We focus on the role of robustness and architecture in networked infrastructures, and we highlight recent advances in the theory of distributed control driven by network technologies. This view of complexity in highly organized technological and biological systems is fundamentally different from the dominant perspective in the mainstream sciences, which downplays function, constraints, and tradeoffs, and tends to minimize the role of organization and design.

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided that covers, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual, temporally distributed events within a multiple-data-stream environment is explored, along with a range of techniques covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
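
The hybrid idea summarised above can be sketched in a few lines: a programmed rule base flags known misuse patterns, while a simple learned model of normal behaviour flags deviations, and either component may raise an alert. Event fields, rules and thresholds here are illustrative assumptions, not drawn from the report:

```python
from statistics import mean, stdev

# Programmed component: rules for known misuse (prone to false negatives
# on misuses it was never programmed for).
RULES = [
    lambda e: e["failed_logins"] > 5,                          # brute-force pattern
    lambda e: e["call_minutes"] > 600 and e["international"],  # toll-fraud pattern
]

def rule_alert(event):
    return any(rule(event) for rule in RULES)

# Learned component: model "normal" behaviour and flag deviations (prone to
# false positives on legitimate but previously unseen behaviour).
def fit_normal(history, key="call_minutes"):
    values = [e[key] for e in history]
    return mean(values), stdev(values)

def anomaly_alert(event, mu, sigma, key="call_minutes", z=3.0):
    return abs(event[key] - mu) > z * sigma

# Hybrid decision: either component may raise an alert.
history = [{"call_minutes": m} for m in (20, 35, 28, 40, 31, 25)]
mu, sigma = fit_normal(history)
event = {"failed_logins": 1, "call_minutes": 480, "international": False}
print(rule_alert(event) or anomaly_alert(event, mu, sigma))   # True (anomaly component fires)
```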