On the relation between Transversal and Longitudinal Scaling in Cities
Given that a group of cities follows a scaling law connecting urban population with socio-economic or infrastructural metrics (transversal scaling), should we expect each city to follow the same behavior over time (longitudinal scaling)? This assumption has important policy implications, although rigorous empirical tests have so far been hindered by the lack of suitable data. Here, we advance the debate by looking into the temporal evolution of the scaling laws for 5507 municipalities in Brazil. We focus on the relationship between population size and two urban variables, GDP and water network length, analyzing the time evolution of the system of cities as well as their individual trajectories. We find that longitudinal (individual) scaling exponents are city-specific, but they are distributed around an average value that approaches the transversal scaling exponent when the data are decomposed to eliminate external factors, and when we only consider cities with a sufficiently large growth rate. Such results support the idea that the longitudinal dynamics is a micro-scaling version of the transversal dynamics of the entire urban system. Finally, we propose a mathematical framework that connects the microscopic level to the global behavior and, in all analyzed cases, we find good agreement between theoretical predictions and empirical evidence.
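A transversal exponent of this kind is typically estimated by ordinary least squares on log-transformed data. A minimal sketch in Python, using synthetic data (the exponent 1.15, the city count, and the noise level are illustrative assumptions, not values from the paper):

```python
import math
import random

def fit_power_law(populations, outputs):
    """Estimate a and beta in Y ~ a * N**beta by ordinary
    least squares on the log-transformed data."""
    xs = [math.log(n) for n in populations]
    ys = [math.log(y) for y in outputs]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    log_a = my - beta * mx
    return math.exp(log_a), beta

# Synthetic "cities": Y = 2 * N^1.15 with multiplicative noise.
random.seed(1)
pops = [10 ** random.uniform(3, 7) for _ in range(500)]
gdp = [2.0 * n ** 1.15 * math.exp(random.gauss(0, 0.1)) for n in pops]
a, beta = fit_power_law(pops, gdp)
```

The same fit applied to one city's trajectory over time, rather than across cities at one time, yields the longitudinal exponent discussed above.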
Controlling percolation with limited resources
Connectivity - or the lack thereof - is crucial for the function of many
man-made systems, from financial and economic networks over epidemic spreading
in social networks to technical infrastructure. Often, connections are
deliberately established or removed to induce, maintain, or destroy global
connectivity. Thus, there has been a great interest in understanding how to
control percolation, the transition to large-scale connectivity. Previous work,
however, studied control strategies assuming unlimited resources. Here, we
depart from this unrealistic assumption and consider the effect of limited
resources on the effectiveness of control. We show that, even for scarce
resources, percolation can be controlled with an efficient intervention
strategy. We derive this strategy and study its implications, revealing a
discontinuous transition as an unintended side-effect of optimal control.
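The flavor of such an intervention can be illustrated with a toy sketch: delay percolation in a growing random graph by spending a limited budget of "best-of-two" edge choices (a product-rule-style heuristic, used here only as a stand-in for the authors' derived strategy; all parameters are illustrative):

```python
import random

class DSU:
    """Union-find over n nodes, tracking component sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def grow(n, n_edges, budget, seed=0):
    """Add n_edges to an empty n-node graph.  For the first `budget`
    edges, pick the better of two random candidates (smaller product
    of endpoint component sizes); afterwards add purely random edges.
    Returns the largest component's fraction of nodes."""
    rng = random.Random(seed)
    dsu = DSU(n)
    for step in range(n_edges):
        if step < budget:
            cands = [(rng.randrange(n), rng.randrange(n)) for _ in range(2)]
            a, b = min(cands, key=lambda e: dsu.size[dsu.find(e[0])]
                                            * dsu.size[dsu.find(e[1])])
        else:
            a, b = rng.randrange(n), rng.randrange(n)
        dsu.union(a, b)
    return max(dsu.size[dsu.find(i)] for i in range(n)) / n

frac_random = grow(n=5000, n_edges=4000, budget=0)
frac_control = grow(n=5000, n_edges=4000, budget=2000)
```

Even though only half the edges are controlled, the giant component at this edge density is markedly smaller than with no intervention; varying `budget` interpolates between the two regimes.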
Delay Performance and Mixing Times in Random-Access Networks
We explore the achievable delay performance in wireless random-access
networks. While relatively simple and inherently distributed in nature,
suitably designed queue-based random-access schemes provide the striking
capability to match the optimal throughput performance of centralized
scheduling mechanisms in a wide range of scenarios. The specific type of
activation rules for which throughput optimality has been established, may
however yield excessive queues and delays.
Motivated by that issue, we examine whether the poor delay performance is
inherent to the basic operation of these schemes, or caused by the specific
kind of activation rules. We derive delay lower bounds for queue-based
activation rules, which offer fundamental insight in the cause of the excessive
delays. For fixed activation rates we obtain lower bounds indicating that
delays and mixing times can grow dramatically with the load in certain
topologies as well.
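The throughput results referenced above rest on the product-form stationary distribution of idealized CSMA models: the set of active nodes is an independent set of the conflict graph, weighted exponentially by the activation rates. A small sketch computing per-node throughputs by direct enumeration (the 4-cycle conflict graph and rates are illustrative assumptions):

```python
import itertools
import math

def csma_throughputs(adj, nu):
    """Stationary throughputs of an idealized CSMA network: each
    feasible activity state is an independent set S of the conflict
    graph, with stationary weight exp(sum of nu_i over i in S)."""
    n = len(nu)
    weights = {}
    for bits in itertools.product([0, 1], repeat=n):
        if any(bits[i] and bits[j] for i in range(n) for j in adj[i]):
            continue  # two conflicting nodes active: infeasible
        weights[bits] = math.exp(sum(nu[i] for i in range(n) if bits[i]))
    Z = sum(weights.values())
    # Throughput of node i = probability that i is active in steady state.
    return [sum(w for s, w in weights.items() if s[i]) / Z for i in range(n)]

# 4-cycle conflict graph 0-1-2-3-0 with equal activation rates.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
thr = csma_throughputs(adj, nu=[2.0] * 4)
```

Raising the rates `nu` pushes the throughputs toward the optimal schedule, but it also slows the mixing of the underlying Markov chain, which is the mechanism behind the delay lower bounds described in the abstract.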
Stable and unstable attractors in Boolean networks
Boolean networks at the critical point have been a matter of debate for many
years, for example regarding how the number of attractors scales with system size. Recently it
was found that this number scales superpolynomially with system size, contrary
to a common earlier expectation of sublinear scaling. We here point to the fact
that these results are obtained using deterministic parallel update, where a
large fraction of attractors in fact are an artifact of the updating scheme.
This limits the significance of these results for biological systems where
noise is omnipresent. We here take a fresh look at attractors in Boolean
networks with the original motivation of simplified models for biological
systems in mind. We test stability of attractors w.r.t. infinitesimal
deviations from synchronous update and find that most attractors found under
parallel update are artifacts arising from the synchronous clocking mode. The
remaining fraction of attractors are stable against fluctuating response
delays. For this subset of stable attractors we observe sublinear scaling of
the number of attractors with system size.
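The stability test described above can be run directly on toy networks: enumerate attractors under synchronous (parallel) update, then check whether every single-node asynchronous update from each attractor state stays inside the attractor. A minimal sketch using a hypothetical two-node "swap" network, whose period-2 cycle is exactly the kind of synchronous-update artifact discussed:

```python
import itertools

def sync_attractors(fns):
    """Enumerate attractors of a Boolean network under synchronous
    update by following every initial state to its cycle."""
    n = len(fns)
    def step(s):
        return tuple(f(s) for f in fns)
    attractors = set()
    for s in itertools.product([0, 1], repeat=n):
        seen = {}
        while s not in seen:
            seen[s] = len(seen)
            s = step(s)
        cycle_start = seen[s]
        cycle = [t for t, i in seen.items() if i >= cycle_start]
        attractors.add(tuple(sorted(cycle)))
    return attractors

def stable_under_async(attractor, fns):
    """An attractor survives asynchronous update if every single-node
    update from each of its states stays inside the attractor."""
    states = set(attractor)
    for s in attractor:
        for i in range(len(s)):
            t = list(s)
            t[i] = fns[i](s)
            if tuple(t) not in states:
                return False
    return True

# Two-node 'swap' network: x' = y, y' = x.
fns = [lambda s: s[1], lambda s: s[0]]
atts = sync_attractors(fns)
```

Here the fixed points (0,0) and (1,1) survive asynchronous update, while the 2-cycle (0,1) <-> (1,0) does not: updating one node alone sends (0,1) to a fixed point, so the cycle exists only under synchronous clocking.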
Selfish traffic allocation for server farms
We study the price of selfish routing in noncooperative networks like the Internet. In particular, we investigate the price of selfish routing using the price of anarchy (a.k.a. the coordination ratio) and other (e.g., bicriteria) measures in the recently introduced game theoretic parallel links network model of Koutsoupias and Papadimitriou. We generalize this model toward general, monotone families of cost functions and cost functions from queueing theory. A summary of our main results for general, monotone cost functions is as follows: 1. We give an exact characterization of all cost functions having a bounded/unbounded price of anarchy. For example, the price of anarchy for cost functions describing the expected delay in queueing systems is unbounded. 2. We show that an unbounded price of anarchy implies an extremely high performance degradation under bicriteria measures. In fact, the price of selfish routing can be as high as a bandwidth degradation by a factor that is linear in the network size. 3. We separate the game theoretic (integral) allocation model from the (fractional) flow model by demonstrating that even a very small or negligible amount of integrality can lead to a dramatic performance degradation. 4. We unify recent results on selfish routing under different objectives by showing that an unbounded price of anarchy under the min-max objective implies an unbounded price of anarchy under the average cost objective and vice versa. Our special focus lies on cost functions describing the behavior of Web servers that can open only a limited number of Transmission Control Protocol (TCP) connections. In particular, we compare the performance of queueing systems that serve all incoming requests with servers that reject requests in case of overload. 
Our analysis indicates that all queueing systems without rejection cannot give any reasonable guarantee on the expected delay of requests under selfish routing even when the injected load is far away from the capacity of the system. In contrast, Web server farms that are allowed to reject requests can guarantee a high quality of service for every individual request stream even under relatively high injection rates.
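The contrast between servers with and without rejection can be made concrete with textbook M/M/1 and M/M/1/K formulas: without rejection the expected sojourn time 1/(mu - lambda) blows up as load approaches capacity, while a finite buffer caps it at the price of a blocking probability. A sketch (the numbers are illustrative; this is standard queueing theory, not the paper's analysis):

```python
def mm1_delay(lam, mu):
    """Expected sojourn time in an M/M/1 queue with no rejection."""
    assert lam < mu, "queue is unstable at or above capacity"
    return 1.0 / (mu - lam)

def mm1k_delay(lam, mu, K):
    """Expected sojourn time of *accepted* requests, and the blocking
    probability, in an M/M/1/K queue that rejects arrivals when K
    jobs are already present."""
    rho = lam / mu
    probs = [rho ** i for i in range(K + 1)]  # unnormalized pi_i
    Z = sum(probs)
    p_block = probs[K] / Z
    L = sum(i * p for i, p in enumerate(probs)) / Z  # mean queue length
    return L / (lam * (1 - p_block)), p_block        # Little's law

w_open = mm1_delay(0.95, 1.0)            # delay without rejection
w_cap, p_block = mm1k_delay(0.95, 1.0, 10)  # delay with a buffer of 10
```

At 95% load the open queue's expected delay is 20 service times, while the finite-buffer server keeps the delay of accepted requests far lower by turning away a small fraction of arrivals.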
Inflated speedups in parallel simulations via malloc()
Discrete-event simulation programs make heavy use of dynamic memory allocation in order to support simulation's very dynamic space requirements. When programming in C one is likely to use the malloc() routine. However, a parallel simulation which uses the standard Unix System V malloc() implementation may achieve an overly optimistic speedup, possibly superlinear. An alternate implementation provided on some (but not all) systems can avoid the speedup anomaly, but at the price of significantly reduced available free space. This is especially severe on most parallel architectures, which tend not to support virtual memory. It is shown how a simply implemented user-constructed interface to malloc() can both avoid artificially inflated speedups and make efficient use of the dynamic memory space. The interface simply caches blocks on the basis of their size. The problem is demonstrated empirically, and the effectiveness of the solution is shown both empirically and analytically.
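The bookkeeping behind such a size-based cache can be sketched abstractly: a user-level interface keeps a per-size free list and reuses a freed block of the requested size before falling through to the underlying allocator. In the Python model below, bytearray objects stand in for raw memory, so this illustrates only the caching logic, not the C implementation:

```python
from collections import defaultdict

class BlockCache:
    """Size-bucketed cache in front of a raw allocator: freed blocks
    go on a per-size free list and are reused before new memory is
    requested from the underlying allocator."""
    def __init__(self):
        self.free_lists = defaultdict(list)
        self.raw_allocs = 0  # calls that fell through to the allocator

    def alloc(self, size):
        bucket = self.free_lists[size]
        if bucket:
            return bucket.pop()   # O(1) reuse, no allocator call
        self.raw_allocs += 1
        return bytearray(size)    # stand-in for the underlying malloc()

    def free(self, block):
        self.free_lists[len(block)].append(block)

cache = BlockCache()
blocks = [cache.alloc(64) for _ in range(100)]
for b in blocks:
    cache.free(b)
reused = [cache.alloc(64) for _ in range(100)]  # all served from cache
```

After the free/realloc cycle, `raw_allocs` is still 100: the second round of allocations never touches the allocator, which is the behavior that keeps timing (and thus speedup measurements) honest.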
A Distributed Newton Method for Network Utility Maximization
Most existing work uses dual decomposition and subgradient methods to solve
Network Utility Maximization (NUM) problems in a distributed manner, which
suffer from slow convergence. This work develops an
alternative distributed Newton-type fast converging algorithm for solving
network utility maximization problems with self-concordant utility functions.
By using novel matrix splitting techniques, both primal and dual updates for
the Newton step can be computed using iterative schemes in a decentralized
manner with limited information exchange. Similarly, the stepsize can be
obtained via an iterative consensus-based averaging scheme. We show that even
when the Newton direction and the stepsize in our method are computed within
some error (due to finite truncation of the iterative schemes), the resulting
objective function value still converges superlinearly to an explicitly
characterized error neighborhood. Simulation results demonstrate significant
convergence rate improvement of our algorithm relative to the existing
subgradient methods based on dual decomposition.
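The flavor of the matrix-splitting idea can be sketched with a Jacobi splitting H = D + R of a Newton system H d = -g: the update d <- D^{-1}(-g - R d) touches only one row of H per component, mimicking the limited local information exchange. A toy example (the matrix is an illustrative strictly diagonally dominant stand-in, not the NUM Hessian):

```python
def jacobi_solve(H, g, iters=200):
    """Approximate the Newton direction d solving H d = -g via the
    Jacobi splitting H = D + R.  Each component update needs only
    row i of H, so it can be computed locally; the iteration
    converges when H is strictly diagonally dominant."""
    n = len(g)
    d = [0.0] * n
    for _ in range(iters):
        d = [(-g[i] - sum(H[i][j] * d[j] for j in range(n) if j != i))
             / H[i][i]
             for i in range(n)]
    return d

H = [[4.0, 1.0, 0.0],
     [1.0, 5.0, 2.0],
     [0.0, 2.0, 6.0]]
g = [1.0, -2.0, 3.0]
d = jacobi_solve(H, g)
```

Truncating the inner iteration early yields an inexact Newton direction, which is exactly the kind of bounded error the abstract's superlinear-convergence-to-a-neighborhood result accounts for.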