
    Core congestion is inherent in hyperbolic networks

    We investigate the impact that negative curvature has on traffic congestion in large-scale networks. We prove that every Gromov hyperbolic network $G$ admits a core, thus answering in the positive a conjecture by Jonckheere, Lou, Bonahon, and Baryshnikov, Internet Mathematics, 7 (2011), which is based on the experimental observation by Narayan and Saniee, Physical Review E, 84 (2011) that real-world networks with small hyperbolicity have a core congestion. Namely, we prove that for every subset $X$ of vertices of a $\delta$-hyperbolic graph $G$ there exists a vertex $m$ of $G$ such that the disk $D(m, 4\delta)$ of radius $4\delta$ centered at $m$ intercepts at least one half of the total flow between all pairs of vertices of $X$, where the flow between two vertices $x, y \in X$ is carried by geodesic (or quasi-geodesic) $(x,y)$-paths. A set $S$ intercepts the flow between two nodes $x$ and $y$ if $S$ intersects every shortest path between $x$ and $y$. Differently from what was conjectured by Jonckheere et al., we show that $m$ is not (and cannot be) the center of mass of $X$ but is a node close to the median of $X$ in the so-called injective hull of $X$. In the case of non-uniform traffic between nodes of $X$ (in this case, the unit flow exists only between certain pairs of nodes of $X$ defined by a commodity graph $R$), we prove a primal-dual result showing that for any $\rho > 5\delta$ the size of a $\rho$-multi-core (i.e., the number of disks of radius $\rho$) intercepting all pairs of $R$ is upper bounded by the maximum number of pairwise $(\rho - 3\delta)$-apart pairs of $R$.
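    For intuition, the interception condition above can be checked directly on small graphs: a disk $D(m, r)$ intercepts the $(x,y)$-flow exactly when every shortest $(x,y)$-path meets the disk, i.e., when deleting the disk increases the $(x,y)$-distance. The brute-force sketch below is our own illustration, not the paper's method; it assumes networkx, and the names `ball`, `intercepts`, and `best_core` are ours. By the theorem, with $r = 4\delta$ the returned vertex intercepts at least half of the total flow.

```python
import itertools
import networkx as nx

def ball(G, m, r):
    """All vertices within distance r of m (unweighted BFS distances)."""
    return set(nx.single_source_shortest_path_length(G, m, cutoff=r))

def intercepts(G, B, x, y):
    """True iff the vertex set B meets every shortest (x, y)-path in G."""
    if x in B or y in B:
        return True  # an endpoint itself lies inside the disk
    d = nx.shortest_path_length(G, x, y)
    H = G.subgraph(v for v in G if v not in B)
    try:
        # some shortest path avoids B iff the distance is unchanged in G - B
        return nx.shortest_path_length(H, x, y) > d
    except nx.NetworkXNoPath:
        return True

def best_core(G, X, r):
    """Vertex m whose disk D(m, r) intercepts the most flows between
    pairs of X; O(n * |X|^2) shortest-path computations."""
    pairs = list(itertools.combinations(X, 2))
    return max(G.nodes, key=lambda m: sum(intercepts(G, ball(G, m, r), x, y)
                                          for x, y in pairs))
```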

    Approximation Algorithms for Polynomial-Expansion and Low-Density Graphs

    We study the family of intersection graphs of low-density objects in low-dimensional Euclidean space. This family is quite general, and includes planar graphs. We prove that such graphs have small separators. Next, we present efficient $(1+\varepsilon)$-approximation algorithms for these graphs, for Independent Set, Set Cover, and Dominating Set problems, among others. We also prove corresponding hardness of approximation for some of these optimization problems, providing a characterization of their intractability in terms of density.
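    As a rough illustration of how small separators yield approximations, here is a generic Lipton–Tarjan-style divide-and-conquer sketch (our own, not the paper's algorithm; the brute-force separator is exponential and only for tiny demo graphs): solve small pieces exactly, discard the separator, and recurse. Since each recursion level discards only separator vertices, and low-density graphs have sublinear separators, the loss can be charged against the optimum, which is how $(1+\varepsilon)$-approximations of this flavor are typically obtained.

```python
import itertools
import networkx as nx

def brute_separator(G, alpha=2/3):
    """Smallest vertex set whose removal leaves every connected component
    of size <= alpha * n. Exponential time; demo purposes only."""
    n = G.number_of_nodes()
    for k in range(n + 1):
        for S in itertools.combinations(G.nodes, k):
            H = G.subgraph(set(G.nodes) - set(S))
            if all(len(c) <= alpha * n for c in nx.connected_components(H)):
                return set(S)

def independent_set(G, base=6):
    """Divide and conquer via separators: solve tiny pieces exactly,
    discard the separator (losing at most |separator| optimum vertices),
    and recurse on the remaining components."""
    if G.number_of_nodes() <= base:
        for k in range(G.number_of_nodes(), 0, -1):  # exact on the base case
            for S in itertools.combinations(G.nodes, k):
                if G.subgraph(S).number_of_edges() == 0:
                    return set(S)
        return set()
    sep = brute_separator(G)
    rest = G.subgraph(set(G.nodes) - sep)
    return set().union(*(independent_set(G.subgraph(c), base)
                         for c in nx.connected_components(rest)))
```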

    Optimal monetary policy in an estimated DSGE for the euro area

    The objective of this paper is to examine the main features of optimal monetary policy within a micro-founded macroeconometric framework. First, using Bayesian techniques, we estimate a medium-scale closed-economy DSGE for the euro area. Then, we study the properties of the Ramsey allocation through impulse response, variance decomposition and counterfactual analysis. In particular, we show that controlling for the zero lower bound constraint does not seem to limit the stabilization properties of optimal monetary policy. We also present simple monetary policy rules which can “approximate” and implement the Ramsey allocation reasonably well. Such optimal simple operational rules seem to react specifically to nominal wage inflation. Overall, the Ramsey policy together with its simple rule approximations seems to deliver consistent policy messages and may constitute a useful normative benchmark within medium- to large-scale estimated DSGE frameworks. However, this normative analysis based on estimated models reinforces the need to improve the economic micro-foundation and the econometric identification of the structural disturbances. JEL Classification: E4, E5. Keywords: Bayesian estimation, DSGE models, monetary policy, welfare calculations.

    Fixed-Parameter Algorithms for Unsplittable Flow Cover

    Lossy Kernelization

    In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size $\alpha$-approximate kernel. Loosely speaking, a polynomial size $\alpha$-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance $(I,k)$ to a parameterized problem, and outputs another instance $(I',k')$ to the same problem, such that $|I'| + k' \leq k^{O(1)}$. Additionally, for every $c \geq 1$, a $c$-approximate solution $s'$ to the pre-processed instance $(I',k')$ can be turned in polynomial time into a $(c \cdot \alpha)$-approximate solution $s$ to the original instance $(I,k)$. Our main technical contributions are $\alpha$-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless $NP \subseteq coNP/poly$. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size, and the hardness of approximation lower bounds for all three problems. On the negative side, we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an $\alpha$-approximate kernel of polynomial size, for any $\alpha \geq 1$, unless $NP \subseteq coNP/poly$. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and approximate kernel lower bounds for Set Cover and Hitting Set parameterized by universe size.
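    To make the reduce/lift interface of the definition concrete, here is a toy example in the same spirit (our illustration, not one of the paper's kernels, and Vertex Cover in fact admits an ordinary polynomial kernel): the classical Buss kernel for Vertex Cover, which is approximation-preserving with $\alpha = 1$ because vertices of degree greater than $k$ belong to every cover of size at most $k$.

```python
import networkx as nx

def buss_reduce(G, k):
    """Reduce: force vertices of degree > k into the cover and drop
    isolated vertices. Returns the kernel (G', k') plus the forced set."""
    G = G.copy()
    forced = set()
    changed = True
    while changed:
        changed = False
        for v in list(G.nodes):
            if v not in G:
                continue  # already removed earlier in this pass
            if G.degree(v) > k:
                forced.add(v)
                G.remove_node(v)
                k -= 1
                changed = True
            elif G.degree(v) == 0:
                G.remove_node(v)
                changed = True
    return G, k, forced

def lift(cover_of_kernel, forced):
    """Lift: a c-approximate cover of (G', k') plus the forced vertices is
    a c-approximate cover of G, since OPT(G) = OPT(G') + |forced|
    whenever OPT(G) <= k; hence alpha = 1 here."""
    return set(cover_of_kernel) | forced
```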

    Household liquidity and incremental financing decisions:theory and evidence

    In this paper we develop a stochastic model for household liquidity. In the model, the optimal liquidity policy takes the form of a liquidity range. Subsequently, we use the model to calibrate the upper bound of the predicted liquidity range. Equipped with knowledge about the relevant control barriers, we run a series of empirical tests on a panel data set of Dutch households covering the period 1992-2007. The results broadly validate our theoretical predictions that households (i) exhaust most of their short-term liquid assets prior to increasing net debt, and (ii) reduce outstanding net debt at the optimally selected upper liquidity barrier. However, a small minority of households appears to act sub-optimally. Poor and vulnerable households rely too frequently on expensive forms of credit (such as overdrafts), thereby incurring substantial fees and fixed borrowing costs. Elderly households and people on social benefits tend to accumulate too much liquidity. Finally, some households take on expensive short-term credit while holding substantial amounts of low-yielding liquid assets.

    Why Do People Buy Lottery Products?

    This paper examines the lottery sales of 99 countries by type of product in order to analyze the socioeconomic and demographic features that help to explain gambling consumption around the world. With a panel data analysis covering 13 years, this study explains the variation of a country’s per-capita lottery sales in general and by type of game: lotto, numbers, keno, toto, draw and instant. This paper finds that richer countries spend more than poorer countries and that the income elasticity of the demand for lottery products is greater than one. So, we may assert that there is an implicitly progressive tax in games when we consider countries rather than households. Several studies have also revealed an inverse relationship between education and the consumption of lottery products. This paper confirms this hypothesis for lotteries in general, but not for the specific lottery products. Keywords: Gambling; Lotteries; Religiosity; Education; Culture; Age; Panel Data.
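    The elasticity claim can be read off a standard log-log panel specification (a generic form we supply for illustration; the paper's exact controls may differ):

```latex
% per-capita lottery sales s_{it}, per-capita income y_{it}, controls x_{it}
\ln s_{it} = \alpha_i + \beta \ln y_{it} + \gamma' x_{it} + \varepsilon_{it},
\qquad
\beta = \frac{\partial \ln s_{it}}{\partial \ln y_{it}} > 1
```

    With $\beta > 1$, a given percentage rise in income raises lottery spending by more than that percentage, so the spending share grows with income; this is the sense in which the implicit tax is progressive across countries.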

    Quantitative goals for monetary policy

    We study empirically the macroeconomic effects of an explicit de jure quantitative goal for monetary policy. Quantitative goals take three forms: exchange rates, money growth rates, and inflation targets. We analyze the effects on inflation both of having a quantitative target and of hitting a declared target; we also consider effects on output volatility. Our empirical work uses an annual data set covering 42 countries between 1960 and 2000, and takes account of other determinants of inflation (such as fiscal policy, the business cycle, and openness to international trade), and the endogeneity of the monetary policy regime. We find that both having and hitting quantitative targets for monetary policy are systematically and robustly associated with lower inflation. The exact form of the monetary target matters somewhat (especially for the sustainability of the monetary regime), but is less important than having some quantitative target. Successfully achieving a quantitative monetary goal is also associated with less volatile output. JEL Classification: E52. Keywords: business cycle, exchange rate, growth, inflation, money, target, transparency.

    Discriminating Codes in Geometric Setups

    We study geometric variations of the discriminating code problem. In the \emph{discrete version} of the problem, a finite set of points $P$ and a finite set of objects $S$ are given in $\mathbb{R}^d$. The objective is to choose a subset $S^* \subseteq S$ of minimum cardinality such that for each point $p_i \in P$, the subset $S_i^* \subseteq S^*$ covering $p_i$ satisfies $S_i^* \neq \emptyset$, and for each pair $p_i, p_j \in P$, $i \neq j$, we have $S_i^* \neq S_j^*$. In the \emph{continuous version} of the problem, the solution set $S^*$ can be chosen freely among a (potentially infinite) class of allowed geometric objects. In the 1-dimensional case ($d=1$), the points in $P$ are placed on a horizontal line $L$, and the objects in $S$ are finite-length line segments aligned with $L$ (called intervals). We show that the discrete version of this problem is NP-complete. This is somewhat surprising, as the continuous version is known to be polynomial-time solvable. Still, for the 1-dimensional discrete version, we design a polynomial-time $2$-approximation algorithm. We also design a PTAS for both discrete and continuous versions in one dimension, for the restriction where the intervals are all required to have the same length. We then study the 2-dimensional case ($d=2$) for axis-parallel unit square objects. We show that both continuous and discrete versions are NP-complete, and design polynomial-time approximation algorithms that produce $(16 \cdot OPT + 1)$-approximate and $(64 \cdot OPT + 1)$-approximate solutions respectively, using rounding of suitably defined integer linear programming problems. We show that the identifying code problem for axis-parallel unit square intersection graphs (in $d=2$) can be solved in the same manner as for the discrete version of the discriminating code problem for unit square objects.
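    The discrete definition is easy to state operationally; the brute-force check below (our illustration for tiny 1-dimensional instances, with names of our choosing) selects a minimum subset of intervals whose coverage patterns are nonempty and pairwise distinct:

```python
from itertools import combinations

def code_of(p, chosen):
    """Indices of the chosen intervals that cover point p."""
    return frozenset(i for i, (lo, hi) in chosen if lo <= p <= hi)

def is_discriminating(points, chosen):
    """Every point gets a nonempty code, and all codes are distinct."""
    codes = [code_of(p, chosen) for p in points]
    return all(codes) and len(set(codes)) == len(codes)

def min_discriminating_code(points, intervals):
    """Smallest discriminating subset S* (exponential search; tiny inputs only)."""
    indexed = list(enumerate(intervals))
    for k in range(1, len(intervals) + 1):
        for subset in combinations(indexed, k):
            if is_discriminating(points, subset):
                return [iv for _, iv in subset]
    return None  # no discriminating code exists

# e.g. min_discriminating_code([1, 2, 3], [(0, 1.5), (1.5, 3.5), (2.5, 4)])
# returns all three intervals: the codes {0}, {1}, {1, 2} are nonempty
# and pairwise distinct, and no two intervals suffice.
```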