16,372 research outputs found
Core congestion is inherent in hyperbolic networks
We investigate the impact that negative curvature has on traffic
congestion in large-scale networks. We prove that every Gromov hyperbolic
network G admits a core, thus answering in the positive a conjecture by
Jonckheere, Lou, Bonahon, and Baryshnikov, Internet Mathematics, 7 (2011), which
is based on the experimental observation by Narayan and Saniee, Physical Review
E, 84 (2011) that real-world networks with small hyperbolicity have a core
congestion. Namely, we prove that for every subset X of vertices of a
δ-hyperbolic graph G there exists a vertex m of G such that the
disk of radius 4δ centered at m intercepts at least
one half of the total flow between all pairs of vertices of X, where the flow
between two vertices x, y ∈ X is carried by geodesic (or quasi-geodesic)
(x, y)-paths. A set S intercepts the flow between two nodes x and y if S
intersects every shortest path between x and y. Differently from what
was conjectured by Jonckheere et al., we show that m is not (and cannot be)
the center of mass of X but is a node close to the median of X in the
so-called injective hull of G. In case of non-uniform traffic between nodes
of X (in this case, the unit flow exists only between certain pairs of nodes
of X defined by a commodity graph R), we prove a primal-dual result showing
that for any sufficiently large ρ (relative to δ) the size of a ρ-multi-core (i.e., the number
of disks of radius ρ) intercepting all pairs of R is upper bounded by
the maximum number of pairwise far-apart pairs of R.
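The hyperbolicity in question can be made concrete with the four-point condition. Below is a minimal brute-force sketch (not from the paper, and only practical for small graphs) that computes the Gromov hyperbolicity δ of an unweighted graph given as an adjacency-list dict; the function names are my own:

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph (adjacency-list dict)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def hyperbolicity(adj):
    """Gromov hyperbolicity via the four-point condition: for every quadruple,
    form the three pairings of distance sums; delta is half the gap between
    the two largest sums, maximized over all quadruples."""
    d = {v: bfs_dist(adj, v) for v in adj}
    delta = 0
    for u, v, w, x in combinations(adj, 4):
        sums = sorted([d[u][v] + d[w][x], d[u][w] + d[v][x], d[u][x] + d[v][w]])
        delta = max(delta, (sums[2] - sums[1]) / 2)
    return delta
```

Trees are 0-hyperbolic, while a 4-cycle already has δ = 1; the O(n⁴) enumeration here is for intuition only, not for the large-scale networks the abstract considers.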
Approximation Algorithms for Polynomial-Expansion and Low-Density Graphs
We study the family of intersection graphs of low-density objects in low-dimensional
Euclidean space. This family is quite general, and includes planar
graphs. We prove that such graphs have small separators. Next, we present
efficient (1+ε)-approximation algorithms for these graphs, for the
Independent Set, Set Cover, and Dominating Set problems, among others. We also
prove corresponding hardness of approximation results for some of these optimization
problems, providing a characterization of their intractability in terms of
density.
Optimal monetary policy in an estimated DSGE for the euro area
The objective of this paper is to examine the main features of optimal monetary policy within a micro-founded macroeconometric framework. First, using Bayesian techniques, we estimate a medium-scale closed-economy DSGE model for the euro area. Then, we study the properties of the Ramsey allocation through impulse response, variance decomposition and counterfactual analysis. In particular, we show that accounting for the zero lower bound constraint does not seem to limit the stabilization properties of optimal monetary policy. We also present simple monetary policy rules which can "approximate" and implement the Ramsey allocation reasonably well. Such optimal simple operational rules seem to react specifically to nominal wage inflation. Overall, the Ramsey policy together with its simple rule approximations seems to deliver consistent policy messages and may constitute a useful normative benchmark within medium- to large-scale estimated DSGE frameworks. However, this normative analysis based on estimated models reinforces the need to improve the economic micro-foundation and the econometric identification of the structural disturbances. JEL Classification: E4, E5. Keywords: Bayesian estimation, DSGE models, monetary policy, welfare calculations.
Lossy Kernelization
In this paper we propose a new framework for analyzing the performance of
preprocessing algorithms. Our framework builds on the notion of kernelization
from parameterized complexity. However, as opposed to the original notion of
kernelization, our definitions combine well with approximation algorithms and
heuristics. The key new definition is that of a polynomial size
α-approximate kernel. Loosely speaking, a polynomial size
α-approximate kernel is a polynomial time pre-processing algorithm that
takes as input an instance (I, k) of a parameterized problem, and outputs
another instance (I', k') of the same problem, such that |I'| + k' is bounded
by a polynomial in k. Additionally, for every c ≥ 1, a c-approximate solution
to the pre-processed instance can be turned in polynomial time into a
(c · α)-approximate solution to the original instance (I, k).
Our main technical contributions are α-approximate kernels of
polynomial size for three problems, namely Connected Vertex Cover, Disjoint
Cycle Packing and Disjoint Factors. These problems are known not to admit any
polynomial size kernels unless NP ⊆ coNP/poly. Our approximate
kernels simultaneously beat both the lower bounds on the (normal) kernel size
and the hardness of approximation lower bounds for all three problems. On the
negative side we prove that Longest Path parameterized by the length of the
path and Set Cover parameterized by the universe size do not admit even an
α-approximate kernel of polynomial size, for any α ≥ 1, unless
NP ⊆ coNP/poly. In order to prove this lower bound we need to combine
in a non-trivial way the techniques used for showing kernelization lower bounds
with the methods for showing hardness of approximation.
Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and
approximate kernel lower bounds for Set Cover and Hitting Set parameterized
by universe size.
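The α-approximate kernels above generalize classical (exact) kernelization. As a reference point, here is a minimal sketch of the textbook Buss kernel for Vertex Cover, which is not one of the paper's constructions but illustrates what a pre-processing routine with a guaranteed size bound looks like (at most k² edges survive, or the instance is a provable "no"):

```python
def buss_kernel(edges, k):
    """Classical exact kernel for Vertex Cover: any vertex whose degree
    exceeds the remaining budget must belong to every small cover, so it
    is forced in and its edges are removed; at the end, a true instance
    has at most (remaining budget)^2 edges."""
    edges = set(map(frozenset, edges))
    forced = set()  # vertices provably in every vertex cover of size <= k
    changed = True
    while changed:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k - len(forced):
                forced.add(v)
                edges = {e for e in edges if v not in e}
                changed = True
                break
    if len(forced) > k or len(edges) > (k - len(forced)) ** 2:
        return None  # no vertex cover of size <= k exists
    return edges, k - len(forced), forced
```

In the paper's terms this is a 1-approximate kernel: the reduced instance loses nothing, whereas an α-approximate kernel is allowed to degrade solution quality by a factor α in exchange for existing at all where exact polynomial kernels are ruled out.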
Household liquidity and incremental financing decisions: theory and evidence
In this paper we develop a stochastic model for household liquidity. In the model, the optimal liquidity policy takes the form of a liquidity range. Subsequently, we use the model to calibrate the upper bound of the predicted liquidity range. Equipped with knowledge about the relevant control barriers, we run a series of empirical tests on a panel data set of Dutch households covering the period 1992-2007. The results broadly validate our theoretical predictions that households (i) exhaust most of their short-term liquid assets prior to increasing net debt, and (ii) reduce outstanding net debt at the optimally selected upper liquidity barrier. However, a small minority of households appear to act sub-optimally. Poor and vulnerable households rely too frequently on expensive forms of credit (such as overdrafts), thereby incurring substantial amounts of fees and fixed borrowing costs. Elderly households and people on social benefits tend to accumulate too much liquidity. Finally, some households take on expensive short-term credit while having substantial amounts of low-yielding liquid assets.
Why Do People Buy Lottery Products?
This paper examines the lottery sales of 99 countries by type of product in order to analyze the socioeconomic and demographic features that help to explain gambling consumption around the world. With a panel data analysis covering 13 years, this study explains the variation of a country's per-capita lottery sales in general and by type of game: lotto, numbers, keno, toto, draw and instant. This paper finds that richer countries spend more than poorer countries and that the income elasticity of the demand for lottery products is greater than one. So, we may assert that there is an implicitly progressive tax in games when we consider countries rather than households. Several studies have also revealed an inverse relationship between education and the consumption of lottery products. This paper confirms this hypothesis for lotteries in general, but not for the specific lottery products. Keywords: Gambling; Lotteries; Religiosity; Education; Culture; Age; Panel Data.
Quantitative goals for monetary policy
We study empirically the macroeconomic effects of an explicit de jure quantitative goal for monetary policy. Quantitative goals take three forms: exchange rates, money growth rates, and inflation targets. We analyze the effects on inflation of both having a quantitative target and of hitting a declared target; we also consider effects on output volatility. Our empirical work uses an annual data set covering 42 countries between 1960 and 2000, and takes account of other determinants of inflation (such as fiscal policy, the business cycle, and openness to international trade), and the endogeneity of the monetary policy regime. We find that both having and hitting quantitative targets for monetary policy is systematically and robustly associated with lower inflation. The exact form of the monetary target matters somewhat (especially for the sustainability of the monetary regime), but is less important than having some quantitative target. Successfully achieving a quantitative monetary goal is also associated with less volatile output. JEL Classification: E52. Keywords: business cycle, exchange rate, growth, inflation, money, target, transparency.
Discriminating Codes in Geometric Setups
We study geometric variations of the discriminating code problem. In the
\emph{discrete version} of the problem, a finite set P of points and a finite
set S of objects are given in R^d. The objective is to choose a
subset S* ⊆ S of minimum cardinality such that for each point p ∈ P, the
subset S*(p) ⊆ S* covering p satisfies S*(p) ≠ ∅, and for each pair p, q ∈ P
of distinct points, we have S*(p) ≠ S*(q). In the \emph{continuous version} of the problem, the solution set
can be chosen freely among a (potentially infinite) class of allowed geometric
objects. In the 1-dimensional case (d = 1), the points in P are placed on a
horizontal line L, and the objects in S are finite-length line segments
aligned with L (called intervals). We show that the discrete version of this
problem is NP-complete. This is somewhat surprising as the continuous version
is known to be polynomial-time solvable. Still, for the 1-dimensional discrete
version, we design a polynomial-time 2-approximation algorithm. We also
design a PTAS for both discrete and continuous versions in one dimension, for
the restriction where the intervals are all required to have the same length.
We then study the 2-dimensional case (d = 2) for axis-parallel unit square
objects. We show that both continuous and discrete versions are NP-complete,
and design polynomial-time approximation algorithms that produce
constant-factor approximate solutions for the two versions respectively,
using rounding of suitably defined integer linear programming problems. We show
that the identifying code problem for axis-parallel unit square intersection
graphs (in R^2) can be solved in the same manner as for the discrete version
of the discriminating code problem for unit square objects.
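The covering-and-separation condition defining a discriminating code is easy to state operationally. Below is a minimal 1-dimensional verifier sketch (mine, not the paper's), assuming points are numbers and intervals are (left, right) endpoint pairs keyed by arbitrary ids; all names are hypothetical:

```python
def is_discriminating_code(points, intervals, code):
    """Check the discrete 1-D discriminating code condition: every point
    must be covered by at least one chosen interval, and no two points
    may be covered by exactly the same set of chosen intervals."""
    signatures = []
    for p in points:
        sig = frozenset(i for i in code
                        if intervals[i][0] <= p <= intervals[i][1])
        if not sig:
            return False  # p is left uncovered
        signatures.append(sig)
    # all per-point signatures must be pairwise distinct
    return len(set(signatures)) == len(points)
```

The optimization problem the abstract studies is then to pick a minimum-cardinality `code` passing this check; the verifier itself runs in O(|P|·|code|) time, while finding the minimum code is NP-complete in the discrete case, as stated above.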