    Fibonacci Binning

    This note argues that when dot-plotting distributions typically found in papers about web and social networks (degree distributions, component-size distributions, etc.), and more generally distributions that have high variability in their tail, an exponentially binned version should always be plotted, too, and it suggests Fibonacci binning as a visually appealing, easy-to-use and practical choice.
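A minimal sketch of the idea, assuming the natural reading of Fibonacci binning (bin boundaries at consecutive Fibonacci numbers, so bin widths grow roughly geometrically by the golden ratio); function names are illustrative, not taken from the note.

```python
def fibonacci_bins(max_value):
    """Return Fibonacci bin boundaries covering values up to max_value."""
    bounds = [1, 2]
    while bounds[-1] <= max_value:
        bounds.append(bounds[-1] + bounds[-2])  # next Fibonacci number
    return bounds

def bin_counts(values, bounds):
    """Count how many values fall in each [bounds[i], bounds[i+1]) bin."""
    counts = [0] * (len(bounds) - 1)
    for v in values:
        for i in range(len(bounds) - 1):
            if bounds[i] <= v < bounds[i + 1]:
                counts[i] += 1
                break
    return counts
```

Plotting the per-bin averages of such counts smooths the noisy tail of a heavy-tailed distribution while keeping the head readable.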

    Broadword Implementation of Parenthesis Queries

    We continue the line of research started in "Broadword Implementation of Rank/Select Queries", proposing broadword (a.k.a. SWAR, "SIMD Within A Register") algorithms for finding matching closed parentheses and the k-th far closed parenthesis. Our algorithms work in time O(log w) on a word of w bits, and contain no branch and no test instruction. On 64-bit (and wider) architectures, these algorithms make it possible to avoid costly tabulations, while providing a very significant speedup with respect to for-loop implementations.
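To give a flavor of the broadword style the abstract builds on, here is the classic branch-free, table-free sideways addition (population count) of a 64-bit word in O(log w) word operations; it is a well-known building block from the cited rank/select line of work, not the parenthesis algorithms themselves, sketched in Python with explicit 64-bit masking.

```python
MASK64 = (1 << 64) - 1

def sideways_addition(x):
    """Population count of a 64-bit word: no branches, no tests, no tables.
    Each step sums counts in fields of doubling width."""
    x = x - ((x >> 1) & 0x5555555555555555)               # 2-bit field sums
    x = (x & 0x3333333333333333) + ((x >> 2) & 0x3333333333333333)  # 4-bit
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0F               # 8-bit field sums
    return ((x * 0x0101010101010101) & MASK64) >> 56      # add up all bytes
```

The final multiplication accumulates all byte sums into the top byte, a trick that the parenthesis-matching algorithms exploit in more elaborate forms.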

    Supremum-Norm Convergence for Step-Asynchronous Successive Overrelaxation on M-matrices

    Step-asynchronous successive overrelaxation updates the values contained in a single vector using the usual Gauß-Seidel-like weighted rule, but arbitrarily mixing old and new values, the only constraint being temporal coherence: you cannot use a value before it has been computed. We show that given a nonnegative real matrix $A$, a $\sigma \geq \rho(A)$ and a vector $\boldsymbol w > 0$ such that $A\boldsymbol w \leq \sigma\boldsymbol w$, every iteration of step-asynchronous successive overrelaxation for the problem $(sI - A)\boldsymbol x = \boldsymbol b$, with $s > \sigma$, reduces geometrically the $\boldsymbol w$-norm of the current error by a factor that we can compute explicitly. Then, we show that given a $\sigma > \rho(A)$ it is in principle always possible to compute such a $\boldsymbol w$. This property makes it possible to estimate the supremum norm of the absolute error at each iteration without any additional hypothesis on $A$, even when $A$ is so large that computing the product $A\boldsymbol x$ is feasible, but estimating the supremum norm of $(sI - A)^{-1}$ is not.
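As a minimal synchronous special case of the iteration described (the step-asynchronous version mixes old and new values more liberally), here is a Gauss-Seidel-style in-place sweep for the system (sI - A)x = b; the function name and the toy example in the test are illustrative, not from the paper.

```python
def gauss_seidel_step(A, b, x, s):
    """One in-place sweep for (s*I - A) x = b: solve row i for x[i],
    immediately reusing values computed earlier in the same sweep.
    Assumes s is large enough that s - A[i][i] > 0 for every row."""
    n = len(b)
    for i in range(n):
        off_diag = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] + off_diag) / (s - A[i][i])
    return x
```

With a nonnegative A and s above the spectral radius, repeated sweeps contract the error geometrically, which is the behavior the abstract quantifies in the weighted supremum norm.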

    An experimental exploration of Marsaglia's xorshift generators, scrambled

    Marsaglia recently proposed xorshift generators as a class of very fast, good-quality pseudorandom number generators. Subsequent analysis by Panneton and L'Ecuyer has lowered the expectations raised by Marsaglia's paper, showing several weaknesses of such generators, verified experimentally using the TestU01 suite. Nonetheless, many of the weaknesses of xorshift generators fade away if their result is scrambled by a non-linear operation (as originally suggested by Marsaglia). In this paper we explore the space of possible generators obtained by multiplying the result of a xorshift generator by a suitable constant. We sample generators at 100 equispaced points of their state space and obtain detailed statistics that lead us to choices of parameters that improve on the current ones. We then explore for the first time the space of high-dimensional xorshift generators, following another suggestion in Marsaglia's paper, finding choices of parameters providing periods of length $2^{1024} - 1$ and $2^{4096} - 1$. The resulting generators are of extremely high quality, faster than current similar alternatives, and generate long-period sequences passing strong statistical tests using only eight logical operations, one addition and one multiplication by a constant.
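A concrete instance of the "multiply the xorshift result by a constant" recipe is the widely circulated xorshift64* generator; the shift triple (12, 25, 27) and the multiplier below are its usual published parameters, shown here as an illustrative sketch (in Python, with explicit 64-bit masking) rather than as the paper's final recommendation.

```python
MASK64 = (1 << 64) - 1

def xorshift64star(state):
    """One step of a 64-bit xorshift generator scrambled by a constant
    multiplication. The state must be nonzero; returns (output, new_state)."""
    state ^= state >> 12
    state ^= (state << 25) & MASK64
    state ^= state >> 27
    return (state * 2685821657736338717) & MASK64, state
```

The underlying xorshift state sequence is linear and fails several statistical tests on its own; the non-linear multiplication scrambles the output without affecting the period.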

    Stanford Matrix Considered Harmful

    This note questions the validity of the web-graph data used in the literature.

    Buck-boost dc voltage regulator

    Circuit provides voltage regulation through a wide range of operating frequencies without intervals of high power dissipation.

    On efficiency of mean-variance based portfolio selection in DC pension schemes

    We consider the portfolio selection problem in the accumulation phase of a defined contribution (DC) pension scheme. We solve the mean-variance portfolio selection problem using the embedding technique pioneered by Zhou and Li (2000) and show that it is equivalent to a target-based optimization problem, consisting in the minimization of a quadratic loss function. We support the use of the target-based approach in DC pension funds for three reasons. Firstly, it transforms the difficult problem of selecting the individual's risk aversion coefficient into the easier task of choosing an appropriate target. Secondly, it is intuitive, flexible and adaptable to the member's needs and preferences. Thirdly, it produces final portfolios that are efficient in the mean-variance setting. We address the issue of comparison between an efficient portfolio and a portfolio that is optimal according to the more general criterion of maximization of expected utility (EU). We introduce the two natural notions of Variance Inefficiency and Mean Inefficiency, which measure the distance of an optimal inefficient portfolio from an efficient one, focusing on their variance and on their expected value, respectively. As a particular case, we investigate the quite popular classes of CARA and CRRA utility functions. In these cases, we prove the intuitive but not trivial result that the mean-variance inefficiency decreases with the risk aversion of the individual and increases with the time horizon and the Sharpe ratio of the risky asset. Numerical investigations stress the impact of the time horizon on the extent of mean-variance inefficiency of CARA and CRRA utility functions. While at the instantaneous level EU-optimality and efficiency coincide (see Merton (1971)), we find that for short durations they do not differ significantly. However, for longer durations - typical in pension funds - the extent of inefficiency turns out to be remarkable and should be taken into account by pension fund investment managers seeking appropriate rules for portfolio selection. Indeed, this result is a further element that supports the use of the target-based approach in DC pension schemes.
    Keywords: Mean-variance approach; efficient frontier; expected utility maximization; defined contribution pension scheme; portfolio selection; risk aversion; Sharpe ratio.

    Mean-variance inefficiency of CRRA and CARA utility functions for portfolio selection in defined contribution pension schemes

    We consider the portfolio selection problem in the accumulation phase of a defined contribution pension scheme in continuous time, and compare the mean-variance and the expected utility maximization approaches. Using the embedding technique pioneered by Zhou and Li (2000) we first find the efficient frontier of portfolios in the Black-Scholes financial market. Then, using standard stochastic optimal control we find the optimal portfolios derived via expected utility for popular utility functions. As a main result, we prove that the optimal portfolios derived with the CARA and CRRA utility functions are not mean-variance efficient. As a corollary, we prove that this holds also in the standard portfolio selection problem. We provide a natural measure of inefficiency based on the difference between optimal portfolio variance and minimal variance, and we show its dependence on risk aversion, Sharpe ratio of the risky asset, time horizon, initial wealth and contribution rate. Numerical examples illustrate the extent of inefficiency of CARA and CRRA utility functions in defined contribution pension schemes.
    Keywords: Mean-variance approach, efficient frontier, expected utility maximization, defined contribution pension scheme, portfolio selection, risk aversion, Sharpe ratio.

    Efficient Optimally Lazy Algorithms for Minimal-Interval Semantics

    Minimal-interval semantics associates with each query over a document a set of intervals, called witnesses, that are incomparable with respect to inclusion (i.e., they form an antichain): witnesses define the minimal regions of the document satisfying the query. Minimal-interval semantics makes it easy to define and compute several sophisticated proximity operators, provides snippets for user presentation, and can be used to rank documents. In this paper we provide algorithms for computing conjunction and disjunction that are linear in the number of intervals and logarithmic in the number of operands; for additional operators, such as ordered conjunction and Brouwerian difference, we provide linear algorithms. In all cases, space is linear in the number of operands. More importantly, we define a formal notion of optimal laziness, and either prove it, or prove its impossibility, for each algorithm. We cast our results in a general framework of antichains of intervals on total orders, making our algorithms directly applicable to other domains.
    Comment: 24 pages, 4 figures. A preliminary (now outdated) version was presented at SPIRE 200
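To make the semantics concrete, here is a simplified two-operand conjunction sketch: given two sorted position lists, it returns the antichain of minimal intervals containing at least one position from each list. This is an eager, simplified version for illustration only, not the paper's optimally lazy multi-operand algorithm, and the names are illustrative.

```python
import bisect

def minimal_intervals(a, b):
    """Antichain of minimal intervals (l, r) with l <= r that contain at
    least one position from sorted list a and one from sorted list b."""
    # Candidate witnesses: pair each a-position with its nearest
    # b-position on either side.
    cand = set()
    for x in a:
        i = bisect.bisect_left(b, x)
        if i < len(b):
            cand.add((min(x, b[i]), max(x, b[i])))
        if i > 0:
            cand.add((min(x, b[i - 1]), max(x, b[i - 1])))
    # Keep only minimal intervals: scan by decreasing left endpoint and
    # keep a candidate only if it does not contain any kept interval.
    result, min_r = [], float("inf")
    for l, r in sorted(cand, key=lambda iv: (-iv[0], iv[1])):
        if r < min_r:
            result.append((l, r))
            min_r = r
    result.reverse()
    return result
```

The antichain property is what makes such witnesses usable directly for proximity scoring and snippet extraction: no witness is redundant because it contains a tighter one.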