
    The management of de-cumulation risks in a defined contribution environment

    The aim of the paper is to lay the theoretical foundations for the construction of a flexible tool that can be used by pensioners to find optimal investment and consumption choices in the distribution phase of a defined contribution pension scheme. The investment/consumption plan is adopted until the time of compulsory annuitization, taking into account the possibility of earlier death. The effect of the bequest motive and the desire to buy a higher annuity than the one purchasable at retirement are included in the objective function. The mathematical tools provided by dynamic programming techniques are applied to find closed form solutions; numerical examples are also presented. In the model, the trade-off between the different desires of the individual regarding consumption and final annuity can be dealt with by choosing appropriate weights for these factors in the setting of the problem. Conclusions are twofold. Firstly, we find that there is a natural time-varying target for the size of the fund, which acts as a sort of safety level for the needs of the pensioner. Secondly, the personal preferences of the pensioner can be translated into optimal choices, which in turn affect the distribution of the consumption path and of the final annuity.
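
The dynamic-programming approach described above can be illustrated with a minimal backward-induction sketch. This is not the paper's model (which obtains closed-form solutions in continuous time); it is a toy discrete-time version with hypothetical parameters, in which a pensioner each period consumes a fraction of wealth under CRRA utility, with a weighted terminal term standing in for the bequest/annuity motive.

```python
import numpy as np

def solve_decumulation(T=10, gamma=2.0, r=0.03, beta=0.96,
                       bequest_weight=0.5, n_choices=50):
    """Toy backward induction: choose each period's consumption fraction
    on a wealth grid. All parameters are illustrative, not the paper's."""
    wealth_grid = np.linspace(0.1, 10.0, 100)
    u = lambda c: c ** (1 - gamma) / (1 - gamma)       # CRRA utility
    fracs = np.linspace(0.01, 0.99, n_choices)         # candidate consumption fractions
    V = bequest_weight * u(wealth_grid)                # terminal value: bequest/annuity term
    policy = np.zeros((T, wealth_grid.size))
    for t in reversed(range(T)):
        V_new = np.empty_like(V)
        for i, w in enumerate(wealth_grid):
            c = fracs * w
            w_next = (w - c) * (1 + r)                 # remaining wealth grows at rate r
            cont = np.interp(w_next, wealth_grid, V)   # interpolated continuation value
            vals = u(c) + beta * cont
            j = vals.argmax()
            V_new[i] = vals[j]
            policy[t, i] = fracs[j]
        V = V_new
    return wealth_grid, V, policy
```

Raising `bequest_weight` in this sketch shifts the optimal policy toward lower consumption fractions, which is the qualitative trade-off the paper formalizes through the weights in its objective function.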

    Classification of cryptocurrency coins and tokens by the dynamics of their market capitalisations

    We empirically verify that the market capitalisations of coins and tokens in the cryptocurrency universe follow power-law distributions with significantly different values, with the tail exponent falling between 0.5 and 0.7 for coins, and between 1.0 and 1.3 for tokens. We provide a rationale for this, based on a simple proportional growth with birth & death model previously employed to describe the size distribution of firms, cities, webpages, etc. We empirically validate the model and its main predictions, in terms of proportional growth (Gibrat's law) of the coins and tokens. Estimating the main parameters of the model, the theoretical predictions for the power-law exponents of coin and token distributions are in remarkable agreement with the empirical estimations, given the simplicity of the model. Our results clearly characterize coins as being "entrenched incumbents" and tokens as an "explosive immature ecosystem", largely due to massive and exuberant Initial Coin Offering activity in the token space. The theory predicts that the exponent for tokens should converge to 1 in the future, reflecting a more reasonable rate of new entrants associated with genuine technological innovations.
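
A standard way to estimate a power-law tail exponent of the kind reported above is the Hill estimator over the k largest observations. The snippet below is a generic sketch on synthetic Pareto data, not the paper's estimation procedure or data; the sample size and choice of k are arbitrary.

```python
import numpy as np

def hill_tail_exponent(values, k):
    """Hill estimator of mu for a tail P(X > x) ~ x^(-mu),
    computed from the k largest observations."""
    x = np.sort(np.asarray(values, dtype=float))[::-1]
    logs = np.log(x[:k]) - np.log(x[k])
    return 1.0 / logs.mean()

rng = np.random.default_rng(0)
mu_true = 0.6                                   # a "coin-like" exponent
# numpy's pareto draws Lomax samples; adding 1 gives a classical Pareto
# with minimum 1 and tail exponent mu_true
samples = rng.pareto(mu_true, size=20_000) + 1.0
mu_hat = hill_tail_exponent(samples, k=2_000)
```

In practice the estimate is sensitive to the choice of k, so it is common to plot `mu_hat` against k and look for a stable plateau before reading off an exponent.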

    From Theory to Practice: Plug and Play with Succinct Data Structures

    Engineering efficient implementations of compact and succinct structures is a time-consuming and challenging task, since there is no standard library of easy-to-use, highly optimized, and composable components. One consequence is that measuring the practical impact of new theoretical proposals is a difficult task, since older baseline implementations may not rely on the same basic components, and reimplementing from scratch can be very time-consuming. In this paper we present a framework for experimentation with succinct data structures, providing a large set of configurable components, together with tests, benchmarks, and tools to analyze resource requirements. We demonstrate the functionality of the framework by recomposing succinct solutions for document retrieval.
    Comment: 10 pages, 4 figures, 3 tables
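
A basic component that such frameworks compose is a bitvector with rank support. The sketch below shows the idea in its simplest form (hypothetical names, no compression, one level of precomputed counts): store cumulative popcounts per block so that `rank1(i)` needs only a table lookup plus a short scan, rather than summing the whole prefix.

```python
class RankBitvector:
    """Minimal rank-support sketch: O(n/block) extra words,
    rank in O(block) time. Illustrative only."""
    def __init__(self, bits, block=64):
        self.bits = list(bits)
        self.block = block
        # super[j] = number of 1 bits strictly before position j*block
        self.super = [0]
        for j in range(0, len(self.bits), block):
            self.super.append(self.super[-1] + sum(self.bits[j:j + block]))

    def rank1(self, i):
        """Number of 1 bits in bits[0:i]."""
        j = i // self.block
        return self.super[j] + sum(self.bits[j * self.block:i])
```

Production libraries layer two count levels plus hardware popcounts to reach constant-time rank with a few percent overhead, but the composability point the abstract makes is already visible here: document-retrieval structures are built by plugging such components together.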

    STROOPWAFEL: Simulating rare outcomes from astrophysical populations, with application to gravitational-wave sources

    Gravitational-wave observations of double compact object (DCO) mergers are providing new insights into the physics of massive stars and the evolution of binary systems. Making the most of expected near-future observations for understanding stellar physics will rely on comparisons with binary population synthesis models. However, the vast majority of simulated binaries never produce DCOs, which makes calculating such populations computationally inefficient. We present an importance sampling algorithm, STROOPWAFEL, that improves the computational efficiency of population studies of rare events, by focusing the simulation around regions of the initial parameter space found to produce outputs of interest. We implement the algorithm in the binary population synthesis code COMPAS, and compare the efficiency of our implementation to the standard method of Monte Carlo sampling from the birth probability distributions. STROOPWAFEL finds ~25-200 times more DCO mergers than the standard sampling method with the same simulation size, and so speeds up simulations by up to two orders of magnitude. Finding more DCO mergers automatically maps the parameter space with far higher resolution than when using the traditional sampling. This increase in efficiency also leads to a decrease of a factor ~3-10 in statistical sampling uncertainty for the predictions from the simulations. This is particularly notable for the distribution functions of observable quantities such as the black hole and neutron star chirp mass distribution, including in the tails of the distribution functions where predictions using standard sampling can be dominated by sampling noise.
    Comment: Accepted. Data and scripts to reproduce the main results are publicly available. The code for the STROOPWAFEL algorithm will be made publicly available. Early inquiries can be addressed to the lead author.
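
The explore-then-refine importance sampling described above can be sketched on a toy rare event. This is a drastic simplification of STROOPWAFEL (a 1-d Gaussian "birth distribution", a threshold event, and a single refined proposal rather than a mixture over all hits), with all parameters hypothetical: spend part of the budget sampling from the birth distribution, then concentrate draws near the found hits and reweight by p(x)/q(x) so the estimate stays unbiased.

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

random.seed(3)
THRESH = 3.0                       # "rare outcome": x > 3 under a standard normal
N_EXPLORE, N_REFINE = 20_000, 20_000

# Phase 1: exploratory sampling from the birth distribution p = N(0, 1).
explore = [random.gauss(0.0, 1.0) for _ in range(N_EXPLORE)]
hits = [x for x in explore if x > THRESH]

# Phase 2: instrumental distribution q = N(mu_q, 1) centred on the hit region.
mu_q = sum(hits) / len(hits)
refine = [random.gauss(mu_q, 1.0) for _ in range(N_REFINE)]

# Reweight phase-2 hits by p(x)/q(x); average the two unbiased estimates.
w_hits = [normal_pdf(x) / normal_pdf(x, mu_q) for x in refine if x > THRESH]
rate_refine = sum(w_hits) / N_REFINE
rate_explore = len(hits) / N_EXPLORE
rate = 0.5 * (rate_explore + rate_refine)
```

The gain mirrors the abstract's numbers in miniature: phase 2 produces orders of magnitude more hits than plain sampling at the same budget, so the weighted estimate has far lower sampling noise, especially in the tail.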

    CrisMap: A Big Data Crisis Mapping System Based on Damage Detection and Geoparsing

    Natural disasters, as well as human-made disasters, can have a deep impact on wide geographic areas, and emergency responders can benefit from the early estimation of emergency consequences. This work presents CrisMap, a Big Data crisis mapping system capable of quickly collecting and analyzing social media data. CrisMap extracts potential crisis-related actionable information from tweets by adopting a classification technique based on word embeddings and by exploiting a combination of readily-available semantic annotators to geoparse tweets. The enriched tweets are then visualized in customizable, Web-based dashboards, also leveraging ad-hoc quantitative visualizations like choropleth maps. The maps produced by our system help to estimate the impact of the emergency in its early phases, to identify areas that have been severely struck, and to acquire a greater situational awareness. We extensively benchmark the performance of our system on two Italian natural disasters by validating our maps against authoritative data. Finally, we perform a qualitative case-study on a recent devastating earthquake that occurred in Central Italy.
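
The embedding-based classification step can be caricatured with a nearest-centroid sketch. Everything here is hypothetical (the tiny 3-d "embeddings", the vocabulary, the labels) and bears no relation to CrisMap's actual pipeline or training data; it only shows the shape of the idea: average the word vectors of a tweet, then assign the label of the closest class centroid.

```python
import numpy as np

# Hypothetical 3-d "word embeddings" for a tiny vocabulary.
EMB = {
    "collapsed": np.array([1.0, 0.1, 0.0]),
    "damage":    np.array([0.9, 0.2, 0.1]),
    "rubble":    np.array([0.8, 0.0, 0.2]),
    "sunny":     np.array([0.0, 1.0, 0.1]),
    "coffee":    np.array([0.1, 0.9, 0.0]),
}

def embed(tweet):
    """Mean of the embeddings of known words (errors if none are known)."""
    vecs = [EMB[w] for w in tweet.lower().split() if w in EMB]
    return np.mean(vecs, axis=0)

train = [("building collapsed rubble", "damage"),
         ("damage rubble", "damage"),
         ("sunny coffee", "ok"),
         ("coffee sunny", "ok")]

centroids = {label: np.mean([embed(t) for t, l in train if l == label], axis=0)
             for label in {"damage", "ok"}}

def classify(tweet):
    v = embed(tweet)
    return min(centroids, key=lambda label: np.linalg.norm(v - centroids[label]))
```

Real systems replace the toy vectors with pretrained embeddings and the centroid rule with a trained classifier; the geoparsed location is then attached to each classified tweet before aggregation on the map.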