2,635 research outputs found

    Relative wage movements and the distribution of consumption

The authors analyze how relative wage movements among birth cohorts and education groups affected the distribution of household consumption and economic welfare. Their empirical work draws on the best available cross-sectional data sets to construct synthetic panel data on U.S. consumption, labor supply, and wages during the 1980s. The authors find that low-frequency movements in the cohort-education structure of pretax hourly wages among men drove large changes in the distribution of household consumption. The results constitute a spectacular failure of between-group consumption insurance, a failure not explained by existing theories of informationally constrained optimal consumption behavior.
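The synthetic-panel step is concrete enough to sketch. Since cross-sectional surveys never observe the same household twice, cohort-education cell means are tracked across survey years instead. A minimal pandas sketch of this construction, with hypothetical column names and input file:

```python
# Minimal sketch of synthetic-panel construction from repeated cross-sections,
# in the spirit described in the abstract. Column names (year, birth_year,
# educ, consumption, wage) and the input file are hypothetical.
import pandas as pd

cps = pd.read_csv("repeated_cross_sections.csv")  # hypothetical input

# Define cohort-education cells: 5-year birth cohorts crossed with education.
cps["cohort"] = (cps["birth_year"] // 5) * 5

# A synthetic panel tracks cell means across survey years, so the same
# cohort-education group is followed over time even though individuals are not.
panel = (
    cps.groupby(["cohort", "educ", "year"])[["consumption", "wage"]]
       .mean()
       .reset_index()
)

# Co-movement of wages and consumption within each cohort-education cell.
print(panel.groupby(["cohort", "educ"])[["consumption", "wage"]].corr())
```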

    Distributional outcomes of a decentralized welfare program

It is common for central governments to delegate authority over the targeting of welfare programs to local community organizations, which may be better informed about who is poor, though possibly less accountable for getting the money to the local poor, while the center retains control over how much goes to each local region. The authors outline a theoretical model of the interconnected behavior of the various actors in such a setting. The model's information structure provides scope for econometric identification. Applying the model to data for a specific program in Bangladesh, they find that overall targeting was mildly pro-poor, mostly because of successful targeting within villages. But this varied across villages. Although some village characteristics promoted better targeting, these were generally not the same characteristics that attracted resources from the center. The authors observe that the center's desire for broad geographic coverage appears to have severely constrained the scope for pro-poor village targeting. Moreover, poor villages tended not to be better at reaching their poor. The authors find some evidence that local institutions matter: the presence of cooperatives for farmers and the landless appears to be associated with more pro-poor program targeting, while the presence of recreational clubs has the opposite effect. Sometimes the benefits of decentralized social programs are captured by local elites, depending on the type of spending being decentralized. When public spending is on a private (excludable) good, and there is no self-targeting mechanism to ensure that only the poor participate, there is ample scope for local mis-targeting.
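The within-village versus overall targeting comparison can be illustrated with a simple targeting differential (mean transfer to poor households minus mean transfer to non-poor ones). This is a generic measure, not necessarily the paper's estimator, and the data layout is assumed:

```python
# Hedged sketch of a simple "targeting differential": mean transfer to the
# poor minus mean transfer to the non-poor, overall and village by village.
# The DataFrame layout (village, poor, transfer) is assumed, not the paper's.
import pandas as pd

df = pd.read_csv("program_allocations.csv")  # hypothetical input

def targeting_differential(d: pd.DataFrame) -> float:
    # Positive values indicate pro-poor targeting.
    return d.loc[d["poor"] == 1, "transfer"].mean() - \
           d.loc[d["poor"] == 0, "transfer"].mean()

overall = targeting_differential(df)
within = df.groupby("village").apply(targeting_differential)

print(f"overall differential: {overall:.3f}")
print(f"mean within-village differential: {within.mean():.3f}")
```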

    Performance modelling of opportunistic forwarding with exact knowledge

The Delay Tolerant Networking paradigm aims to enable communications in disconnected environments where traditional protocols would fail. Opportunistic networks are delay tolerant networks whose nodes are typically the users' personal mobile devices. Communications in an opportunistic network rely on the mobility of users: each message is forwarded from node to node, according to a hop-by-hop decision process that selects the node that is better suited for bringing the message closer to its destination. Despite the variety of forwarding protocols that have been proposed in recent years, there is no reference framework for the performance modelling of opportunistic forwarding. In this paper we start to fill this gap by proposing an analytical model for the expected delay and the expected number of hops experienced by messages when delivered in an opportunistic fashion. This model seamlessly integrates both social-aware and social-oblivious single-copy forwarding protocols, as well as different hypotheses for user contact dynamics. The proposed framework is used to derive bounds on the expected delay under homogeneous and heterogeneous contact patterns. We find that, in heterogeneous settings, a finite expected delay can be guaranteed not only when nodes' inter-meeting times follow an exponential or power law with exponential cut-off distribution, but also when they are power law distributed, as long as conditions weaker than those derived by Chaintreau et al. [1] for the homogeneous scenario are satisfied.
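As a toy illustration of the kind of quantity the model captures (not the paper's framework itself), the sketch below estimates expected multi-hop delay when each hop waits for an exponentially distributed contact, and checks it against the closed-form sum of mean waits:

```python
# Illustrative Monte Carlo sketch (not the paper's model): a message crosses
# a fixed sequence of hops, waiting an Exp(rate) contact at each one.
# The per-hop contact rates below are made-up numbers.
import numpy as np

rng = np.random.default_rng(0)
rates = [1.0, 0.5, 2.0]     # hypothetical per-hop contact rates
n_messages = 100_000

# Delay = sum of independent exponential waits, one per hop.
waits = np.column_stack([rng.exponential(1.0 / r, n_messages) for r in rates])
delays = waits.sum(axis=1)

analytical = sum(1.0 / r for r in rates)    # E[delay] = sum of 1/rate
print(f"simulated mean delay:  {delays.mean():.3f}")
print(f"analytical mean delay: {analytical:.3f}")
```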

Performance modelling of opportunistic forwarding under heterogeneous mobility

The Delay Tolerant Networking paradigm aims to enable communications in disconnected environments where traditional protocols would fail. Opportunistic networks are delay tolerant networks whose nodes are typically the users' personal mobile devices. Communications in an opportunistic network rely on the mobility of users: each message is forwarded from node to node, according to a hop-by-hop decision process that selects the node that is better suited for bringing the message closer to its destination. Despite the variety of forwarding protocols that have been proposed in recent years, there is no reference framework for the performance modelling of opportunistic forwarding. In this paper we start to fill this gap by proposing an analytical model for the first two moments of the delay and the number of hops experienced by messages when delivered in an opportunistic fashion. This model seamlessly integrates both social-aware and social-oblivious single-copy forwarding protocols, as well as different hypotheses for user contact dynamics. More specifically, the model can be solved exactly in the case of exponential and Pareto inter-meeting times, two popular cases that have emerged from the literature on human mobility analysis. In order to exemplify how the proposed framework can be used, we discuss its application to two case studies with different mobility settings. Finally, we discuss how the framework can also be solved exactly when inter-meeting times follow a hyper-exponential distribution. This case is particularly relevant as hyper-exponential distributions are able to approximate the large class of high-variance distributions (distributions with coefficient of variation greater than one), which are the most challenging, e.g., from the delay standpoint.
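The hyper-exponential property the abstract leans on is easy to verify numerically: any nondegenerate mixture of two exponentials has a coefficient of variation above one. A minimal check with arbitrary example parameters:

```python
# Quick check of the property the abstract relies on: a two-phase
# hyper-exponential mixture has coefficient of variation (CV) greater than
# one, which is what lets it approximate high-variance inter-meeting times.
# The mixture parameters below are arbitrary examples.
import math

p, lam1, lam2 = 0.2, 0.1, 2.0   # branch prob. and the two exponential rates

mean = p / lam1 + (1 - p) / lam2
second_moment = 2 * p / lam1**2 + 2 * (1 - p) / lam2**2
cv = math.sqrt(second_moment - mean**2) / mean

print(f"mean = {mean:.3f}, CV = {cv:.3f}")  # CV > 1 whenever the rates differ
```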

    The Extreme Risk of Personal Data Breaches & The Erosion of Privacy

Personal data breaches from organisations, enabling mass identity fraud, constitute an extreme risk. This risk worsens daily as an ever-growing amount of personal data is stored by organisations and on-line, and the attack surface surrounding this data becomes larger and harder to secure. Further, breached information is distributed and accumulates in the hands of cyber criminals, thus driving a cumulative erosion of privacy. Statistical modeling of breach data from 2000 through 2015 provides insights into this risk: a current maximum breach size of about 200 million is detected, and is expected to grow by fifty percent over the next five years. The breach sizes are found to be well modeled by an extremely heavy tailed truncated Pareto distribution, with tail exponent parameter decreasing linearly from 0.57 in 2007 to 0.37 in 2015. With this current model, given that a breach contains more than fifty thousand items, there is a ten percent probability that it exceeds ten million. A size effect is unearthed where both the frequency and severity of breaches scale with organisation size like s^0.6. Projections indicate that the total amount of breached information is expected to double from two to four billion items within the next five years, eclipsing the population of users of the Internet. This massive and uncontrolled dissemination of personal identities raises fundamental concerns about privacy.
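The conditional exceedance claim can be sanity-checked under a plain, untruncated Pareto tail, for which P(X > b | X > a) = (b/a)^(-alpha); the paper's truncated model pushes the number down toward the stated ten percent, so the plain-tail figure is an upper bound:

```python
# Back-of-the-envelope check of the abstract's conditional exceedance claim
# under a plain (untruncated) Pareto tail: P(X > b | X > a) = (b / a)**(-alpha).
# The paper's model is a *truncated* Pareto, which lowers this probability
# toward the stated ten percent; the plain-tail number is an upper bound.
a = 50_000        # breach contains more than fifty thousand items
b = 10_000_000    # probability of exceeding ten million
for alpha in (0.57, 0.37):   # tail exponents reported for 2007 and 2015
    print(f"alpha={alpha}: P(X > 10M | X > 50k) ~ {(b / a) ** -alpha:.3f}")
```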

    Residual Inter-Contact Time for Opportunistic Networks with Pareto Inter-Contact Time: Two Nodes Case

PDPTA'15: The 21st International Conference on Parallel and Distributed Processing Techniques and Applications, Jul 27-30, 2015, Las Vegas, NV, USA.
Opportunistic networks (OppNets) are appealing for many applications, such as wildlife monitoring, disaster relief and mobile data offloading. In such a network, a message arriving at a mobile node can be transmitted to another mobile node when the two opportunistically move into each other's transmission range (they are then said to be in contact), and after multiple similar hop-by-hop transmissions the message finally reaches its destination. Therefore, for one message, the time interval from its arrival at a mobile node to the time that node contacts another node constitutes an essential part of the message's whole delay. Thus, studying the stochastic properties of this time interval between two nodes lays a solid foundation for evaluating the whole message delay in OppNets. Note that this time interval lies within the time interval between two consecutive node contacts (called the inter-contact time) and is referred to as the residual inter-contact time. In this paper, we derive the closed-form distribution of the residual inter-contact time. First, we formulate the contact process of a pair of mobile nodes as a renewal process, where the inter-contact time follows the popular Pareto distribution. Then, we derive, based on renewal theory, closed-form results for the transient distribution of the residual inter-contact time and also its limiting distribution. Our theoretical results on the distribution of the residual inter-contact time are validated by simulations.
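The limiting result being validated here is the classic renewal-theory one: at a random observation instant, the mean residual time equals E[X^2] / (2 E[X]). Below is a simulation sketch with arbitrary Pareto parameters (shape above two, so the limit is finite); it illustrates the limiting behaviour only, not the paper's transient closed form:

```python
# Simulation sketch of the residual inter-contact time for a renewal contact
# process with Pareto inter-contact times, checking the renewal-theory limit
# E[R] = E[X^2] / (2 E[X]). Shape/scale values are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(1)
alpha, xm = 3.0, 1.0                      # Pareto shape and scale (alpha > 2)

# Inter-contact times X ~ Pareto(alpha, xm), built by inverse transform.
x = xm * rng.uniform(size=1_000_000) ** (-1.0 / alpha)
epochs = np.cumsum(x)

# Observe the process at uniformly random instants well inside the horizon,
# then measure the time until the next contact (the residual).
t_obs = rng.uniform(0.1, 0.9, size=100_000) * epochs[-1]
residuals = epochs[np.searchsorted(epochs, t_obs)] - t_obs

ex  = alpha * xm / (alpha - 1)            # E[X]
ex2 = alpha * xm**2 / (alpha - 2)         # E[X^2]
print(f"simulated  E[R]: {residuals.mean():.3f}")
print(f"analytical E[R]: {ex2 / (2 * ex):.3f}")
```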

    The modelling of operational risk: experience with the analysis of the data collected by the Basel Committee

The revised Basel Capital Accord requires banks to meet a capital requirement for operational risk as part of an overall risk-based capital framework. Three distinct options for calculating operational risk charges are proposed (Basic Approach, Standardised Approach, Advanced Measurement Approaches), reflecting increasing levels of risk sensitivity. Since 2001, the Risk Management Group of the Basel Committee has been performing specific surveys of banks' operational loss data, with the main purpose of obtaining information on the industry's operational risk experience, to be used for the refinement of the capital framework and for the calibration of the regulatory coefficients. The second loss data collection was launched in the summer of 2002: the 89 banks participating in the exercise provided the Group with more than 47,000 observations, grouped by eight standardised Business Lines and seven Event Types. A summary of the data collected, which focuses on the description of the range of individual gross loss amounts and of the distribution of the banks' losses across the business lines/event types, was returned to the industry in March 2003. The objective of this paper is to move forward with respect to that document, by illustrating the methodologies and the outcomes of the inferential analysis carried out on the data collected through 2002. To this end, after pooling the individual banks' losses according to a Business Line criterion, the operational riskiness of each Business Line data set is explored using empirical and statistical tools. The work aims, first of all, to compare the sensitivity of conventional actuarial distributions and of models stemming from Extreme Value Theory in representing the highest percentiles of the data sets: the exercise shows that the extreme value model, in its Peaks Over Threshold representation, explains the behaviour of the operational risk data in the tail area well. Then, measures of the severity and frequency of the large losses are obtained and, by a proper combination of these estimates, a bottom-up operational risk capital figure is computed for each Business Line. Finally, for each Business Line and for the eight Business Lines as a whole, the contributions of the expected losses to the capital figures are evaluated, and the relationships between the capital charges and the corresponding average levels of Gross Income are determined and compared with the current coefficients envisaged in the simplified approaches of the regulatory framework.

Keywords: operational risk, heavy tails, conventional inference, Extreme Value Theory, Peaks Over Threshold, median shortfall, Point Process of exceedances, capital charge, Business Line, Gross Income, regulatory coefficients
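The Peaks Over Threshold step can be sketched end to end: pick a high threshold, fit a Generalized Pareto Distribution to the excesses, and invert the fit for an extreme quantile. The losses below are synthetic (the Basel data are confidential), so the numbers are purely illustrative:

```python
# Hedged sketch of the Peaks Over Threshold step described in the abstract:
# fit a Generalized Pareto Distribution (GPD) to loss excesses over a high
# threshold and back out a high quantile (a capital-figure-style number).
import numpy as np
from scipy.stats import genpareto, lognorm

rng = np.random.default_rng(2)
# Synthetic heavy-tailed losses standing in for the confidential loss data.
losses = lognorm(s=2.0, scale=50_000).rvs(size=20_000, random_state=rng)

u = np.quantile(losses, 0.95)             # threshold at the 95th percentile
excesses = losses[losses > u] - u

# Fit the GPD to the excesses (location pinned at 0, as POT prescribes).
xi, _, sigma = genpareto.fit(excesses, floc=0)

# POT quantile estimator: q_p = u + (sigma/xi) * ((n/Nu * (1-p))**(-xi) - 1)
n, nu, p = len(losses), len(excesses), 0.999
q = u + sigma / xi * ((n / nu * (1 - p)) ** (-xi) - 1)

print(f"xi={xi:.3f}, sigma={sigma:,.0f}, 99.9% quantile ~ {q:,.0f}")
```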