449 research outputs found

    Increasing National Pension Premium Defaulters and Dropouts in Japan

    This paper investigates why so many people are premium payment defaulters or dropouts from the national pension system, using household-level data from a Japanese government survey. The major results can be summarized as follows: (1) the dropout probability of younger cohorts does not differ significantly from that of older cohorts; (2) the unemployed or jobless, individuals with few financial assets, and people who do not own their homes, i.e., borrowing-constrained individuals, are more likely to drop out from the national pension; and (3) the probability of dropping out of the national pension system declines abruptly at around the age of 36.
    Keywords: Intergenerational inequality, Liquidity constraint, National pension

    V-Cache: Towards Flexible Resource Provisioning for Multi-tier Applications in IaaS Clouds

    Although the resource elasticity offered by Infrastructure-as-a-Service (IaaS) clouds opens up opportunities for elastic application performance, it also poses challenges to application management. Cluster applications, such as multi-tier websites, further complicate management, requiring not only accurate capacity planning but also proper partitioning of the resources into a number of virtual machines. Instead of burdening cloud users with complex management, we move the task of determining the optimal resource configuration for cluster applications to cloud providers. We find that a structural reorganization of multi-tier websites, by adding a caching tier that runs on resources debited from the original resource budget, significantly boosts application performance and reduces resource usage. We propose V-Cache, a machine learning based approach to flexible provisioning of resources for multi-tier applications in clouds. V-Cache transparently places a caching proxy in front of the application. It uses a genetic algorithm to identify the incoming requests that benefit most from caching and dynamically resizes the cache space to accommodate these requests. We develop a reinforcement learning algorithm to optimally allocate the remaining capacity to other tiers. We have implemented V-Cache on a VMware-based cloud testbed. Experiment results with the RUBiS and WikiBench benchmarks show that V-Cache outperforms a representative capacity management scheme and a cloud-cache based resource provisioning approach by at least 15% in performance, and achieves at least 11% and 21% savings on CPU and memory resources, respectively.
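    The abstract does not spell out V-Cache's reinforcement learning algorithm, so the following is only a loose, hypothetical sketch of the general idea of learning a capacity allocation from observed rewards. It reduces the problem to a stateless bandit over a handful of candidate allocations; `reward_fn`, the action set, and all parameters are invented for illustration and are not V-Cache's actual design.

```python
import random

def learn_allocation(reward_fn, actions, episodes=500, alpha=0.5, eps=0.2, seed=0):
    # Stateless epsilon-greedy sketch: each "action" is a candidate capacity
    # allocation; reward_fn returns a performance score for trying it.
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}
    for _ in range(episodes):
        # Explore a random allocation with probability eps, else exploit.
        a = rng.choice(actions) if rng.random() < eps else max(q, key=q.get)
        # Move the value estimate toward the observed reward.
        q[a] += alpha * (reward_fn(a) - q[a])
    return max(q, key=q.get)

# Toy usage: allocations 1..4 units to the cache tier; 3 is (by construction) best.
best = learn_allocation(lambda a: -abs(a - 3), actions=[1, 2, 3, 4])
```
    A full RL formulation would also track state (load, cache hit rate) and use a proper update rule such as Q-learning; this sketch only shows the explore/exploit loop.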

    Training Uncertainty-Aware Classifiers with Conformalized Deep Learning

    Deep neural networks are powerful tools to detect hidden patterns in data and leverage them to make predictions, but they are not designed to understand uncertainty and estimate reliable probabilities. In particular, they tend to be overconfident. We begin to address this problem in the context of multi-class classification by developing a novel training algorithm producing models with more dependable uncertainty estimates, without sacrificing predictive power. The idea is to mitigate overconfidence by minimizing a loss function, inspired by advances in conformal inference, that quantifies model uncertainty by carefully leveraging hold-out data. Experiments with synthetic and real data demonstrate this method can lead to smaller conformal prediction sets with higher conditional coverage, after exact calibration with hold-out data, compared to state-of-the-art alternatives.
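    For context on the conformal prediction sets the abstract refers to (this is the standard split-conformal calibration step, not the paper's novel training loss), a minimal sketch assuming softmax probabilities are available for a held-out calibration set:

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split-conformal sets: include every class whose conformity score
    falls within a quantile threshold calibrated on hold-out data."""
    n = len(cal_labels)
    # Conformity score: one minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, level, method="higher")
    # A test point's set contains every class scoring at most qhat.
    return [set(np.where(1.0 - p <= qhat)[0]) for p in test_probs]
```
    Overconfident models yield skewed calibration scores and hence larger, less informative sets; the paper's contribution is a training objective that shrinks these sets while improving conditional coverage.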

    Quantifying the Performance Benefits of Partitioned Communication in MPI

    Partitioned communication was introduced in MPI 4.0 as a user-friendly interface to support pipelined communication patterns, particularly common in the context of MPI+threads. It provides the user with the ability to divide a global buffer into smaller independent chunks, called partitions, which can then be communicated independently. In this work we first model the performance gain that can be expected when using partitioned communication. Next, we describe the improvements we made to MPICH to enable those gains and provide a high-quality implementation of MPI partitioned communication. We then evaluate partitioned communication in various common use cases and assess the performance in comparison with other MPI point-to-point and one-sided approaches. Specifically, we first investigate two scenarios commonly encountered for small partition sizes in a multithreaded environment: thread contention and overhead of using many partitions. We propose two solutions to alleviate the measured penalty and demonstrate their use. We then focus on large messages and the gain obtained when exploiting the delay resulting from computations or load imbalance. We conclude with our perspectives on the benefits of partitioned communication and the various results obtained.
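    To illustrate the MPI 4.0 interface the abstract describes, here is a minimal sketch of the pipelining pattern: the sender calls MPI_Pready on each partition as soon as its chunk is computed, so transfer overlaps with computation. The partition count, chunk size, and fill loop are illustrative, not taken from the paper; running it requires an MPI 4.0 implementation and at least two ranks.

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int parts = 8, count = 1024;   /* 8 partitions of 1024 doubles (illustrative) */
    double *buf = malloc(parts * count * sizeof(double));
    MPI_Request req;

    if (rank == 0) {
        /* Persistent partitioned send: set up once, start, then mark
         * each partition ready as soon as its data is produced. */
        MPI_Psend_init(buf, parts, count, MPI_DOUBLE, 1, 0,
                       MPI_COMM_WORLD, MPI_INFO_NULL, &req);
        MPI_Start(&req);
        for (int p = 0; p < parts; p++) {
            for (int i = 0; i < count; i++)   /* "compute" one chunk */
                buf[p * count + i] = p + i;
            MPI_Pready(p, req);               /* this chunk may ship immediately */
        }
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Request_free(&req);
    } else if (rank == 1) {
        MPI_Precv_init(buf, parts, count, MPI_DOUBLE, 0, 0,
                       MPI_COMM_WORLD, MPI_INFO_NULL, &req);
        MPI_Start(&req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);    /* MPI_Parrived could poll per partition */
        MPI_Request_free(&req);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```
    In an MPI+threads setting, each thread can own a partition and call MPI_Pready independently, which is the pipelined pattern whose gains the paper models and measures.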