
    What is "Pro-Poor"?

    Assessing whether distributional changes are "pro-poor" has become increasingly widespread in academic and policy circles. Starting from relatively general ethical axioms, this paper proposes simple graphical methods to test whether distributional changes are indeed pro-poor. Pro-poor standards are first defined. An important issue is whether these standards should be absolute or relative. Another issue is whether pro-poor judgements should put relatively more emphasis on the impact of growth upon the poorer of the poor. Having formalized the treatment of these issues, the paper describes various ways for checking whether broad classes of ethical judgements will declare a distributional change to be pro-poor.
    Keywords: Poverty, Inequality, Pro-poor growth
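The quantile-by-quantile comparison underlying such graphical tests can be sketched minimally in Python. This is a growth-incidence-curve style device with hypothetical income data; the paper's own dominance curves and pro-poor standards are more general:

```python
import numpy as np

def growth_incidence(pre, post, quantiles):
    """Quantile-by-quantile growth rates between two income distributions
    (one common graphical device for pro-poor assessment; not necessarily
    the exact construction used in the paper)."""
    q_pre = np.quantile(pre, quantiles)
    q_post = np.quantile(post, quantiles)
    return q_post / q_pre - 1.0

# Hypothetical pre- and post-change income samples
pre = np.array([10, 12, 15, 20, 30, 50])
post = np.array([12, 14, 17, 22, 32, 52])
quantiles = np.linspace(0.05, 0.95, 19)

g = growth_incidence(pre, post, quantiles)
overall = post.mean() / pre.mean() - 1.0

print(np.all(g >= 0))             # absolute standard: every quantile gains
print(np.all(g[:10] >= overall))  # relative standard: lower half beats mean growth
```

The two printed checks correspond to the absolute/relative distinction raised in the abstract: an absolute standard asks whether the poor gain at all, a relative one whether they gain more than average.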

    Poverty-Reducing Tax Reforms with Heterogeneous Agents

    The poverty impact of indirect tax reforms is analyzed using sequential stochastic dominance methods. This allows agents to differ in dimensions that cannot always be precisely captured within the usual money-metric indicators of living standards. Examples of such dimensions include household size and composition, temporal or spatial variation in price indices, and individual needs and "merits".
    Keywords: Poverty, Efficiency, Tax Reform, Stochastic Dominance
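As a minimal illustration of the dominance logic involved, a first-order stochastic dominance check over a range of candidate poverty lines might look as follows; the sample data and the restriction to first-order dominance in a single dimension are assumptions for the sketch, whereas the paper works with sequential dominance across heterogeneous agents:

```python
import numpy as np

def dominates_first_order(incomes_a, incomes_b, grid):
    """Check whether distribution A first-order dominates B:
    A's empirical CDF lies at or below B's at every test income level,
    i.e. A has no more poverty than B for any poverty line in the grid."""
    cdf = lambda x, z: np.mean(np.asarray(x)[:, None] <= z, axis=0)
    return bool(np.all(cdf(incomes_a, grid) <= cdf(incomes_b, grid)))

# Hypothetical post- and pre-reform income samples
pre = [10, 12, 15, 20, 30]
post = [11, 13, 16, 21, 30]
z_grid = np.linspace(5, 40, 100)  # range of candidate poverty lines

print(dominates_first_order(post, pre, z_grid))  # → True
print(dominates_first_order(pre, post, z_grid))  # → False
```

When dominance holds over the whole grid, every poverty index in the corresponding ethical class agrees that the reform reduces poverty, which is the attraction of the approach.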

    Programming and healing temperature effects on the efficiency of confined self-healing polymers

    Shape memory polymers are smart materials capable of fixing a temporary shape and returning to their initial shape in response to an external stimulus. Since the discovery and acknowledgment of their importance in the 1960s, shape memory polymers have been the subject of tremendous and continuous attention. A previous study on a biomimetic shape memory polymer (SMP) examined and validated the ability of a self-healing composite to repair and restore structural-length-scale damage using a close-then-heal (CTH) self-healing mechanism. The present study investigates how varying the temperature during both thermo-mechanical programming and shape recovery under three-dimensional (3-D) confinement affects healing efficiency. The polymer considered was a polystyrene shape memory polymer with 6% by volume of thermoplastic particle additives (copolyester) dispersed in the matrix. After fabrication, and determination of their glass transition temperature using DSC, the specimens underwent strain-controlled programming at a wide range of temperatures (20°C, 45°C, 60°C, 82°C, 100°C and 140°C) at a pre-strain level of 15%. Fracture was imposed using a three-point flexure apparatus and was followed by shape recovery at multiple temperatures (73°C, 100°C, 122°C and 148°C). Self-healing efficiency was evaluated from flexural strength measured immediately after programming and again after healing. The results were verified using EDS analysis and SEM inspection. The study infers that programming temperature only very slightly affects the recovered strength: programming the specimen above its glass transition temperature provided a marginal gain in strength recovery. Shape recovery (healing) temperature, however, was found to have a significant impact on self-healing efficiency. A sudden "boost" was noted around the melting temperature of the thermoplastic, with a significant increase in healing efficiency past the bonding temperature of the copolymer. Programming above the glass transition temperature of the composite and healing above the melting point of the thermoplastic additives ensured a maximum healing efficiency of up to 63% for the material considered.

    Not Just Pointing: Shannon's Information Theory as a General Tool for Performance Evaluation of Input Techniques

    This article was submitted to the ACM CHI conference in September 2017 and rejected in December 2017. It is currently under revision. Since input techniques serve, quite literally, to allow users to send information to the computer, the information-theoretic approach seems tailor-made for their quantitative evaluation. Shannon's framework makes it straightforward to measure the performance of any technique as an effective information transmission rate, in bits/s. Apart from pointing, however, evaluators of input techniques have generally ignored Shannon, contenting themselves with less rigorous speed and accuracy measurements borrowed from psychology. We plead for serious consideration in HCI of Shannon's information theory as a tool for the evaluation of all sorts of input techniques. We start with a primer on Shannon's basic quantities and the theoretical entities of his communication model. We then discuss how these concepts should be applied to the input-technique evaluation problem. Finally, we outline two concrete methodologies, one focused on the discrete timing and the other on the continuous time course of information gain by the computer.
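The central quantity in such an evaluation is the mutual information between intended and recognized symbols, which, divided by the mean trial time, yields a throughput in bits/s. A minimal sketch, assuming a hypothetical confusion matrix and trial time (not data from the article):

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(X;Y) in bits, from a joint probability matrix
    over (intended symbol, recognized symbol)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)  # marginal of intended symbols
    py = joint.sum(axis=0, keepdims=True)  # marginal of recognized symbols
    nz = joint > 0                         # skip zero cells (0 log 0 = 0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Hypothetical counts: rows = intended symbol, columns = recognized symbol
counts = np.array([[45, 5],
                   [5, 45]])
joint = counts / counts.sum()
bits_per_trial = mutual_information(joint)  # ≈ 0.53 bits with 10% errors
mean_trial_time = 0.8                       # hypothetical seconds per input
print(bits_per_trial / mean_trial_time)     # effective throughput in bits/s
```

Note that errors reduce the transmitted information below the nominal 1 bit per binary choice, which is exactly the penalty that raw speed/accuracy reporting fails to aggregate into a single figure.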

    A mean field model for the interactions between firms on the markets of their inputs

    We consider an economy made of competing firms which are heterogeneous in their capital and use several inputs for producing goods. Their consumption policy is fixed rationally by maximizing a utility, and their capital cannot fall below a given threshold (state constraint). We aim at modeling the long-term interactions between firms on the markets for the different inputs. The stationary equilibria are described by a system of coupled non-linear differential equations: a Hamilton-Jacobi equation describing the optimal control problem of a single atomistic firm; a continuity equation describing the distribution of the individual state variable (the capital) in the population of firms; and the equilibrium conditions on the markets for the production factors. We prove the existence of equilibria under suitable assumptions.
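Schematically, a stationary mean-field system of the kind described couples a Hamilton-Jacobi(-Bellman) equation, a stationary continuity equation, and market-clearing conditions. The following notation is assumed for illustration only and is not necessarily the paper's:

```latex
% u(k): value function of a firm with capital k; m(k): capital distribution;
% p: vector of input prices; c^*, s^*: optimal consumption and saving policies.
\begin{aligned}
\rho\, u(k) &= \max_{c \ge 0}\ \Big\{ U(c) + \big(f(k,p) - c\big)\, u'(k) \Big\}
  && \text{(HJB for a single atomistic firm)} \\
0 &= \frac{d}{dk}\Big[\big(f(k,p) - c^{*}(k)\big)\, m(k)\Big]
  && \text{(stationary continuity equation)} \\
\int x_j(k,p)\, m(k)\, dk &= S_j(p), \qquad j = 1,\dots,J
  && \text{(clearing of input market $j$)}
\end{aligned}
```

The coupling is through the prices $p$: each firm takes them as given in the HJB equation, while the market-clearing conditions determine them from the aggregate demand induced by the distribution $m$.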

    A survey on real-time 3D scene reconstruction with SLAM methods in embedded systems

    3D scene reconstruction with simultaneous localization and mapping (SLAM) is an important topic for transport systems such as drones, service robots and mobile AR/VR devices. Compared to a point-cloud representation, 3D reconstruction based on meshes and voxels is particularly useful for high-level functions such as obstacle avoidance or interaction with the physical environment. This article reviews the implementation of visual 3D scene reconstruction pipelines on resource-constrained hardware platforms. Real-time performance, memory management and low power consumption are critical for embedded systems. A conventional SLAM pipeline from sensors to 3D reconstruction is described, including the potential use of deep learning. The implementation of advanced functions with limited resources is detailed. Recent systems propose embedded implementations of 3D reconstruction methods at different granularities. The trade-off between required accuracy and resource consumption for real-time localization and reconstruction is one of the open research questions identified and discussed in this paper.