5,709 research outputs found

    Optimal Policy Towards Families with Different Amounts of Social Capital, in the Presence of Asymmetric Information and Stochastic Fertility

    We examine the effects of differences in social capital on first- and second-best transfers to families with children, in an asymmetric information context where the number of births, and the future earning capacity of each child that is born, are random variables. The probability that a couple has children is conditional on the level of reproductive activity undertaken. The probability that a child will have high earning ability is positively conditioned not only by the level of educational investment undertaken by the child’s parents, but also by the social capital of the latter. The optimal policy includes two transfers, one conditional on the number of births, the other on the children’s earning ability.
    Keywords: education, stochastic fertility, child benefits, pensions, scholarships, social capital, asymmetric information, multi-agency
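    A purely illustrative sketch of the kind of planner's problem such a setting suggests (not the paper's actual model): writing pi(e) for the probability of a birth given reproductive activity e, p(i, k) for the probability that a child turns out to have high earning ability given educational investment i and parental social capital k, b for the birth-contingent transfer and s for the ability-contingent transfer, the planner would choose b and s to maximize expected family welfare of the form

    \max_{b,\,s}\;\; \pi(e)\bigl[\,p(i,k)\,W_H(b,s) + (1-p(i,k))\,W_L(b,s)\,\bigr] + (1-\pi(e))\,W_0,

    subject to incentive-compatibility constraints reflecting the asymmetric information about e and i, where W_H, W_L and W_0 denote welfare with a high-ability child, a low-ability child, and no child, respectively.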

    Computational support for early stage architectural design

    The concepts underlying 'scenario-based' design are introduced. From the analysis of a number of structured interviews with practicing designers, key design scenarios are identified. These scenarios are then generalised, and outline guidelines are developed for structuring early stage design.

    Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork

    AI practitioners typically strive to develop the most accurate systems, making an implicit assumption that the AI system will function autonomously. However, in practice, AI systems are often used to provide advice to people in domains ranging from criminal justice and finance to healthcare. In such AI-advised decision making, humans and machines form a team, where the human is responsible for making final decisions. But is the most accurate AI the best teammate? We argue "No" -- predictable performance may be worth a slight sacrifice in AI accuracy. Instead, we argue that AI systems should be trained in a human-centered manner, directly optimized for team performance. We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves. To optimize team performance for this setting, we maximize the team's expected utility, expressed in terms of the quality of the final decision, the cost of verifying, and the individual accuracies of people and machines. Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance, and they show the benefit of modeling teamwork during training through improvements in expected team utility across datasets, considering parameters such as human skill and the cost of mistakes. We discuss the shortcomings of current optimization approaches beyond well-studied loss functions such as log-loss, and encourage future work on AI optimization problems motivated by human-AI collaboration.
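    As a rough sketch of what such an expected team-utility objective could look like (the functional form, parameter names, and cost values below are illustrative assumptions, not the paper's definitions), one can combine the probability that the human accepts the AI recommendation, the two accuracies, and the costs of mistakes and of solving the task unaided:

    # Illustrative sketch of an expected team-utility objective for AI-advised
    # decision making; all names, costs, and the utility form are assumptions
    # for exposition, not the paper's notation.
    def expected_team_utility(p_accept, p_ai_correct, p_human_correct,
                              value_correct=1.0, cost_mistake=5.0, cost_solve=0.2):
        """Expected utility of one decision made by the human-AI team."""
        # Case 1: the human accepts the AI recommendation.
        u_accept = p_ai_correct * value_correct - (1.0 - p_ai_correct) * cost_mistake
        # Case 2: the human overrides the AI and solves the task, paying a solving cost.
        u_solve = (p_human_correct * value_correct
                   - (1.0 - p_human_correct) * cost_mistake
                   - cost_solve)
        return p_accept * u_accept + (1.0 - p_accept) * u_solve

    # A more accurate but rarely accepted AI can score lower in expected team
    # utility than a slightly less accurate, more predictable one:
    print(expected_team_utility(p_accept=0.5, p_ai_correct=0.95, p_human_correct=0.8))  # ~0.15
    print(expected_team_utility(p_accept=0.9, p_ai_correct=0.90, p_human_correct=0.8))  # ~0.32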