
    Existence and monotonicity of solutions to moral hazard problems.

    This paper provides a method to prove the existence of solutions to some moral hazard problems with an infinite set of outcomes. The argument is based on the concept of nondecreasing rearrangement and on a supermodular version of the Hardy–Littlewood inequality. The method also provides qualitative properties of solutions. Both the case of wage contracts and that of insurance contracts are studied.
    Keywords: Supermodularity; Rearrangements; First-order approach; Moral hazard.
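
    A hedged statement, in its classical (Lorentz) form, of the rearrangement inequality the abstract relies on; the paper's exact supermodular formulation may differ. For a supermodular function L and comonotone versions of X and Y built from a single uniform variable U and the quantile functions q_X, q_Y,

    \[
      \mathbb{E}\big[L(X,Y)\big] \;\le\; \mathbb{E}\big[L(\tilde X,\tilde Y)\big],
      \qquad \tilde X = q_X(U),\quad \tilde Y = q_Y(U).
    \]

    Since \tilde X and \tilde Y have the same marginal distributions as X and Y, the comonotone coupling attains the largest value of the expectation among all couplings with those marginals.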

    Optimal Demand for Contingent Claims when Agents Have Law Invariant Utilities.

    We consider a class of law invariant utilities which contains Rank Dependent Expected Utility (RDU) and Cumulative Prospect Theory (CPT). We show that the computation of the demand for a contingent claim when utilities are within that class, although not as simple as in the Expected Utility (EU) case, is still tractable. Specific attention is given to the RDU and CPT cases. Numerous examples are fully solved.
    Keywords: Constrained Optimization; Quantiles; Demand; Law Invariant Utilities.
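
    A hedged reminder, with notation that may differ from the paper's, of why quantile functions are the natural tool for this class of utilities: for a utility u, a differentiable probability weighting function w and a claim X with quantile function q_X, Rank Dependent Expected Utility admits the representation

    \[
      \mathrm{RDU}(X) \;=\; \int_0^1 u\big(q_X(t)\big)\, w'(1-t)\,\mathrm{d}t ,
    \]

    so the criterion depends on X only through its quantile function, which is what makes a quantile-based treatment of the demand problem possible.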

    Supershear Rayleigh waves at a soft interface

    We report on the experimental observation of waves at a liquid foam surface propagating faster than the bulk shear waves. The existence of such waves has long been debated, but the recent observation of supershear events in a geophysical context has inspired us to search for their existence in a model viscoelastic system. An optimized fast profilometry technique allowed us to observe, on a liquid foam surface, the waves triggered by the impact of a projectile. At high impact velocity, we show that the expected subshear Rayleigh waves are accompanied by faster surface waves that can be identified as supershear Rayleigh waves.
    Comment: 4 pages, 4 figures, 2 supplementary videos.

    The use of figures in the evaluation of rural development policies: a quest for knowledge. Counting, to tell and understand

    Using figures seems to create rigour, objectivity and knowledge, and it facilitates comparisons. Consequently, an evaluation without figures is hardly conceivable. Nonetheless, objectivity and precision can be just an impression, given that figures are constructions built on a modelled description of reality. The simplification of reality operated through a figure can hide subtle elements regarding the way public policies work. While figures can legitimately be used in evaluation, not all kinds of figures and evaluations are equivalent. Therefore, our main research question is: what is the place of figures in evaluation? This contribution relates to research on policy evaluation, seen as a means to produce knowledge useful for understanding policies and their implementation. Based on the analysis of the evaluations of rural development policies conducted by the French ministry of agriculture, our goal is to increase practical and theoretical knowledge of those policies through well-designed evaluations.
    Keywords: Data, evaluation, methods, rural development policies, Agricultural and Food Policy. JEL: R58, Q18, H50.

    Pareto efficiency for the concave order and multivariate comonotonicity

    In this paper, we focus on efficient risk-sharing rules for the concave dominance order. For a univariate risk, it follows from a comonotone dominance principle, due to Landsberger and Meilijson [25], that efficiency is characterized by a comonotonicity condition. The goal of this paper is to generalize the comonotone dominance principle as well as the equivalence between efficiency and comonotonicity to the multi-dimensional case. The multivariate setting is more involved (in particular because there is no immediate extension of the notion of comonotonicity), and we address it using techniques from convex duality and optimal transportation.
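
    A hedged reminder of the univariate notions being generalized (notation ours, not necessarily the paper's): random variables Y_1, ..., Y_n are comonotone when

    \[
      \big(Y_i(\omega)-Y_i(\omega')\big)\big(Y_j(\omega)-Y_j(\omega')\big) \;\ge\; 0
      \qquad \text{for all } i,j \text{ and (almost) all } \omega,\omega' ,
    \]

    equivalently, for an allocation of a univariate risk X, when Y_i = f_i(X) with every f_i nondecreasing and \sum_i f_i(x) = x. A risk X is dominated by Y in the concave order when \mathbb{E}[u(X)] \le \mathbb{E}[u(Y)] for every concave function u.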

    Reclaiming the energy of a schedule: models and algorithms

    We consider a task graph to be executed on a set of processors. We assume that the mapping is given, say by an ordered list of tasks to execute on each processor, and we aim at optimizing the energy consumption while enforcing a prescribed bound on the execution time. While it is not possible to change the allocation of a task, it is possible to change its speed. Rather than using a local approach such as backfilling, we consider the problem as a whole and study the impact of several speed variation models on its complexity. For continuous speeds, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem for general directed acyclic graphs. We show that the classical dynamic voltage and frequency scaling (DVFS) model with discrete modes leads to an NP-complete problem, even if the modes are regularly distributed (an important particular case in practice, which we analyze as the incremental model). On the contrary, the VDD-hopping model leads to a polynomial solution. Finally, we provide an approximation algorithm for the incremental model, which we extend to the general DVFS model.
    Comment: A two-page extended abstract of this work appeared as a short presentation in SPAA'2011, while the long version has been accepted for publication in "Concurrency and Computation: Practice and Experience".
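
    A minimal sketch of the continuous-speed case under strong simplifying assumptions of ours (a single linear chain of tasks and power growing as the cube of the speed); it illustrates the energy/deadline trade-off, not the paper's closed-form formula for trees and series-parallel graphs.

    # Minimal sketch: optimal continuous speed for a single chain of tasks
    # under a deadline, assuming power ~ speed^3 (so energy per unit of
    # work ~ speed^2). These modelling choices are illustrative assumptions.

    def chain_energy(work, deadline):
        """Return (optimal_speed, energy) for a linear chain of tasks.

        work     : list of task weights (amount of computation per task)
        deadline : prescribed bound on total execution time
        """
        total_work = sum(work)
        # With a convex power function, running every task at the same
        # speed is optimal for a chain; the deadline is met with equality.
        speed = total_work / deadline
        energy = sum(w * speed ** 2 for w in work)  # w/s seconds at s^3 watts
        return speed, energy

    if __name__ == "__main__":
        print(chain_energy([2.0, 3.0, 5.0], deadline=4.0))  # speed 2.5, energy 62.5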

    A Framework for Agile Development of Component-Based Applications

    Agile development processes and component-based software architectures are two software engineering approaches that contribute to enabling the rapid building and evolution of applications. Nevertheless, few approaches have proposed a framework to combine agile and component-based development, allowing an application to be tested throughout the entire development cycle. To address this problem, we have built CALICO, a model-based framework that allows applications to be safely developed in an iterative and incremental manner. The CALICO approach relies on the synchronization of a model view, which specifies the application properties, and a runtime view, which contains the application in its execution context. Tests on the application specifications that require values known only at runtime are automatically integrated by CALICO into the running application, and the needed values captured at execution time are reified to resume the tests and inform the architect of potential problems. Any modification at the model level that does not introduce new errors is automatically propagated to the running system, allowing the safe evolution of the application. In this paper, we illustrate the CALICO development process with a concrete example and provide information on the current implementation of our framework.
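
    A purely illustrative, hypothetical sketch of the check-then-propagate loop described above; all class and method names are ours and do not correspond to CALICO's actual API.

    # Hypothetical sketch of synchronizing a model view with a runtime view:
    # a change is pushed to the running system only if the specification,
    # with the change applied, still validates. Names are illustrative only.

    class ModelView:
        def __init__(self):
            self.changes = []

        def static_errors(self, change):
            # Stand-in for static analysis of the application specification.
            return [] if change.get("valid", True) else ["invalid binding"]

        def apply(self, change):
            self.changes.append(change)

    class RuntimeView:
        def propagate(self, change):
            print("propagated to running system:", change["name"])

    def safe_evolution(model, runtime, change):
        """Apply a change only if it introduces no new errors."""
        if model.static_errors(change):
            print("change rejected, runtime left untouched:", change["name"])
            return
        model.apply(change)
        runtime.propagate(change)

    if __name__ == "__main__":
        model, runtime = ModelView(), RuntimeView()
        safe_evolution(model, runtime, {"name": "add-component", "valid": True})
        safe_evolution(model, runtime, {"name": "broken-binding", "valid": False})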

    On the use of the IAST method for gas separation studies in porous materials with gate-opening behavior

    Highly flexible nanoporous materials, exhibiting for instance gate opening or breathing behavior, are often presented as candidates for separation processes due to their supposed high adsorption selectivity. But this view, based on "classical" considerations of rigid materials and the use of the Ideal Adsorbed Solution Theory (IAST), does not necessarily hold in the presence of framework deformations. Here, we revisit some results from the published literature and show how proper inclusion of framework flexibility in the osmotic thermodynamic ensemble drastically changes the conclusions, in contrast to what intuition and standard IAST would yield. In all cases, the IAST method does not reproduce the gate-opening behavior in the adsorption of mixtures, and may overestimate the selectivity by up to two orders of magnitude.
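
    For readers unfamiliar with IAST itself, a minimal sketch of the standard rigid-framework calculation for a binary mixture with single-site Langmuir isotherms; the parameter values are invented for illustration, and this is precisely the "classical" treatment whose limits the paper discusses.

    # Two-component IAST with Langmuir isotherms: find the adsorbed-phase
    # composition that equalizes the reduced spreading pressures, then
    # compute the adsorption selectivity.

    from math import log
    from scipy.optimize import brentq

    def spreading_pressure(q_sat, b, p):
        """Reduced spreading pressure of a Langmuir isotherm at pressure p."""
        return q_sat * log(1.0 + b * p)

    def iast_selectivity(q_sat1, b1, q_sat2, b2, y1, p_total):
        """Adsorbed-phase mole fraction x1 and selectivity S_1/2 from IAST."""
        y2 = 1.0 - y1

        def mismatch(x1):
            # Equal spreading pressures at the hypothetical pure-component
            # pressures P*y_i/x_i define the adsorbed-phase composition.
            x2 = 1.0 - x1
            return (spreading_pressure(q_sat1, b1, p_total * y1 / x1)
                    - spreading_pressure(q_sat2, b2, p_total * y2 / x2))

        x1 = brentq(mismatch, 1e-9, 1.0 - 1e-9)
        selectivity = (x1 / y1) / ((1.0 - x1) / (1.0 - y1))
        return x1, selectivity

    if __name__ == "__main__":
        # Hypothetical CO2/CH4-like parameters (q_sat in mol/kg, b in 1/bar).
        print(iast_selectivity(q_sat1=5.0, b1=2.0, q_sat2=5.0, b2=0.2,
                               y1=0.5, p_total=1.0))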

    Co-Scheduling Algorithms for High-Throughput Workload Execution

    This paper investigates co-scheduling algorithms for processing a set of parallel applications. Instead of executing each application one by one, using a maximum degree of parallelism for each of them, we aim at scheduling several applications concurrently. We partition the original application set into a series of packs, which are executed one by one. A pack comprises several applications, each of them with an assigned number of processors, with the constraint that the total number of processors assigned within a pack does not exceed the maximum number of available processors. The objective is to determine a partition into packs, and an assignment of processors to applications, that minimize the sum of the execution times of the packs. We thoroughly study the complexity of this optimization problem, and propose several heuristics that exhibit very good performance on a variety of workloads, whose application execution times model profiles of parallel scientific codes. We show that co-scheduling leads to faster workload completion time and to faster response times on average (hence increasing system throughput and saving energy), for significant benefits over traditional scheduling from both the user and system perspectives.
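
    A toy sketch of the pack-based problem structure under assumptions of ours (an Amdahl-style speedup model, a fixed pack size, and simple greedy rules); it is not one of the heuristics studied in the paper.

    # Toy co-scheduling: build packs, allocate processors within each pack,
    # and pay the sum over packs of the slowest application in each pack.

    def exec_time(seq_time, procs, serial_fraction=0.1):
        """Amdahl-style execution time of one application on `procs` processors."""
        return seq_time * (serial_fraction + (1.0 - serial_fraction) / procs)

    def allocate(pack, total_procs):
        """Give every application one processor, then repeatedly hand the next
        spare processor to the currently slowest application in the pack."""
        procs = {app: 1 for app in pack}
        for _ in range(total_procs - len(pack)):
            slowest = max(pack, key=lambda a: exec_time(a[1], procs[a]))
            procs[slowest] += 1
        return procs

    def co_schedule(apps, total_procs, apps_per_pack):
        """Greedy packing: sort by decreasing sequential time, fill packs of a
        fixed size, and return the packs and the total cost (sum of pack times)."""
        ordered = sorted(apps, key=lambda a: a[1], reverse=True)
        packs = [ordered[i:i + apps_per_pack]
                 for i in range(0, len(ordered), apps_per_pack)]
        cost = 0.0
        for pack in packs:
            procs = allocate(pack, total_procs)
            cost += max(exec_time(a[1], procs[a]) for a in pack)
        return packs, cost

    if __name__ == "__main__":
        # (name, sequential time) pairs; 8 processors, 2 applications per pack.
        apps = [("A", 40.0), ("B", 25.0), ("C", 20.0), ("D", 10.0)]
        print(co_schedule(apps, total_procs=8, apps_per_pack=2))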