    Contracting in the Shadow of the Law

    Draft version issued as NBER Working Paper No. 13960, April 2008. Final version available online at http://www3.interscience.wiley.com/
    Economic models of contract typically assume that courts enforce obligations based on verifiable events (corresponding to the legal rule of specific performance). As a matter of law, this is not the case. This leaves open the question of optimal contract design given the remedies actually used by the courts. This article shows that American standard-form construction contracts can be viewed as an efficient mechanism for implementing building projects given existing legal rules. A central feature of these contracts is the inclusion of governance covenants that shape the scope of authority and regulate the ex post bargaining power of the parties. Our model also implies that the legal remedies of mistake and impossibility, and the doctrine limiting damages for unforeseen events developed in Hadley v. Baxendale, are efficient solutions to the problem of implementing complex exchange.

    Inversion using a new low-dimensional representation of complex binary geological media based on a deep neural network

    Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is better suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
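
    The abstract above describes drawing latent vectors from an uncorrelated standard normal distribution, decoding them into model realizations, and using this low-dimensional parameterization inside a probabilistic inversion. The sketch below illustrates only that mechanism: decode and forward_model are hypothetical stand-ins (the paper uses a trained VAE decoder and hydraulic forward solvers), and the random-walk Metropolis sampler in the latent space is a generic choice, not the paper's inversion algorithm.

        import numpy as np

        def decode(z):
            # Hypothetical stand-in for the trained VAE decoder: maps a
            # low-dimensional latent vector z to a binary facies model.
            # Here a fixed random projection is thresholded for illustration.
            rng = np.random.default_rng(0)
            W = rng.standard_normal((40 * 40, z.size))
            return (W @ z > 0).reshape(40, 40).astype(float)

        def forward_model(m):
            # Hypothetical forward simulator (a flow solver in the paper);
            # replaced here by simple linear summaries of the facies model.
            return np.array([m.mean(), m[:20].mean(), m[:, :20].mean()])

        def log_likelihood(d_sim, d_obs, sigma=0.05):
            return -0.5 * np.sum(((d_sim - d_obs) / sigma) ** 2)

        # Random-walk Metropolis in the latent space. Because the prior on z
        # is standard normal by construction, the log-prior is -0.5 * ||z||^2.
        dim = 20                                  # latent (compressed) dimension
        d_obs = np.array([0.45, 0.50, 0.40])      # synthetic "observed" data
        rng = np.random.default_rng(1)
        z = np.zeros(dim)
        logp = log_likelihood(forward_model(decode(z)), d_obs) - 0.5 * z @ z

        samples = []
        for _ in range(2000):
            z_new = z + 0.1 * rng.standard_normal(dim)
            logp_new = (log_likelihood(forward_model(decode(z_new)), d_obs)
                        - 0.5 * z_new @ z_new)
            if np.log(rng.random()) < logp_new - logp:   # accept/reject step
                z, logp = z_new, logp_new
            samples.append(decode(z))

        print("posterior mean facies proportion:",
              np.mean([m.mean() for m in samples]))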

    Ensemble learning via feature selection and multiple transformed subsets: Application to image classification

    In the machine learning field, especially in classification tasks, the design and construction of the model are very important. Constructing the model from a limited set of features may bound the classification performance and prevent some algorithms from reaching the performance they could otherwise provide. To this end, ensemble learning methods have been proposed in the literature. The main goal of these methods is to learn a set of models that provide features or predictions whose joint use leads to better performance than a single model. In this paper, we propose three variants of a new, efficient ensemble learning approach that enhances the classification performance of a linear discriminant embedding method. As a case study we consider the efficient "Inter-class sparsity discriminative least square regression" (ICS_DLSR) method. We seek the estimation of an enhanced data representation. Instead of deploying multiple classifiers on top of the transformed features, we target the estimation of multiple extracted feature subsets obtained from multiple learned linear embeddings, each associated with a subset of ranked original features. Multiple feature subsets are used for estimating the transformations, and the derived extracted feature subsets are concatenated to form a single data representation vector that is used in the classification process. Many factors are studied and investigated in this paper, including parameter combinations, the number of models, different training percentages, and combinations of feature selection methods. The proposed approach has been benchmarked on image datasets of various sizes and types (faces, objects and scenes). The proposed scheme achieved competitive performance on four face image datasets (Extended Yale B, LFW-a, Georgia and FEI) as well as on the COIL20 object dataset and the Outdoor Scene dataset. We measured the performance of the proposed schemes against the single-model ICS_DLSR, RDA_GD, RSLDA, PCE, LDE, LDA, SVM and the KNN algorithm. The conducted experiments showed that the proposed approach can enhance classification performance efficiently compared to single-model learning and was able to outperform its competing methods.
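
    The pipeline described above, namely ranking the original features, forming multiple feature subsets, learning one linear embedding per subset, and concatenating the extracted features into a single representation before classification, can be sketched with off-the-shelf components. The sketch below substitutes LDA for the ICS_DLSR embedding and an ANOVA F-score ranking for the paper's feature selection methods, so it illustrates the ensemble-of-embeddings idea rather than the exact method; the dataset and subset sizes are arbitrary choices.

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.feature_selection import f_classif
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        X, y = load_digits(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                                  random_state=0)

        # Rank the original features (ANOVA F-score as a stand-in criterion).
        scores, _ = f_classif(X_tr, y_tr)
        order = np.argsort(np.nan_to_num(scores))[::-1]

        # Several nested subsets of top-ranked features, with one linear
        # embedding (LDA standing in for ICS_DLSR) learned per subset.
        subset_sizes = [16, 32, 48, 64]
        models = []
        for k in subset_sizes:
            idx = order[:k]
            lda = LinearDiscriminantAnalysis(n_components=9).fit(X_tr[:, idx], y_tr)
            models.append((idx, lda))

        def transform(X):
            # Concatenate the extracted feature subsets into one representation.
            return np.hstack([lda.transform(X[:, idx]) for idx, lda in models])

        # A single classifier operates on the concatenated representation.
        clf = KNeighborsClassifier(n_neighbors=3).fit(transform(X_tr), y_tr)
        print("ensemble-of-embeddings accuracy:", clf.score(transform(X_te), y_te))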

    Safe Semi-Supervised Learning with Sparse Graphs

    There has been substantial interest from both computer science and statistics in developing methods for graph-based semi-supervised learning. The attraction to the area stems from several challenging applications, from academia and industry, in which few observations come with training responses while large amounts of data are available overall. Ample evidence has demonstrated the value of several of these methods on real data applications, but it should be kept in mind that they rely heavily on smoothness assumptions. The general framework for graph-based semi-supervised learning is to optimize a smooth function over the nodes of a proximity graph constructed from the feature data, which is extremely time consuming because conventional graph construction methods generally create a dense graph. Lately the interest has shifted to developing faster and more efficient graph-based techniques for larger data, but this comes at the cost of reduced prediction accuracy and a narrower range of applications. The focus of this research is to develop a graph-based semi-supervised model that attains fast convergence without losing performance and with wider applicability. The key feature of the semi-supervised model is that it does not fully rely on the smoothness assumptions and performs adequately on real data. Another model is proposed for the case where multiple views are available. Empirical analysis with real and simulated data showed the competitive performance of the methods against other machine learning algorithms.
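
    As a concrete illustration of the setting described above, with many unlabeled points, a handful of labels, and a sparse proximity graph to keep propagation cheap, the following sketch uses scikit-learn's LabelSpreading with a k-nearest-neighbour kernel. It is a generic baseline on toy data, not the model proposed in this work, and the dataset, neighbourhood size, and clamping factor are arbitrary choices.

        import numpy as np
        from sklearn.datasets import make_moons
        from sklearn.semi_supervised import LabelSpreading

        # Many points, only a handful of labels: the usual semi-supervised setting.
        X, y = make_moons(n_samples=1000, noise=0.08, random_state=0)
        rng = np.random.default_rng(0)
        y_partial = np.full_like(y, -1)                 # -1 marks unlabeled points
        labeled = rng.choice(len(y), size=10, replace=False)
        y_partial[labeled] = y[labeled]

        # kernel='knn' builds a sparse k-nearest-neighbour graph instead of a
        # dense RBF affinity matrix, keeping graph construction and label
        # propagation cheap on larger data.
        model = LabelSpreading(kernel='knn', n_neighbors=7, alpha=0.2)
        model.fit(X, y_partial)

        mask = y_partial == -1
        print("accuracy on the unlabeled points:",
              (model.transduction_[mask] == y[mask]).mean())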