
    Probabilistic performance estimators for computational chemistry methods: Systematic Improvement Probability and Ranking Probability Matrix. I. Theory

    The comparison of benchmark error sets is an essential tool for the evaluation of theories in computational chemistry. The standard ranking of methods by their Mean Unsigned Error is unsatisfactory for several reasons linked to the non-normality of the error distributions and the presence of underlying trends. Complementary statistics have recently been proposed to mitigate such deficiencies, such as quantiles of the absolute-error distribution or the mean prediction uncertainty. We introduce here a new score, the systematic improvement probability (SIP), based on the direct system-wise comparison of absolute errors. Independently of the chosen scoring rule, the uncertainty of the statistics due to the incompleteness of the benchmark data sets is also generally overlooked, yet it is essential for appreciating the robustness of rankings. In the present article, we develop two indicators based on robust statistics to address this problem: P_{inv}, the inversion probability between two values of a statistic, and \mathbf{P}_{r}, the ranking probability matrix. We also demonstrate the essential contribution of the correlations between error sets to these score comparisons.
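    The two indicators above can be illustrated with a short numerical sketch. The snippet below is not the reference implementation from the article; it assumes two paired sets of errors over the same benchmark systems, and it estimates SIP as a paired frequency of improvement and P_{inv} by joint bootstrap resampling. The function names (sip, inversion_probability) and the default choice of the MUE as ranking statistic are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def sip(abs_err_1, abs_err_2):
            """Systematic improvement probability (sketch): fraction of benchmark
            systems for which method 2 has a smaller absolute error than method 1."""
            abs_err_1 = np.abs(np.asarray(abs_err_1))
            abs_err_2 = np.abs(np.asarray(abs_err_2))
            return np.mean(abs_err_2 < abs_err_1)

        def inversion_probability(err_1, err_2,
                                  statistic=lambda e: np.mean(np.abs(e)),
                                  n_boot=10_000):
            """Bootstrap estimate of P_inv: probability that the ranking of two
            methods by a chosen statistic (the MUE by default) is inverted when
            the benchmark set is resampled."""
            err_1, err_2 = np.asarray(err_1), np.asarray(err_2)
            n = err_1.size
            sign_ref = np.sign(statistic(err_1) - statistic(err_2))
            inversions = 0
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)   # joint (paired) resampling of systems
                if np.sign(statistic(err_1[idx]) - statistic(err_2[idx])) != sign_ref:
                    inversions += 1
            return inversions / n_boot

    Resampling the systems jointly is what preserves the correlation between the two error sets, whose contribution to these score comparisons the abstract highlights.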

    Probabilistic performance estimators for computational chemistry methods: the empirical cumulative distribution function of absolute errors

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users about the expected amplitude of prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely (1) the probability for a new calculation to have an absolute error below a chosen threshold, and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. These statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error of all benchmarking statistics depends on the size of the reference dataset; systematic publication of these standard errors would be very helpful for assessing the statistical reliability of benchmarking conclusions. Supplementary material: https://github.com/ppernot/ECDF
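    The two ECDF-based statistics, and the standard error the abstract asks to be reported, can be sketched as follows; the function names and the bootstrap estimate of the standard error are illustrative assumptions, not taken from the article.

        import numpy as np

        def prob_below(errors, threshold):
            """Empirical probability that the absolute error of a new calculation
            falls below a chosen threshold, read from the ECDF of unsigned errors."""
            return np.mean(np.abs(np.asarray(errors)) <= threshold)

        def error_amplitude(errors, confidence=0.95):
            """Maximal error amplitude expected with a chosen confidence level,
            i.e. the corresponding quantile of the unsigned-error distribution."""
            return np.quantile(np.abs(np.asarray(errors)), confidence)

        def standard_error(errors, statistic, n_boot=10_000, seed=0):
            """Bootstrap standard error of any benchmarking statistic, which
            shrinks as the size of the reference dataset grows."""
            rng = np.random.default_rng(seed)
            errors = np.asarray(errors)
            n = errors.size
            boot = [statistic(errors[rng.integers(0, n, n)]) for _ in range(n_boot)]
            return np.std(boot, ddof=1)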

    Methodology for automatic recovering of 3D partitions from unstitched faces of non-manifold CAD models

    Data exchanges between different software packages are currently used in industry to speed up the preparation of digital prototypes for Finite Element Analysis (FEA). Unfortunately, due to data loss, the yield of the transfer of manifold models rarely reaches 1. In the case of non-manifold models, the transfer results are even less satisfactory. This is particularly true for partitioned 3D models: during data transfers based on the well-known exchange formats, all 3D partitions are generally lost. Partitions are mainly used for preparing the mesh models required for advanced FEA: mapped meshing, material separation, definition of specific boundary conditions, etc. This paper sets up a methodology to automatically recover 3D partitions from exported non-manifold CAD models in order to increase the yield of the data exchange. Our fully automatic approach is based on three steps. First, starting from a set of potentially disconnected faces, the CAD model is stitched. Then, the shells used to create the 3D partitions are recovered using an iterative propagation strategy which starts from the so-called manifold vertices. Finally, using the identified closed shells, the 3D partitions are reconstructed. The proposed methodology has been validated on academic as well as industrial examples. This work has been carried out under a research contract between the Research and Development Direction of the EDF Group and Arts et Métiers ParisTech Aix-en-Provence.
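    The first (stitching) step of the approach can be sketched as follows, under the assumption that the exported faces arrive as triangle lists whose shared vertices have been duplicated; merging vertices that coincide within a tolerance reconnects the unstitched faces so that shell recovery can proceed. The data layout and function name are illustrative, not the authors' implementation.

        import numpy as np

        def stitch_faces(vertices, triangles, tol=1e-6):
            """Stitching sketch: merge vertices that coincide within `tol` so that
            faces exported as disconnected patches share vertex indices again.
            `vertices` is an (N, 3) array, `triangles` an (M, 3) index array."""
            vertices = np.asarray(vertices, dtype=float)
            triangles = np.asarray(triangles, dtype=int)
            # Quantize coordinates on a grid of size `tol` and map all vertices
            # falling in the same cell to a single representative index.
            keys = np.round(vertices / tol).astype(np.int64)
            seen, kept = {}, []
            remap = np.empty(len(vertices), dtype=int)
            for i, key in enumerate(map(tuple, keys)):
                if key not in seen:
                    seen[key] = len(kept)
                    kept.append(vertices[i])
                remap[i] = seen[key]
            return np.asarray(kept), remap[triangles]

    Once the faces share vertices, closed shells can be identified from edge-to-face adjacency and propagated from the manifold vertices, as described in the paper.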

    The impact of inter-organizational management control systems on performance: a longitudinal case study of a supplier relation in automotive.

    This study investigates whether an appropriate management control design of supplier relations is associated with better performance. Although management control systems (MCSs) are found to be contingent on situational characteristics, it remains unclear whether this contingency fit contributes to performance. In order to illustrate the existence and refine the dynamics of the fit-performance association, we perform a longitudinal case study of an exemplary automotive manufacturer-supplier relation that was subject to considerable change and severe performance difficulties over time. As proposed, the case findings show that if the supplier is incapable of dealing with changed contingencies, an MCS contingency misfit is associated with poor operational performance. However, this misfit is only temporary, as the manufacturer adapts the MCS to fit the changed supplier relation and regain operational performance. In addition, the longitudinal study suggests that trust and basic formal control (control continuously exercised under all circumstances) are complements, while trust substitutes for extra formal control (control set up on top of basic formal control). Finally, the data indicate a timing difference in the substitutive relation: the building up of extra formal control proceeds gradually, while the lowering happens almost immediately. Keywords: management control; trust; performance; supplier relationships; manufacturing; contingency theory; case research; automotive.

    Management control of supplier relationships in manufacturing: a case study in the automotive industry.

    This paper studies the management control design of supplier relationships in manufacturing, a supply chain phase currently under-explored. Compared to supplier relations during procurement and R&D, which research has found to be governed by a combination of formal and informal controls, supplier relations in manufacturing are more formalized, so that they could be governed by more formal and less informal control. To refine the management control system and the influencing contingencies, we propose a theoretical framework specifically adapted to the manufacturing stage. This framework is investigated through an in-depth case study of the supplier management control of a Volvo Cars production facility. We identify three types of suppliers, visualizing the associations in the framework and illustrating the framework’s explanatory power in (automotive) manufacturing. Furthermore, the case contradicts the expectation that supplier relations in the manufacturing phase are governed by little informal control, because the automaker highly values the role of trust building and social pressure. Most notably, a structured supplier team functions as a clan and establishes informal control among participating suppliers, which strengthens the automaker’s control over dyadic supplier relations. Keywords: management control; supplier relationships; manufacturing; contingency theory; case research.

    Repairing triangle meshes built from scanned point cloud

    The Reverse Engineering process consists of a succession of operations that aim at creating a digital representation of a physical model. The reconstructed geometric model is often a triangle mesh built from a point cloud acquired with a scanner. Depending on both the object complexity and the scanning process, some areas of the object's outer surface may never be accessible, thus inducing deficiencies in the point cloud and, as a consequence, holes in the resulting mesh. This is simply not acceptable in an integrated design process where the geometric models are shared between various applications (e.g. design, simulation, manufacturing). In this paper, we propose a complete toolbox to fill in these undesirable holes. The hole contour is first cleaned to remove badly shaped triangles caused by scanner noise. A topological grid is then inserted and deformed to satisfy blending conditions with the surrounding mesh. In our approach, the shape of the inserted mesh results from the minimization of a quadratic function based on a linear mechanical model that is used to approximate the curvature variation between the inner and surrounding meshes. Additional geometric constraints can also be specified to further shape the inserted mesh. The proposed approach is illustrated with examples coming from our prototype software.
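    The quadratic-minimization step can be sketched with a uniform graph Laplacian standing in for the paper's linear mechanical model: the contour vertices of the hole are held fixed and the free vertices of the inserted grid are positioned by solving a linear system. The data layout and function name below are illustrative assumptions.

        import numpy as np

        def fair_patch(vertices, edges, fixed):
            """Position the free vertices of an inserted patch by minimizing a
            quadratic (uniform-Laplacian) energy, with the contour vertices held
            fixed as boundary conditions.

            vertices: (N, 3) initial positions (contour positions must be exact)
            edges:    iterable of (i, j) index pairs of the patch edges
            fixed:    boolean mask of length N, True for contour vertices
            """
            x = np.asarray(vertices, dtype=float).copy()
            fixed = np.asarray(fixed, dtype=bool)
            n = len(x)
            L = np.zeros((n, n))                 # graph Laplacian of the patch
            for i, j in edges:
                L[i, i] += 1.0
                L[j, j] += 1.0
                L[i, j] -= 1.0
                L[j, i] -= 1.0
            free = ~fixed
            # Minimizing x^T L x with Dirichlet conditions gives
            # L_ff x_f = -L_fc x_c (assumes every free vertex is connected to the
            # contour directly or indirectly, so L_ff is invertible).
            A = L[np.ix_(free, free)]
            b = -L[np.ix_(free, fixed)] @ x[fixed]
            x[free] = np.linalg.solve(A, b)
            return x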