
    Augmenting Definitive Screening Designs

    Design of experiments is used to study the relationship between one or more response variables and several factors whose levels are varied. Response surface methodology (RSM) employs design-of-experiments techniques to decide whether changes in design variables can enhance or optimize a process. Such experiments are usually analyzed by fitting a second-order polynomial model. Some standard and classical response surface designs are 3^k Factorial Designs, Central Composite Designs (CCDs), and Box-Behnken Designs (BBDs). They can all be used to fit a second-order polynomial model efficiently and allow for some testing of the model's lack of fit. When performing multiple experiments is not feasible due to time, budget, or other constraints, recent literature suggests using a single experimental design capable of both factor screening and response surface exploration. Definitive Screening Designs (DSDs) are well-known three-level experimental designs. They are also called second-order screening designs, and they can estimate a second-order model in any subset of three factors. However, when the design has more than three active factors, a DSD can identify only the linear main effects and perhaps the largest second-order term. DSDs may also have trouble identifying active pure quadratic effects when two-factor interactions are present. In this dissertation, we propose several methods for augmenting definitive screening designs to improve estimability and efficiency. Improved sensitivity and specificity are also highlighted.
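
    For reference, the second-order polynomial model mentioned above has the standard RSM form (a sketch in conventional notation; the coefficient symbols are not taken from the dissertation itself):

        \[
        y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon
        \]

    where k is the number of factors, the \beta_{ii} terms are the pure quadratic effects, and the \beta_{ij} terms are the two-factor interactions that DSDs can struggle to disentangle.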

    Numerical convergence of pre-initial conditions on dark matter halo properties

    Generating pre-initial conditions (or particle loads) is the very first step in setting up a cosmological N-body simulation. In this work, we revisit the numerical convergence of pre-initial conditions on dark matter halo properties using a set of simulations that differ only in the initial particle load, i.e. grid, glass, and the newly introduced capacity constrained Voronoi tessellation (CCVT). We find that the median halo properties agree fairly well (i.e. within a convergence level of a few per cent) among simulations run from different initial loads. We also notice that for some individual haloes cross-matched among different simulations, the relative difference of their properties can sometimes reach several tens of per cent. By examining the evolution history of these poorly converged haloes, we find that they are usually merging haloes or haloes that have experienced recent merger events, and their merging processes in different simulations are out of sync, temporarily degrading the convergence of halo properties. We show that, compared to the simulation starting from an anisotropic grid load, the simulation with an isotropic CCVT load converges slightly better to the simulation with a glass load, which is also isotropic. Among simulations with different pre-initial conditions, haloes in higher-density environments tend to have their properties converge slightly better. Our results confirm that CCVT loads behave as well as the widely used grid and glass loads at small scales, and for the first time we quantify the convergence of two independent isotropic particle loads (i.e. glass and CCVT) on halo properties.
    Peer reviewed
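
    To illustrate what a particle load is, here is a minimal Python sketch (not from the paper; the array layout and the symmetric relative-difference definition are assumptions) that builds the simplest pre-initial condition, a uniform grid load, and computes the per-cent-level relative difference used to compare one halo property between two cross-matched runs:

        import numpy as np

        def grid_load(n_per_side: int, box_size: float) -> np.ndarray:
            """Uniform Cartesian grid particle load: n_per_side**3 particles in a periodic box."""
            # Cell-centred positions along one axis.
            edges = (np.arange(n_per_side) + 0.5) * box_size / n_per_side
            x, y, z = np.meshgrid(edges, edges, edges, indexing="ij")
            return np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)

        def relative_difference(prop_a: float, prop_b: float) -> float:
            """Relative difference of a halo property between two cross-matched runs, in per cent."""
            return 100.0 * abs(prop_a - prop_b) / (0.5 * (prop_a + prop_b))

        # Example: a 64^3 load in a 100 Mpc/h box, and two measurements of one halo mass.
        positions = grid_load(64, 100.0)              # shape (262144, 3)
        print(relative_difference(1.02e12, 0.98e12))  # ~4 per cent

    A glass or CCVT load would replace grid_load with an isotropic arrangement; the comparison step stays the same.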

    Semantic Equivariant Mixup

    Mixup is a well-established data augmentation technique that extends the training distribution and regularizes neural networks by creating 'mixed' samples based on the label-equivariance assumption, i.e., a proportional mixup of the input data results in the corresponding labels being mixed in the same proportion. However, previous mixup variants may fail to exploit the label-independent information in mixed samples during training, which usually carries richer semantic information. To further unlock the power of mixup, we first strengthen the previous label-equivariance assumption into the semantic-equivariance assumption, which states that a proportional mixup of the input data should lead to the corresponding representations being mixed in the same proportion. We then propose a generic mixup regularization at the representation level, which can further regularize the model with the semantic information in mixed samples. At a high level, the proposed semantic equivariant mixup (SEM) encourages the structure of the input data to be preserved in the representation space, i.e., a change in the input results in the obtained representation changing in the same way. Unlike previous mixup variants, which tend to over-focus on label-related information, the proposed method aims to preserve the richer semantic information in the input via the semantic-equivariance assumption, thereby improving the robustness of the model against distribution shifts. We conduct extensive empirical studies and qualitative analyses to demonstrate the effectiveness of our proposed method. The code for the manuscript is in the supplement.
    Comment: Under review
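
    As a concrete illustration of the two assumptions, here is a minimal PyTorch sketch (not the authors' implementation; the encoder/head split and the MSE penalty are one plausible instantiation of the representation-level regularizer, assumed for illustration):

        import torch
        import torch.nn.functional as F

        def sem_mixup_loss(encoder, head, x, y, alpha=1.0, reg_weight=1.0):
            """Classic mixup loss plus a representation-level equivariance penalty."""
            lam = torch.distributions.Beta(alpha, alpha).sample().item()
            perm = torch.randperm(x.size(0))
            x_mix = lam * x + (1 - lam) * x[perm]

            z_mix = encoder(x_mix)
            logits = head(z_mix)
            # Label-equivariance: mixed labels in the same proportion (standard mixup loss).
            ce = lam * F.cross_entropy(logits, y) + (1 - lam) * F.cross_entropy(logits, y[perm])

            # Semantic-equivariance: the representation of the mixed input should match
            # the proportional mix of the clean representations (target held fixed).
            with torch.no_grad():
                z_target = lam * encoder(x) + (1 - lam) * encoder(x[perm])
            reg = F.mse_loss(z_mix, z_target)

            return ce + reg_weight * reg

    The first term is ordinary mixup; the second pulls the mixed sample's representation towards the proportional mix of the clean representations, which is the equivariance the abstract describes.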