
    Augmenting Definitive Screening Designs

    Design of experiments is used to study the relationship between one or more response variables and several factors whose levels are varied. Response surface methodology (RSM) employs design-of-experiments techniques to decide whether changes in design variables can enhance or optimize a process. The resulting experiments are usually analyzed by fitting a second-order polynomial model. Some standard and classical response surface designs are 3^k factorial designs, Central Composite Designs (CCDs), and Box-Behnken Designs (BBDs). They can all be used to fit a second-order polynomial model efficiently and allow for some testing of the model's lack of fit. When performing multiple experiments is not feasible due to time, budget, or other constraints, recent literature suggests using a single experimental design capable of performing both factor screening and response surface exploration. Definitive Screening Designs (DSDs) are well-known three-level experimental designs. They are also called second-order screening designs, and they can estimate a second-order model in any subset of three factors. However, when the design has more than three active factors, only the linear main effects and perhaps the largest second-order term can be identified by a DSD. They may also have trouble identifying active pure quadratic effects when two-factor interactions are present. In this dissertation, we propose several methods for augmenting definitive screening designs to improve estimability and efficiency. Improved sensitivity and specificity are also highlighted.
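    As a minimal sketch of the second-order polynomial model referred to above: the two hypothetical factors, the coefficients, and the ordinary-least-squares fit below are illustrative assumptions, not the dissertation's designs or data.

```python
import numpy as np

# Hypothetical two-factor example of the second-order (quadratic) response
# surface model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)          # coded factor levels in [-1, 1]
x2 = rng.uniform(-1, 1, 30)
y = 5 + 2*x1 - 3*x2 + 1.5*x1*x2 + 4*x1**2 + 0.5*x2**2 + rng.normal(0, 0.2, 30)

# Model matrix: intercept, linear, two-factor interaction, and pure quadratic terms.
X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])

# Ordinary least squares fit of the second-order model.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b12", "b11", "b22"], coef.round(2))))
```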

    Combining Cloud and Mobile Computing for Machine Learning

    Although the computing power of mobile devices is increasing, machine learning models are also growing in size. This trend creates problems for mobile devices due to limitations such as their memory capacity and battery life. While many services, like ChatGPT and Midjourney, run all the inferences in the cloud, we believe a flexible and fine-grained task distribution is more desirable. In this work, we consider model segmentation as a solution for improving the user experience, dividing the computation between mobile devices and the cloud in a way that offloads the compute-heavy portion of the model while minimizing the data transfer required. We show that the division not only reduces the wait time for users but can also be fine-tuned to optimize the workloads of the cloud. To achieve that, we design a scheduler that collects information about network quality, client device capability, and job requirements, making decisions to achieve consistent performance across a range of devices while reducing the work the cloud needs to perform. Comment: Ruiqi Xu and Tianchi Zhang contributed equally to this work.
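    A minimal sketch of the split-inference idea described above: the latency heuristic, the per-layer cost profile, and all the numbers are illustrative assumptions, not the authors' scheduler.

```python
from dataclasses import dataclass

@dataclass
class JobInfo:
    device_flops: float       # client compute capability (FLOP/s)
    bandwidth_mbps: float     # measured uplink quality
    input_bytes: float        # size of the raw model input
    layer_costs: list         # per-layer compute cost (FLOPs), hypothetical profile
    layer_out_bytes: list     # size of each layer's activation output in bytes

def choose_split(job: JobInfo, cloud_flops: float = 1e14) -> int:
    """Pick the layer index after which the remaining layers are offloaded to the cloud.

    Heuristic: minimize estimated end-to-end latency =
    on-device compute + activation transfer + cloud compute.
    A split of 0 means the whole model runs in the cloud.
    """
    best_split, best_latency = 0, float("inf")
    for split in range(len(job.layer_costs) + 1):
        device_time = sum(job.layer_costs[:split]) / job.device_flops
        transfer_bytes = job.input_bytes if split == 0 else job.layer_out_bytes[split - 1]
        transfer_time = transfer_bytes * 8 / (job.bandwidth_mbps * 1e6)
        cloud_time = sum(job.layer_costs[split:]) / cloud_flops
        latency = device_time + transfer_time + cloud_time
        if latency < best_latency:
            best_split, best_latency = split, latency
    return best_split

# Toy usage with made-up numbers: a 4-layer model on a mid-range phone over a 20 Mbps uplink.
job = JobInfo(device_flops=5e10, bandwidth_mbps=20, input_bytes=6e5,
              layer_costs=[1e9, 5e9, 2e10, 2e10],
              layer_out_bytes=[4e6, 2e6, 1e6, 4e4])
print("offload after layer", choose_split(job))
```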

    Numerical convergence of pre-initial conditions on dark matter halo properties

    Generating pre-initial conditions (or particle loads) is the very first step in setting up a cosmological N-body simulation. In this work, we revisit the numerical convergence of pre-initial conditions on dark matter halo properties using a set of simulations which differ only in their initial particle loads, i.e. grid, glass, and the newly introduced capacity constrained Voronoi tessellation (CCVT). We find that the median halo properties agree fairly well (i.e. within a convergence level of a few per cent) among simulations run from different initial loads. We also notice that for some individual haloes cross-matched among different simulations, the relative difference of their properties can sometimes reach several tens of per cent. By looking at the evolution history of these poorly converged haloes, we find that they are usually merging haloes or haloes that have experienced recent merger events, and their merging processes in different simulations are out of sync, temporarily degrading the convergence of their properties. We show that, compared to the simulation starting from an anisotropic grid load, the simulation with an isotropic CCVT load converges slightly better to the simulation with a glass load, which is also isotropic. Among simulations with different pre-initial conditions, haloes in higher density environments tend to have their properties converged slightly better. Our results confirm that CCVT loads behave as well as the widely used grid and glass loads at small scales, and for the first time we quantify the convergence of two independent isotropic particle loads (i.e. glass and CCVT) on halo properties. Peer reviewed
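    As a toy illustration of the per-halo comparison described above, the relative difference between cross-matched haloes could be computed along these lines; the halo masses below are invented placeholders, not simulation results.

```python
import numpy as np

# Hypothetical masses of cross-matched haloes from two simulations that differ
# only in their pre-initial particle load (e.g. grid vs. CCVT). Units: Msun/h.
mass_grid = np.array([1.2e12, 3.4e13, 8.0e11, 5.6e12])
mass_ccvt = np.array([1.25e12, 3.1e13, 8.1e11, 5.4e12])

# Relative difference of each matched halo's property, plus the median offset.
rel_diff = (mass_ccvt - mass_grid) / mass_grid
print("per-halo relative difference [%]:", (100 * rel_diff).round(1))
print("median offset [%]:", round(100 * float(np.median(rel_diff)), 2))
```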

    Semantic Equivariant Mixup

    Mixup is a well-established data augmentation technique, which can extend the training distribution and regularize neural networks by creating "mixed" samples based on the label-equivariance assumption, i.e., a proportional mixup of the input data results in the corresponding labels being mixed in the same proportion. However, previous mixup variants may fail to exploit the label-independent information in mixed samples during training, which usually contains richer semantic information. To further unlock the power of mixup, we first improve the previous label-equivariance assumption with the semantic-equivariance assumption, which states that a proportional mixup of the input data should lead to the corresponding representations being mixed in the same proportion. Then a generic mixup regularization at the representation level is proposed, which can further regularize the model with the semantic information in mixed samples. At a high level, the proposed semantic equivariant mixup (sem) encourages the structure of the input data to be preserved in the representation space, i.e., a change in the input results in the obtained representation changing in the same way. Different from previous mixup variants, which tend to over-focus on the label-related information, the proposed method aims to preserve the richer semantic information in the input via the semantic-equivariance assumption, thereby improving the robustness of the model against distribution shifts. We conduct extensive empirical studies and qualitative analyses to demonstrate the effectiveness of our proposed method. The code of the manuscript is in the supplement. Comment: Under review
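    A minimal PyTorch-style sketch of the representation-level mixup regularizer described above; the toy encoder, Beta-distributed mixing coefficient, MSE penalty, and loss weighting are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoder and classification head; any feature extractor with a separate head would do.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
head = nn.Linear(128, 10)

def semantic_equivariant_mixup_loss(x, y, alpha=1.0, reg_weight=1.0):
    """Mix inputs and penalize representations that are not mixed in the
    same proportion (the semantic-equivariance assumption)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1 - lam) * x[perm]

    z, z_perm = encoder(x), encoder(x[perm])
    z_mixed = encoder(x_mixed)

    # Standard label-equivariant mixup loss on the classifier output.
    logits = head(z_mixed)
    cls_loss = lam * F.cross_entropy(logits, y) + (1 - lam) * F.cross_entropy(logits, y[perm])

    # Representation-level regularizer: mixed input -> proportionally mixed representation.
    reg = F.mse_loss(z_mixed, lam * z.detach() + (1 - lam) * z_perm.detach())
    return cls_loss + reg_weight * reg

# Toy usage with random MNIST-shaped data.
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
print(semantic_equivariant_mixup_loss(x, y).item())
```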

    FP8-BERT: Post-Training Quantization for Transformer

    Transformer-based models, such as BERT, have been widely applied to a wide range of natural language processing tasks. However, one inevitable side effect is that they require massive memory storage and inference cost when deployed in production. Quantization is one of the popular ways to alleviate this cost. However, previous 8-bit quantization strategies based on the INT8 data format either suffer from accuracy degradation in a Post-Training Quantization (PTQ) fashion or require an expensive Quantization-Aware Training (QAT) process. Recently, a new numeric format, FP8 (i.e. 8-bit floating point), has been proposed and supported in commercial AI computing platforms such as the NVIDIA H100. In this paper, we empirically validate the effectiveness of FP8 as a way to perform Post-Training Quantization without significant loss of accuracy, with a simple calibration and format conversion process. We adopt the FP8 standard proposed by NVIDIA Corp. (2022) in our extensive experiments on BERT variants on the GLUE and SQuAD v1.1 datasets, and show that PTQ with FP8 can significantly improve accuracy over INT8, approaching that of the full-precision model.
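    The calibration-and-conversion step mentioned above can be sketched as simulated per-tensor FP8 (E4M3) quantization of a weight tensor; the amax-based scale and the mantissa-rounding scheme below are common choices assumed here for illustration, not necessarily the paper's exact recipe.

```python
import numpy as np

FP8_E4M3_MAX = 448.0   # largest finite value in the E4M3 format

def quantize_fp8_e4m3(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Simulated per-tensor FP8 (E4M3) post-training quantization.

    Calibration: a single scale maps the tensor's amax onto the FP8 range.
    Rounding: values are snapped to 3 mantissa bits (normal numbers only;
    subnormals are ignored for simplicity in this sketch).
    """
    scale = np.max(np.abs(w)) / FP8_E4M3_MAX
    x = np.clip(w / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)

    # Round the mantissa to 3 bits: keep the exponent, quantize the fraction.
    exp = np.floor(np.log2(np.maximum(np.abs(x), 1e-12)))
    step = 2.0 ** (exp - 3)
    x_q = np.round(x / step) * step
    return x_q * scale, scale          # dequantized weights and the calibration scale

# Toy usage: quantize a random "weight matrix" and report the round-trip error.
w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
w_q, s = quantize_fp8_e4m3(w)
print("max abs error:", np.max(np.abs(w - w_q)))
```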