
    Behavior-Based Price Discrimination under Endogenous Privacy

    This paper analyzes consumers' privacy choices concerning their private data and firms' ensuing pricing strategies. The General Data Protection Regulation passed by the European Union in May 2018 allows consumers to decide whether to reveal private information, in the form of cookies, to an online seller. By incorporating this endogenous decision into a duopoly model with behavior-based pricing, we find two contrasting equilibria: when information can be revealed to both firms, consumers disclose it; when it can be revealed to only one firm, consumers hide it. Based on the model, we design a laboratory experiment. We find that a large share of consumers reveal their private data. In particular, less privacy-concerned subjects, and subjects in the setting where only one firm receives information, are more likely to reveal information.
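
    As a toy illustration only (not the paper's duopoly model), the sketch below shows what behavior-based pricing means operationally: a firm that receives a consumer's cookie can condition its second-period price on purchase history, while without revelation it must post a uniform price. The prices, markup, and valuation are hypothetical.

        # Toy sketch of behavior-based pricing with an endogenous revelation choice.
        # Parameters are hypothetical, not the paper's calibration.
        def second_period_price(revealed: bool, bought_before: bool,
                                p_uniform: float = 1.0, markup: float = 0.3) -> float:
            if not revealed:
                return p_uniform                      # no cookie: one uniform price
            # Cookie revealed: price-discriminate on observed purchase history.
            return p_uniform + markup if bought_before else p_uniform - markup

        def consumer_surplus(valuation: float, revealed: bool, bought_before: bool) -> float:
            price = second_period_price(revealed, bought_before)
            return max(valuation - price, 0.0)

        for revealed in (True, False):
            for bought in (True, False):
                print(f"revealed={revealed}, bought_before={bought}: "
                      f"surplus={consumer_surplus(1.2, revealed, bought):.2f}")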

    Numerical convergence of pre-initial conditions on dark matter halo properties

    Generating pre-initial conditions (or particle loads) is the very first step in setting up a cosmological N-body simulation. In this work, we revisit the numerical convergence of pre-initial conditions on dark matter halo properties using a set of simulations which differ only in their initial particle loads, i.e. grid, glass, and the newly introduced capacity constrained Voronoi tessellation (CCVT). We find that the median halo properties agree fairly well (i.e. within a convergence level of a few per cent) among simulations run from different initial loads. We also notice that for some individual haloes cross-matched among different simulations, the relative difference in their properties can sometimes be several tens of per cent. By looking at the evolution history of these poorly converged haloes, we find that they are usually merging haloes or haloes that have experienced recent merger events, and that their merging processes in different simulations are out of sync, temporarily degrading the convergence of their properties. We show that, compared to the simulation starting from an anisotropic grid load, the simulation with an isotropic CCVT load converges slightly better to the simulation with a glass load, which is also isotropic. Among simulations with different pre-initial conditions, haloes in higher density environments tend to have their properties converge slightly better. Our results confirm that CCVT loads behave as well as the widely used grid and glass loads at small scales, and for the first time we quantify the convergence of two independent isotropic particle loads (i.e. glass and CCVT) on halo properties.
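
    A minimal sketch of the kind of convergence measurement described above, assuming two cross-matched halo catalogues are already in hand; the halo masses below are synthetic stand-ins rather than simulation output.

        # Relative difference of a halo property between two runs that differ only
        # in the pre-initial condition (e.g. grid vs. CCVT). Synthetic data.
        import numpy as np

        rng = np.random.default_rng(0)
        m_grid = rng.lognormal(mean=28.0, sigma=1.0, size=1000)       # hypothetical masses
        m_ccvt = m_grid * rng.normal(loc=1.0, scale=0.03, size=1000)  # same haloes, other load

        rel_diff = (m_ccvt - m_grid) / m_grid
        print("median |rel. diff.|:", np.median(np.abs(rel_diff)))    # few-per-cent level
        print("worst individual halo:", np.max(np.abs(rel_diff)))     # outliers can be larger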

    Tokenized Model: A Blockchain-Empowered Decentralized Model Ownership Verification Platform

    With the development of practical deep learning models such as generative AI, their excellent performance has created huge economic value; for instance, ChatGPT attracted more than 100 million users within three months. Since model training requires a great deal of data and computing power, a well-performing deep learning model represents a huge effort and cost. Facing model attacks, unauthorized use, and abuse on the network that threaten the interests of model owners, it is equally important to protect a model's copyright by technical means, in addition to legal and other administrative measures. Using model watermarking technology, we point out the possibility of building a unified platform for model ownership verification. Given the history of applying blockchain to copyright verification and the drawbacks of a centralized third party, this paper combines model watermarking and blockchain to build a unified model copyright protection platform. Our solution, which we call Tokenized Model, protects a model's copyright through a reliable ownership record and verification mechanism. It also promotes the financial value of a model by constructing the model's transaction process and contribution shares. In a typical case study, we also evaluate performance under common scenarios to verify the effectiveness of the platform.
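
    A minimal sketch of the ownership-record idea, assuming a simplified hash-chained ledger rather than the platform's actual smart contracts: a watermark fingerprint is recorded once and later checked against an ownership claim.

        # Simplified, hypothetical ledger for model ownership records (illustration only).
        import hashlib, json, time

        class Ledger:
            def __init__(self):
                self.blocks = []

            def record(self, owner: str, watermark: bytes) -> str:
                prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
                entry = {
                    "owner": owner,
                    "fingerprint": hashlib.sha256(watermark).hexdigest(),
                    "timestamp": time.time(),
                    "prev": prev,
                }
                entry["hash"] = hashlib.sha256(
                    json.dumps(entry, sort_keys=True).encode()).hexdigest()
                self.blocks.append(entry)
                return entry["hash"]

            def verify(self, owner: str, watermark: bytes) -> bool:
                fp = hashlib.sha256(watermark).hexdigest()
                return any(b["owner"] == owner and b["fingerprint"] == fp
                           for b in self.blocks)

        ledger = Ledger()
        ledger.record("alice", b"watermark bits extracted from the model")
        print(ledger.verify("alice", b"watermark bits extracted from the model"))    # True
        print(ledger.verify("mallory", b"watermark bits extracted from the model"))  # False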

    FP8-BERT: Post-Training Quantization for Transformer

    Transformer-based models, such as BERT, have been widely applied to a wide range of natural language processing tasks. However, one inevitable side effect is that they require massive memory storage and incur high inference cost when deployed in production. Quantization is one of the most popular ways to alleviate this cost. However, previous 8-bit quantization strategies based on the INT8 data format either suffer from accuracy degradation in a Post-Training Quantization (PTQ) setting or require an expensive Quantization-Aware Training (QAT) process. Recently, a new numeric format, FP8 (i.e. 8-bit floating point), has been proposed and supported in commercial AI computing platforms such as the H100. In this paper, we empirically validate the effectiveness of FP8 as a way to perform Post-Training Quantization without significant loss of accuracy, using a simple calibration and format conversion process. We adopt the FP8 standard proposed by NVIDIA Corp. (2022) in extensive experiments on BERT variants over the GLUE and SQuAD v1.1 datasets, and show that PTQ with FP8 can significantly improve accuracy over INT8, to the extent of matching the full-precision model.
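
    A minimal sketch of simulated ("fake") FP8 post-training quantization with per-tensor max calibration; the rounding routine below approximates the E4M3 format (4 exponent bits, 3 mantissa bits, max value 448) and is not the paper's exact calibration procedure.

        # Simulated E4M3 quantization of a weight tensor with a single calibration scale.
        import numpy as np

        def fake_quantize_e4m3(x):
            x = np.asarray(x, dtype=np.float64)
            sign, mag = np.sign(x), np.abs(x)
            mag = np.clip(mag, 0.0, 448.0)                            # saturate at E4M3 max
            exp = np.clip(np.floor(np.log2(np.where(mag > 0, mag, 1.0))), -6, 8)
            quantum = 2.0 ** (exp - 3)                                # 3 mantissa bits
            quantum = np.where(mag < 2.0 ** -6, 2.0 ** -9, quantum)   # subnormal step
            return sign * np.round(mag / quantum) * quantum

        w = np.random.default_rng(0).normal(size=(768, 768)).astype(np.float32)
        scale = np.abs(w).max() / 448.0        # per-tensor calibration from the observed max
        w_q = fake_quantize_e4m3(w / scale) * scale
        print("mean abs quantization error:", np.abs(w - w_q).mean())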

    You Need Multiple Exiting: Dynamic Early Exiting for Accelerating Unified Vision Language Model

    Large-scale Transformer models bring significant improvements to various downstream vision-language tasks with a unified architecture. The performance improvements come with increasing model size, resulting in slow inference speed and increased serving cost. While certain predictions benefit from the full complexity of the large-scale model, not all inputs need the same amount of computation, potentially wasting computational resources. To handle this challenge, early exiting has been proposed to adaptively allocate computational power according to input complexity and improve inference efficiency. Existing early exiting strategies usually use output confidence at intermediate layers as a proxy for input complexity when deciding whether to skip the following layers. However, such strategies cannot be applied to the encoder in the widely used unified encoder-decoder architecture, because output confidence is difficult to estimate in the encoder, and ignoring early exiting in the encoder is suboptimal in terms of computation savings. To handle this challenge, we propose a novel early exiting strategy for unified vision-language models, named MuE, which dynamically skips layers in the encoder and decoder simultaneously based on layer-wise input similarities, allowing multiple early exits. By decomposing the image and text modalities in the encoder, MuE is flexible and can skip different layers for each modality, improving inference efficiency while minimizing the performance drop. Experiments on the SNLI-VE and MS COCO datasets show that the proposed approach MuE can reduce expected inference time by up to 50% and 40% while maintaining 99% and 96% of performance, respectively.
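
    A minimal sketch of similarity-based early exiting in the spirit of MuE (not the authors' implementation): layers stop executing once the hidden state stops changing, measured by cosine similarity between consecutive layer inputs. The layers below are random near-identity maps standing in for trained Transformer layers.

        import numpy as np

        rng = np.random.default_rng(0)
        layers = [np.eye(64) + rng.normal(scale=0.05, size=(64, 64)) for _ in range(12)]

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def forward_with_early_exit(h, threshold=0.999):
            for i, w in enumerate(layers):
                h_next = np.tanh(h @ w)
                if cosine(h, h_next) >= threshold:   # representation has saturated
                    return h_next, i + 1             # exit early, skip remaining layers
                h = h_next
            return h, len(layers)

        out, used = forward_with_early_exit(rng.normal(size=64))
        print("layers executed:", used, "of", len(layers))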

    Marketing Budget Allocation with Offline Constrained Deep Reinforcement Learning

    We study the budget allocation problem in online marketing campaigns that utilize previously collected offline data. We first discuss the long-term effect of optimizing marketing budget allocation decisions in the offline setting. To overcome the challenge, we propose a novel game-theoretic offline value-based reinforcement learning method using mixed policies. The proposed method reduces the infinitely many policies that previous methods need to store to only constantly many policies, achieving nearly optimal policy efficiency and making it practical and favorable for industrial usage. We further show that this method is guaranteed to converge to the optimal policy, which cannot be achieved by previous value-based reinforcement learning methods for marketing budget allocation. Our experiments on a large-scale marketing campaign with tens of millions of users and a budget of more than one billion verify the theoretical results and show that the proposed method outperforms various baseline methods. The proposed method has been successfully deployed to serve all the traffic of this marketing campaign. Comment: WSDM 23, Best Paper Candidate.
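
    A minimal sketch of the mixed-policy idea under simplified assumptions (not the paper's algorithm): only a constant number of deterministic budget policies is stored together with mixing weights, and one component policy is sampled per request. The linear scorers and budget levels are hypothetical placeholders for learned policies.

        import numpy as np

        rng = np.random.default_rng(0)
        policies = [rng.normal(size=8) for _ in range(3)]   # constantly many components
        weights = np.array([0.5, 0.3, 0.2])                 # mixing distribution over them

        def allocate_budget(user_features, budget_levels=(0.0, 1.0, 5.0)):
            k = rng.choice(len(policies), p=weights)        # sample a component policy
            score = float(policies[k] @ user_features)      # its deterministic decision rule
            idx = int(np.clip(round(score), 0, len(budget_levels) - 1))
            return budget_levels[idx]

        print(allocate_budget(rng.normal(size=8)))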

    Performance of the suspension method in large cross-section shallow-buried tunnels

    Large cross-section tunnel construction induces ground surface settlements, potentially endangering both subterranean projects and nearby above-ground structures. A novel tunnel construction method, known as the suspension method, is introduced in this paper to mitigate surface settlement. The suspension method employs vertical tie rods to establish a structural connection between the initial tunnel support system and a surface steel beam, thereby effectively controlling settlements. To analyze the performance of the proposed method, systematic numerical simulations were conducted based on the practical engineering of Harbin Subway Line 3. The surface settlement and vault settlement characteristics during construction are investigated. The results show a gradual increase in both surface and vault settlement throughout the construction process, culminating in a stabilized state upon the completion of construction. In addition, compared to the double-side drift method and the Cross Diaphragm (CRD) method, the suspension method markedly reduces both surface settlement and vault settlement. Moreover, the surface settlements and the axial forces of the tie rods were continuously monitored during construction at the trial tunnel block, and these monitoring measurements are compared against the numerical analysis results. The monitored results show good agreement with the numerical predictions, confirming the success of the project. This research can serve as a valuable practical reference for similar projects, offering insights and guidance for addressing ground surface settlements and enhancing construction safety in the domain of large cross-section tunnelling.
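
    As an illustration of the kind of agreement check reported above (with made-up numbers, not the project's monitoring data), the sketch below compares monitored surface settlements at a few stations with numerical predictions by their absolute deviations.

        import numpy as np

        stations  = ["S1", "S2", "S3", "S4", "S5"]
        monitored = np.array([4.1, 6.8, 9.5, 6.5, 3.9])   # mm, hypothetical readings
        simulated = np.array([4.4, 7.1, 9.1, 6.9, 4.2])   # mm, hypothetical FE results

        abs_dev = np.abs(monitored - simulated)
        for s, m, p, d in zip(stations, monitored, simulated, abs_dev):
            print(f"{s}: monitored {m:.1f} mm, simulated {p:.1f} mm, |diff| {d:.1f} mm")
        print(f"mean |diff| = {abs_dev.mean():.2f} mm, max = {abs_dev.max():.2f} mm")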

    Numerical simulation study on suppression effect of water mist on PMMA combustion under external radiant heat flux

    A numerical model was built with the Fire Dynamics Simulator, and theoretical simulations were carried out to investigate the suppression effect of water mist on the ignition and combustion of the typical solid material polymethyl methacrylate (PMMA) under external radiant heat flux. Characteristic parameters such as ignition time, surface temperature, heat release rate, and the temperature distribution in the central plane of the flame during ignition and combustion under different radiant heat fluxes were obtained and compared with experimental results. The suppression effect of spray droplets on the ignition and combustion process was analyzed and discussed. The results show that the theoretical calculations of the combustion characteristic parameters are in good agreement with the experimental measurements. Water mist droplets can effectively delay ignition: quantitative data show that a water mist flow rate of 0.9 L/(min·m²) can delay the ignition of samples by about 1,100 s when the radiant heat flux is 50 kW/m². The simulation results can provide theoretical support and data references for the fire prevention and extinguishment of typical solid materials in practice.
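
    A minimal sketch of how an ignition delay like the one quoted above can be read off a surface-temperature history, using a critical-temperature ignition criterion; the temperature curves and the 580 K threshold are synthetic assumptions, not FDS output.

        import numpy as np

        t = np.arange(0.0, 2000.0, 1.0)                       # time, s
        T_dry  = 293.0 + 350.0 * (1.0 - np.exp(-t / 300.0))   # no mist: heats quickly (K)
        T_mist = 293.0 + 350.0 * (1.0 - np.exp(-t / 900.0))   # with mist: heats slowly (K)
        T_ign  = 580.0                                        # assumed ignition criterion, K

        def ignition_time(T, times, T_crit):
            idx = int(np.argmax(T >= T_crit))                 # first index meeting criterion
            return times[idx] if T[idx] >= T_crit else None   # None if it never ignites

        t_dry  = ignition_time(T_dry, t, T_ign)
        t_mist = ignition_time(T_mist, t, T_ign)
        print("ignition without mist:", t_dry, "s")
        print("ignition with mist:   ", t_mist, "s")
        print("delay:", None if None in (t_dry, t_mist) else t_mist - t_dry, "s")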