
    The Value of Information Concealment

    We consider a revenue-maximizing seller selling a single item to a buyer, on whose private value the seller has a noisy signal. We show that when the signal is kept private, the seller can potentially extract arbitrarily more revenue than when the signal is leaked or revealed. We then show that if the seller is not allowed to make payments to the buyer, and if the value distribution conditioned on each signal is regular, the gap between the two is bounded by a multiplicative factor of 3. We give examples showing that both conditions are necessary for a constant bound to hold. We connect this scenario to multi-bidder single-item auctions where the bidders' values are correlated. Similarly to the setting above, we show that the revenue of a Bayesian incentive compatible, ex post individually rational auction can be arbitrarily larger than that of a dominant strategy incentive compatible auction, whereas the two are at most a factor of 5 apart if the auctioneer never pays the bidders and each bidder's value conditioned on the others' is drawn from a regular distribution. The upper bounds in both settings degrade gracefully when the distribution is a mixture of a small number of regular distributions.
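    As background (the standard mechanism-design definition, not a claim from the abstract itself): the regularity condition invoked in both bounds is Myerson's. For a value distribution with CDF F and density f, the virtual value

        \[
            \varphi(v) \;=\; v \;-\; \frac{1 - F(v)}{f(v)}
        \]

    must be nondecreasing in v. The uniform distribution on [0, 1] is a simple example, with \varphi(v) = 2v - 1; the factor-3 and factor-5 bounds above require this of each conditional value distribution.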

    VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with ControlNet

    Recently, diffusion models such as Stable Diffusion have achieved impressive image generation results. However, the generation process of such diffusion models is hard to control, which makes it difficult to generate videos with continuous and consistent content. In this work, using a diffusion model with ControlNet, we propose a new motion-guided video-to-video translation framework called VideoControlNet, which generates videos based on the given prompts and conditions extracted from the input video. Inspired by video codecs, which use motion information to reduce temporal redundancy, our framework uses motion information to avoid regenerating redundant areas, preserving content consistency. Specifically, we generate the first frame (the I-frame) using the diffusion model with ControlNet. We then generate the other key frames (P-frames) from the previous I/P-frame using our newly proposed motion-guided P-frame generation (MgPG) method, in which P-frames are produced from motion information and occluded areas are inpainted by the diffusion model. Finally, the remaining frames (B-frames) are generated by our motion-guided B-frame interpolation (MgBI) module. Our experiments demonstrate that VideoControlNet inherits the generation capability of the pre-trained large diffusion model and extends the image diffusion model to video by using motion information. More results are provided on our project page.
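    As a rough illustration of the motion-compensation step that MgPG builds on (a minimal sketch, not the authors' implementation: the flow estimator and occlusion threshold are arbitrary choices here, and the diffusion-based inpainting is left as a stub), one can warp the previously generated frame along the source video's motion and mask the pixels where warping fails:

        import cv2
        import numpy as np

        def propagate_frame(prev_gen, prev_src, cur_src, occ_thresh=20.0):
            """Warp the previously generated frame along the source video's
            motion; return the warped frame plus a mask of occluded pixels
            that a diffusion model would inpaint."""
            g0 = cv2.cvtColor(prev_src, cv2.COLOR_BGR2GRAY)
            g1 = cv2.cvtColor(cur_src, cv2.COLOR_BGR2GRAY)
            # Dense flow from the current source frame back to the previous
            # one, so each current pixel knows where to sample the past frame.
            flow = cv2.calcOpticalFlowFarneback(g1, g0, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            h, w = g1.shape
            xs, ys = np.meshgrid(np.arange(w), np.arange(h))
            map_x = (xs + flow[..., 0]).astype(np.float32)
            map_y = (ys + flow[..., 1]).astype(np.float32)
            warped = cv2.remap(prev_gen, map_x, map_y, cv2.INTER_LINEAR)
            # Where warping the *source* frame disagrees with the actual
            # current source frame, motion compensation failed (occlusion or
            # disocclusion); those pixels are left for diffusion inpainting.
            warped_src = cv2.remap(prev_src, map_x, map_y, cv2.INTER_LINEAR)
            err = np.abs(warped_src.astype(np.float32)
                         - cur_src.astype(np.float32)).mean(axis=-1)
            occluded = (err > occ_thresh).astype(np.uint8)
            return warped, occluded

    Reusing warped pixels wherever the mask is zero is what keeps unchanged regions identical across frames; only the occluded remainder needs fresh generation.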

    Neighborhood VAR: Efficient estimation of multivariate timeseries with neighborhood information

    In data science, vector autoregression (VAR) models are popular for modeling multivariate time series in the environmental sciences and other applications. However, these models are computationally demanding, with the number of parameters scaling quadratically in the number of time series. In this work, we propose a neighborhood vector autoregression (NVAR) model to efficiently analyze high-dimensional multivariate time series. We assume that the time series have underlying neighborhood relationships, e.g., spatial or network structure, arising from the inherent setting of the problem. When this neighborhood information is available, or can be summarized by a distance matrix, we demonstrate that the proposed NVAR method provides computationally efficient and theoretically sound estimation of the model parameters. The performance of the proposed method is compared with existing approaches in both simulation studies and a real application to a stream nitrogen study.
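    To make the parameter saving concrete, here is a minimal sketch under the stated assumptions (a plain lag-1 least-squares fit, not the paper's estimator): each series is regressed only on the lagged values of series within a distance cutoff, so each row of the coefficient matrix has as many free entries as that series has neighbors, rather than N.

        import numpy as np

        def fit_neighborhood_var(Y, D, radius):
            """Y: (T, N) multivariate time series; D: (N, N) distance matrix.
            Fit y_i[t] = sum over j in N(i) of A[i, j] * y_j[t-1] + noise,
            with N(i) = {j : D[i, j] <= radius}, by per-series OLS.
            Returns the sparse (N, N) coefficient matrix A."""
            T, N = Y.shape
            A = np.zeros((N, N))
            X_past, X_cur = Y[:-1], Y[1:]  # lag-1 design and response
            for i in range(N):
                nbrs = np.flatnonzero(D[i] <= radius)  # i's neighborhood
                coef, *_ = np.linalg.lstsq(X_past[:, nbrs], X_cur[:, i],
                                           rcond=None)
                A[i, nbrs] = coef
            return A

        # Toy usage: 5 series on a line, neighbors within distance 1.
        rng = np.random.default_rng(0)
        coords = np.arange(5.0)
        D = np.abs(coords[:, None] - coords[None, :])
        Y = rng.standard_normal((200, 5))  # placeholder data
        A_hat = fit_neighborhood_var(Y, D, radius=1.0)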

    FunCodec: A Fundamental, Reproducible and Integrable Open-source Toolkit for Neural Speech Codec

    This paper presents FunCodec, a fundamental neural speech codec toolkit, which is an extension of the open-source speech processing toolkit FunASR. FunCodec provides reproducible training recipes and inference scripts for recent neural speech codec models, such as SoundStream and Encodec. Thanks to its unified design with FunASR, FunCodec can be easily integrated into downstream tasks, such as speech recognition. Along with FunCodec, pre-trained models are also provided, which can be used for academic or general purposes. Based on the toolkit, we further propose frequency-domain codec models, FreqCodec, which achieve comparable speech quality with much lower computational and parameter complexity. Experimental results show that, under the same compression ratio, FunCodec achieves better reconstruction quality than other toolkits and released models. We also demonstrate that the pre-trained models are suitable for downstream tasks, including automatic speech recognition and personalized text-to-speech synthesis. This toolkit is publicly available at https://github.com/alibaba-damo-academy/FunCodec.
    Comment: 5 pages, 3 figures, submitted to ICASSP 2024
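    For intuition about what such codec models compute (general background on SoundStream/Encodec-style codecs, not FunCodec's actual API, which lives in the repository above; the codebooks below are random toys), the central operation is residual vector quantization, where each stage quantizes the residual left by the previous one:

        import numpy as np

        def rvq_encode(z, codebooks):
            """Residual vector quantization. z: (T, d) latent frames;
            codebooks: list of (K, d) arrays, one per stage. Returns the
            (T, n_stages) code indices and the (T, d) reconstruction."""
            residual = z.astype(np.float64).copy()
            recon = np.zeros_like(residual)
            codes = []
            for cb in codebooks:
                # Nearest codeword for each frame's current residual.
                dists = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(axis=-1)
                idx = dists.argmin(axis=1)
                quantized = cb[idx]
                codes.append(idx)
                recon += quantized
                residual -= quantized  # the next stage refines what is left
            return np.stack(codes, axis=1), recon

        # Toy usage: 8 stages of 1024-entry codebooks over 16-dim latents,
        # i.e. 8 * log2(1024) = 80 bits per frame.
        rng = np.random.default_rng(0)
        codebooks = [rng.standard_normal((1024, 16)) for _ in range(8)]
        codes, z_hat = rvq_encode(rng.standard_normal((100, 16)), codebooks)

    Dropping stages lowers the bitrate at the cost of reconstruction quality, which is the compression-ratio/quality trade-off the experiments above compare across toolkits.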

    Random Order Vertex Arrival Contention Resolution Schemes for Matching, with Applications
