25 research outputs found

    Generating Linear Programming Instances with Controllable Rank and Condition Number

    Instance generation is crucial for linear programming algorithms: it is needed both to find optimal pivot rules by training learning methods and to evaluate and verify the corresponding algorithms. This study proposes a general framework for designing linear programming instances based on a preset optimal solution. First, we give a constraint-matrix generation method with controllable condition number and rank from the perspective of matrix decomposition. Based on the preset optimal solution, a bounded feasible linear programming instance is then generated, with the right-hand side and objective coefficients satisfying primal and dual feasibility. In addition, we provide three kinds of neighborhood exchange operators and prove that instances generated by this method can cover the whole space of feasible and bounded linear programming cases. We experimentally validate that the proposed scheme generates more controllable linear programming instances, while the neighborhood exchange operators construct more complex instances.
    Comments: 28 pages
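A minimal sketch of the matrix-decomposition idea described above: build the constraint matrix from orthogonal factors and a prescribed singular spectrum (which fixes rank and condition number), then derive the right-hand side and objective from a preset primal-dual optimal pair so the instance is feasible and bounded. The construction details (QR-based orthogonal factors, geometric singular-value spacing, random basis choice) are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def generate_lp(m, n, rank, cond, seed=0):
    """Generate an LP  min c @ x  s.t.  A @ x = b, x >= 0  with a known
    optimal solution x_star, where A has prescribed rank and condition
    number (ratio of largest to smallest nonzero singular value)."""
    rng = np.random.default_rng(seed)
    # Orthogonal factors from QR of random Gaussian matrices.
    U, _ = np.linalg.qr(rng.standard_normal((m, m)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    # Singular values spread geometrically between cond and 1.
    s = np.geomspace(cond, 1.0, rank)
    S = np.zeros((m, n))
    S[:rank, :rank] = np.diag(s)
    A = U @ S @ V.T
    # Preset optimal solution: positive on a random basis, zero elsewhere.
    basis = rng.choice(n, size=min(m, n), replace=False)
    x_star = np.zeros(n)
    x_star[basis] = np.abs(rng.standard_normal(basis.size)) + 0.1
    b = A @ x_star                      # primal feasibility by construction
    # Dual feasibility with complementary slackness: reduced costs are
    # zero on the basis, strictly positive off it, so x_star is optimal.
    y = rng.standard_normal(m)
    s_dual = np.zeros(n)
    nonbasis = np.setdiff1d(np.arange(n), basis)
    s_dual[nonbasis] = np.abs(rng.standard_normal(nonbasis.size)) + 0.1
    c = A.T @ y + s_dual
    return A, b, c, x_star
```

Because c = A.T @ y + s_dual with s_dual >= 0, every feasible x satisfies c @ x >= y @ b, with equality at x_star, so the preset solution is indeed optimal.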

    Applying Opponent Modeling for Automatic bidding in Online Repeated Auctions

    Online auction scenarios, such as bidding in search advertising on advertising platforms, often require bidders to participate repeatedly in auctions for the same or similar items. We design an algorithm for adaptive automatic bidding in repeated auctions in which the seller and the other bidders also update their strategies. We apply and improve an opponent modeling algorithm that allows bidders to learn optimal bidding strategies in this multi-agent reinforcement learning environment. The algorithm uses almost no private information about the opponents and places no restrictions on the strategy space, so it can be extended to many scenarios. Our algorithm improves utility compared to both static bidding strategies and dynamic learning strategies. We hope the application of opponent modeling in auctions will promote research on automatic bidding strategies in online auctions and the design of non-incentive-compatible auction mechanisms.
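A toy sketch of opponent modeling in a repeated auction: the bidder maintains an empirical model of the highest competing bid and best-responds to it in a first-price auction. This is a drastic simplification of the paper's multi-agent RL approach; the class, its methods, and the grid-search best response are all illustrative assumptions.

```python
import numpy as np

class EmpiricalOpponentBidder:
    """Repeated first-price auction bidder that models opponents by the
    empirical distribution of the highest rival bid observed so far."""
    def __init__(self, value, grid=None):
        self.value = value                       # private valuation
        self.history = []                        # observed highest rival bids
        self.grid = grid if grid is not None else np.linspace(0, value, 101)

    def observe(self, rival_bid):
        # Update the opponent model after each auction round.
        self.history.append(rival_bid)

    def bid(self):
        if not self.history:
            return self.value / 2.0              # uninformed prior: shade by half
        rivals = np.asarray(self.history)
        # P(win | b) estimated from the empirical rival-bid distribution.
        p_win = np.array([(rivals < b).mean() for b in self.grid])
        # First-price utility: (value - bid) on a win, 0 otherwise.
        expected_utility = (self.value - self.grid) * p_win
        return self.grid[np.argmax(expected_utility)]
```

As the rival-bid history grows, the bid converges toward the smallest grid point that beats the empirical rival distribution, which is the shading behavior adaptive bidders are expected to learn.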

    Energy Losses and Voltage Stability Study in Distribution Network with Distributed Generation

    As distributed generation technology is widely applied, system problems such as overvoltage and undervoltage are becoming increasingly noticeable. They are caused by distributed generators such as wind energy systems (WES) and photovoltaic systems (PVS), whose output power is probabilistic because it depends on natural conditions. Since WES and PVS strongly affect distribution-system voltage quality, this paper studies their impact using new models based on the probability density function of node voltage and the cumulative distribution function of total losses. We apply these models to the IEEE 33-bus distribution system from the IEEE standard test-case database and compare our method with the Monte Carlo simulation method in three different cases. In all three cases, the results not only provide important reference information for subsequent optimization design, system reliability, and safety analysis, but also reduce the amount of computation.
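To illustrate the Monte Carlo baseline the paper compares against, here is a toy two-bus feeder where PV output is random, so the node voltage becomes a random variable whose distribution can be estimated by sampling. The linearized voltage-drop formula, the Beta-distributed PV model, and all parameter values are illustrative assumptions, not the paper's models.

```python
import numpy as np

def mc_voltage_study(n_samples=100_000, seed=0):
    """Monte Carlo estimate of the node-voltage distribution on a toy
    two-bus feeder with a probabilistic PV injection (all per unit)."""
    rng = np.random.default_rng(seed)
    v_source = 1.0          # slack-bus voltage
    r = 0.05                # feeder resistance
    p_load = 0.8            # constant load
    # PV injection modeled as bounded, weather-driven Beta noise.
    p_pv = 0.6 * rng.beta(2.0, 2.0, size=n_samples)
    # Simplified linearized drop: V = Vs - R * (P_load - P_pv) / Vs.
    v_node = v_source - r * (p_load - p_pv) / v_source
    # Empirical statistic: probability of an undervoltage event.
    p_undervoltage = (v_node < 0.965).mean()
    return v_node, p_undervoltage
```

The analytical models in the paper aim to deliver the same distributional information (density of node voltage, CDF of losses) without the sampling cost shown here.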

    On the optimal pivot path of simplex method for linear programming based on reinforcement learning

    With the existing pivot rules, the simplex method for linear programming is not polynomial in the worst case; the choice of pivots is therefore crucial. This study proposes an optimal rule to find all shortest pivot paths of the simplex method for linear programming problems based on Monte Carlo tree search (MCTS). Specifically, we first propose the SimplexPseudoTree to transfer the simplex method into a tree-search mode while avoiding repeated basic variables. Secondly, we propose four reinforcement learning (RL) models with two actions and two rewards to make Monte Carlo tree search suitable for the simplex method. Thirdly, we set a new action-selection criterion to ameliorate the inaccurate evaluation in the initial exploration. It is proved that when the number of vertices in the feasible region is $C_n^m$, our method can generate all the shortest pivot paths, which is polynomial in the number of variables. In addition, we experimentally validate that the proposed scheme avoids unnecessary search and provides the optimal pivot path. Furthermore, this method can provide the best pivot labels for all kinds of supervised learning methods for solving linear programming problems.
    Comments: 38 pages
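For intuition about what a "shortest pivot path" is, here is an exact brute-force baseline on a tiny standard-form LP: enumerate all feasible bases, connect two bases that differ by one pivot (exchange of a single variable), and run breadth-first search from a start basis to an optimal basis. This enumeration is only tractable for toy instances; the MCTS method above targets the same shortest paths at scale. Function and variable names are illustrative.

```python
from collections import deque
from itertools import combinations
import numpy as np

def shortest_pivot_path(A, b, c, start_basis):
    """Shortest pivot path for  min c @ x  s.t.  A @ x = b, x >= 0,
    found by BFS over the graph of feasible bases."""
    m, n = A.shape
    feasible, obj = set(), {}
    for basis in combinations(range(n), m):
        B = A[:, list(basis)]
        if abs(np.linalg.det(B)) < 1e-9:
            continue                    # not a basis
        xb = np.linalg.solve(B, b)
        if np.all(xb >= -1e-9):         # basic feasible solution
            feasible.add(basis)
            obj[basis] = c[list(basis)] @ xb
    best = min(obj, key=obj.get)        # an optimal basis
    # BFS over the pivot graph: neighbors share m - 1 basic variables.
    queue, path = deque([start_basis]), {start_basis: [start_basis]}
    while queue:
        cur = queue.popleft()
        if cur == best:
            return path[cur]
        for nb in feasible:
            if nb not in path and len(set(cur) & set(nb)) == m - 1:
                path[nb] = path[cur] + [nb]
                queue.append(nb)
    return None
```

For example, for max x1 + x2 subject to x1 <= 1, x2 <= 1 in standard form, BFS starting from the all-slack basis reaches the optimal basis in two pivots.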

    DiffBFR: Bootstrapping Diffusion Model Towards Blind Face Restoration

    Blind face restoration (BFR) is important yet challenging. Prior works prefer to exploit GAN-based frameworks to tackle this task due to their balance of quality and efficiency. However, these methods suffer from poor stability and poor adaptability to long-tail distributions, failing to simultaneously retain source identity and restore detail. We propose DiffBFR, which introduces the Diffusion Probabilistic Model (DPM) for BFR to tackle the above problems, given its superiority over GANs in avoiding training collapse and generating long-tail distributions. DiffBFR adopts a two-step design that first restores identity information from low-quality images and then enhances texture details according to the distribution of real faces. This design is implemented with two key components: 1) an Identity Restoration Module (IRM) for preserving face details in the results. Instead of denoising from a pure Gaussian random distribution with LQ images as the condition during the reverse process, we propose a novel truncated sampling method which starts from LQ images with partial noise added. We theoretically prove that this change shrinks the evidence lower bound of the DPM and thus restores more original details. With theoretical support, two cascaded conditional DPMs with different input sizes are introduced to strengthen this sampling effect and reduce the training difficulty of directly generating high-resolution images. 2) A Texture Enhancement Module (TEM) for polishing the texture of the image. Here an unconditional DPM, an LQ-free model, is introduced to further force the restorations to appear realistic. We theoretically prove that this unconditional DPM, trained on pure HQ images, helps correct the distribution of the images output by the IRM in pixel-level space. Truncated sampling with fractional time steps is utilized to polish pixel-level textures while preserving identity information.
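A minimal sketch of the truncated sampling idea: instead of denoising from pure Gaussian noise at step T, diffuse the low-quality image only up to step tau < T and run the reverse process from there. `eps_model(x, t)` is a stand-in for a trained noise predictor, and the linear beta schedule is an assumption; this is the generic DDPM update, not the paper's exact cascaded-model implementation.

```python
import numpy as np

def truncated_sample(x_lq, tau, eps_model, T=1000, seed=0):
    """Truncated DDPM sampling: partially noise x_lq up to step tau,
    then denoise from tau instead of from T."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    if tau == 0:
        return x_lq.copy()                  # no noise added, nothing to reverse
    # Forward: sample q(x_tau | x_lq) in closed form (partial noising).
    x = (np.sqrt(alpha_bar[tau - 1]) * x_lq
         + np.sqrt(1.0 - alpha_bar[tau - 1]) * rng.standard_normal(x_lq.shape))
    # Reverse: standard DDPM updates, starting at t = tau instead of T.
    for t in range(tau - 1, -1, -1):
        eps = eps_model(x, t)               # predicted noise at step t
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x
```

Because the chain starts from a lightly noised LQ image rather than pure noise, coarse identity information survives into the reverse process, which is the intuition behind the IRM design above.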

    Towards Consistent Video Editing with Text-to-Image Diffusion Models

    Existing works have advanced Text-to-Image (TTI) diffusion models for video editing in a one-shot learning manner. Despite their low data and computation requirements, these methods may produce results whose consistency with the text prompt and across the temporal sequence is unsatisfactory, limiting their real-world applications. In this paper, we propose to address these issues with a novel EI^2 model, for Enhancing vIdeo Editing consIstency of TTI-based frameworks. Specifically, we find that the inconsistency problem is caused by the modules newly added to TTI models to learn temporal information. These modules lead to covariate shift in the feature space, which harms the editing capability. Thus, we design EI^2 to tackle these drawbacks with two classical modules: the Shift-restricted Temporal Attention Module (STAM) and the Fine-coarse Frame Attention Module (FFAM). First, through theoretical analysis, we demonstrate that covariate shift is highly related to Layer Normalization, so STAM replaces it with an Instance Centering layer to preserve the distribution of temporal features. STAM also employs an attention layer with normalized mapping to transform temporal features while constraining the variance shift. As the second part, we combine STAM with the novel FFAM, which efficiently leverages fine-coarse spatial information of all frames to further enhance temporal consistency. Extensive experiments demonstrate the superiority of the proposed EI^2 model for text-driven video editing.
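A minimal interpretation of the Instance Centering idea, contrasted with standard Layer Normalization: centering subtracts the per-instance mean over the feature axis but, unlike LayerNorm, does not rescale by the standard deviation, so the variance of the temporal features is preserved. This is a sketch of the concept, not the paper's exact layer (which also involves learnable attention components).

```python
import numpy as np

def instance_centering(x):
    """Center features over the last axis without variance rescaling,
    so the per-instance feature variance is preserved."""
    return x - x.mean(axis=-1, keepdims=True)

def layer_norm(x, eps=1e-5):
    """Standard LayerNorm for contrast: centering AND variance rescaling,
    which forces every instance to unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)
```

The difference is exactly the distribution-shape argument in the abstract: LayerNorm maps every instance to unit variance, whereas centering leaves the second moment of the feature distribution untouched.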

    Parallel Variable Distribution Algorithm for Constrained Optimization with Nonmonotone Technique

    A modified parallel variable distribution (PVD) algorithm for solving large-scale constrained optimization problems is developed. It modifies the quadratic subproblem QP_l at each iteration instead of the QP_l^0 subproblem of the SQP-type PVD algorithm proposed by C. A. Sagastizábal and M. V. Solodov in 2002, and can thus circumvent the difficulties associated with the possible inconsistency of the QP_l^0 subproblem of the original SQP method. Moreover, we introduce a nonmonotone technique instead of a penalty function to carry out the line-search procedure more flexibly. Under appropriate conditions, the global convergence of the method is established. Finally, parallel numerical experiments are implemented in CUDA on a GPU (Graphics Processing Unit).
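To make the nonmonotone line-search idea concrete, here is a Grippo-Lampariello-Lucidi-style backtracking sketch: a step is accepted if the new objective value is below the maximum of the last few objective values (rather than the current one) plus the usual Armijo decrease term, which is what allows occasional increases and more flexible steps. Parameter names and defaults are illustrative, not the paper's exact procedure.

```python
import numpy as np

def nonmonotone_armijo(f, grad_f, x, d, f_hist, sigma=1e-4, beta=0.5, max_iter=50):
    """Backtracking line search with the nonmonotone acceptance test
    f(x + a*d) <= max(recent f values) + sigma * a * grad_f(x) @ d."""
    f_ref = max(f_hist)                  # reference: worst of a sliding window
    slope = grad_f(x) @ d                # directional derivative, must be < 0
    alpha = 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) <= f_ref + sigma * alpha * slope:
            return alpha
        alpha *= beta                    # backtrack
    return alpha
```

With a window of size one (f_hist holding only the current value) this reduces to the classical monotone Armijo rule; a longer window is what lets the PVD iterates escape the strict descent requirement without a penalty function.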

    Understanding Oversmoothing in Diffusion-Based GNNs From the Perspective of Operator Semigroup Theory

    This paper presents a novel study of the oversmoothing issue in diffusion-based Graph Neural Networks (GNNs). Diverging from extant approaches grounded in random-walk analysis or particle systems, we approach this problem through operator semigroup theory. This theoretical framework allows us to rigorously prove that oversmoothing is intrinsically linked to the ergodicity of the diffusion operator. This finding further yields a general and mild ergodicity-breaking condition, encompassing the various specific solutions previously offered, thereby presenting a more universal and theoretically grounded approach to mitigating oversmoothing in diffusion-based GNNs. Additionally, we offer a probabilistic interpretation of our theory, forging a link with prior works and broadening the theoretical horizon. Our experimental results reveal that this ergodicity-breaking term effectively mitigates oversmoothing as measured by Dirichlet energy, and simultaneously enhances performance in node classification tasks.
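A small sketch of the oversmoothing measure mentioned above: the Dirichlet energy of node features on a graph (sum of squared feature differences across edges), together with a plain neighborhood-averaging diffusion step that drives this energy toward zero as layers stack. The averaging update is the generic diffusion/GCN-style step, used here only to illustrate the decay the paper's ergodicity-breaking term counteracts.

```python
import numpy as np

def dirichlet_energy(X, edges):
    """Sum of squared feature differences across edges; decay toward zero
    under repeated diffusion is the signature of oversmoothing."""
    return sum(np.sum((X[i] - X[j]) ** 2) for i, j in edges)

def diffusion_step(X, edges, n):
    """One step of degree-normalized neighborhood averaging with self-loops
    (a simple diffusion operator on node features)."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    A += np.eye(n)                       # self-loops
    D_inv = np.diag(1.0 / A.sum(axis=1))
    return D_inv @ A @ X
```

On a 3-node path graph with features 0, 1, 2, a single averaging step shrinks the Dirichlet energy from 2.0 to 0.5, illustrating the geometric decay that, in the semigroup picture, corresponds to an ergodic diffusion operator.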