30 research outputs found

    Deep Learning for Inflexible Multi-Asset Hedging of incomplete market

    Models trained under complete-market assumptions are usually ineffective in an incomplete market. This paper solves the hedging problem in an incomplete market with three sources of incompleteness: risk factors, illiquidity, and discrete transaction dates. A new jump-diffusion model is proposed to describe stochastic asset prices. Three neural networks (RNN, LSTM, and Mogrifier-LSTM) are used to derive hedging strategies, with MSE loss and Huber loss implemented and compared. As a result, Mogrifier-LSTM is the fastest model and yields the best results under both MSE and Huber loss
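    The loss comparison above can be illustrated with a minimal sketch (hypothetical numbers, not the paper's data): the Huber loss damps the influence of large, jump-driven hedging errors that the MSE penalizes quadratically.

```python
import numpy as np

def mse_loss(residuals):
    """Mean squared error: penalizes large hedging errors quadratically."""
    return np.mean(residuals ** 2)

def huber_loss(residuals, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails, so occasional
    jump-driven outliers dominate the objective (and gradients) less."""
    abs_r = np.abs(residuals)
    quadratic = 0.5 * abs_r ** 2
    linear = delta * (abs_r - 0.5 * delta)
    return np.mean(np.where(abs_r <= delta, quadratic, linear))

# Hypothetical hedging P&L errors with one jump-driven outlier (3.0)
errors = np.array([0.1, -0.2, 0.05, 3.0])
print(mse_loss(errors))    # outlier contributes 9.0 before averaging
print(huber_loss(errors))  # same outlier contributes only 2.5
```

    The comments mark each term's contribution; with the outlier, the MSE is dominated by a single residual while the Huber loss grows only linearly in it.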

    Language Models can be Logical Solvers

    Logical reasoning is a fundamental aspect of human intelligence and a key component of tasks like problem-solving and decision-making. Recent advancements have enabled Large Language Models (LLMs) to potentially exhibit reasoning capabilities, but complex logical reasoning remains a challenge. State-of-the-art solver-augmented language models use LLMs to parse natural language logical questions into symbolic representations, and then adopt external logical solvers to take in these representations and output the answers. Despite their impressive performance, any parsing error inevitably causes the external solver's execution to fail, leaving the logical question unanswered. In this paper, we introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers and bypasses parsing errors by learning to strictly adhere to solver syntax and grammar. LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers. Experimental results on two public deductive reasoning datasets demonstrate that LoGiPT outperforms state-of-the-art solver-augmented LMs and few-shot prompting methods on competitive LLMs like ChatGPT and GPT-4

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but it cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres. This protocol produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also measures an instrument's effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements. In our study, fluorescence-per-cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data
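    The core of such a calibration can be sketched as a linear fit of OD against known particle count within the instrument's linear range. All numbers below are hypothetical, not the study's data, and this is only the fitting step, not the full protocol:

```python
import numpy as np

# Hypothetical readings for serial 2-fold dilutions of silica microspheres
# with a known starting particle count per well.
particles = np.array([1e8, 5e7, 2.5e7, 1.25e7, 6.25e6])
od = np.array([0.80, 0.41, 0.20, 0.10, 0.05])  # blanked OD readings

# Least-squares fit of OD = slope * particles through the origin,
# valid only within the instrument's effective linear range.
slope = np.sum(od * particles) / np.sum(particles ** 2)

def od_to_count(od_reading):
    """Convert a blanked OD reading to an estimated particle count."""
    return od_reading / slope

print(f"Estimated count at OD 0.4: {od_to_count(0.4):.3g}")
```

    In practice the fit quality (e.g. fold-deviation of residuals, as in the 95.5% figure above) is what tells a laboratory whether the calibration and the chosen dilution range are acceptable.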

    Identical parallel machine scheduling with assurance of maximum waiting time for an emergency job

    Currently, customer satisfaction plays an increasingly vital role in both manufacturing and service industries. Assuring customers an acceptable waiting time is considered an effective way to improve customer satisfaction. In this study, an identical parallel machine scheduling problem that assures a maximum waiting time for an emergency job arriving at any time is investigated. A mixed integer programming model is formulated, from which a variant formulation is derived. The formulations are further enhanced by various techniques, forming two formulation-based methods. Two objectives, makespan and total completion time, are considered separately. For the makespan, the worst-case approximation ratios of the classical heuristic rules are derived. For the total completion time, efficient bounds are provided and the NP-hardness of the problem is proved. Heuristic methods based on the classical dispatch rules are developed for both cases. Extensive computational experiments are conducted, through which the performances of the formulation-based methods and heuristics are compared, the relationship between the objective values and the assured maximum waiting time for an emergency job is explored, and a few observations and managerial insights are obtained
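    One classical dispatch rule of the kind whose worst-case ratio such analyses consider is Longest Processing Time (LPT). A minimal sketch for identical parallel machines (the makespan objective only, without the emergency-job constraint from the paper):

```python
import heapq

def lpt_makespan(jobs, m):
    """Longest Processing Time rule for identical parallel machines:
    sort jobs in decreasing length and always assign the next job to the
    currently least-loaded machine. Graham's classical worst-case bound
    for this rule's makespan is 4/3 - 1/(3m) times the optimum."""
    loads = [0] * m          # min-heap of machine loads
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + p)
    return max(loads)

print(lpt_makespan([7, 7, 6, 6, 5, 4, 4, 3], 3))  # → 15
```

    Here the total work is 42, so 14 is a lower bound on the optimal makespan with 3 machines; LPT achieves 15, within its guaranteed ratio.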

    Unrelated parallel machine scheduling problem with special controllable processing times and setups

    Controllable processing times (CPTs) have many practical applications, allowing the length of job processing to vary flexibly within an interval at additional cost. In this paper, we study an unrelated parallel machine scheduling problem with machine- and sequence-dependent setup times and a special case of CPTs without extra costs. The objective is to maximize the difference between the sum of the realized processing of all jobs and the makespan. To solve the problem, a mixed integer programming (MIP) model is formulated first, and then a logic-based Benders decomposition (LBBD) method is developed, in which the master problem handles job assignments and the subproblem is further decomposed into a sequencing problem for minimum total setup times on each machine and a processing time determination problem. Several LBBD-based heuristics are also employed by imposing different optimality gaps on the master problem. The performance of the MIP formulation, the exact LBBD method, and the LBBD-based heuristics is compared through extensive computational experiments. The results demonstrate that the exact LBBD method with a preprocessing procedure that generates initial feasible solutions and cuts is effective, and that the proposed LBBD-based heuristics reduce the computation time significantly at the expense of a slight reduction in solution quality
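    The generic LBBD iteration underlying such methods can be sketched as follows. This is a minimization-form skeleton under generic assumptions, not the paper's exact master/subproblem formulations; `solve_master` and `solve_subproblem` are hypothetical callables supplied by the user:

```python
def logic_based_benders(solve_master, solve_subproblem, max_iters=100):
    """Generic logic-based Benders decomposition loop (minimization form):
    the master proposes an assignment and an optimistic bound; the
    subproblem evaluates the assignment's true cost and, when the bound
    is too optimistic, returns cuts that tighten the master."""
    cuts = []
    assignment, true_cost = None, None
    for _ in range(max_iters):
        assignment, master_bound = solve_master(cuts)
        true_cost, new_cuts = solve_subproblem(assignment)
        if true_cost <= master_bound:   # master bound attained: optimal
            return assignment, true_cost
        cuts.extend(new_cuts)           # add Benders cuts and re-solve
    return assignment, true_cost        # best found within the budget
```

    The design choice that distinguishes LBBD from classical Benders is that the subproblem may be a combinatorial problem (here, sequencing with setups) and the cuts are logic-based inferences rather than LP duals.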

    Variable neighborhood search-based methods for integrated hybrid flow shop scheduling with distribution

    With the rapid development of make-to-order businesses, including e-commerce and restaurant takeout and catering services, integrated scheduling and distribution has received growing attention. Based on a practical order picking and distribution system, a three-stage hybrid flow shop scheduling problem with distribution is studied. Each order is processed in a hybrid flow shop consisting of identical parallel machines with sequence-dependent setup times at stage 1, identical parallel machines at stage 2, and dedicated machines at stage 3, followed by a multi-trip traveling salesman problem with capacitated vehicles serving customers in different destination areas. A mixed-integer linear programming model is formulated to minimize the maximum delivery completion time. A variable neighborhood search (VNS)-based method, a four-layered constructive heuristic method (denoted CHVNS), and a hybrid heuristic method (denoted CONSVNS) that combines the VNS and CHVNS methods are developed to solve problems of practical size. Computational experiments show the effectiveness and efficiency of the proposed methods
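    The core VNS loop that such methods build on can be sketched generically (not the paper's exact neighborhoods, shaking, or local search; `neighborhoods` is a hypothetical list of move operators):

```python
def variable_neighborhood_search(initial, neighborhoods, cost, max_iters=1000):
    """Basic VNS skeleton: perturb the incumbent with neighborhood k
    (a single deterministic move here stands in for shaking plus local
    search), accept improvements, and restart from the first neighborhood
    whenever a better solution is found; otherwise switch to the next."""
    best, k = initial, 0
    for _ in range(max_iters):
        candidate = neighborhoods[k](best)
        if cost(candidate) < cost(best):
            best, k = candidate, 0               # improved: restart at k = 0
        else:
            k = (k + 1) % len(neighborhoods)     # try the next neighborhood
    return best

# Toy usage: minimize x^2 over the integers with two unit-step moves.
result = variable_neighborhood_search(
    10, [lambda x: x - 1, lambda x: x + 1], lambda x: x * x, max_iters=200)
print(result)  # → 0
```

    The systematic change of neighborhood is what lets VNS escape local optima of any single move type, which is why it pairs naturally with constructive heuristics like the CHVNS method above.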

    An exact decomposition method for unrelated parallel machine scheduling with order acceptance and setup times

    In this paper, an unrelated parallel machine scheduling problem is studied, where order acceptance, sequence- and machine-dependent setup times, and the maximum available times of machines are additionally considered. The objective is to maximize the profit, defined as the difference between the total revenue of accepted jobs and the cost associated with the makespan. A mixed integer programming (MIP) model is formulated. To tackle this problem efficiently, an exact decomposition method, a two-layer logic-based Benders decomposition (denoted TL-LBBD) method, is developed, in which an inner LBBD is embedded into an outer LBBD. Specifically, in the outer LBBD method, the master problem determines the acceptance of jobs, whereas the subproblem examines the schedule given the accepted jobs from the master problem. The subproblem can be further decomposed into an assignment master problem and a sequencing subproblem by the inner LBBD method. Extensive computational experiments are conducted, and the results show that the developed TL-LBBD method produces better-quality solutions in significantly less computation time than solving the MIP model directly or using the classic LBBD method. Moreover, the maximum scales of the problem instances that can be solved to optimality by the developed TL-LBBD method within 30 min are also evaluated

    An improved formulation and efficient heuristics for the discrete parallel-machine makespan ScheLoc problem

    The scheduling-location (ScheLoc) problem is a new and interesting field that combines two complex problems: the machine-location problem and the scheduling problem. Owing to the NP-hardness of both component problems, the ScheLoc problem is naturally NP-hard. This study investigates a deterministic, discrete parallel-machine ScheLoc problem for minimizing the makespan. A new mixed integer programming formulation based on network flow problems is proposed. Two formulation-based heuristics are developed for small-scale problems. Subsequently, a polynomial-time heuristic is designed for efficiently solving large-scale problems. Extensive computational experiments are conducted on 1450 benchmark problem instances of different scales. The computational results show that our model can solve more problem instances to optimality than that of Heßler and Deghdak (2017) within the same time limit. In addition, the heuristics can yield near-optimal solutions for small-scale problems in a short time. The polynomial-time algorithm outperforms most state-of-the-art methods on large-scale problems in terms of both efficiency and solution quality