
    Quadratic BSDEs with convex generators and unbounded terminal conditions

    In a previous work, we proved an existence result for BSDEs with generators that are quadratic with respect to the variable z and with unbounded terminal conditions. However, no uniqueness result was stated in that work. The main goal of this paper is to fill this gap. In order to obtain a comparison theorem for this kind of BSDE, we assume that the generator is convex with respect to the variable z. Under this convexity assumption, we are also able to prove a stability result in the spirit of the a priori estimates stated in the article of N. El Karoui, S. Peng and M.-C. Quenez. With these tools in hand, we can derive the nonlinear Feynman--Kac formula in this context.
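    For orientation, a BSDE of the type considered here can be written in the standard form below; the notation is generic, not copied from the paper. The pair (Y, Z) solves

    ```latex
    Y_t \;=\; \xi \;+\; \int_t^T f(s, Y_s, Z_s)\,\mathrm{d}s \;-\; \int_t^T Z_s\,\mathrm{d}W_s,
    \qquad 0 \le t \le T,
    ```

    where ξ is the (possibly unbounded) terminal condition and the generator f has quadratic growth in z, e.g. |f(t,y,z)| ≤ α + β|y| + (γ/2)|z|², and is additionally assumed convex in z for the comparison and stability results.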

    Normally ordered forms of powers of differential operators and their combinatorics

    We investigate the combinatorics of the general formulas for the powers of the operator h∂^k, where h is a central element of a ring and ∂ is a differential operator. This generalizes previous work on the powers of the operators h∂. New formulas for the generalized Stirling numbers are obtained.

    Ministerio de Economía y Competitividad MTM2016-75024-P; Junta de Andalucía P12-FQM-2696; Junta de Andalucía FQM–33
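    A minimal sketch of the simplest case mentioned above (h = x, k = 1), where the powers (x∂)^n normal-order via Stirling numbers of the second kind; the code and the identity it checks are standard facts, not taken from the paper:

    ```python
    # (x d/dx)^n = sum_{k} S(n,k) x^k (d/dx)^k, with S(n,k) the Stirling
    # numbers of the second kind. Applying both sides to the monomial x^m
    # gives the equivalent numeric identity
    #   m^n = sum_k S(n,k) * m*(m-1)*...*(m-k+1),
    # which we verify below.

    def stirling2(n, k):
        """Stirling number of the second kind via the standard recurrence."""
        if n == k == 0:
            return 1
        if n == 0 or k == 0:
            return 0
        return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

    def falling(m, k):
        """Falling factorial m*(m-1)*...*(m-k+1)."""
        out = 1
        for i in range(k):
            out *= m - i
        return out

    def check(n, m):
        """Compare both sides of the normal-ordering identity on x^m."""
        lhs = m ** n
        rhs = sum(stirling2(n, k) * falling(m, k) for k in range(n + 1))
        return lhs == rhs

    assert all(check(n, m) for n in range(8) for m in range(8))
    ```

    The generalized Stirling numbers studied in the paper play the analogous role for h∂^k with arbitrary central h and k ≥ 1.
    
    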

    Entanglement monotones and maximally entangled states in multipartite qubit systems

    We present a method to construct entanglement measures for pure states of multipartite qubit systems. The key element of our approach is an antilinear operator that we call "comb" in reference to the "hairy-ball theorem". For qubits (or spin 1/2) the combs are automatically invariant under SL(2,ℂ). This implies that the "filters" obtained from the combs are entanglement monotones by construction. We give alternative formulae for the concurrence and the 3-tangle as expectation values of certain antilinear operators. As an application we discuss inequivalent types of genuine four-, five- and six-qubit entanglement.

    Comment: 7 pages, revtex4. Talk presented at the Workshop on "Quantum entanglement in physical and information sciences", SNS Pisa, December 14-18, 200
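    As a concrete instance of such an antilinear expectation value, the two-qubit concurrence of Wootters takes exactly this form; the formula below is the standard one, stated here for orientation:

    ```latex
    C(\psi) \;=\; \bigl|\langle \psi^* |\, \sigma_y \otimes \sigma_y \,| \psi \rangle\bigr|,
    ```

    where |ψ*⟩ denotes complex conjugation of |ψ⟩ in the computational basis. The conjugation composed with σ_y ⊗ σ_y is the antilinear operator; the combs of the paper generalize this construction to more qubits.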

    LTM: Scalable and Black-box Similarity-based Test Suite Minimization based on Language Models

    Test suites tend to grow as software evolves, often making it infeasible to execute all test cases within the allocated testing budget, especially for large software systems. Therefore, test suite minimization (TSM) is employed to improve the efficiency of software testing by removing redundant test cases, thus reducing testing time and resources while maintaining the fault detection capability of the test suite. Most TSM approaches rely on code coverage (white-box) or model-based features, which are not always available to test engineers. Recent TSM approaches that rely only on test code (black-box) have been proposed, such as ATM and FAST-R. To address their scalability limitations, we propose LTM (Language model-based Test suite Minimization), a novel, scalable, and black-box similarity-based TSM approach based on large language models (LLMs). To support similarity measurement, we investigated three different pre-trained language models: CodeBERT, GraphCodeBERT, and UniXcoder, to extract embeddings of test code, on which we computed two similarity measures: Cosine Similarity and Euclidean Distance. Our goal is to find similarity measures that are not only computationally more efficient but can also better guide a Genetic Algorithm (GA), thus reducing the overall search time. Experimental results, under a 50% minimization budget, showed that the best configuration of LTM (using UniXcoder with Cosine Similarity) outperformed the best two configurations of ATM in three key facets: (a) achieving a greater saving rate of testing time (40.38% versus 38.06%, on average); (b) attaining a significantly higher fault detection rate (0.84 versus 0.81, on average); and, more importantly, (c) minimizing test suites much faster (26.73 minutes versus 72.75 minutes, on average) in terms of both preparation time (up to two orders of magnitude faster) and search time (one order of magnitude faster).
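    A minimal sketch (not the authors' code) of the two similarity measures LTM computes over test-case embeddings. In LTM the vectors come from a pre-trained model (CodeBERT, GraphCodeBERT, or UniXcoder); here small toy vectors stand in for model outputs so the arithmetic is self-contained:

    ```python
    import math

    def cosine_similarity(u, v):
        """Cosine of the angle between two embedding vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    def euclidean_distance(u, v):
        """Straight-line distance between two embedding vectors."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    # Toy embeddings standing in for model outputs.
    t1 = [0.9, 0.1, 0.3]
    t2 = [0.8, 0.2, 0.4]   # similar to t1 -> redundancy candidate
    t3 = [-0.5, 0.9, 0.1]  # dissimilar -> worth keeping

    # The GA would use such pairwise scores to decide which half of the
    # suite to keep under the minimization budget.
    assert cosine_similarity(t1, t2) > cosine_similarity(t1, t3)
    assert euclidean_distance(t1, t2) < euclidean_distance(t1, t3)
    ```

    The measures are cheap to evaluate once embeddings are cached, which is one reason a similarity-guided GA can search large suites quickly.
    
    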