    Weak solutions for forward-backward SDEs: a martingale problem approach

    In this paper, we propose a new notion of Forward-Backward Martingale Problem (FBMP) and study its relationship with the weak solution to forward-backward stochastic differential equations (FBSDEs). The FBMP extends the idea of the well-known (forward) martingale problem of Stroock and Varadhan, but it is structured specifically to fit the nature of an FBSDE. We first prove a general sufficient condition for the existence of a solution to the FBMP. In the Markovian case with uniformly continuous coefficients, we show that the weak solution to the FBSDE (or, equivalently, the solution to the FBMP) does exist. Moreover, we prove that the uniqueness of the FBMP (and hence the uniqueness of the weak solution) is determined by the uniqueness of the viscosity solution of the corresponding quasilinear PDE. Comment: Published at http://dx.doi.org/10.1214/08-AOP0383 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org).
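    For concreteness, a Markovian FBSDE of the kind studied here takes the standard form (a generic sketch; the coefficients $b$, $\sigma$, $f$ and terminal function $g$ are placeholders rather than the paper's precise assumptions):

```latex
X_t = x + \int_0^t b(s, X_s, Y_s)\,\mathrm{d}s + \int_0^t \sigma(s, X_s, Y_s)\,\mathrm{d}W_s, \\
Y_t = g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,\mathrm{d}s - \int_t^T Z_s\,\mathrm{d}W_s, \qquad 0 \le t \le T.
```

    A weak solution requires only that some filtered probability space carry a Brownian motion $W$ and adapted processes $(X, Y, Z)$ satisfying these equations, which is what a martingale-problem formulation is designed to capture.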

    m-Government in China: Observations and Reflections

    Mobile and wireless technologies (MWTs), such as wireless laptop computers, personal digital assistants (PDAs), mobile phones, and smartphones, have deeply penetrated our lives. Government agencies use MWTs to enhance their managerial effectiveness and provide high-level services to citizens, taking advantage of their mobility, ubiquity, support for location-based government services, and timely information delivery. Mobile government (m-Government) is taking shape in diverse ways within (as well as between) different countries. China currently has 738.57 million mobile phone users, and 29 cities are deploying “Wireless City” projects. Within this context, we chose six different cities in China to examine m-Government maturity and assess the deployment of m-Government services. We further explored MWT application and its implications in conjunction with a special project in Beijing. Results are discussed and conclusions are drawn.

    ECM-OPCC: Efficient Context Model for Octree-based Point Cloud Compression

    Recently, deep learning methods have shown promising results in point cloud compression. For octree-based point cloud compression, previous works show that the information of ancestor nodes and sibling nodes is equally important for predicting the current node. However, those works either adopt insufficient context or incur intolerable decoding complexity (e.g., >600 s). To address this problem, we propose a sufficient yet efficient context model and design an efficient deep learning codec for point clouds. Specifically, we first propose a window-constrained multi-group coding strategy to exploit the autoregressive context while maintaining decoding efficiency. Then, we propose a dual transformer architecture to utilize the dependency of the current node on its ancestors and siblings. We also propose a random-masking pre-training method to enhance our model. Experimental results show that our approach achieves state-of-the-art performance for both lossy and lossless point cloud compression. Moreover, our multi-group coding strategy saves 98% of decoding time compared with previous octree-based compression methods.
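    The window-constrained multi-group idea can be illustrated with a toy scheduling function (a hypothetical reconstruction; the actual group count, window size, and network conditioning used in the paper are not specified here):

```python
# Toy sketch of a window-constrained multi-group coding order (an
# illustrative reconstruction, not the paper's exact scheme).

def multi_group_order(num_nodes, num_groups=4, window=8):
    """Assign each node of one octree level to a coding group.

    Nodes are split into fixed-size windows; within a window, the node at
    position p joins group p % num_groups. All nodes of group g (across
    windows) are coded in parallel, conditioned only on earlier groups
    inside the same window, so decoding needs num_groups sequential
    passes instead of num_nodes.
    """
    schedule = [[] for _ in range(num_groups)]
    for idx in range(num_nodes):
        pos_in_window = idx % window
        schedule[pos_in_window % num_groups].append(idx)
    return schedule

# Example: 16 nodes, 4 groups -> 4 sequential decoding passes.
groups = multi_group_order(16)
```

    Because nodes in the same group are decoded in parallel, the number of sequential model evaluations per octree level drops from the node count to the group count, which is the source of the reported decoding-time savings.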

    THiFLY Research at SemEval-2023 Task 7: A Multi-granularity System for CTR-based Textual Entailment and Evidence Retrieval

    The NLI4CT task aims to entail hypotheses based on Clinical Trial Reports (CTRs) and retrieve the corresponding evidence supporting the justification. This task poses a significant challenge, as verifying hypotheses in the NLI4CT task requires the integration of multiple pieces of evidence from one or two CTR(s) and the application of diverse levels of reasoning, including textual and numerical. To address these problems, we present a multi-granularity system for CTR-based textual entailment and evidence retrieval in this paper. Specifically, we construct a Multi-granularity Inference Network (MGNet) that exploits sentence-level and token-level encoding to handle both the textual entailment and evidence retrieval tasks. Moreover, we enhance the numerical inference capability of the system by leveraging a T5-based model, SciFive, which is pre-trained on a medical corpus. Model ensembling and a joint inference method are further utilized in the system to increase the stability and consistency of inference. The system achieves F1-scores of 0.856 and 0.853 on the textual entailment and evidence retrieval tasks, respectively, the best performance on both subtasks. The experimental results corroborate the effectiveness of our proposed method. Our code is publicly available at https://github.com/THUMLP/NLI4CT. Comment: Accepted by SemEval-2023.
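    The ensembling step can be sketched as simple probability averaging (an illustrative assumption; the paper's actual ensembling and joint inference rules may combine model scores differently):

```python
# Minimal sketch of model ensembling for the entailment decision.
# The 0.5 threshold and plain averaging are assumptions for illustration.

def ensemble_entailment(prob_lists, threshold=0.5):
    """Average per-model entailment probabilities and threshold them.

    prob_lists: one list per ensemble member, each holding one
    entailment probability per hypothesis. Returns a label per
    hypothesis.
    """
    n_models = len(prob_lists)
    n_items = len(prob_lists[0])
    labels = []
    for i in range(n_items):
        avg = sum(p[i] for p in prob_lists) / n_models
        labels.append("Entailment" if avg >= threshold else "Contradiction")
    return labels

# Two ensemble members scoring two hypotheses.
labels = ensemble_entailment([[0.9, 0.2], [0.7, 0.4]])
```

    Averaging over members smooths out individual-model noise, which is one common way to obtain the stability and consistency gains the abstract mentions.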

    Towards provably efficient quantum algorithms for large-scale machine-learning models

    Large machine learning models are revolutionary technologies of artificial intelligence whose bottlenecks include huge computational expenses, power, and time used in both the pre-training and fine-tuning processes. In this work, we show that fault-tolerant quantum computing could possibly provide provably efficient resolutions for generic (stochastic) gradient descent algorithms, scaling as $\mathcal{O}(T^2 \times \mathrm{polylog}(n))$, where $n$ is the size of the models and $T$ is the number of iterations in the training, as long as the models are both sufficiently dissipative and sparse, with small learning rates. Based on earlier efficient quantum algorithms for dissipative differential equations, we find and prove that similar algorithms work for (stochastic) gradient descent, the primary algorithm for machine learning. In practice, we benchmark instances of large machine learning models from 7 million to 103 million parameters. We find that, in the context of sparse training, a quantum enhancement is possible at the early stage of learning after model pruning, motivating a sparse parameter download and re-upload scheme. Our work shows solidly that fault-tolerant quantum algorithms could potentially contribute to most state-of-the-art, large-scale machine-learning problems. Comment: 7+30 pages, 3+5 figures.
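    To get a feel for the claimed scaling, one can compare the stated quantum cost $\mathcal{O}(T^2 \times \mathrm{polylog}(n))$ against the naive classical per-iteration cost $\mathcal{O}(Tn)$ of gradient descent (a back-of-envelope sketch; constant factors are ignored and polylog(n) is modeled as $\log_2^2 n$ purely for illustration):

```python
import math

def classical_cost(T, n):
    # Naive classical gradient descent: T iterations, O(n) work each.
    return T * n

def quantum_cost(T, n):
    # Claimed quantum scaling O(T^2 * polylog(n)); polylog is modeled
    # as log2(n)**2 here, an assumption for illustration only.
    return T ** 2 * math.log2(n) ** 2

# For a 10^8-parameter model, the quantum estimate is smaller at small
# T but loses once T grows, since it is quadratic in iteration count.
n = 10 ** 8
few_iters_wins = quantum_cost(10, n) < classical_cost(10, n)
many_iters_loses = quantum_cost(10 ** 6, n) > classical_cost(10 ** 6, n)
```

    This quadratic dependence on $T$ is consistent with the abstract's observation that the quantum enhancement is most plausible at the early stage of training, when the iteration count is still small.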

    Zigzag magnetic order in a novel tellurate compound Na$_{4-\delta}$NiTeO$_6$ with $S$ = 1 chains

    Na$_{4-\delta}$NiTeO$_6$ is a rare example in the transition-metal tellurate family of realizing an $S$ = 1 spin-chain structure. By performing neutron powder diffraction measurements, the ground-state magnetic structure of Na$_{4-\delta}$NiTeO$_6$ is determined. These measurements reveal that below $T_{\mathrm{N}} \sim 6.8(2)$ K, the Ni$^{2+}$ moments form a screwed ferromagnetic (FM) spin-chain structure running along the crystallographic $a$ axis, but these FM spin chains are coupled antiferromagnetically along the $b$ and $c$ directions, giving rise to a magnetic propagation vector of $k$ = (0, 1/2, 1/2). This zigzag magnetic order is well supported by first-principles calculations. The moment size of the Ni$^{2+}$ spins is determined to be 2.1(1) $\mu_{\mathrm{B}}$ at 3 K, suggesting a significant quenching of the orbital moment due to the crystalline electric field (CEF) effect. The previously reported metamagnetic transition near $H_{\mathrm{C}} \sim 0.1$ T can be understood as a field-induced spin-flip transition. The relatively easy tunability of the dimensionality of its magnetism by external parameters makes Na$_{4-\delta}$NiTeO$_6$ a promising candidate for further exploring various types of novel spin-chain physics. Comment: 10 pages, 6 figures.