
    Bounded low and high sets

    Anderson and Csima (Notre Dame J Form Log 55(2):245–264, 2014) defined a jump operator, the bounded jump, with respect to bounded Turing (or weak truth table) reducibility. They showed that the bounded jump is closely related to the Ershov hierarchy and that it satisfies an analogue of Shoenfield jump inversion. We show that there are high bounded low sets and low bounded high sets. Thus, the information coded in the bounded jump is quite different from that of the standard jump. We also consider whether the analogue of the Jump Theorem holds for the bounded jump: do we have A ≤_bT B if and only if A^b ≤_1 B^b? We show that the forward direction holds but not the reverse.
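    For context, a hedged sketch of the notions involved (following Anderson and Csima up to notational details, which may differ from theirs):

```latex
% Bounded Turing (weak truth table) reducibility: A \le_{bT} B iff A = \Phi^B
% for a Turing functional whose use on input x is bounded by a computable
% function of x.
% The bounded jump of a set A, up to notational details:
A^b = \bigl\{\, x \;:\; \exists i \le x \,\bigl[ \varphi_i(x)\!\downarrow \;\wedge\; \Phi_x^{A \upharpoonright \varphi_i(x)}(x)\!\downarrow \bigr] \,\bigr\}
```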

    Element sets for high-order Poincaré mapping of perturbed Keplerian motion

    The propagation and Poincaré mapping of perturbed Keplerian motion is a key topic in celestial mechanics and astrodynamics, e.g. to study the stability of orbits or design bounded relative trajectories. The high-order transfer map (HOTM) method enables efficient mapping of perturbed Keplerian orbits over many revolutions. For this, the method uses the high-order Taylor expansion of a Poincaré or stroboscopic map, which is accurate close to the expansion point. In this paper, we investigate the performance of the HOTM method using different element sets for building the high-order map. The element sets investigated are the classical orbital elements, modified equinoctial elements, Hill variables, cylindrical coordinates and Deprit's ideal elements. The performances of the different coordinate sets are tested by comparing the accuracy and efficiency of mapping low-Earth and highly-elliptical orbits perturbed by J_2 against numerical propagation. The accuracy of HOTM depends strongly on the choice of elements and the type of orbit. A new set of elements is introduced that enables extremely accurate mapping of the state, even for high eccentricities and higher-order zonal perturbations. Finally, the high-order map is shown to be very useful for the determination and study of fixed points and centre manifolds of Poincaré maps.
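    As a toy illustration of the HOTM idea (not the paper's implementation, which uses differential algebra and the element sets above), the sketch below builds a second-order Taylor expansion of a stroboscopic map by finite differences for a perturbed pendulum, then iterates the cheap polynomial map in place of repeated numerical propagation:

```python
# Toy illustration of the high-order transfer map (HOTM) idea: approximate a
# stroboscopic map by a Taylor polynomial around an expansion point, then
# iterate the cheap polynomial instead of re-integrating the ODE each
# revolution. A perturbed pendulum stands in for the J_2-perturbed Keplerian
# dynamics; a real HOTM uses differential algebra, not finite differences.
import numpy as np
from scipy.integrate import solve_ivp

EPS = 0.05          # perturbation strength (illustrative)
PERIOD = 2 * np.pi  # stroboscopic sampling period

def strobe_map(y0):
    """One application of the stroboscopic map: integrate over one period."""
    rhs = lambda t, y: [y[1], -np.sin(y[0]) + EPS * np.cos(t)]
    return solve_ivp(rhs, (0.0, PERIOD), y0, rtol=1e-11, atol=1e-12).y[:, -1]

def taylor_map(center, h=1e-3):
    """Second-order Taylor expansion of strobe_map around `center`."""
    n = len(center)
    f0 = strobe_map(center)
    J = np.zeros((n, n))     # Jacobian, by central differences
    H = np.zeros((n, n, n))  # Hessians of each output component
    E = h * np.eye(n)
    for j in range(n):
        J[:, j] = (strobe_map(center + E[j]) - strobe_map(center - E[j])) / (2 * h)
        for k in range(n):
            H[:, j, k] = (strobe_map(center + E[j] + E[k])
                          - strobe_map(center + E[j] - E[k])
                          - strobe_map(center - E[j] + E[k])
                          + strobe_map(center - E[j] - E[k])) / (4 * h * h)
    def poly(y):
        d = y - center
        return f0 + J @ d + 0.5 * np.einsum("ijk,j,k->i", H, d, d)
    return poly

center = np.array([0.10, 0.00])           # expansion point
pmap = taylor_map(center)
y_true = y_poly = np.array([0.12, 0.01])  # nearby initial condition
for _ in range(20):                       # 20 "revolutions"
    y_true, y_poly = strobe_map(y_true), pmap(y_poly)
print("map error after 20 iterations:", np.linalg.norm(y_true - y_poly))
```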

    Quantitative observability for one-dimensional Schrödinger equations with potentials

    In this note, we prove quantitative observability with an explicit control cost for the 1D Schrödinger equation over ℝ with real-valued, bounded continuous potential on thick sets. Our proof relies on different techniques for low-frequency and high-frequency estimates. In particular, we extend the large-time observability result for the 1D free Schrödinger equation in Theorem 1.1 of Huang-Wang-Wang [20] to any short time. As another byproduct, we extend the spectral inequality of Lebeau-Moyano [27] for real-analytic potentials to bounded continuous potentials in the one-dimensional case.
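    For orientation, a hedged sketch of the type of statement involved; the normalization and the precise dependence of the constant are assumptions here, not the paper's exact theorem:

```latex
% A measurable set \omega \subseteq \mathbb{R} is (\gamma, L)-thick if
%   |\omega \cap I| \ge \gamma L  for every interval I of length L.
% Quantitative observability from \omega in time T for the equation
% i\partial_t u = (-\Delta + V)u then asserts a bound of the shape
\|u_0\|_{L^2(\mathbb{R})}^2 \;\le\; C\bigl(\gamma, L, T, \|V\|_{\infty}\bigr)
  \int_0^T\!\!\int_\omega |u(t,x)|^2 \, dx \, dt ,
% with an explicit control cost C.
```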

    Fast Algorithms at Low Temperatures via Markov Chains

    For spin systems, such as the hard-core model on independent sets weighted by fugacity λ > 0, efficient algorithms for the associated approximate counting/sampling problems typically apply in the high-temperature region, corresponding to low fugacity. Recent work of Jenssen, Keevash and Perkins (2019) yields an FPTAS for approximating the partition function (and an efficient sampling algorithm) on bounded-degree (bipartite) expander graphs for the hard-core model at sufficiently high fugacity, and also for the ferromagnetic Potts model at sufficiently low temperatures. Their method is based on using the cluster expansion to obtain a complex zero-free region for the partition function of a polymer model, and then approximating this partition function using the polynomial interpolation method of Barvinok. We present a simple discrete-time Markov chain for abstract polymer models, and give an elementary proof of rapid mixing of this new chain under sufficient decay of the polymer weights. Applying these general polymer results to the hard-core and ferromagnetic Potts models on bounded-degree (bipartite) expander graphs yields fast algorithms with running time O(n log n) for the Potts model and O(n^2 log n) for the hard-core model, in contrast to typical running times of n^{O(log Δ)} for algorithms based on Barvinok's polynomial interpolation method on graphs of maximum degree Δ. In addition, our approach via our polymer-model Markov chain is conceptually simpler, as it circumvents the zero-free analysis and the generalization to complex parameters. Finally, we combine our results for the hard-core and ferromagnetic Potts models with standard Markov chain comparison tools to obtain polynomial mixing time for the usual spin-system Glauber dynamics restricted to even and odd or "red" dominant portions of the respective state spaces.
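    A minimal sketch of an add/remove Markov chain on an abstract polymer model, in the spirit of the chain described above; the polymer universe, weight function and compatibility relation are illustrative stand-ins, not the paper's exact chain or parameters:

```python
# Metropolis-style toggle dynamics on an abstract polymer model. State: a set
# of pairwise-compatible polymers; the stationary distribution is proportional
# to the product of the polymer weights.
import random

# Toy polymer universe: polymers are intervals of 1 or 2 vertices on a path;
# two polymers are compatible iff their vertex sets neither share nor touch.
N = 10
POLYMERS = [frozenset(range(i, i + size)) for size in (1, 2)
            for i in range(N - size + 1)]

def weight(poly, lam=0.1):
    # Exponentially decaying weights, the kind of decay rapid mixing needs.
    return lam ** len(poly)

def compatible(a, b):
    return all(abs(x - y) > 1 for x in a for y in b)

def step(config):
    """One toggle: pick a polymer uniformly; try to add or remove it."""
    gamma = random.choice(POLYMERS)
    if gamma in config:
        # Removal, accepted with the Metropolis ratio 1/weight (here always 1).
        if random.random() < min(1.0, 1.0 / weight(gamma)):
            config = config - {gamma}
    elif all(compatible(gamma, g) for g in config):
        # Addition of a compatible polymer, accepted with probability weight.
        if random.random() < min(1.0, weight(gamma)):
            config = config | {gamma}
    return config

config = frozenset()
for _ in range(10_000):
    config = step(config)
print("sampled polymer configuration:", sorted(map(sorted, config)))
```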

    Small Width, Low Distortions: Quantized Random Embeddings of Low-complexity Sets

    Under which conditions and with which distortions can we preserve the pairwise distances of low-complexity vectors, e.g., for structured sets such as the set of sparse vectors or that of low-rank matrices, when these are mapped into a finite set of vectors? This work addresses this general question through the specific use of a quantized and dithered random linear mapping which combines, in the following order, a sub-Gaussian random projection into ℝ^M of vectors in ℝ^N, a random translation, or "dither", of the projected vectors, and a uniform scalar quantizer of resolution δ > 0 applied componentwise. Thanks to this quantized mapping we are first able to show that, with high probability, an embedding of a bounded set K ⊂ ℝ^N in δℤ^M can be achieved when distances in the quantized and in the original domains are measured with the ℓ_1- and ℓ_2-norm, respectively, provided the number of quantized observations M is large compared to the square of the "Gaussian mean width" of K. In this case, we show that the embedding is actually "quasi-isometric" and suffers only multiplicative and additive distortions whose magnitudes decrease as M^{-1/5} for general sets, and as M^{-1/2} for structured sets, as M increases. Second, when one is only interested in characterizing the maximal distance separating two elements of K mapped to the same quantized vector, i.e., the "consistency width" of the mapping, we show that for a similar number of measurements and with high probability this width decays as M^{-1/4} for general sets and as 1/M for structured ones as M increases. Finally, as an important aspect of our work, we also establish how the non-Gaussianity of the mapping impacts the class of vectors that can be embedded or whose consistency width provably decays as M increases.
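    The mapping itself is easy to state in code. Below, a minimal sketch with a Gaussian projection (one admissible sub-Gaussian choice) and illustrative parameter values, checking numerically that the normalized ℓ_1 distance of the images tracks the ℓ_2 distance of sparse inputs:

```python
# Quantized, dithered random embedding: y = Q_delta(Phi x + xi), with Phi a
# (sub-)Gaussian random matrix, xi a uniform dither on [0, delta)^M, and
# Q_delta a uniform scalar quantizer applied componentwise. Dimensions and
# resolution below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, M, delta = 128, 2000, 0.5

Phi = rng.standard_normal((M, N))        # Gaussian random projection
xi = rng.uniform(0.0, delta, size=M)     # random dither

def embed(x):
    """Quantized dithered mapping into delta * Z^M."""
    return delta * np.floor((Phi @ x + xi) / delta)

def sparse_vec(k=5):
    """A random k-sparse vector, a typical structured (low-complexity) input."""
    x = np.zeros(N)
    x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    return x

x, y = sparse_vec(), sparse_vec()
d_in = np.linalg.norm(x - y)                        # l2 in the input domain
d_out = np.linalg.norm(embed(x) - embed(y), 1) / M  # normalized l1 in the image
# For Gaussian Phi the two agree up to the constant E|g| = sqrt(2/pi)
# and the small multiplicative/additive distortions discussed above.
print(f"input l2 distance {d_in:.3f}, embedded l1 distance {d_out:.3f}")
```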

    Restricted strong convexity and weighted matrix completion: Optimal bounds with noise

    We consider the matrix completion problem under a form of row/column-weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong convexity with respect to the weighted Frobenius norm. Using this property, we obtain as corollaries a number of error bounds on matrix completion in the weighted Frobenius norm under noisy sampling, for both exactly low-rank and near low-rank matrices. Our results are based on measures of the "spikiness" and "low-rankness" of matrices that are less restrictive than the incoherence conditions imposed in previous work. Our technique involves an M-estimator that includes controls on both the rank and spikiness of the solution, and we establish non-asymptotic error bounds in the weighted Frobenius norm for recovering matrices lying within ℓ_q-"balls" of bounded spikiness. Using information-theoretic methods, we show that no algorithm can achieve better estimates (up to a logarithmic factor) over these same sets, showing that our conditions on matrices and associated rates are essentially optimal.
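    To make the estimator concrete, here is a hedged sketch of a nuclear-norm-penalized M-estimator for noisy, uniformly sampled matrix completion, solved by proximal gradient with singular-value thresholding and a crude entrywise clip standing in for the spikiness control; it is an illustrative solver, not the paper's exact estimator or weighting:

```python
# Nuclear-norm-penalized least squares for noisy matrix completion via
# proximal gradient descent (singular-value thresholding). All parameter
# values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
d, r, frac = 50, 3, 0.4                      # size, true rank, sampling rate
M_true = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
mask = rng.random((d, d)) < frac             # uniform entrywise sampling
Y = np.where(mask, M_true + 0.1 * rng.standard_normal((d, d)), 0.0)

def svt(A, tau):
    """Prox of tau * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

lam, step, alpha = 0.5, 1.0, 10.0            # penalty, step size, spikiness cap
X = np.zeros((d, d))
for _ in range(200):
    grad = np.where(mask, X - Y, 0.0)        # gradient of 0.5*||P(X) - Y||_F^2
    X = svt(X - step * grad, step * lam)     # proximal (rank-controlling) step
    X = np.clip(X, -alpha, alpha)            # crude control on entrywise spikiness
print("relative error:", np.linalg.norm(X - M_true) / np.linalg.norm(M_true))
```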

    Optimisation of Low-Thrust and Hybrid Earth-Moon Transfers

    This paper presents an optimisation procedure to generate fast and low-∆v Earth-Moon transfer trajectories by exploiting the multi-body dynamics of the Sun-Earth-Moon system. Ideal (first-guess) trajectories are generated first, using two coupled planar circular restricted three-body problems, one representing the Earth-Moon system and one the Sun-Earth system. The trajectories consist of a first ballistic arc in the Sun-Earth system and a second ballistic arc in the Earth-Moon system. The two arcs are connected at a patching point at one end (with an instantaneous ∆v) and are bounded at the Earth and the Moon, respectively, at the other end. Families of these trajectories are found by means of an evolutionary optimisation method. Subsequently, they are used as first guesses for solving an optimal control problem, in which the full three-dimensional four-body problem is introduced and the patching point is set free. The objective of the optimisation is to reduce the total ∆v and the time of flight, while enforcing constraints on the transfer boundary conditions and on the considered propulsion technology. Sets of different optimal trajectories are presented, which represent trade-off options between ∆v and time of flight. These optimal transfers include conventional solar-electric low-thrust and hybrid chemical/solar-electric high/low-thrust options, envisaging future spacecraft that can carry both systems. A final comparison is made between the optimal transfers found and purely chemical high-thrust optimal solutions retrieved from the literature.
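    The building block used twice here (Earth-Moon and Sun-Earth) is the planar circular restricted three-body problem. A minimal propagation sketch, in nondimensional rotating-frame units with an approximate Earth-Moon mass parameter (the initial state is purely illustrative):

```python
# Planar circular restricted three-body problem (PCR3BP) equations of motion
# in the rotating frame, and the propagation of one ballistic arc. Patching
# two such arcs (one per three-body system) with an instantaneous dv gives
# first-guess transfers of the kind described above.
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.01215  # Earth-Moon mass parameter m2/(m1+m2), approximate

def pcr3bp(t, s, mu=MU):
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)      # distance to the larger primary (Earth)
    r2 = np.hypot(x - 1 + mu, y)  # distance to the smaller primary (Moon)
    ax = x + 2 * vy - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = y - 2 * vx - (1 - mu) * y / r1**3 - mu * y / r2**3
    return [vx, vy, ax, ay]

s0 = [0.83, 0.0, 0.0, 0.13]  # illustrative initial state on the Moon side
arc = solve_ivp(pcr3bp, (0.0, 10.0), s0, rtol=1e-10, atol=1e-12)
print("final rotating-frame state:", arc.y[:, -1])
```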

    Simulation and Estimation of Loss Given Default

    The aim of our paper is the development of an adequate estimation model for the loss given default, which incorporates the empirically observed bimodality and bounded nature of the distribution. To this end, we introduce an adjusted Expectation Maximization algorithm to estimate the parameters of a univariate mixture distribution consisting of two beta distributions. These estimates are then compared with the maximum likelihood estimators to test the efficiency and accuracy of both algorithms. Furthermore, we compare our derived estimation model with estimation models proposed in the literature on a synthesized loan portfolio. The simulated loan portfolio consists of possibly loss-influencing parameters that are merged with loss given default observations via a quasi-random approach. Our results show that our proposed model exhibits more accurate loss given default estimators than the benchmark models for different simulated data sets comprising obligor-specific parameters with either high or low predictive power for the loss given default.
    Keywords: Bimodality, EM Algorithm, Loss Given Default, Maximum Likelihood, Mixture Distribution, Portfolio Simulation
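    A minimal sketch of a plain EM algorithm for a two-component beta mixture on synthetic bimodal LGD data; the weighted-MLE M-step, initialization, and iteration count are illustrative choices, not the paper's adjusted algorithm:

```python
# EM for a mixture of two beta distributions on [0, 1], the kind of bimodal,
# bounded model described above for loss given default.
import numpy as np
from scipy.stats import beta
from scipy.optimize import minimize

rng = np.random.default_rng(2)
# Synthetic bimodal LGD sample: a low-loss and a high-loss regime.
x = np.concatenate([beta.rvs(2, 8, size=600, random_state=rng),
                    beta.rvs(8, 2, size=400, random_state=rng)])

def fit_weighted_beta(x, w, init):
    """Weighted MLE of beta parameters (M-step for one component)."""
    nll = lambda p: -np.sum(w * beta.logpdf(x, p[0], p[1]))
    res = minimize(nll, init, bounds=[(1e-3, None)] * 2, method="L-BFGS-B")
    return res.x

params = np.array([[1.5, 5.0], [5.0, 1.5]])  # initial guesses per component
pi = np.array([0.5, 0.5])                    # mixture weights
for _ in range(50):
    # E-step: responsibility of each component for each observation.
    dens = np.stack([pi[k] * beta.pdf(x, *params[k]) for k in range(2)])
    resp = dens / dens.sum(axis=0)
    # M-step: update mixture weights and component parameters.
    pi = resp.mean(axis=1)
    params = np.array([fit_weighted_beta(x, resp[k], params[k]) for k in range(2)])

print("weights:", pi, "component params:", params)
```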