
    Sharp Error Bounds on Quantum Boolean Summation in Various Settings

    We study the quantum summation (QS) algorithm of Brassard, Høyer, Mosca and Tapp, which approximates the arithmetic mean of a Boolean function defined on $N$ elements. We improve the error bounds presented in [1] in the worst-probabilistic setting, and present new error bounds in the average-probabilistic setting. In particular, in the worst-probabilistic setting, we prove that the error of the QS algorithm using $M - 1$ queries is $3\pi/(4M)$ with probability $8/\pi^2$, which improves the error bound $\pi M^{-1} + \pi^2 M^{-2}$ of Brassard et al. We also present bounds that hold with probabilities $p \in (1/2, 8/\pi^2]$ and show they are sharp for large $M$ and $NM^{-1}$. In the average-probabilistic setting, we prove that the QS algorithm has error of order $\min\{M^{-1}, N^{-1/2}\}$ if $M$ is divisible by 4. This bound is optimal, as recently shown in [10]. For $M$ not divisible by 4, the QS algorithm is far from optimal if $M \ll N^{1/2}$, since its error is proportional to $M^{-1}$.
    Comment: 32 pages, 2 figures
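The improvement in the worst-probabilistic bound can be checked with elementary arithmetic: the new constant $3\pi/4 \approx 2.36$ is already smaller than $\pi \approx 3.14$, before the lower-order $\pi^2 M^{-2}$ term is even dropped. A minimal sketch (the function names are ours, for illustration only):

```python
import math

def qs_error_sharp(M):
    """Sharpened worst-probabilistic error bound for the QS algorithm
    with M - 1 queries, holding with probability 8/pi^2."""
    return 3 * math.pi / (4 * M)

def qs_error_brassard(M):
    """Original bound of Brassard et al.: pi/M + pi^2/M^2."""
    return math.pi / M + math.pi ** 2 / M ** 2

# The sharpened bound is strictly smaller for every M >= 1.
for M in (4, 16, 64, 256):
    assert qs_error_sharp(M) < qs_error_brassard(M)
```

The gap narrows as $M$ grows, since the $\pi^2 M^{-2}$ term vanishes faster, but the leading constant remains better by a factor of $3/4$.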

    Quantum Algorithms and Complexity for Certain Continuous and Related Discrete Problems

    The thesis contains an analysis of two computational problems. The first problem is discrete quantum Boolean summation. This problem is a building block of quantum algorithms for many continuous problems, such as integration, approximation, differential equations and path integration. The second problem is continuous multivariate Feynman-Kac path integration, which is a special case of path integration. The quantum Boolean summation problem can be solved by the quantum summation (QS) algorithm of Brassard, Høyer, Mosca and Tapp, which approximates the arithmetic mean of a Boolean function. We improve the error bound of Brassard et al. for the worst-probabilistic setting. Our error bound is sharp. We also present new sharp error bounds in the average-probabilistic and worst-average settings. Our average-probabilistic error bounds prove the optimality of the QS algorithm for a certain choice of its parameters. The study of the worst-average error shows that the QS algorithm is not optimal in this setting; we need to use a certain number of repetitions to regain its optimality. The multivariate Feynman-Kac path integration problem for smooth multivariate functions suffers from the provable curse of dimensionality in the worst-case deterministic setting, i.e., the minimal number of function evaluations needed to compute an approximation depends exponentially on the number of variables. We show that in both the randomized and quantum settings the curse of dimensionality is vanquished, i.e., the minimal number of function evaluations and/or quantum queries required to compute an approximation depends only polynomially on the reciprocal of the desired accuracy and has a bound independent of the number of variables. The exponents of these polynomials are 2 in the randomized setting and 1 in the quantum setting. These exponents can be lowered at the expense of the dependence on the number of variables.
Hence, the quantum setting yields exponential speedup over the worst-case deterministic setting, and quadratic speedup over the randomized setting.
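The randomized exponent 2 reflects the familiar $n^{-1/2}$ Monte Carlo rate: halving the error requires four times as many sample paths, but the cost does not grow with the number of variables. A minimal classical sketch of a randomized Feynman-Kac estimator, dimension-independent by construction (all names and the zero-potential test case are ours, for illustration only; this is not the thesis's algorithm):

```python
import numpy as np

def feynman_kac_mc(f, V, x, t, n_paths=4000, n_steps=64, seed=0):
    """Monte Carlo estimate of the Feynman-Kac functional
        u(t, x) = E[ exp( int_0^t V(x + W_s) ds ) * f(x + W_t) ]
    for d-dimensional Brownian motion W. The statistical error decays
    like n_paths**(-1/2), independently of the dimension d."""
    rng = np.random.default_rng(seed)
    d = len(x)
    dt = t / n_steps
    # Brownian paths: cumulative sums of independent Gaussian increments.
    paths = x + np.cumsum(
        rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps, d)), axis=1)
    # Left-endpoint quadrature of the potential along each path.
    integral = V(paths).sum(axis=1) * dt
    return float(np.mean(np.exp(integral) * f(paths[:, -1, :])))

# Sanity check in d = 5 with f = 1 and V = 0, where u(t, x) = 1 exactly.
d = 5
u = feynman_kac_mc(lambda y: np.ones(y.shape[0]),     # f identically 1
                   lambda p: np.zeros(p.shape[:2]),   # V identically 0
                   np.zeros(d), t=1.0)
```

With a nonzero potential the integrand is no longer constant and the $n^{-1/2}$ sampling error appears; the point of the sketch is only that the path count, not the dimension, controls the cost.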

    Forecasting Commodity Prices: Looking for a Benchmark

    The random walk, no-change forecast is a customary benchmark in the literature on forecasting commodity prices. We challenge this custom by examining whether alternative models are more suited for this purpose. Based on a literature review and the results of two out-of-sample forecasting experiments, we draw two conclusions. First, in forecasting nominal commodity prices at shorter horizons, the random walk benchmark should be supplemented by futures-based forecasts. Second, in forecasting real commodity prices, the random walk benchmark should be supplemented, if not substituted, by forecasts from local projection models. In both cases, the alternative benchmarks deliver forecasts of comparable and, in many cases, superior accuracy.
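The no-change benchmark is simple to state: the $h$-step-ahead forecast of the price is the current price. A minimal sketch of how such a benchmark is scored out of sample (the price series is simulated and the function names are ours, purely for illustration):

```python
import numpy as np

def no_change_forecast(prices, horizon):
    """Random-walk benchmark: the h-step-ahead forecast is today's price,
    so the forecast for prices[horizon:] is simply prices[:-horizon]."""
    return prices[:-horizon]

def rmse(forecast, actual):
    """Root mean squared error of a forecast against realized values."""
    return float(np.sqrt(np.mean((forecast - actual) ** 2)))

# Hypothetical random-walk price series, for illustration only.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0.0, 1.0, size=200))
h = 3
benchmark_rmse = rmse(no_change_forecast(prices, h), prices[h:])
```

A competing model (futures-based or local projection) would be scored with the same `rmse` on the same evaluation window, and the paper's question is whether its error falls below `benchmark_rmse`.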

    Econometric analysis in competitive electricity markets

    The paper presents an application of econometric methods to modeling and predicting energy prices on competitive electricity markets, with a focus on Poland. Since the beginning of market liberalization, wholesale electricity prices are no longer settled only by bilateral contracts but are also driven by the market forces of supply and demand. Price prediction has become important for assessing and managing the risk of a contract position, which requires efficient algorithms for computing detailed hourly forecasts; the methods used must deliver not only accurate forecasts but also compute them in reasonable time. To motivate and illustrate the subject, the paper includes an extensive discussion of contemporary competitive electricity markets, emphasizing Polish market specifics.

    Worst case complexity of multivariate Feynman–Kac path integration

    We study the multivariate Feynman–Kac path integration problem. This problem was studied in Plaskota et al. (J. Comp. Phys. 164 (2000) 335) for the univariate case. We describe an algorithm based on uniform approximation, instead of the L2-approximation used in Plaskota et al. (2000). Similarly to Plaskota et al. (2000), our algorithm requires extensive precomputing. We also present bounds on the complexity of our problem. The lower bound is provided by the complexity of a certain integration problem, and the upper bound by the complexity of the uniform approximation problem. The algorithm presented in this paper is almost optimal for the classes of functions for which uniform approximation and integration have roughly the same complexities.

    Boosting carry with equilibrium exchange rate estimates

    We build currency portfolios based on the paradigm that exchange rates slowly converge to their equilibrium to highlight three results. First, this property can be exploited to build profitable portfolios. Second, the slow pace of convergence at short horizons is consistent with the evidence of profitable carry trade strategies, i.e., the common practice of borrowing in low-yield currencies and investing in high-yield currencies. Third, the predictive power of equilibrium exchange rates may boost the performance of carry trade strategies.
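The carry trade described above reduces to a cross-sectional sort: short the lowest-yield currencies and go long the highest-yield ones. A minimal sketch of such a sort (the interest-rate numbers are hypothetical and the function name is ours, for illustration only; the paper's portfolios additionally use equilibrium exchange rate estimates):

```python
import numpy as np

def carry_portfolio(interest_rates, n_long=2):
    """Classic carry sort: long the n highest-yield currencies and
    short the n lowest-yield ones, equally weighted on each leg."""
    order = np.argsort(interest_rates)          # ascending by yield
    weights = np.zeros(len(interest_rates))
    weights[order[-n_long:]] = 1.0 / n_long     # invest in high-yield
    weights[order[:n_long]] = -1.0 / n_long     # borrow in low-yield
    return weights

# Hypothetical short-term interest rates for six currencies.
rates = np.array([0.001, 0.045, 0.020, 0.080, 0.005, 0.060])
w = carry_portfolio(rates)
# The weights sum to zero: a self-financing long-short portfolio.
```

An equilibrium-exchange-rate signal would enter by tilting or filtering these weights toward currencies whose spot rates sit furthest from their estimated equilibrium.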