
    Solution of Real Cubic Equations without Cardano's Formula

    Building on a classification of the zeros of cubic equations due to the 12th-century Persian mathematician Sharaf al-Din Tusi, together with Smale's theory of {\it point estimation}, we derive an efficient recipe for computing a high-precision approximation to a real root of an arbitrary real cubic equation. First, via reversible transformations we reduce any real cubic equation to one of four canonical forms with coefficients $0$, $\pm 1$, except for the constant term $\pm q$, $q \geq 0$. Next, given any such form, if $\rho_q$ is an approximation to $\sqrt[3]{q}$ to within a relative error of five percent, we prove that a {\it seed} $x_0$ in $\{\rho_q, \pm .95\rho_q, -\frac{1}{3}, 1\}$ can be selected such that in $t$ Newton iterations $|x_t - \theta_q| \leq \sqrt[3]{q} \cdot 2^{-2^{t}}$ for some real root $\theta_q$. While computing a good seed, even for approximation of $\sqrt[3]{q}$, is considered to be ``somewhat of a black art'' (see Wikipedia), as we justify, $\rho_q$ is readily computable from the {\it mantissa} and {\it exponent} of $q$. It follows that the above approach gives a simple recipe for the numerical approximation of solutions of real cubic equations independent of Cardano's formula. Comment: 9 pages
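    The seed-plus-Newton recipe above can be illustrated with a short sketch. The canonical-form reduction and the proven seed-selection table are not reproduced here; the Python snippet below (with a hypothetical `rough_cbrt` helper, not the paper's exact construction) only shows the two ingredients the abstract emphasizes: a cheap cube-root seed $\rho_q$ obtained from the mantissa and exponent of $q$, followed by plain Newton iteration on a cubic.

```python
import math

def rough_cbrt(q):
    """Cheap cube-root seed from the mantissa and exponent of q (q >= 0).

    Write q = m * 2**e with m in [0.5, 1), split e = 3k + r, and use a
    linear fit of m**(1/3) on [0.5, 1); the relative error stays within a
    few percent.  (Illustrative helper, not the paper's construction.)
    """
    if q == 0.0:
        return 0.0
    m, e = math.frexp(q)                     # q = m * 2**e, 0.5 <= m < 1
    k, r = divmod(e, 3)
    cbrt_m = 0.7937 + (m - 0.5) * 0.4126     # chord of m**(1/3) on [0.5, 1)
    return cbrt_m * 2.0 ** k * 2.0 ** (r / 3.0)

def newton_cubic(p, q, x0, iters=6):
    """Newton iteration for f(x) = x**3 + p*x - q starting from seed x0."""
    x = x0
    for _ in range(iters):
        x -= (x**3 + p*x - q) / (3*x**2 + p)
    return x

# Sanity check on the pure cube-root form x**3 = q:
q = 1234.5
print(rough_cbrt(q), newton_cubic(0.0, q, rough_cbrt(q)), q ** (1 / 3))
```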

    Some Root Finding With Extensions to Higher Dimensions

    Root finding is a central issue in scientific computing, because most nonlinear problems in science and engineering can be viewed, directly or indirectly, as root finding problems. Research in numerical modeling for root finding is still ongoing. In this study, fixed point iterative methods for finding simple real roots of nonlinear equations, which improve the convergence of some existing methods, are studied thoroughly. Derivative estimates up to the third order (some recent ideas in root finding) are applied in a Taylor approximation of a nonlinear equation by a cubic model to obtain efficient iterative methods. We also discuss possible extensions to two dimensions and consider Newton's method and Halley's method for 1D and 2D problems. Several examples testing efficiency, together with convergence analyses implemented in C++, are offered, and some engineering applications of root finding are discussed. Graphical demonstrations are produced with basic MATLAB tools. Keywords: engineering applications, derivative estimations, iterative methods, simple roots, Taylor's approximation
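    As a concrete reference for the 1D methods discussed above, here is a minimal Python sketch of Newton's and Halley's iterations (the study itself uses C++ and MATLAB; the function names and the test equation here are illustrative only):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k) (quadratic convergence)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method: uses f, f', f'' and converges cubically near a simple root."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx, d2fx = df(x), d2f(x)
        x -= 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)
    return x

# Example: the simple real root of f(x) = x**3 - 2*x - 5 near x = 2
f, df, d2f = (lambda x: x**3 - 2*x - 5), (lambda x: 3*x**2 - 2), (lambda x: 6*x)
print(newton(f, df, 2.0), halley(f, df, d2f, 2.0))
```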

    A Scalable Algorithm For Sparse Portfolio Selection

    The sparse portfolio selection problem is one of the most famous and frequently studied problems in the optimization and financial economics literatures. In a universe of risky assets, the goal is to construct a portfolio with maximal expected return and minimum variance, subject to an upper bound on the number of positions, linear inequalities, and minimum investment constraints. Existing certifiably optimal approaches to this problem do not converge within a practical amount of time at real-world problem sizes with more than 400 securities. In this paper, we propose a more scalable approach. By imposing a ridge regularization term, we reformulate the problem as a convex binary optimization problem, which is solvable via an efficient outer-approximation procedure. We propose various techniques for improving the performance of the procedure, including a heuristic that supplies high-quality warm starts, a preprocessing technique for decreasing the gap at the root node, and an analytic technique for strengthening our cuts. We also study the problem's Boolean relaxation, establish that it is second-order-cone representable, and supply a sufficient condition for its tightness. In numerical experiments, we establish that the outer-approximation procedure gives rise to dramatic speedups for sparse portfolio selection problems. Comment: Submitted to INFORMS Journal on Computing
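    To make the structure of the problem concrete: once the support (the set of allowed positions) is fixed, the ridge-regularized mean-variance problem is a small convex QP. The brute-force Python/NumPy sketch below, which omits the linear-inequality and minimum-investment constraints, enumerates supports of a tiny universe and solves each inner QP via its KKT system; it illustrates the formulation only and is not the paper's scalable outer-approximation algorithm.

```python
import itertools
import numpy as np

def best_sparse_portfolio(mu, Sigma, k, gamma=1.0):
    """Brute-force sparse mean-variance selection for a tiny universe.

    For each support S with |S| = k, solve the ridge-regularized QP
        min_x  0.5 * x'(Sigma + I/gamma) x - mu'x   s.t.  sum(x) = 1, x_i = 0 off S
    via its KKT system and keep the best support.  Shows that the inner
    problem is convex once the support is fixed; not the paper's algorithm.
    """
    n = len(mu)
    Q = Sigma + np.eye(n) / gamma
    best_val, best_x = np.inf, None
    for S in itertools.combinations(range(n), k):
        S = list(S)
        K = np.zeros((k + 1, k + 1))
        K[:k, :k] = Q[np.ix_(S, S)]          # quadratic block restricted to S
        K[:k, k] = 1.0                       # budget constraint sum(x) = 1
        K[k, :k] = 1.0
        sol = np.linalg.solve(K, np.append(mu[S], 1.0))
        xS = sol[:k]
        val = 0.5 * xS @ Q[np.ix_(S, S)] @ xS - mu[S] @ xS
        if val < best_val:
            best_val = val
            best_x = np.zeros(n)
            best_x[S] = xS
    return best_x, best_val

# Example: pick 3 of 8 assets
rng = np.random.default_rng(1)
mu = rng.normal(0.05, 0.02, size=8)
G = rng.normal(size=(8, 8))
Sigma = G @ G.T / 8 + 0.01 * np.eye(8)
x, val = best_sparse_portfolio(mu, Sigma, k=3)
print(np.round(x, 3), val)
```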

    On the Complexity of Real Root Isolation

    We introduce a new approach to isolate the real roots of a square-free polynomial $F = \sum_{i=0}^{n} A_i x^i$ with real coefficients. It is assumed that each coefficient of $F$ can be approximated to any specified error bound. The presented method is exact, complete, and deterministic. Due to its similarities to the Descartes method, we also consider it practical and easy to implement. Compared to previous approaches, our new method achieves a significantly better bit complexity. It is further shown that the hardness of isolating the real roots of $F$ is exclusively determined by the geometry of the roots and not by the complexity or the size of the coefficients. For the special case where $F$ has integer coefficients of maximal bitsize $\tau$, our bound on the bit complexity is $\tilde{O}(n^3\tau^2)$, which improves the best bounds known for existing practical algorithms by a factor of $n = \deg F$. The crucial idea underlying the new approach is to run an approximate version of the Descartes method, where, in each subdivision step, we only consider approximations of the intermediate results to a certain precision. We give an upper bound on the maximal precision that is needed for isolating the roots of $F$. For integer polynomials, this bound is by a factor $n$ lower than the precision needed when using exact arithmetic, explaining the improved bound on the bit complexity.
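    A floating-point toy version of Descartes-style subdivision conveys the basic mechanism (the paper's contribution is the certified, approximate-coefficient variant with the improved bit complexity; the sketch below uses plain Python floats and offers no such guarantees):

```python
def taylor_shift(c, s):
    """Coefficients (low to high) of p(x + s), given those of p(x)."""
    c = list(c)
    n = len(c) - 1
    for i in range(n):
        for j in range(n - 1, i - 1, -1):
            c[j] += s * c[j + 1]
    return c

def sign_variations(c):
    nz = [x for x in c if x != 0.0]
    return sum(1 for u, v in zip(nz, nz[1:]) if u * v < 0)

def descartes_test(c, a, b):
    """Descartes bound on the number of roots of p in (a, b); 0 and 1 are exact."""
    p1 = [ci * (b - a) ** i for i, ci in enumerate(taylor_shift(c, a))]  # p(a + (b-a)x)
    q = taylor_shift(p1[::-1], 1.0)          # (1 + x)^n * p1(1 / (1 + x))
    return sign_variations(q)

def isolate(c, a, b, out, depth=0):
    """Collect isolating intervals for the real roots of p in (a, b)."""
    v = descartes_test(c, a, b)
    if v == 0:
        return
    if v == 1 or depth > 60:                 # depth guard for this float-only toy
        out.append((a, b))
        return
    m = (a + b) / 2
    isolate(c, a, m, out, depth + 1)
    isolate(c, m, b, out, depth + 1)

# Example: p(x) = x^3 - 2x - 5 (coefficients low to high) has one real root ~2.0946
coeffs = [-5.0, -2.0, 0.0, 1.0]
B = 1 + max(abs(x) for x in coeffs[:-1]) / abs(coeffs[-1])   # Cauchy root bound
intervals = []
isolate(coeffs, -B, B, intervals)
print(intervals)
```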

    Simple and Nearly Optimal Polynomial Root-finding by Means of Root Radii Approximation

    We propose a new, simple, but nearly optimal algorithm for the approximation of all sufficiently well isolated complex roots and root clusters of a univariate polynomial. Quite typically, the known root-finders at first compute some crude but reasonably good approximations to well-conditioned roots (that is, those isolated from the other roots) and then refine the approximations very fast, using Boolean time which is nearly optimal, up to a polylogarithmic factor. By combining and extending some old root-finding techniques, the geometry of the complex plane, and randomized parametrization, we accelerate the initial stage of obtaining crude approximations to all well-conditioned simple and multiple roots as well as isolated root clusters. Our algorithm performs this stage at a Boolean cost dominated by the nearly optimal cost of the subsequent refinement of these approximations, which we can perform concurrently, with minimum processor communication and synchronization. Our techniques are quite simple and elementary; their power and application range may increase in combination with the known efficient root-finding methods. Comment: 12 pages, 1 figure
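    As a small illustration of extracting root-radius information directly from the coefficients, the sketch below computes a classical Fujiwara-type annulus containing all roots and compares it with the actual root moduli; it is a textbook bound, not the paper's refined root-radii estimates.

```python
import numpy as np

def fujiwara_bound(coeffs):
    """Fujiwara's upper bound on the modulus of every root (coeffs low -> high)."""
    a = np.asarray(coeffs, dtype=float)
    n = len(a) - 1
    an = a[-1]
    terms = [abs(a[n - k] / an) ** (1.0 / k) for k in range(1, n)]
    terms.append(abs(a[0] / (2.0 * an)) ** (1.0 / n))
    return 2.0 * max(terms)

def root_radius_annulus(coeffs):
    """Annulus r_min <= |root| <= r_max containing all roots (requires a_0 != 0)."""
    r_max = fujiwara_bound(coeffs)
    r_min = 1.0 / fujiwara_bound(coeffs[::-1])   # same bound applied to x^n * p(1/x)
    return r_min, r_max

# Example: p(x) = x^3 - 2x^2 - 5x + 6 = (x - 1)(x + 2)(x - 3)
coeffs = [6.0, -5.0, -2.0, 1.0]
r_min, r_max = root_radius_annulus(coeffs)
moduli = sorted(abs(np.roots(coeffs[::-1])))     # np.roots expects high -> low
print(r_min, moduli, r_max)
```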

    Approximating the Permanent of a Random Matrix with Vanishing Mean

    We show an algorithm for computing the permanent of a random matrix with vanishing mean in quasi-polynomial time. Among the special cases are Gaussian and biased-Bernoulli random matrices with mean $1/\ln\ln(n)^{1/8}$. In addition, we can compute the permanent of a random matrix with mean $1/\mathrm{poly}(\ln(n))$ in time $2^{O(n^{\epsilon})}$ for any small constant $\epsilon > 0$. Our algorithm counters the intuition that the permanent is hard because of the "sign problem", namely the interference between entries of a matrix with different signs. A major open question that remains is whether one can provide an efficient algorithm for random matrices with mean $1/\mathrm{poly}(n)$, whose conjectured #P-hardness is one of the baseline assumptions of the BosonSampling paradigm.
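    For orientation, the permanent itself can be computed exactly (in exponential time) by Ryser's formula; the sketch below serves only as a small-scale ground truth for the quantity that the quasi-polynomial approximation algorithm estimates, and is not that algorithm.

```python
from itertools import combinations
import numpy as np

def permanent_ryser(A):
    """Exact permanent via Ryser's formula, O(2^n * n^2); feasible only for small n."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            rowsums = A[:, list(S)].sum(axis=1)   # sum over j in S of a_ij, per row i
            total += (-1) ** r * np.prod(rowsums)
    return (-1) ** n * total

# Example: an 8 x 8 Gaussian matrix with a small ("vanishing") mean
rng = np.random.default_rng(0)
A = rng.normal(loc=0.1, scale=1.0, size=(8, 8))
print(permanent_ryser(A))
```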