
    A Characterization Theorem and An Algorithm for A Convex Hull Problem

    Given $S = \{v_1, \dots, v_n\} \subset \mathbb{R}^m$ and $p \in \mathbb{R}^m$, testing if $p \in conv(S)$, the convex hull of $S$, is a fundamental problem in computational geometry and linear programming. First, we prove a Euclidean {\it distance duality}, distinct from classical separation theorems such as Farkas' Lemma: $p$ lies in $conv(S)$ if and only if for each $p' \in conv(S)$ there exists a {\it pivot}, $v_j \in S$, satisfying $d(p', v_j) \geq d(p, v_j)$. Equivalently, $p \not\in conv(S)$ if and only if there exists a {\it witness}, $p' \in conv(S)$, whose Voronoi cell relative to $p$ contains $S$. A witness separates $p$ from $conv(S)$ and approximates $d(p, conv(S))$ to within a factor of two. Next, we describe the {\it Triangle Algorithm}: given $\epsilon \in (0,1)$, an {\it iterate} $p' \in conv(S)$, and $v \in S$, if $d(p, p') < \epsilon \, d(p, v)$, it stops. Otherwise, if there exists a pivot $v_j$, it replaces $v$ with $v_j$ and $p'$ with the projection of $p$ onto the line $p'v_j$. Repeating this process, the algorithm terminates in $O(mn \min\{\epsilon^{-2}, c^{-1} \ln \epsilon^{-1}\})$ arithmetic operations, where $c$ is the {\it visibility factor}, a constant satisfying $c \geq \epsilon^2$ and $\sin(\angle pp'v_j) \leq 1/\sqrt{1+c}$ over all iterates $p'$. Additionally, (i) we prove a {\it strict distance duality} and a related minimax theorem, resulting in more effective pivots; (ii) describe $O(mn \ln \epsilon^{-1})$-time algorithms that may compute a witness or a good approximate solution; (iii) prove a {\it generalized distance duality} and describe a corresponding generalized Triangle Algorithm; (iv) prove a {\it sensitivity theorem} to analyze the complexity of solving LP feasibility via the Triangle Algorithm. The Triangle Algorithm is practical and competitive with the simplex method, sparse greedy approximation, and first-order methods.

    Comment: 42 pages, 17 figures, 2 tables. This revision only corrects minor typos.
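    A minimal NumPy sketch of the basic iteration may help fix ideas; the starting iterate, the greedy pivot rule, and the exact form of the stopping test are illustrative choices here, not the paper's prescriptions:

        import numpy as np

        def triangle_algorithm(p, S, eps=1e-4, max_iter=100_000):
            # Test whether p lies in conv(S); the rows of S are v_1, ..., v_n.
            # Returns ("in", p') with p' an eps-approximate solution, or
            # ("out", p') where p' is a witness separating p from conv(S).
            p, S = np.asarray(p, float), np.asarray(S, float)
            dist_p = np.linalg.norm(S - p, axis=1)    # d(p, v_j), fixed for all j
            pp = S[np.argmin(dist_p)].copy()          # start p' at the nearest vertex
            for _ in range(max_iter):
                gap = np.linalg.norm(p - pp)
                # Distance duality: v_j is a pivot iff d(p', v_j) >= d(p, v_j).
                dist_pp = np.linalg.norm(S - pp, axis=1)
                pivots = np.flatnonzero(dist_pp >= dist_p)
                if pivots.size == 0:
                    return "out", pp                  # p' is a witness: p not in conv(S)
                j = pivots[np.argmax(dist_pp[pivots] - dist_p[pivots])]  # greedy choice
                if gap <= eps * dist_p[j]:
                    return "in", pp                   # d(p, p') <= eps * d(p, v_j)
                # Replace p' by the projection of p onto the segment [p', v_j];
                # clamping the step keeps the iterate inside conv(S).
                d = S[j] - pp
                alpha = np.clip((p - pp) @ d / (d @ d), 0.0, 1.0)
                pp = pp + alpha * d
            return "in", pp                           # budget reached; current iterate

    The witness branch is exactly the duality above: when no $v_j$ satisfies $d(p', v_j) \geq d(p, v_j)$, all of $S$ lies in the Voronoi cell of $p'$ relative to $p$, certifying $p \not\in conv(S)$.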

    A Family of Iteration Functions for General Linear Systems

    We develop novel theory and algorithms for computing an approximate solution to $Ax = b$, or to $A^TAx = A^Tb$, where $A$ is an $m \times n$ real matrix of arbitrary rank. First, we describe the {\it Triangle Algorithm} (TA), where given an ellipsoid $E_{A,\rho} = \{Ax : \Vert x \Vert \leq \rho\}$, in each iteration it either computes a successively improving approximation $b_k = Ax_k \in E_{A,\rho}$, or proves $b \not\in E_{A,\rho}$. We then extend TA to compute an approximate solution or a minimum-norm solution. Next, we develop a dynamic version of TA, the {\it Centering Triangle Algorithm} (CTA), generating residuals $r_k = b - Ax_k$ via iterations of the simple formula $F_1(r) = r - (r^THr/r^TH^2r)Hr$, where $H = A$ when $A$ is symmetric PSD, and otherwise $H = AA^T$, which need not be computed explicitly. More generally, CTA extends to a family of iteration functions $F_t(r)$, $t = 1, \dots, m$, satisfying the following. On the one hand, given $t \leq m$ and $r_0 = b - Ax_0$, where $x_0 = A^Tw_0$ with $w_0 \in \mathbb{R}^m$ arbitrary, for all $k \geq 1$, $r_k = F_t(r_{k-1}) = b - Ax_k$ and $A^Tr_k$ converges to zero. Algorithmically, if $H$ is invertible with condition number $\kappa$, then in $k = O((\kappa/t) \ln \varepsilon^{-1})$ iterations $\Vert r_k \Vert \leq \varepsilon$. If $H$ is singular with $\kappa^+$ the ratio of its largest to smallest positive eigenvalues, then in $k = O(\kappa^+/(t\varepsilon))$ iterations either $\Vert r_k \Vert \leq \varepsilon$ or $\Vert A^Tr_k \Vert = O(\sqrt{\varepsilon})$. If $N$ is the number of nonzero entries of $A$, each iteration takes $O(Nt + t^3)$ operations. On the other hand, given $r_0 = b - Ax_0$, suppose its minimal polynomial with respect to $H$ has degree $s$. Then $Ax = b$ is solvable if and only if $F_s(r_0) = 0$. Moreover, only $A^TAx = A^Tb$ is solvable (i.e., $Ax = b$ is not) if and only if $F_s(r_0) \neq 0$ but $A^TF_s(r_0) = 0$. Additionally, $\{F_t(r_0)\}_{t=1}^s$ is computable in $O(Ns + s^3)$ operations.

    Comment: 59 pages, 4 figures
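    A minimal NumPy sketch of the $t = 1$ case in the general setting $H = AA^T$ may help; the zero starting point (i.e., $w_0 = 0$), the tolerance, and the iteration cap are illustrative, and the higher-order members $F_t$, $t > 1$, are not shown:

        import numpy as np

        def cta_f1(A, b, eps=1e-8, max_iter=10_000):
            # Centering Triangle Algorithm, t = 1:
            #   r <- F_1(r) = r - (r^T H r / r^T H^2 r) H r,  with H = A A^T
            # applied implicitly as H r = A (A^T r), never forming A A^T.
            A = np.asarray(A, float)
            x = np.zeros(A.shape[1])         # x_0 = A^T w_0 with w_0 = 0
            r = np.asarray(b, float) - A @ x
            for _ in range(max_iter):
                if np.linalg.norm(r) <= eps:
                    break                    # ||r_k|| <= eps: approximate solution
                ATr = A.T @ r
                Hr = A @ ATr                 # H r = A (A^T r)
                denom = Hr @ Hr              # r^T H^2 r, since H is symmetric
                if denom == 0.0:
                    break                    # then A^T r = 0: normal equations solved
                alpha = (r @ Hr) / denom     # r^T H r / r^T H^2 r
                x += alpha * ATr             # keeps the invariant r = b - A x
                r -= alpha * Hr              # the F_1 step
            return x, r

    When $A$ is symmetric PSD, the $H = A$ variant replaces the two products with $Hr = Ar$ and updates $x$ by $\alpha r$ instead of $\alpha A^Tr$.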