
    BB: An R Package for Solving a Large System of Nonlinear Equations and for Optimizing a High-Dimensional Nonlinear Objective Function

    We discuss the <code>R</code> package <b>BB</b>, in particular its capabilities for solving a nonlinear system of equations. The function <code>BBsolve</code> in <b>BB</b> can be used for this purpose. We demonstrate the utility of these functions for solving: (a) large systems of nonlinear equations, (b) smooth, nonlinear estimating equations in statistical modeling, and (c) non-smooth estimating equations arising in rank-based regression modeling of censored failure time data. The function <code>BBoptim</code> can be used to solve smooth, box-constrained optimization problems. A key strength of <b>BB</b> is that, owing to its low memory and storage requirements, it is ideally suited to high-dimensional problems with thousands of variables.
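    <b>BB</b> itself is an R package, but the spectral (Barzilai-Borwein) residual idea behind <code>BBsolve</code> can be sketched in a few lines of Python. The test system, safeguard thresholds, and function names below are illustrative assumptions, not the package's implementation:

```python
# Illustrative sketch (not the BB package) of a spectral / Barzilai-Borwein
# residual iteration for a nonlinear system F(x) = 0.
# Test problem: F(x) = A x + 0.1 x^3 - b with A symmetric positive definite,
# chosen so that -F(x) is always a descent direction for ||F(x)||^2.

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 5.0]]
b = [1.0, 2.0, 3.0]

def F(x):
    return [sum(A[i][j] * x[j] for j in range(3)) + 0.1 * x[i] ** 3 - b[i]
            for i in range(3)]

def norm2(v):
    return sum(vi * vi for vi in v)

def spectral_solve(x, tol=1e-8, max_iter=500):
    """Iterate x <- x - sigma * F(x) with a Barzilai-Borwein step length
    sigma and a simple monotone backtracking safeguard on ||F||."""
    f = F(x)
    sigma = 1.0
    for _ in range(max_iter):
        if norm2(f) < tol * tol:
            break
        t = sigma
        while True:                        # halve until the residual shrinks
            x_new = [xi - t * fi for xi, fi in zip(x, f)]
            f_new = F(x_new)
            if norm2(f_new) < norm2(f) or t < 1e-12:
                break
            t *= 0.5
        s = [xn - xi for xn, xi in zip(x_new, x)]
        y = [fn - fi for fn, fi in zip(f_new, f)]
        sy = sum(si * yi for si, yi in zip(s, y))
        sigma = norm2(s) / sy if sy > 1e-12 else 1.0   # BB step for next pass
        x, f = x_new, f_new
    return x

root = spectral_solve([0.0, 0.0, 0.0])
```

    Note that no Jacobian is formed or stored anywhere: the iteration keeps only the current point, the residual, and one difference pair, which is exactly the low-memory property the abstract credits for <b>BB</b>'s scalability.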

    Studying the rate of convergence of gradient optimisation algorithms via the theory of optimal experimental design

    The most common class of methods for solving quadratic optimisation problems is the class of gradient algorithms, the most famous of which is the Steepest Descent algorithm. The development of a particular gradient algorithm, the Barzilai-Borwein algorithm, has sparked a great deal of research in the area in recent years, and many algorithms now exist with faster rates of convergence than that of the Steepest Descent algorithm. The tools available for effectively analysing and comparing the asymptotic rates of convergence of gradient algorithms are, however, limited, and so it is somewhat unclear from the literature which algorithms possess the faster rates of convergence. In this thesis, methodology is developed to enable better analysis of the asymptotic rates of convergence of gradient algorithms applied to quadratic optimisation problems. This methodology stems from a link with the theory of optimal experimental design. It is established that gradient algorithms can be related to algorithms for constructing optimal experimental designs for linear regression models. Furthermore, the asymptotic rates of convergence of these gradient algorithms can be expressed through the asymptotic behaviour of multiplicative algorithms for constructing optimal experimental designs. The described connection to optimal experimental design has also been used to guide the creation of several new gradient algorithms that would not otherwise have been intuitively thought of. The asymptotic rates of convergence of these algorithms are studied extensively, and insight is given into how some gradient algorithms are able to converge faster than others. It is demonstrated that the worst rates are obtained when the corresponding multiplicative procedure for updating the designs converges to the optimal design.
Simulations reveal that the asymptotic rates of convergence of some of these new algorithms compare favourably with those of existing gradient-type algorithms such as the Barzilai-Borwein algorithm.
EThOS - Electronic Theses Online Service, United Kingdom
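The contrast at the heart of the abstract, between Steepest Descent and the Barzilai-Borwein step on a quadratic, is easy to illustrate. The sketch below uses an illustrative diagonal Hessian and tolerances of our choosing; it is a toy comparison, not the thesis's methodology:

```python
# Toy comparison on a 2-D quadratic f(x) = 0.5 x'Ax (A diagonal here):
# exact-line-search Steepest Descent versus the Barzilai-Borwein (BB1) step.

A = [2.0, 20.0]          # eigenvalues of the (diagonal) Hessian

def grad(x):
    return [A[0] * x[0], A[1] * x[1]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def steepest_descent(x, tol=1e-8, max_iter=10000):
    k, g = 0, grad(x)
    while dot(g, g) > tol * tol and k < max_iter:
        Ag = [A[0] * g[0], A[1] * g[1]]
        t = dot(g, g) / dot(g, Ag)       # exact line search for a quadratic
        x = [xi - t * gi for xi, gi in zip(x, g)]
        g = grad(x)
        k += 1
    return x, k

def barzilai_borwein(x, tol=1e-8, max_iter=10000):
    k, g = 0, grad(x)
    t = 1.0 / max(A)                     # conservative first step
    while dot(g, g) > tol * tol and k < max_iter:
        x_new = [xi - t * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - c for a, c in zip(x_new, x)]
        y = [a - c for a, c in zip(g_new, g)]
        t = dot(s, s) / dot(s, y)        # BB1 step length: s's / s'y
        x, g = x_new, g_new
        k += 1
    return x, k

_, iters_sd = steepest_descent([1.0, 1.0])
_, iters_bb = barzilai_borwein([1.0, 1.0])
```

On this example the BB iteration needs markedly fewer steps than Steepest Descent, whose zigzagging rate is governed by the condition number of the Hessian, which is the behaviour the thesis analyses via optimal experimental design.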


    Total variation based community detection using a nonlinear optimization approach

    Maximizing the modularity of a network is a successful tool for identifying communities of nodes. However, this combinatorial optimization problem is known to be NP-complete. Inspired by recent nonlinear modularity eigenvector approaches, we introduce the modularity total variation TV_Q and show that its box-constrained global maximum coincides with the maximum of the original discrete modularity function. Thus we describe a new nonlinear optimization approach for solving the equivalent problem, leading to a community detection strategy based on TV_Q. The proposed approach relies on a fast first-order method that embeds a tailored active-set strategy. We report extensive numerical comparisons with standard matrix-based approaches and with the Generalized RatioDCA approach for nonlinear modularity eigenvectors, showing that our new method compares favourably with state-of-the-art alternatives.
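    For reference, the discrete modularity objective that the TV_Q relaxation targets can be computed directly. The toy graph and partition below are illustrative choices, not examples from the paper:

```python
# Discrete Newman modularity Q = sum over communities c of
# (l_c / m - (d_c / 2m)^2), where m is the edge count, l_c the number of
# intra-community edges, and d_c the total degree inside community c.

def modularity(edges, community):
    m = len(edges)
    degree, internal, total_deg = {}, {}, {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
        if community[u] == community[v]:
            internal[community[u]] = internal.get(community[u], 0) + 1
    for node, d in degree.items():
        c = community[node]
        total_deg[c] = total_deg.get(c, 0) + d
    return sum(internal.get(c, 0) / m - (dc / (2 * m)) ** 2
               for c, dc in total_deg.items())

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
community = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
q = modularity(edges, community)   # 5/14 for this natural two-way split
```

    Maximizing Q over all assignments of nodes to communities is the NP-complete problem the paper replaces with a continuous, box-constrained TV_Q maximization.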

    Localization and security algorithms for wireless sensor networks and the usage of signals of opportunity

    In this dissertation we consider the problem of localization of wireless devices in environments and applications where GPS (Global Positioning System) is not a viable option. The first part of the dissertation studies a novel positioning system based on narrowband radio frequency (RF) signals of opportunity, and develops near-optimum estimation algorithms for localization of a mobile receiver. It is assumed that a reference receiver (RR) with known position is available to aid with the positioning of the mobile receiver (MR). The new positioning system is reminiscent of GPS and involves two similar estimation problems. The first is localization using estimates of time-difference of arrival (TDOA). The second is TDOA estimation based on the received narrowband signals at the RR and the MR. In both cases near-optimum estimation algorithms are developed in the sense of maximum likelihood estimation (MLE) under some mild assumptions, and both algorithms compute approximate MLEs in the form of a weighted least-squares (WLS) solution. The proposed positioning system is illustrated with simulation studies based on FM radio signals. The numerical results show that the position errors are comparable to those of other positioning systems, including GPS. Next, we present a novel algorithm for localization of wireless sensor networks (WSNs) called distributed randomized gradient descent (DRGD), and prove that in the case of noise-free distance measurements the algorithm converges and provides the true locations of the nodes. For noisy distance measurements, the convergence properties of DRGD are discussed and a bound on the location estimation error is obtained. In contrast to several recently proposed methods, DRGD does not require that blind nodes be contained in the convex hull of the anchor nodes, and can accurately localize the network with only a few anchors.
Performance of DRGD is evaluated through extensive simulations and compared with three other algorithms, namely relaxation-based second-order cone programming (SOCP), simulated annealing (SA), and semi-definite programming (SDP). Like DRGD, SOCP and SA are distributed algorithms, whereas SDP is centralized. The results show that DRGD successfully localizes the nodes in all cases, whereas in many cases SOCP and SA fail. We also present a modification of DRGD for mobile WSNs and demonstrate the efficacy of DRGD for localization of mobile networks with several simulation results. We then extend this method to secure localization in the presence of outlier distance measurements or distance-spoofing attacks. In this case we present a centralized algorithm to estimate the positions of nodes in WSNs when outlier distance measurements may be present.
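The range-based localization problem that DRGD addresses can be sketched with a generic, centralized gradient descent on the stress function; this is only an illustration of the problem class, not the authors' DRGD algorithm, and the anchors, true position, and step size are assumptions of ours:

```python
# Generic gradient-descent sketch of anchor-based range localization:
# minimize the stress sum_i (||x - a_i|| - d_i)^2 over the unknown
# position x, given noise-free distances d_i to three known anchors.
import math

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true_pos = (1.0, 2.0)
# Noise-free range measurements from the blind node to each anchor.
dists = [math.hypot(true_pos[0] - ax, true_pos[1] - ay) for ax, ay in anchors]

def localize(x, y, step=0.05, n_iter=3000):
    for _ in range(n_iter):
        gx = gy = 0.0
        for (ax, ay), d in zip(anchors, dists):
            r = math.hypot(x - ax, y - ay)
            if r == 0.0:
                continue                 # gradient undefined at an anchor
            coeff = 2.0 * (r - d) / r    # derivative of (r - d)^2 w.r.t. x
            gx += coeff * (x - ax)
            gy += coeff * (y - ay)
        x, y = x - step * gx, y - step * gy
    return x, y

est = localize(3.0, 3.0)                 # converges to the true position
```

With noisy distances or adversarial (spoofed) measurements this plain least-squares stress is no longer reliable, which motivates the robust and secure variants discussed in the abstract.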