
    Optimization Based Self-localization for IoT Wireless Sensor Networks

    In this paper we propose an embedded optimization framework for the simultaneous self-localization of all sensors in a wireless sensor network, using range measurements from ultra-wideband (UWB) signals. Low-power UWB radios, which provide time-of-arrival measurements with decimeter accuracy over large distances, are increasingly envisioned for real-time localization of IoT devices in GPS-denied environments and large sensor networks. We explore different non-linear least-squares formulations of the localization task based on UWB range measurements, and solve the resulting optimization problems directly with nonlinear programming algorithms that guarantee convergence to locally optimal solutions. This framework allows a consistent comparison of different optimization methods for sensor localization. We identify the best-performing approach and deploy it on sensors equipped with off-the-shelf microcontrollers, using state-of-the-art code generation techniques for plug-and-play deployment of the localization algorithm. Numerical results indicate that the proposed approach improves localization accuracy and decreases computation times relative to existing iterative methods.
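    As a rough illustration of the kind of non-linear least-squares formulation described above, the following sketch estimates one sensor's position from noisy ranges to fixed anchors. The anchor layout, noise level, and the SciPy trust-region solver are illustrative assumptions, not the authors' embedded implementation or code-generation pipeline.

```python
# Minimal sketch: locate one sensor from noisy UWB ranges to four anchors,
# posed as a non-linear least-squares problem.
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
rng = np.random.default_rng(0)
# Simulated time-of-arrival ranges with roughly decimeter-level noise.
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0.0, 0.1, 4)

def residuals(p):
    # Difference between predicted and measured distance to each anchor.
    return np.linalg.norm(anchors - p, axis=1) - ranges

# A trust-region NLS solver converges to a locally optimal estimate.
sol = least_squares(residuals, x0=np.array([5.0, 5.0]))
print(sol.x)  # close to the true position (3, 4)
```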

    CoCoA: A General Framework for Communication-Efficient Distributed Optimization

    The scale of modern datasets necessitates efficient distributed optimization methods for machine learning. We present CoCoA, a general-purpose framework for distributed computing environments that has an efficient communication scheme and is applicable to a wide variety of problems in machine learning and signal processing. We extend the framework to cover general non-strongly-convex regularizers, including L1-regularized problems such as the lasso, sparse logistic regression, and elastic net regularization, and show how earlier work can be derived as a special case. We provide convergence guarantees for the class of convex regularized loss minimization objectives, leveraging a novel approach to handling non-strongly-convex regularizers and non-smooth loss functions. The resulting framework markedly outperforms state-of-the-art methods, as we illustrate with an extensive set of experiments on real distributed datasets.
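    The toy, single-process sketch below mimics the communication pattern of one CoCoA-style round for the lasso: feature blocks stand in for K workers, each worker improves its own block with a few local proximal-gradient steps, and only the resulting n-dimensional update vectors are "communicated" and summed. The inner solver, step sizes, and the K-scaled step bound (a conservative stand-in for CoCoA's safe aggregation parameter) are assumptions for illustration, not the framework's actual local solvers.

```python
# Toy simulation of CoCoA-style rounds for the lasso with additive
# aggregation: each "worker" owns a block of features and only the
# n-dimensional vectors A_k @ dx_k cross the (simulated) network.
import numpy as np

rng = np.random.default_rng(1)
n, d, K, lam = 200, 40, 4, 0.1
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
blocks = np.array_split(np.arange(d), K)   # feature partition across workers

x = np.zeros(d)
v = A @ x                                  # shared vector, kept in sync

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

for _ in range(50):                        # outer (communication) rounds
    updates = []
    for blk in blocks:                     # conceptually parallel workers
        Ak, xk = A[:, blk], x[blk].copy()
        # K-scaled step bound keeps simultaneous additive updates safe.
        L = K * np.linalg.norm(Ak, 2) ** 2
        for _ in range(5):                 # a few local prox-gradient steps
            grad = Ak.T @ (v + Ak @ (xk - x[blk]) - b)
            xk = soft_threshold(xk - grad / L, lam / L)
        updates.append((blk, xk - x[blk]))
    for blk, dxk in updates:               # aggregate: one reduce per round
        v += A[:, blk] @ dxk
        x[blk] += dxk

print(0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum())
```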

    A multi-solver quasi-Newton method for the partitioned simulation of fluid-structure interaction

    In partitioned fluid-structure interaction simulations, the flow equations and the structural equations are solved separately. Consequently, the stresses and displacements on the two sides of the fluid-structure interface are not automatically in equilibrium. Coupling techniques such as Aitken relaxation and the Interface Block Quasi-Newton method with approximate Jacobians from least-squares models (IBQN-LS) enforce this equilibrium, even with black-box solvers. However, all existing coupling techniques use only one flow solver and one structural solver. To benefit from the large number of multi-core processors in modern clusters, a new Multi-Solver Interface Block Quasi-Newton (MS-IBQN-LS) algorithm has been developed. This algorithm uses more than one flow solver and more than one structural solver, each running in parallel on a number of cores. One-dimensional and three-dimensional numerical experiments demonstrate that the run time of a simulation decreases as the number of solvers increases, albeit with diminishing returns. Hence, the multi-solver algorithm accelerates fluid-structure interaction calculations by increasing the number of solvers, especially when using more cores per solver no longer reduces the run time.
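    For context, here is a minimal sketch of the basic partitioned coupling loop with dynamic Aitken relaxation, one of the black-box techniques the paper builds on; MS-IBQN-LS itself additionally maintains least-squares Jacobian approximations and runs several solver instances in parallel. The two toy linear "solvers" are stand-ins for real flow and structural codes.

```python
# Partitioned coupling loop with dynamic Aitken relaxation: iterate the
# black-box composition structure(flow(d)) until the interface residual
# vanishes, adapting the relaxation factor from successive residuals.
import numpy as np

rng = np.random.default_rng(2)
n = 8                                   # interface degrees of freedom
M = 0.5 * rng.normal(size=(n, n)) / np.sqrt(n)

def flow_solver(displacement):
    # Stand-in for a black-box flow solver: interface loads from displacements.
    return M @ displacement + 1.0

def structural_solver(load):
    # Stand-in for a black-box structural solver: displacements from loads.
    return 0.5 * load

d = np.zeros(n)                         # initial interface displacement
omega, r_prev = 0.5, None               # initial relaxation factor
for k in range(50):
    r = structural_solver(flow_solver(d)) - d   # interface residual
    if np.linalg.norm(r) < 1e-10:
        break
    if r_prev is not None:              # dynamic Aitken update of omega
        dr = r - r_prev
        omega = -omega * (r_prev @ dr) / (dr @ dr)
    d, r_prev = d + omega * r, r
print(k, np.linalg.norm(r))             # iterations used, final residual
```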

    Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling

    Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications where perturbations appear both in the data vector and in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-) TLS algorithms are developed to address the perturbed compressive sampling (and related dictionary learning) challenge when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also allow for perturbations in the regression matrix of the least-absolute shrinkage and selection operator (Lasso), and endow TLS approaches with the ability to cope with sparse, under-determined "errors-in-variables" models. Interesting generalizations can further exploit prior knowledge on the perturbations to obtain novel weighted and structured S-TLS solvers. Analysis and simulations demonstrate the practical impact of S-TLS in calibrating the mismatch effects of contemporary grid-based approaches to cognitive radio sensing and robust direction-of-arrival estimation using antenna arrays.
    Comment: 30 pages, 10 figures, submitted to IEEE Transactions on Signal Processing.
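    One natural reduced-complexity approach to an S-TLS-type objective (not necessarily the authors' exact algorithm) is alternating block minimization: with the matrix perturbation E fixed, the x-step is a lasso, solved below with plain ISTA; with x fixed, the optimal E for min ||y - (A+E)x||^2 + ||E||_F^2 has the closed form E = r x^T / (1 + ||x||^2), where r = y - Ax. Problem sizes, the regularization weight, and iteration counts are illustrative assumptions.

```python
# Sketch of a suboptimum sparse-TLS solver via alternating minimization:
# lasso in x (ISTA) alternated with a closed-form update of the matrix
# perturbation E, on a synthetic "errors-in-variables" problem.
import numpy as np

rng = np.random.default_rng(3)
n, d, lam = 60, 100, 0.05
x_true = np.zeros(d)
x_true[rng.choice(d, 5, replace=False)] = rng.normal(size=5)
A_true = rng.normal(size=(n, d)) / np.sqrt(n)
A = A_true + 0.01 * rng.normal(size=(n, d))      # perturbed regression matrix
y = A_true @ x_true + 0.01 * rng.normal(size=n)  # noisy data vector

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x, E = np.zeros(d), np.zeros((n, d))
for _ in range(30):                  # outer alternating rounds
    B = A + E
    L = np.linalg.norm(B, 2) ** 2    # Lipschitz constant for the x-step
    for _ in range(100):             # ISTA on the lasso in x
        x = soft_threshold(x - B.T @ (B @ x - y) / L, lam / L)
    r = y - A @ x
    E = np.outer(r, x) / (1.0 + x @ x)   # closed-form E-step
print(np.linalg.norm(x - x_true))    # estimation error of the sparse vector
```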