
    Expert systems and finite element structural analysis - a review

    Finite element analysis of many engineering systems is practised more as an art than as a science. It involves high-level expertise (analytical as well as heuristic) regarding problem modelling (e.g. problem specification, choosing the appropriate type of elements, etc.), optimal mesh design for achieving the specified accuracy (e.g. initial mesh selection, adaptive mesh refinement), selection of the appropriate type of analysis and solution routines and, finally, diagnosis of the finite element solutions. Very often such expertise is highly dispersed and is not available at a single place with a single expert. The design of an expert system, such that the necessary expertise is available to a novice to perform the same job even in the absence of trained experts, becomes an attractive proposition. In this paper, the areas of finite element structural analysis which require experience and decision-making capabilities are explored. A simple expert system, with a feasible knowledge base for problem modelling, optimal mesh design, type of analysis and solution routines, and diagnosis, is outlined. Several efforts in these directions, reported in the open literature, are also reviewed in this paper.
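
    The rule-based flavour of such a knowledge base can be made concrete with a small sketch. The Python below encodes a few element-selection rules keyed on coarse problem features; the features and rules are illustrative assumptions, not the knowledge bases surveyed in the paper.

```python
# Minimal sketch of a rule-based knowledge base for element selection.
# The features and rules below are illustrative assumptions, not the
# knowledge bases reviewed in the paper.

def recommend_element(problem):
    """Map coarse problem features to a finite element type."""
    if problem.get("geometry") == "thin_walled":
        return "shell"                     # thin-walled structures -> shells
    if problem.get("geometry") == "slender":
        return "beam"                      # slender members -> beam elements
    if problem.get("plane_stress"):
        return "quad4"                     # 4-node plane-stress quadrilateral
    return "hex8"                          # default: 3-D brick element

print(recommend_element({"geometry": "thin_walled"}))   # -> shell
print(recommend_element({"plane_stress": True}))        # -> quad4
```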

    Rate analysis of inexact dual first order methods: Application to distributed MPC for network systems

    In this paper we propose and analyze two dual methods, based on inexact gradient information and averaging, that generate approximate primal solutions for smooth convex optimization problems. The complicating constraints are moved into the cost using Lagrange multipliers. The dual problem is solved by inexact first order methods based on approximate gradients, and we prove a sublinear rate of convergence for these methods. In particular, we provide, for the first time, estimates on the primal feasibility violation and on the primal and dual suboptimality of the generated approximate primal and dual solutions. Moreover, we approximately solve the inner problems with a parallel coordinate descent algorithm and show that it has a linear convergence rate. In our analysis we rely on the Lipschitz property of the dual function and on inexact dual gradients. Further, we apply these methods to distributed model predictive control for network systems. By tightening the complicating constraints we are also able to ensure the primal feasibility of the approximate solutions generated by the proposed algorithms. We obtain a distributed control strategy with the following features: state and input constraints are satisfied, stability of the plant is guaranteed, and the number of iterations needed for a suboptimal solution can be precisely determined.
    Comment: 26 pages, 2 figures
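
    To make the dual scheme concrete, here is a minimal sketch of dual gradient ascent with primal averaging for an equality-constrained quadratic program. The closed-form inner minimizer, the problem data, and the step size are illustrative stand-ins; in the paper the inner problem is solved only approximately, yielding inexact dual gradients.

```python
import numpy as np

# Sketch of dual gradient ascent with primal averaging for
#     min 0.5 * ||x - c||^2   s.t.   A x = b,
# a stand-in for the smooth problems treated in the paper. Here the
# Lagrangian inner problem has the closed form x(lam) = c - A.T @ lam;
# in the paper the inner problem is solved only approximately, giving
# inexact dual gradients. All data below are illustrative.

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)
c = rng.standard_normal(6)

lam = np.zeros(3)
alpha = 1.0 / np.linalg.norm(A @ A.T, 2)   # step size <= 1/L for the dual
x_avg = np.zeros(6)
for k in range(1, 2001):
    x = c - A.T @ lam                      # exact inner minimizer
    lam = lam + alpha * (A @ x - b)        # dual gradient step
    x_avg += (x - x_avg) / k               # averaging -> approximate primal

print("primal feasibility violation:", np.linalg.norm(A @ x_avg - b))
```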

    Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information

    We consider variants of trust-region and cubic regularization methods for non-convex optimization, in which the Hessian matrix is approximated. Under mild conditions on the inexact Hessian, and using approximate solutions of the corresponding sub-problems, we provide iteration complexity bounds for achieving ε-approximate second-order optimality which are shown to be tight. Our Hessian approximation conditions constitute a major relaxation over the existing ones in the literature. Consequently, we are able to show that such mild conditions allow for the construction of the approximate Hessian through various random sampling methods. In this light, we consider the canonical problem of finite-sum minimization, provide appropriate uniform and non-uniform sub-sampling strategies to construct such Hessian approximations, and obtain optimal iteration complexity for the corresponding sub-sampled trust-region and cubic regularization methods.
    Comment: 32 pages
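
    As a rough illustration of the sub-sampling idea, the sketch below builds a uniformly sub-sampled Hessian for a finite-sum least-squares problem. For brevity it takes a damped Newton step rather than solving the paper's trust-region or cubic-regularization sub-problem, and the problem data and sample size are assumptions.

```python
import numpy as np

# Sketch of a sub-sampled Hessian approximation for the finite-sum problem
#     f(w) = (1/n) * sum_i 0.5 * (a_i^T w - y_i)^2,
# with uniform sampling as one of the strategies the paper considers.
# For brevity this takes a damped Newton step instead of solving the
# trust-region / cubic-regularization sub-problem. Data are illustrative.

rng = np.random.default_rng(1)
n, d, s = 5000, 20, 200                    # data size, dimension, sample size
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)
w = np.zeros(d)

for _ in range(10):
    g = A.T @ (A @ w - y) / n                       # exact full gradient
    idx = rng.choice(n, size=s, replace=False)      # uniform sub-sample
    H = A[idx].T @ A[idx] / s                       # inexact Hessian
    w += np.linalg.solve(H + 1e-3 * np.eye(d), -g)  # damped Newton step

print("final gradient norm:", np.linalg.norm(A.T @ (A @ w - y) / n))
```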

    Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections

    This work focuses on the iterative solution of sequences of KKT linear systems arising in interior point methods applied to large convex quadratic programming problems. This task is the computational core of the interior point procedure and an efficient preconditioning strategy is crucial for the efficiency of the overall method. Constraint preconditioners are very effective in this context; nevertheless, their computation may be very expensive for large-scale problems, and resorting to approximations of them may be convenient. Here we propose a procedure for building inexact constraint preconditioners by updating a "seed" constraint preconditioner computed for a KKT matrix at a previous interior point iteration. These updates are obtained through low-rank corrections of the Schur complement of the (1,1) block of the seed preconditioner. The updated preconditioners are analyzed both theoretically and computationally. The results obtained show that our updating procedure, coupled with an adaptive strategy for determining whether to reinitialize or update the preconditioner, can enhance the performance of interior point methods on large problems.
    Comment: 22 pages
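
    The low-rank updating idea can be illustrated on a toy Schur complement. In the sketch below, G is a diagonal approximation of the (1,1) block; when a few entries of G change between interior point iterations, the new Schur complement A G⁻¹ Aᵀ differs from the seed by a rank-k term, so its inverse can be applied through the Woodbury identity while reusing the seed factorization. The sizes, the diagonal G, and the update pattern are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Sketch of a low-rank Schur-complement update for a constraint
# preconditioner P = [[G, A.T], [A, 0]] with diagonal G approximating H.
# When only a few entries of G change between interior point iterations,
# S_new = A G_new^{-1} A.T differs from the seed S by a rank-k term, so
# S_new^{-1} can be applied via the Woodbury identity instead of
# refactorizing. All sizes and the update pattern are illustrative.

rng = np.random.default_rng(2)
m, nvar = 8, 30
A = rng.standard_normal((m, nvar))
g = rng.uniform(1.0, 2.0, nvar)          # diagonal of the seed G
S = (A / g) @ A.T                        # seed Schur complement A G^{-1} A^T
S_fact = cho_factor(S)                   # factorize the seed once

idx = np.array([0, 5, 7])                # entries of G that changed
g_new = g.copy()
g_new[idx] *= 10.0
d = 1.0 / g_new[idx] - 1.0 / g[idx]      # diagonal correction to G^{-1}
U = A[:, idx]                            # S_new = S + U @ diag(d) @ U.T

def apply_Snew_inv(v):
    """Apply (S + U diag(d) U^T)^{-1} via Woodbury, reusing the seed factor."""
    Sv = cho_solve(S_fact, v)
    SU = cho_solve(S_fact, U)
    small = np.diag(1.0 / d) + U.T @ SU
    return Sv - SU @ np.linalg.solve(small, U.T @ Sv)

v = rng.standard_normal(m)
exact = np.linalg.solve((A / g_new) @ A.T, v)
print("update error:", np.linalg.norm(apply_Snew_inv(v) - exact))
```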

    A Subsampling Line-Search Method with Second-Order Results

    In many contemporary optimization problems, such as those arising in machine learning, it can be computationally challenging or even infeasible to evaluate an entire function or its derivatives. This motivates the use of stochastic algorithms that sample problem data, which can jeopardize the guarantees obtained through classical globalization techniques in optimization, such as a trust region or a line search. Using subsampled function values is particularly challenging for the latter strategy, which relies upon multiple evaluations. On top of that, there has been increasing interest in nonconvex formulations of data-related problems, such as training deep learning models. For such instances, one aims at developing methods that converge to second-order stationary points quickly, i.e., escape saddle points efficiently. This is particularly delicate to ensure when one only accesses subsampled approximations of the objective and its derivatives. In this paper, we describe a stochastic algorithm based on negative curvature and Newton-type directions that are computed for a subsampling model of the objective. A line-search technique is used to enforce suitable decrease for this model, and for a sufficiently large sample, a similar amount of reduction holds for the true objective. By using probabilistic reasoning, we can then obtain worst-case complexity guarantees for our framework, leading us to discuss appropriate notions of stationarity in a subsampling context. Our analysis encompasses the deterministic regime, and allows us to identify sampling requirements for second-order line-search paradigms. As we illustrate through real data experiments, these worst-case estimates need not be satisfied for our method to be competitive with first-order strategies in practice.
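
    A minimal sketch of one such iteration is given below: a sample of component functions defines the model, the step is either a Newton-type or a negative-curvature direction depending on the sampled Hessian's spectrum, and Armijo backtracking enforces decrease of the sampled objective. The quartic test function, sample size, and line-search constants are all illustrative assumptions.

```python
import numpy as np

# Sketch of one subsampled line-search iteration: a sampled model supplies
# the gradient and Hessian, the search direction is Newton-type or a
# negative-curvature direction, and Armijo backtracking enforces decrease
# of the sampled objective. Test function and constants are illustrative.

rng = np.random.default_rng(3)
n, d, s = 2000, 10, 100
A = rng.standard_normal((n, d))

def f_sample(w, idx):                       # sampled objective value
    r = A[idx] @ w
    return np.mean(0.25 * r**4 - 0.5 * r**2)

def model(w, idx):                          # sampled gradient and Hessian
    r = A[idx] @ w
    g = A[idx].T @ (r**3 - r) / len(idx)
    H = (A[idx].T * (3 * r**2 - 1)) @ A[idx] / len(idx)
    return g, H

w = 0.1 * rng.standard_normal(d)
idx = rng.choice(n, size=s, replace=False)
g, H = model(w, idx)
lam, V = np.linalg.eigh(H)
if lam[0] < -1e-8:                          # negative curvature detected
    v = V[:, 0]
    p = -v if g @ v > 0 else v              # descent-oriented curvature step
else:
    p = np.linalg.solve(H + 1e-8 * np.eye(d), -g)   # Newton-type direction

t = 1.0
while f_sample(w + t * p, idx) > f_sample(w, idx) + 1e-4 * t * (g @ p) and t > 1e-8:
    t *= 0.5                                # Armijo backtracking
w = w + t * p
print("accepted step size:", t)
```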