23 research outputs found

    On Gap Functions for Quasi-Variational Inequalities

    For variational inequalities, various merit functions, such as the gap function, the regularized gap function, and the D-gap function, have been proposed. These functions lead to equivalent optimization formulations and are used in optimization-based methods for solving variational inequalities. In this paper, we extend the regularized gap function and the D-gap function to quasi-variational inequalities, which generalize variational inequalities and are used to formulate generalized equilibrium problems. These extensions are shown to yield equivalent optimization formulations of quasi-variational inequalities and to be continuous and directionally differentiable.
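    For reference, the functions the abstract builds on can be written down explicitly. For a quasi-variational inequality QVI(F, S), where the constraint set S(x) depends on the point x, the natural extensions of Fukushima's regularized gap function and of the D-gap function read (our notation, sketched from the standard VI definitions rather than taken from the paper):

        f_\alpha(x) = \max_{y \in S(x)} \Bigl\{ \langle F(x),\, x - y \rangle - \frac{\alpha}{2} \lVert x - y \rVert^2 \Bigr\}, \qquad \alpha > 0,

        g_{\alpha\beta}(x) = f_\alpha(x) - f_\beta(x), \qquad 0 < \alpha < \beta.

    In the classical VI case (constant S), f_\alpha is nonnegative on S and vanishes exactly at solutions, so minimizing f_\alpha over S, or g_{\alpha\beta} over the whole space, is the equivalent optimization formulation the abstract refers to; the paper establishes the analogous equivalences, continuity, and directional differentiability in the quasi-variational case.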

    Generalized Low-Rank Update: Model Parameter Bounds for Low-Rank Training Data Modifications

    In this study, we develop an incremental machine learning (ML) method that efficiently obtains the optimal model when a small number of instances or features are added or removed. This problem is of practical importance in model selection tasks such as cross-validation (CV) and feature selection. For the class of ML methods known as linear estimators, there exists an efficient model-update framework, called the low-rank update, that can effectively handle changes in a small number of rows and columns of the data matrix. However, for ML methods beyond linear estimators, no comprehensive framework has been available for obtaining information about the updated solution at a guaranteed computational cost. In light of this, our study introduces a method called the Generalized Low-Rank Update (GLRU), which extends the low-rank update framework from linear estimators to ML methods formulated as a certain class of regularized empirical risk minimization, including commonly used methods such as SVM and logistic regression. The proposed GLRU method not only broadens the range of applicability but also provides information about the updated solution with a computational complexity proportional to the amount of change in the dataset. To demonstrate the effectiveness of the GLRU method, we conduct experiments showcasing its efficiency in performing cross-validation and feature selection compared to baseline methods.
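    To make the linear-estimator baseline concrete: for ridge regression, adding or removing one instance is a rank-one change to X^T X, so the coefficients can be refreshed with the Sherman-Morrison formula in O(d^2) time instead of the O(d^3) of refitting. Below is a minimal sketch of this classical low-rank update (our own illustration of the prior framework the abstract contrasts with, not of the GLRU method itself):

        import numpy as np

        def sherman_morrison_add(Ainv, x):
            """Rank-one update: returns (A + x x^T)^{-1} given Ainv = A^{-1}."""
            Ax = Ainv @ x
            return Ainv - np.outer(Ax, Ax) / (1.0 + x @ Ax)

        # Ridge regression: w = (X^T X + lam I)^{-1} X^T y
        rng = np.random.default_rng(0)
        X, y, lam = rng.normal(size=(100, 5)), rng.normal(size=100), 1.0
        Ainv = np.linalg.inv(X.T @ X + lam * np.eye(5))
        b = X.T @ y

        # Adding one instance (x_new, y_new) costs O(d^2), not O(d^3).
        x_new, y_new = rng.normal(size=5), 0.3
        Ainv = sherman_morrison_add(Ainv, x_new)
        b = b + y_new * x_new
        w_updated = Ainv @ b

        # Agrees with refitting from scratch on the enlarged dataset.
        X2, y2 = np.vstack([X, x_new]), np.append(y, y_new)
        w_full = np.linalg.solve(X2.T @ X2 + lam * np.eye(5), X2.T @ y2)
        assert np.allclose(w_updated, w_full)

    GLRU's point, per the abstract, is to deliver comparably cheap information (bounds on the updated solution) for nonlinear estimators such as SVM and logistic regression, where no exact closed-form update like this exists.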

    Efficient Model Selection for Predictive Pattern Mining Model by Safe Pattern Pruning

    Predictive pattern mining is an approach used to construct prediction models when the input is represented by structured data such as sets, graphs, and sequences. The main idea is to build a prediction model by treating substructures of the structured data, such as subsets, subgraphs, and subsequences (referred to as patterns), as features of the model. The primary challenge in predictive pattern mining is that the number of patterns grows exponentially with the complexity of the structured data. In this study, we propose the Safe Pattern Pruning (SPP) method to address this explosion in the number of patterns, and we discuss how it can be effectively employed throughout the entire model-building process in practical data analysis. To demonstrate the effectiveness of the proposed method, we conduct numerical experiments on regression and classification problems involving sets, graphs, and sequences.
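    The pruning idea can be sketched for the simplest case of itemset features. Suppose, as in lasso-type screening rules, that a pattern p can enter the model only if |sum_i r_i [p is a subset of x_i]| exceeds a threshold lam for a residual vector r. Since growing a pattern can only shrink the set of transactions it matches, the positive and negative parts of that score bound the score of every superpattern, so a whole subtree of the pattern-enumeration tree can be discarded at once. The sketch below is our own illustration of this safe-pruning principle under those assumptions; the names and data layout are hypothetical, not the paper's API:

        # Safe pruning over an itemset-enumeration tree (illustrative sketch).
        def spp_bound(pattern, transactions, r):
            """Upper bound on |score(q)| over all superpatterns q of `pattern`.

            For q >= p, [q subset of x] <= [p subset of x], so the positive and
            negative parts of the score can only shrink as the pattern grows."""
            pos = sum(ri for x, ri in zip(transactions, r) if ri > 0 and pattern <= x)
            neg = -sum(ri for x, ri in zip(transactions, r) if ri < 0 and pattern <= x)
            return max(pos, neg)

        def search(pattern, items, transactions, r, lam, active):
            score = sum(ri for x, ri in zip(transactions, r) if pattern <= x)
            if pattern and abs(score) > lam:
                active.append(frozenset(pattern))
            if spp_bound(pattern, transactions, r) <= lam:
                return  # safe: no superpattern of `pattern` can be active
            for j in items:  # grow in increasing item order to avoid duplicates
                if not pattern or j > max(pattern):
                    search(pattern | {j}, items, transactions, r, lam, active)

        transactions = [frozenset(t) for t in ({1, 2}, {1, 3}, {2, 3}, {1, 2, 3})]
        r = [0.9, -0.2, 0.1, 0.8]  # residual-like weights, made up for the demo
        active = []
        search(frozenset(), [1, 2, 3], transactions, r, lam=0.5, active=active)
        print(active)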

    Parametric excitation-based inverse bending gait generation

    In gait generation based on the parametric excitation principle, appropriate motion of the center of mass restores the kinetic energy lost at heel strike. This motion is realized by bending and stretching the swing leg, regardless of the bending direction. In this paper, we first show that inverse bending restores more mechanical energy than forward bending, and we then propose a parametric excitation-based inverse bending gait for a kneed biped robot, which improves the gait efficiency of parametric excitation walking.
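    The underlying mechanism can be sketched with a variable-length pendulum model of the swing leg (our notation, not necessarily the paper's). With \theta measured from the downward vertical, leg length l(t), mass m, and mechanical energy E, the dynamics and power balance are

        \ddot{\theta} = -\frac{g}{l}\sin\theta - \frac{2\dot{l}}{l}\dot{\theta},
        \qquad
        \dot{E} = m\,\dot{l}\,\bigl(\ddot{l} - l\dot{\theta}^2 - g\cos\theta\bigr).

    Shortening the pendulum (\dot{l} < 0) at mid-swing, where the term l\dot{\theta}^2 + g\cos\theta dominates \ddot{l}, makes \dot{E} > 0: this is the parametric pumping that bending and stretching the leg provide, and the paper's comparison of forward versus inverse bending concerns how much of this energy each knee-bending direction actually recovers.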

    A Study on Optimization-Based Solution Methods for Variational Inequality Problems (変分不等式問題に対する最適化に基づいた解法の研究)

    The full-text data is a PDF conversion of image files created through the National Diet Library's FY2010 digitization of doctoral dissertations. Doctor of Engineering (thesis doctorate), Kyoto University, Otsu No. 9380 (Ron-Ko-Haku No. 3163). Call numbers: 新制||工||1055 (University Library), UT51-97-B317. Examiners: Prof. Toshihide Ibaraki, Prof. Masao Fukushima, Prof. Norihiko Adachi. Conferred under Article 4, Paragraph 2 of the Degree Regulations.

    A Note on a Globally Convergent Newton Method for Strongly Monotone Variational Inequalities

    Newton's method for solving variational inequalities is known to be locally quadratically convergent. By incorporating a line search strategy for the regularized gap function, Taji et al. (Mathematical Programming, 1993) proposed a modification of Newton's method that is globally convergent and whose rate of convergence is quadratic. However, the quadratic convergence was shown only under the assumptions that the constraint set is polyhedral convex and that the strict complementarity condition holds at the solution. In this paper, we show that the quadratic rate of convergence is achieved without either the polyhedral convexity assumption or the strict complementarity condition. Moreover, the line search procedure is simplified.
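    As context, the globally convergent scheme being refined can be summarized as follows (our paraphrase of the standard construction in the cited 1993 paper). Given the current iterate x_k and constraint set S, the Newton direction comes from the linearized variational inequality: find \bar{x}_k \in S such that

        \langle F(x_k) + \nabla F(x_k)(\bar{x}_k - x_k),\; y - \bar{x}_k \rangle \ge 0 \quad \text{for all } y \in S,

    and set d_k = \bar{x}_k - x_k. A step size t_k \in (0, 1] is then chosen by an Armijo-type line search that decreases the regularized gap function f_\alpha, and x_{k+1} = x_k + t_k d_k. Global convergence follows because d_k is a descent direction for f_\alpha under strong monotonicity, while the local quadratic rate is inherited from the Newton subproblem.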