131 research outputs found
Iterative algorithms for solutions of nonlinear equations in Banach spaces.
Doctoral Degree. University of KwaZulu-Natal, Durban. Abstract available in PDF.
Iterative schemes for numerical reckoning of fixed points of new nonexpansive mappings with an application
The goal of this manuscript is to introduce a new class of generalized nonexpansive operators, called (α,β,γ)-nonexpansive mappings. Furthermore, some related properties of these mappings are investigated in a general Banach space. Moreover, the proposed operators are used with the K-iterative technique to estimate fixed points and examine their behavior. Also, two examples are provided to support our main results. The numerical results clearly show that the K-iterative approach converges more quickly when used with this new class of operators. Finally, we use a K-type iterative method to solve a variational inequality problem in a Hilbert space.
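For context on this style of fixed-point approximation, here is a minimal sketch of the classical Krasnoselskii–Mann iteration for a nonexpansive self-map of the real line; the K-iteration studied above is a faster multi-step refinement of this idea. The map `math.cos`, step size, and tolerance below are assumptions chosen for illustration, not the paper's scheme.

```python
import math

def mann_iteration(T, x0, alpha=0.5, tol=1e-10, max_iter=10000):
    """Krasnoselskii-Mann iteration x_{n+1} = (1 - alpha)*x_n + alpha*T(x_n)
    for a nonexpansive map T, stopping when successive iterates are close."""
    x = x0
    for _ in range(max_iter):
        x_new = (1 - alpha) * x + alpha * T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# cos is nonexpansive on the reals; the iteration converges to its fixed point.
fixed = mann_iteration(math.cos, 1.0)
```

The averaging factor `alpha` trades per-step progress for stability; plain Picard iteration (`alpha = 1`) need not converge for general nonexpansive maps, which is what motivates Mann-type and faster K-type schemes.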
A modified proximal point algorithm in geodesic metric space
The proximal point algorithm is one of the most popular techniques for finding either a zero of a monotone operator or a minimizer of a lower semicontinuous function. In this paper, we propose a new modified proximal point algorithm for solving minimization problems and common fixed point problems in CAT(0) spaces. We prove Δ-convergence and strong convergence of the proposed algorithm. Our results extend and improve corresponding recent results in the literature.
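A minimal sketch of the (unmodified) proximal point algorithm in one Euclidean dimension, using the absolute-value function, whose proximal operator is the soft-thresholding map. The step size and starting point are assumptions for illustration; the paper's modified algorithm works in CAT(0) spaces, which this toy does not capture.

```python
def soft_threshold(x, lam):
    """Proximal operator of f(y) = |y| with parameter lam:
    prox_{lam f}(x) = argmin_y |y| + (1/(2*lam)) * (y - x)**2."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def proximal_point(prox, x0, lam=0.5, n_iters=50):
    """Basic proximal point algorithm: x_{k+1} = prox_{lam f}(x_k)."""
    x = x0
    for _ in range(n_iters):
        x = prox(x, lam)
    return x
```

Starting from `x0 = 10.0`, each step moves the iterate a distance `lam` toward the minimizer 0 of `|y|`, and the sequence reaches it exactly after finitely many steps in this example.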
Theory and Application of Fixed Point
In the past few decades, several interesting problems have been solved using fixed point theory. In addition to classical ordinary differential equations and integral equations, researchers also focus on fractional differential equations (FDE) and fractional integral equations (FIE). Indeed, FDE and FIE lead to a better understanding of several physical phenomena, which is why such equations have been widely appreciated and explored. We also note the importance of distinct abstract spaces, such as quasi-metric, b-metric, symmetric, partial metric, and dislocated metric spaces. Sometimes, one of these spaces is more suitable for a particular application. Fixed point theory techniques in partial metric spaces have been used to solve classical problems of the semantic and domain theory of computer science. This book contains some very recent theoretical results related to new types of contraction mappings defined in various types of spaces. There are also studies related to applications of the theoretical findings to mathematical models of specific problems, and their approximate computations. In this sense, this book will contribute to the area and provide directions for further developments in fixed point theory and its applications.
Self-adaptive inertial algorithms for approximating solutions of split feasibility, monotone inclusion, variational inequality and fixed point problems.
Masters Degree. University of KwaZulu-Natal, Durban. In this dissertation, we introduce a self-adaptive hybrid inertial algorithm for approximating
a solution of split feasibility problem which also solves a monotone inclusion problem
and a fixed point problem in p-uniformly convex and uniformly smooth Banach spaces.
We prove a strong convergence theorem for the sequence generated by our algorithm which
does not require a prior knowledge of the norm of the bounded linear operator. Numerical
examples are given to compare the computational performance of our algorithm with other
existing algorithms.
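For context, a minimal sketch of Byrne's classical CQ algorithm for the split feasibility problem in one dimension (find x in C with Ax in Q); the concrete sets, operator, and fixed step size γ < 2/‖A‖² below are assumptions for illustration. The dissertation's algorithm is self-adaptive precisely so that such prior knowledge of the operator norm is not needed.

```python
def clamp(x, lo, hi):
    """Metric projection of a real number onto the interval [lo, hi]."""
    return max(lo, min(hi, x))

def cq_algorithm(A, At, proj_C, proj_Q, x0, gamma, n_iters=100):
    """Byrne's CQ algorithm for the split feasibility problem:
    x_{n+1} = P_C(x_n - gamma * A^T (A x_n - P_Q(A x_n)))."""
    x = x0
    for _ in range(n_iters):
        Ax = A(x)
        x = proj_C(x - gamma * At(Ax - proj_Q(Ax)))
    return x

# Example: C = [0, 2], Q = [4, 6], A x = 3x; feasible points are [4/3, 2].
x = cq_algorithm(
    A=lambda x: 3.0 * x,
    At=lambda y: 3.0 * y,                    # adjoint of A
    proj_C=lambda x: clamp(x, 0.0, 2.0),
    proj_Q=lambda y: clamp(y, 4.0, 6.0),
    x0=0.0,
    gamma=0.2,                               # needs gamma < 2/||A||^2 = 2/9
)
```

The gradient step drives `A x` toward Q while the outer projection keeps the iterate in C; the step-size restriction `gamma < 2/‖A‖²` is exactly the kind of requirement a self-adaptive rule removes.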
Moreover, we present a new iterative algorithm of inertial form for solving Monotone Inclusion
Problem (MIP) and common Fixed Point Problem (FPP) of a finite family of
demimetric mappings in a real Hilbert space. Motivated by the Armijo line search technique,
we incorporate the inertial technique to accelerate the convergence of the proposed
method. Under standard and mild assumptions of monotonicity and Lipschitz continuity
of the MIP associated mappings, we establish the strong convergence of the iterative
algorithm. Some numerical examples are presented to illustrate the performance of our
method and to compare it with its non-inertial version and some related methods in
the literature.
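The inertial technique itself can be sketched independently of the monotone-inclusion setting: each step first extrapolates along the previous displacement before applying the operator. The operator `math.cos`, the inertial parameter `theta`, and the relaxation `alpha` below are assumptions for illustration, not the paper's scheme.

```python
import math

def inertial_mann(T, x0, theta=0.2, alpha=0.5, n_iters=200):
    """Mann iteration with an inertial (heavy-ball) extrapolation step:
    y_n = x_n + theta*(x_n - x_{n-1});  x_{n+1} = (1-alpha)*y_n + alpha*T(y_n)."""
    x_prev, x = x0, x0
    for _ in range(n_iters):
        y = x + theta * (x - x_prev)          # inertial extrapolation
        x_prev, x = x, (1 - alpha) * y + alpha * T(y)
    return x
```

With `theta = 0` this reduces to the plain Mann iteration; a small positive `theta` reuses momentum from the previous step, which is what accelerates convergence in practice.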
Furthermore, we propose a new modified self-adaptive inertial subgradient extragradient
algorithm in which the two projections are made onto some half spaces. Moreover, under
mild conditions, we obtain a strong convergence of the sequence generated by our proposed
algorithm for approximating a common solution of variational inequality problems
and common fixed points of a finite family of demicontractive mappings in a real Hilbert
space. The main advantages of our algorithm are: a strong convergence result obtained
without prior knowledge of the Lipschitz constant of the related monotone operator,
the two projections made onto half-spaces, and the inertial technique, which speeds
up the rate of convergence. Finally, we present an application and a numerical example to
illustrate the usefulness and applicability of our algorithm.
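The computational appeal of projecting onto half-spaces rather than general convex sets is that the projection has a closed form. A minimal sketch (the specific point and half-space below are assumptions for illustration):

```python
def project_halfspace(x, a, b):
    """Projection of x onto the half-space {y : <a, y> <= b}:
    P(x) = x - max(0, (<a, x> - b) / ||a||^2) * a."""
    dot = sum(ai * xi for ai, xi in zip(a, x))
    norm_sq = sum(ai * ai for ai in a)
    t = max(0.0, (dot - b) / norm_sq)
    return [xi - t * ai for xi, ai in zip(x, a)]

# Project the point (2, 2) onto {y : y1 + y2 <= 2}.
p = project_halfspace([2.0, 2.0], [1.0, 1.0], 2.0)
```

Points already inside the half-space are left unchanged (`t = 0`), so the formula costs only two inner products per call, in contrast to the iterative subproblem a projection onto a general convex set may require.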
A study of optimization and fixed point problems in certain geodesic metric spaces.
Doctoral Degree. University of KwaZulu-Natal, Durban.Abstract available in PDF
Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for Feasibility-Seeking Methods
The convex feasibility problem (CFP) is at the core of the modeling of many
problems in various areas of science. Subgradient projection methods are
important tools for solving the CFP because they enable the use of subgradient
calculations instead of orthogonal projections onto the individual sets of the
problem. Working in a real Hilbert space, we show that the sequential
subgradient projection method is perturbation resilient. By this we mean that
under appropriate conditions the sequence generated by the method converges
weakly, and sometimes also strongly, to a point in the intersection of the
given subsets of the feasibility problem, despite certain perturbations which
are allowed in each iterative step. Unlike previous works on solving the convex
feasibility problem, the involved functions, which induce the feasibility
problem's subsets, need not be convex. Instead, we allow them to belong to a
wider and richer class of functions satisfying a weaker condition, that we call
"zero-convexity". This class, which is introduced and discussed here, holds
promise for solving optimization problems in various areas, especially in
non-smooth and non-convex optimization. The relevance of this study to
approximate minimization and to the recent superiorization methodology for
constrained optimization is explained.
Comment: Mathematical Programming Series A, accepted for publication.
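A minimal sketch of one sequential (cyclic) subgradient projection sweep for a feasibility problem with sets C_i = {x : g_i(x) <= 0}; the one-dimensional constraints and subgradients below are assumptions for illustration, and this toy uses convex g_i rather than the zero-convex class studied in the paper.

```python
def subgradient_projection_step(x, g, subgrad):
    """If g(x) > 0, take the subgradient projection step
    x - (g(x) / ||s||^2) * s, where s is a subgradient of g at x."""
    val = g(x)
    if val <= 0.0:
        return x                      # already inside this constraint set
    s = subgrad(x)
    return x - (val / (s * s)) * s

def cyclic_sweeps(x0, constraints, n_sweeps=50):
    """Sequential subgradient projections, cycling through the constraints."""
    x = x0
    for _ in range(n_sweeps):
        for g, sg in constraints:
            x = subgradient_projection_step(x, g, sg)
    return x

# Feasibility problem on the line: x <= 2 and x >= 1, intersection [1, 2].
constraints = [
    (lambda x: x - 2.0, lambda x: 1.0),   # g1(x) = x - 2, subgradient 1
    (lambda x: 1.0 - x, lambda x: -1.0),  # g2(x) = 1 - x, subgradient -1
]
x = cyclic_sweeps(5.0, constraints)
```

This is what "subgradient calculations instead of orthogonal projections" means concretely: each step needs only a value `g_i(x)` and one subgradient, never a projection onto C_i itself; the perturbation resilience established in the paper allows each such step to be computed inexactly.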