853 research outputs found

    Nonlinear Analysis and Optimization with Applications

    Nonlinear analysis has wide and significant applications in many areas of mathematics, including functional analysis, variational analysis, nonlinear optimization, convex analysis, nonlinear ordinary and partial differential equations, dynamical systems theory, mathematical economics, game theory, signal processing, control theory, and data mining. Optimization problems have been intensively investigated, and various feasible methods for analyzing the convergence of algorithms have been developed over the last half century. In this Special Issue, we focus on the connection between nonlinear analysis and optimization, as well as on applications that integrate basic science into the real world.

    Dynamic programming with recursive preferences

    There is now a considerable amount of research on the deficiencies of additively separable preferences for effective modelling of economically meaningful behaviour. Through analysis of observational data and the design of suitable experiments, economists have constructed progressively more realistic representations of agents and their choices. For intertemporal decisions, this typically involves a departure from the additively separable benchmark. A familiar example is the recursive preference framework of Epstein and Zin (1989), which has become central to the quantitative asset pricing literature, while also finding widespread use in applications ranging from optimal taxation to fiscal policy and business cycles. This thesis presents three essays that examine mathematical research questions within the context of recursive preferences and dynamic programming. The focus is particularly on showing existence and uniqueness of recursive utility processes under stationary and non-stationary consumption growth specifications, and on solving the closely related problem of optimality of dynamic programs with recursive preferences. On one hand, the thesis has been motivated by the availability of new and unexploited techniques for studying the aforementioned questions. The techniques in question primarily build upon an alternative version of the theory of monotone concave operators proposed by Du (1989, 1990). They are typically well suited to analysis of dynamic optimality with a variety of recursive preference specifications.
On the other hand, motivation also comes from the demand side: while many useful results for dynamic programming in the context of recursive preferences have been obtained in the existing literature, suitable results are still lacking for some of the most popular specifications for applied work, such as common parameterizations of the Epstein-Zin specification, preference specifications that incorporate loss aversion and narrow framing into the Epstein-Zin framework, or ambiguity-sensitive preference specifications. In this connection, the thesis seeks to provide a new approach to dynamic optimality suitable for recursive preference specifications commonly used in modern economic analysis. The approach exploits the theory of monotone convex operators, which, while less familiar than that of monotone concave operators, turns out to be well suited to dynamic maximization. The intuition is that convexity is preserved under maximization, while concavity is not; concavity, in turn, pairs well with minimization, since minimization preserves concavity. By applying this idea, a parallel theory for the two cases is established, providing sufficient conditions that are easy to verify in applications.
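    The existence-and-uniqueness question above can be illustrated with a minimal numerical sketch: successive approximation on the Epstein-Zin utility/consumption ratio when consumption growth follows a finite Markov chain. The two-state chain, the parameter values, and all variable names below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

beta, gamma, psi = 0.99, 10.0, 1.5   # discount factor, risk aversion, EIS (illustrative)
alpha = 1 - gamma                    # risk-adjustment exponent
rho = 1 - 1 / psi                    # intertemporal-substitution exponent

g = np.array([1.02, 0.98])           # gross consumption growth in each state
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])           # Markov transition matrix

# Iterate v = T v, where T is the Epstein-Zin operator acting on the
# utility/consumption ratio v (one value per Markov state).
v = np.ones(2)
for _ in range(50_000):
    # risk-adjusted (certainty-equivalent) continuation value in each state
    cert_equiv = (P @ (g * v) ** alpha) ** (1 / alpha)
    v_new = ((1 - beta) + beta * cert_equiv ** rho) ** (1 / rho)
    if np.max(np.abs(v_new - v)) < 1e-12:
        break
    v = v_new
```

    When a unique fixed point exists, monotone-operator arguments of the kind the thesis builds on guarantee convergence of this iteration from any positive initial guess; here the loop simply runs until the update falls below a tolerance.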

    A Sequential Empirical Central Limit Theorem for Multiple Mixing Processes with Application to B-Geometrically Ergodic Markov Chains

    We investigate the convergence in distribution of sequential empirical processes of dependent data indexed by a class of functions F. Our technique is suitable for processes that satisfy a multiple mixing condition on a space of functions which differs from the class F. This situation occurs for data arising from dynamical systems or Markov chains, for which the Perron--Frobenius or Markov operator, respectively, has a spectral gap on a restricted space. We provide applications to iterative Lipschitz models that contract on average. Comment: Also available at http://ejp.ejpecp.org/article/view/3216. The content of this version is identical to the one published by the "Electronic Journal of Probability"; however, due to the use of different LaTeX classes, the page numbers may differ.

    Inexact Fixed-Point Proximity Algorithms for Nonsmooth Convex Optimization

    The aim of this dissertation is to develop efficient inexact fixed-point proximity algorithms with guaranteed convergence for nonsmooth convex optimization problems encountered in data science. Nonsmooth convex optimization is one of the core methodologies in data science for acquiring knowledge from real-world data, and it has wide applications in various fields, including signal/image processing, machine learning and distributed computing. In particular, in the context of image reconstruction, compressed sensing and sparse machine learning, either the objective functions or the constraints of the modeling optimization problems are nondifferentiable. Hence, traditional methods such as the gradient descent method and the Newton method are not applicable, since gradients of the objective functions or the constraints do not exist. Fixed-point proximity algorithms were developed via subdifferentials of the objective function to address these challenges. The theory of nonexpansive averaged operators was successfully employed in the existing analysis of exact/inexact fixed-point proximity algorithms for nonsmooth convex optimization. However, this framework imposes restrictive constraints on the algorithm formulation, which slow down convergence and conceal relations between different algorithms. In this work, we characterize the solutions of convex optimization problems as fixed points of certain operators, and then adopt the matrix splitting technique to obtain a framework of fully implicit fixed-point proximity algorithms. This results in a new class of quasiaveraged operators, which extends the class of nonexpansive averaged operators. This framework covers and generalizes most of the existing popular algorithms for nonsmooth convex optimization. To deal with the implicitness of this framework, we draw inspiration from Schur’s lemma on the uniform boundedness of infinite matrices and propose a framework of inexact fixed-point iterations of quasiaveraged operators.
This framework generalizes the inexact iterations of nonexpansive averaged operators. Combining the frameworks of inexact fixed-point iterations and implicit fixed-point proximity algorithms leads to the framework of inexact fixed-point proximity algorithms, which further extends existing methods for nonsmooth convex optimization. Numerical experiments on image deblurring problems demonstrate the advantages of inexact fixed-point proximity algorithms over existing explicit algorithms.
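    As a concrete illustration of the general idea, the sketch below runs an inexact proximal-gradient (fixed-point) iteration on a small LASSO problem, min_x 0.5‖Ax − b‖² + λ‖x‖₁, whose solutions are exactly the fixed points of x = prox(x − t·Aᵀ(Ax − b)). The problem data, the decaying error model, and all names are illustrative assumptions, not the dissertation's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 60))
x_true = np.zeros(60)
x_true[:5] = 1.0                         # sparse ground truth
b = A @ x_true
lam = 0.1
t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size 1/L, L = ||A||_2^2

def soft_threshold(z, tau):
    """Proximity operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

x = np.zeros(60)
for k in range(1, 3001):
    grad = A.T @ (A @ x - b)
    eps = rng.standard_normal(60) / k**2          # inexactness in the operator
    x = soft_threshold(x - t * (grad - eps), t * lam)
```

    The injected error decays like 1/k², so the total perturbation is summable, which is the standard condition under which inexact fixed-point iterations of this type retain convergence.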

    Mathematical Aspects of Localized Activity in Neural Field Models

    Neural field models assume the form of integral and integro-differential equations and describe nonlinear interactions between neuron populations. Such models reduce the dimensionality and complexity of the microscopic neural-network dynamics and allow for mathematical treatment, efficient simulation and intuitive understanding. Since the seminal studies by Wilson and Cowan (1973) and Amari (1977), neural field models have been used to describe phenomena like persistent neuronal activity, waves and pattern formation in the cortex. In the present thesis we focus on mathematical aspects of localized activity described by stationary solutions of a neural field model, so-called bumps. While neural field models represent a considerable simplification of the neural dynamics in a large network, they are often studied under further simplifying assumptions, e.g., approximating the firing-rate function with a unit step function. In some cases these assumptions may not change essential features of the model, but in other cases they may cause some properties of the model to vary significantly or even break down. The work presented in the thesis aims at studying properties of bump solutions in one- and two-population models while relaxing the common simplifications. Numerical approaches used in mathematical neuroscience sometimes lack mathematical justification. This may lead to numerical instabilities, ill-conditioning or even divergence. Moreover, there are some methods which have not been used in the neuroscience community but might be beneficial. We have initiated work in this direction by studying advantages and disadvantages of a wavelet-Galerkin algorithm applied to a simplified framework of a one-population neural field model. We also focus on rigorous justification of iteration methods for constructing bumps. For the analysis of the models we use the theory of monotone operators in ordered Banach spaces, the theory of Sobolev spaces on unbounded domains, degree theory, and other functional-analytic methods, which are still not well developed in neuroscience.
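    A minimal sketch of the kind of iteration method used to construct bumps, here for a one-population Amari-type model u(x) = ∫ w(x − y) f(u(y)) dy with a smooth sigmoidal firing rate in place of the unit step. The Mexican-hat kernel, threshold and steepness below are illustrative assumptions, not the thesis' model.

```python
import numpy as np

x = np.linspace(-10, 10, 401)
dx = x[1] - x[0]

def w(z):
    """Mexican-hat connectivity: local excitation, lateral inhibition."""
    return 2.0 * np.exp(-z**2) - 0.8 * np.exp(-z**2 / 4)

def f(u, theta=0.5, beta=20.0):
    """Sigmoidal firing rate (a steep approximation of the unit step)."""
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

# Discretize the integral operator and iterate u_{n+1} = W f(u_n)
# from a localized initial guess until the update is negligible.
W = w(x[:, None] - x[None, :]) * dx
u = np.exp(-x**2)
for _ in range(500):
    u_new = W @ f(u)
    if np.max(np.abs(u_new - u)) < 1e-10:
        break
    u = u_new
```

    For this choice of kernel and threshold the iteration settles on a symmetric, spatially localized stationary solution (a bump); rigorous convergence of such schemes is exactly the kind of question the monotone-operator machinery above addresses.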

    A survey on stationary problems, Green's functions and spectrum of Sturm–Liouville problem with nonlocal boundary conditions

    In this paper, we present a survey of recent results on Green's functions and the spectrum of stationary Sturm–Liouville problems with nonlocal boundary conditions. Results of Lithuanian mathematicians in the field of differential and numerical problems with nonlocal boundary conditions are described. *The research was partially supported by the Research Council of Lithuania (grant No. MIP-047/2014).

    List of contents


    Adaptive Algorithms

    Overwhelming empirical evidence in computational science and engineering has shown that self-adaptive mesh generation is indispensable for the computational treatment of real-life partial differential equations. The mathematical understanding of the corresponding algorithms concerns the overlap of two traditional mathematical disciplines, numerical analysis and approximation theory, with the computational sciences. This half-workshop was devoted to the mathematics of optimal convergence rates and instance optimality of the Dörfler marking or the maximum strategy in various versions of space discretisations and time-evolution problems, with all kinds of applications in the efficient numerical treatment of partial differential equations.
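    The Dörfler (bulk-chasing) marking mentioned above selects, inside each solve-estimate-mark-refine cycle, a minimal set M of elements whose squared error indicators carry at least a fraction θ of the total, i.e. Σ_{K∈M} η_K² ≥ θ Σ_K η_K². A small sketch of just this marking step, with illustrative indicator values:

```python
import numpy as np

def doerfler_mark(eta, theta=0.5):
    """Return indices of a minimal set of elements whose squared error
    indicators sum to at least theta times the total squared indicator."""
    order = np.argsort(eta**2)[::-1]              # largest indicators first
    cumulative = np.cumsum(eta[order]**2)
    k = np.searchsorted(cumulative, theta * cumulative[-1]) + 1
    return order[:k]

eta = np.array([0.5, 0.1, 0.4, 0.05, 0.3])        # per-element indicators
marked = doerfler_mark(eta, theta=0.5)            # elements to refine
```

    Sorting by indicator size before accumulating is what makes the marked set minimal; minimality of the marked set is one of the ingredients in the optimal-rate and instance-optimality analyses the workshop addressed.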