3,378 research outputs found

    Well Posedness of Operator Valued Backward Stochastic Riccati Equations in Infinite Dimensional Spaces

    We prove existence and uniqueness of the mild solution of an infinite dimensional, operator valued, backward stochastic Riccati equation. We exploit the regularizing properties of the semigroup generated by the unbounded operator involved in the equation. The results are then applied to characterize the value function and optimal feedback law for an infinite dimensional, linear quadratic control problem with stochastic coefficients.
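
    For orientation, a minimal sketch of the objects involved, in generic notation that is ours rather than the paper's: for a linear quadratic problem with state equation and cost
    \[
    dX_t = (AX_t + Bu_t)\,dt + (CX_t + Du_t)\,dW_t, \qquad
    J(u) = \mathbb{E}\Big[\int_0^T \big(\langle QX_t, X_t\rangle + \langle Ru_t, u_t\rangle\big)\,dt + \langle P_T X_T, X_T\rangle\Big],
    \]
    the backward stochastic Riccati equation for the pair $(P,\Lambda)$ typically reads
    \[
    -dP(t) = \Big[A^*P + PA + C^*PC + \Lambda C + C^*\Lambda + Q
    - (PB + C^*PD + \Lambda D)(R + D^*PD)^{-1}(B^*P + D^*PC + D^*\Lambda)\Big]\,dt - \Lambda\,dW_t,
    \qquad P(T) = P_T,
    \]
    the martingale term $\Lambda$ being part of the unknowns precisely because the coefficients are stochastic; the associated feedback law and value are then, schematically,
    \[
    \bar u_t = -(R + D^*PD)^{-1}\big(B^*P(t) + D^*P(t)C + D^*\Lambda(t)\big)\bar X_t, \qquad
    V(x) = \langle P(0)x, x\rangle .
    \]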

    Stochastic maximum principle for optimal control of SPDEs

    In this note we give the stochastic maximum principle for optimal control of stochastic PDEs in the general case, i.e. when the control domain need not be convex and the diffusion coefficient may contain a control variable.
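
    For context, the finite-dimensional prototype of the general-case condition (Peng's maximum principle; the note extends this type of statement to SPDEs, and our notation and sign conventions need not match the paper's): with Hamiltonian $H(t,x,v,p,q) = \langle p, b(t,x,v)\rangle + \langle q, \sigma(t,x,v)\rangle - l(t,x,v)$ and first- and second-order adjoint pairs $(p,q)$ and $(P,Q)$ along an optimal pair $(\bar X, \bar u)$, optimality requires
    \[
    H(t,\bar X_t, v, p_t, q_t) - H(t,\bar X_t, \bar u_t, p_t, q_t)
    - \tfrac12 \big\langle P_t\,[\sigma(t,\bar X_t,v) - \sigma(t,\bar X_t,\bar u_t)],\,
    \sigma(t,\bar X_t,v) - \sigma(t,\bar X_t,\bar u_t)\big\rangle \le 0
    \quad \text{for all } v \in U, \ \text{a.e., a.s.};
    \]
    the second-order adjoint $P$ is needed precisely because the diffusion depends on the control and $U$ need not be convex.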

    A linear approach for sparse coding by a two-layer neural network

    Many approaches that transform classification problems from non-linear to linear by feature transformation have recently been presented in the literature. These notably include sparse coding methods and deep neural networks. However, many of these approaches require the repeated application of a learning process upon the presentation of unseen input vectors, or else involve large numbers of parameters and hyper-parameters that must be chosen through cross-validation, thus increasing running time dramatically. In this paper, we propose and experimentally investigate a new approach aimed at overcoming both kinds of limitation. The proposed approach makes use of a linear auto-associative network (called SCNN) with just one hidden layer. The combination of this architecture with a specific error function to be minimized enables one to learn a linear encoder that computes a sparse code as similar as possible to the one obtained by re-training the neural network. Importantly, the linearity of SCNN and the choice of the error function allow one to achieve reduced running time in the learning phase. The proposed architecture is evaluated on two standard machine learning tasks. Its performance is compared with that of recently proposed non-linear auto-associative neural networks. The overall results suggest that linear encoders can be profitably used to obtain sparse data representations in the context of machine learning problems, provided that an appropriate error function is used during the learning phase.
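
    A minimal, hypothetical sketch of the kind of architecture described (one hidden layer, linear encoder and decoder). The objective used below is a generic reconstruction-plus-L1 penalty chosen for illustration only; the specific error function proposed in the paper is not reproduced, and the function and variable names are ours.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_linear_sparse_autoencoder(X, n_hidden, lam=0.1, lr=0.01, epochs=200):
        # One-hidden-layer linear auto-associative network.
        # Hypothetical objective (NOT the paper's error function):
        #   (1/2n) * ||X @ W_enc @ W_dec - X||^2  +  (lam/n) * ||X @ W_enc||_1
        # i.e. squared reconstruction error plus an L1 sparsity penalty on the code.
        n, d = X.shape
        W_enc = rng.normal(scale=0.1, size=(d, n_hidden))   # linear encoder
        W_dec = rng.normal(scale=0.1, size=(n_hidden, d))   # linear decoder
        for _ in range(epochs):
            H = X @ W_enc                  # codes (hidden activations), one row per sample
            R = H @ W_dec - X              # reconstruction residual
            g_dec = H.T @ R / n                      # gradient of reconstruction term w.r.t. W_dec
            g_enc = X.T @ (R @ W_dec.T) / n          # gradient of reconstruction term w.r.t. W_enc
            g_enc += lam * (X.T @ np.sign(H)) / n    # subgradient of the L1 penalty on the code
            W_enc -= lr * g_enc
            W_dec -= lr * g_dec
        return W_enc, W_dec

    # Once trained, encoding unseen inputs is a single matrix product (no re-training).
    X = rng.normal(size=(500, 64))
    W_enc, W_dec = train_linear_sparse_autoencoder(X, n_hidden=32)
    codes = X @ W_enc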

    Stochastic Maximum Principle for Optimal Control of Partial Differential Equations Driven by White Noise

    We prove a stochastic maximum principle of Pontryagin's type for the optimal control of a stochastic partial differential equation driven by white noise in the case when the set of control actions is convex. Particular attention is paid to the well-posedness of the adjoint backward stochastic differential equation and the regularity properties of its solution with values in infinite-dimensional spaces.
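
    Schematically, in generic notation not taken from the paper and with sign conventions that vary across references: for $dX_t = [AX_t + b(X_t,u_t)]\,dt + \sigma(X_t,u_t)\,dW_t$ and cost $J(u) = \mathbb{E}\big[\int_0^T l(X_t,u_t)\,dt + h(X_T)\big]$, the adjoint pair $(p,q)$ solves the backward equation
    \[
    -dp_t = \big[A^*p_t + b_x(X_t,u_t)^*p_t + \sigma_x(X_t,u_t)^*q_t + l_x(X_t,u_t)\big]\,dt - q_t\,dW_t,
    \qquad p_T = h_x(X_T),
    \]
    and, when the set of control actions $U$ is convex, optimality of $\bar u$ is expressed through the variational inequality
    \[
    \big\langle \nabla_u \mathcal{H}(\bar X_t, \bar u_t, p_t, q_t),\, v - \bar u_t \big\rangle \ge 0
    \quad \text{for all } v \in U, \ \text{a.e., a.s.},
    \]
    where $\mathcal{H}(x,u,p,q) = l(x,u) + \langle p, b(x,u)\rangle + \langle q, \sigma(x,u)\rangle$ (with suitable duality pairings); in the infinite-dimensional setting it is the well-posedness and regularity of $(p,q)$ that require care.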

    Ergodic BSDEs under weak dissipative assumptions

    In this paper we study ergodic backward stochastic differential equations (EBSDEs), dropping the strong dissipativity assumption needed in previous work. In other words, we do not require the uniform exponential decay of the difference of two solutions of the underlying forward equation, which, on the contrary, is assumed to be non-degenerate. We show existence of solutions by means of coupling estimates for a non-degenerate forward stochastic differential equation with bounded measurable non-linearity. Moreover, we prove uniqueness of "Markovian" solutions by exploiting the recurrence of the same class of forward equations. Applications are then given to the optimal ergodic control of stochastic partial differential equations and to the associated ergodic Hamilton-Jacobi-Bellman equations.
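
    For context, the standard formulation of an EBSDE, in generic notation: given the forward process $X$, one looks for a triple $(Y, Z, \lambda)$, with the constant $\lambda \in \mathbb{R}$ part of the unknowns, such that
    \[
    Y_t = Y_T + \int_t^T \big[\psi(X_s, Z_s) - \lambda\big]\,ds - \int_t^T Z_s\,dW_s,
    \qquad 0 \le t \le T < \infty;
    \]
    in the ergodic control application $\lambda$ is identified with the optimal ergodic cost. What changes here is the treatment of the forward equation: the uniform exponential decay of $|X^x_t - X^{x'}_t|$ used under strong dissipativity is replaced by coupling and recurrence arguments for non-degenerate equations.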

    Economic Growth and the Environment with Clean and Dirty Consumption

    This paper aims to verify the existence of the Environmental Kuznets Curve (EKC), or inverted U-shaped relationship between economic growth and environmental degradation, in the context of endogenous growth. An important feature of this study is that the EKC is examined in the presence of pollution as a by-product of consumption activities; moreover, pollution is a stock variable rather than a flow and tends to accumulate over time. In order to highlight the role of consumption on the environment, consumers do not directly consider pollution in their maximization problem and are assumed to choose between two consumption types characterized by a different impact on the environment (i.e. dirty and clean consumption). We find that substitution of dirty consumption with clean consumption alone is not sufficient to reduce environmental pollution. The result depends on product differentiation and the cost of achieving it. From a social welfare perspective, more environmental awareness is unambiguously desirable when it generates less pollution. However, more environmental awareness may lead to a lower level of social welfare, depending on the costs of product differentiation and the social marginal damage of pollution.
    Keywords: Environmental Kuznets Curve, Economic Growth, Pollution, Consumption, Consumption behaviour
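
    As an illustrative reduced form of the environment block described above (our notation, not necessarily the paper's specification): pollution is a stock fed by both consumption types, with the dirty one contributing more per unit,
    \[
    \dot{P}_t = \varphi_D C_{D,t} + \varphi_C C_{C,t} - \delta P_t,
    \qquad \varphi_D > \varphi_C \ge 0,
    \]
    where $\delta$ is the natural decay rate of the pollution stock and consumers choose $C_D$ and $C_C$ without internalizing $P$ in their maximization problem.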

    On coupled systems of Kolmogorov equations with applications to stochastic differential games

    We prove that a family of linear bounded evolution operators $({\bf G}(t,s))_{t\ge s\in I}$ can be associated, in the space of vector-valued bounded and continuous functions, to a class of systems of elliptic operators $\bm{\mathcal A}$ with unbounded coefficients defined in $I\times\mathbb{R}^d$ (where $I$ is a right halfline or $I=\mathbb{R}$), all having the same principal part. We establish some continuity and representation properties of $({\bf G}(t,s))_{t\ge s\in I}$ and a sufficient condition for the evolution operator to be compact in $C_b(\mathbb{R}^d;\mathbb{R}^m)$. We also prove a uniform weighted gradient estimate and some of its more relevant consequences.
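
    Concretely, and in generic notation, the evolution operator is the one naturally associated with the vector-valued nonautonomous Cauchy problem
    \[
    \partial_t \mathbf{u}(t,x) = (\bm{\mathcal A}(t)\mathbf{u})(t,x), \quad t > s, \ x \in \mathbb{R}^d,
    \qquad \mathbf{u}(s,\cdot) = \mathbf{f} \in C_b(\mathbb{R}^d;\mathbb{R}^m),
    \]
    by setting ${\bf G}(t,s)\mathbf{f} := \mathbf{u}(t,\cdot)$, where $\bm{\mathcal A}(t)$ is a system of second-order elliptic operators with unbounded coefficients sharing the same principal part.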

    ENG 1002G-035-046-065: Composition and Literature

    ENG 1002G-016-049: Composition and Literature
