86 research outputs found

    Deep learning for gradient flows using the Brezis–Ekeland principle

    We propose a deep learning method for the numerical solution of partial differential equations that arise as gradient flows. The method relies on the Brezis–Ekeland principle, which naturally defines an objective function to be minimized and is therefore ideally suited to a machine learning approach using deep neural networks. We describe our approach in a general framework and illustrate the method with an example implementation for the heat equation in space dimensions two to seven.
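    As a rough illustration of this idea (a minimal sketch, not the authors' implementation), consider a space-discretized heat equation, i.e. the gradient flow $u'=-Au$ of the quadratic energy $\phi(u)=\frac12 u^{\top}Au$ with $A$ a finite-difference Laplacian. The Brezis–Ekeland functional $I[u]=\int_0^T\big[\phi(u)+\phi^*(-u')\big]\,dt+\frac12|u(T)|^2-\frac12|u(0)|^2$ is nonnegative and vanishes exactly on the gradient flow, so one can parametrize the trajectory by a neural network and minimize a quadrature approximation of $I$; all architecture and optimizer choices below are illustrative, not taken from the paper.

    import torch

    # Space-discretized heat equation: gradient flow u' = -A u of
    # phi(u) = 0.5 u^T A u, with A the 1-D finite-difference Laplacian.
    torch.manual_seed(0)
    n, T = 16, 1.0
    h = 1.0 / (n + 1)
    A = (2.0 * torch.eye(n)
         - torch.diag(torch.ones(n - 1), 1)
         - torch.diag(torch.ones(n - 1), -1)) / h**2
    A_inv = torch.linalg.inv(A)  # for quadratic phi: phi*(v) = 0.5 v^T A^{-1} v

    x = torch.linspace(h, 1.0 - h, n)
    u0 = torch.sin(torch.pi * x)  # initial condition

    # Neural trajectory ansatz u(t) = u0 + t * net(t); u(0) = u0 is built in.
    net = torch.nn.Sequential(
        torch.nn.Linear(1, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, n),
    )

    def quad_form(v, M):  # 0.5 v^T M v, batched over the first index
        return 0.5 * torch.einsum("bi,ij,bj->b", v, M, v)

    t_grid = torch.linspace(0.0, T, 64).unsqueeze(1)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        t = t_grid.clone().requires_grad_(True)
        u = u0 + t * net(t)
        # du/dt via autograd, one spatial component at a time
        du = torch.stack(
            [torch.autograd.grad(u[:, i].sum(), t, create_graph=True)[0][:, 0]
             for i in range(n)], dim=1)
        # Trapezoidal quadrature of the Brezis-Ekeland functional
        I = (torch.trapezoid(quad_form(u, A) + quad_form(-du, A_inv),
                             t_grid[:, 0])
             + 0.5 * u[-1].dot(u[-1]) - 0.5 * u0.dot(u0))
        opt.zero_grad()
        I.backward()
        opt.step()

    # At the optimum, I is close to 0 and u(t) approximates exp(-tA) u0.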

    Attainability of the fractional Hardy constant with nonlocal mixed boundary conditions. Applications

    The first goal of this paper is to study necessary and sufficient conditions for the attainability of the \textit{fractional Hardy inequality}
    $$\Lambda_{N}\equiv\Lambda_{N}(\Omega):=\inf_{\{\phi\in \mathbb{E}^s(\Omega, D),\ \phi\neq 0\}} \dfrac{\frac{a_{d,s}}{2} \displaystyle\int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \dfrac{|\phi(x)-\phi(y)|^2}{|x-y|^{d+2s}}\,dx\,dy}{\displaystyle\int_\Omega \frac{\phi^2}{|x|^{2s}}\,dx},$$
    where $\Omega$ is a bounded domain of $\mathbb{R}^d$, $0<s<1$, $D\subset \mathbb{R}^d\setminus \Omega$ is a nonempty open set, and
    $$\mathbb{E}^{s}(\Omega,D)=\left\{ u \in H^s(\mathbb{R}^d):\ u=0 \text{ in } D\right\}.$$
    The second aim of the paper is to study the \textit{mixed Dirichlet–Neumann boundary problem} associated with the minimization problem and related properties; precisely, to study the semilinear elliptic problem for the \textit{fractional Laplacian}
    $$P_{\lambda} \equiv \left\{ \begin{array}{rcll} (-\Delta)^s u &=& \lambda \dfrac{u}{|x|^{2s}} + u^p & \text{ in } \Omega,\\ u &>& 0 & \text{ in } \Omega,\\ \mathcal{B}_{s}u &:=& u\chi_{D}+\mathcal{N}_{s}u\chi_{N}=0 & \text{ in } \mathbb{R}^{d}\setminus \Omega, \end{array}\right.$$
    with $N$ and $D$ open sets in $\mathbb{R}^d\setminus\Omega$ such that $N \cap D=\emptyset$ and $\overline{N}\cup \overline{D}= \mathbb{R}^d \setminus\Omega$, $d>2s$, $\lambda>0$, and $0<p\le 2_s^*-1$, where $2_s^*=\frac{2d}{d-2s}$. We emphasize that the nonlinear term can be critical. The operators $(-\Delta)^s$, the fractional Laplacian, and $\mathcal{N}_{s}$, the nonlocal Neumann condition, are defined below in (1.5) and (1.6), respectively.
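    For context (a standard variational observation, not quoted from the abstract): attainability of $\Lambda_N(\Omega)$ means that the infimum above is achieved by some $u\in\mathbb{E}^s(\Omega,D)$, $u\neq 0$, and any such minimizer is, up to normalization, a solution of the linear eigenvalue problem
    $$(-\Delta)^s u = \Lambda_N \dfrac{u}{|x|^{2s}} \text{ in } \Omega, \qquad \mathcal{B}_{s}u=0 \text{ in } \mathbb{R}^d\setminus\Omega,$$
    that is, the problem $P_\lambda$ with $\lambda=\Lambda_N$ and without the superlinear term $u^p$.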

    Weighted Energy-Dissipation principle for gradient flows in metric spaces

    This paper develops the so-called Weighted Energy-Dissipation (WED) variational approach for the analysis of gradient flows in metric spaces. The approach focuses on the minimization of the parameter-dependent, global-in-time functional of trajectories
    $$\mathcal{I}_\varepsilon[u] = \int_0^{\infty} e^{-t/\varepsilon}\left( \frac12\, |u'|^2(t) + \frac1{\varepsilon}\,\phi(u(t)) \right)dt,$$
    featuring the weighted sum of energetic and dissipative terms. As the parameter $\varepsilon$ is sent to $0$, the minimizers $u_\varepsilon$ of such functionals converge, up to subsequences, to curves of maximal slope driven by the functional $\phi$. This delivers a new and general variational approximation procedure, hence a new existence proof, for metric gradient flows. In addition, it provides a novel perspective towards relaxation.
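    To see formally why minimizers of $\mathcal{I}_\varepsilon$ approximate the gradient flow (a smooth Euclidean illustration, not taken from the paper): for smooth $\phi:\mathbb{R}^n\to\mathbb{R}$, the Euler–Lagrange equation of $\mathcal{I}_\varepsilon$ reads
    $$\frac{d}{dt}\left(e^{-t/\varepsilon}\,u'(t)\right)=\frac{1}{\varepsilon}\,e^{-t/\varepsilon}\,\nabla\phi(u(t)), \qquad \text{i.e.} \qquad -\varepsilon\, u''(t)+u'(t)+\nabla\phi(u(t))=0,$$
    an elliptic-in-time regularization of the gradient-flow equation $u'=-\nabla\phi(u)$, which is formally recovered as $\varepsilon\to 0$.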