16 research outputs found

    Parameter Estimation via Conditional Expectation --- A Bayesian Inversion

    Get PDF
    When a mathematical or computational model is used to analyse some system, it is usual that some parameters, or functions or fields, in the model are not known, and hence uncertain. These parametric quantities are then identified by actual observations of the response of the real system. In a probabilistic setting, Bayes's theory is the proper mathematical background for this identification process. The possibility of being able to compute a conditional expectation turns out to be crucial for this purpose. We show how this theoretical background can be used in an actual numerical procedure, and briefly discuss various numerical approximations.
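
    The central object here is the conditional expectation E[q | y] of the uncertain parameter q given an observation y. As a minimal illustration only, the sketch below approximates this conditional expectation by a linear map estimated from Monte Carlo samples (a Kalman-like update); the forward model h, the noise level, and the sample size are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(q):
    # hypothetical forward model mapping the parameter to an observable
    return q**3 + q

N = 10_000
q_prior = rng.normal(1.0, 0.5, N)           # prior ensemble of the uncertain parameter
noise = rng.normal(0.0, 0.1, N)             # observation-noise samples
y_prior = h(q_prior) + noise                # predicted observations for each prior sample

# Linear approximation of the conditional expectation E[q | y]:
# q_post = q_prior + K * (y_meas - y_prior), with the gain K from sample covariances.
C = np.cov(q_prior, y_prior)                # 2x2 sample covariance of (q, y)
K = C[0, 1] / C[1, 1]                       # gain of the linear conditional-expectation estimate

y_meas = h(1.3) + 0.05                      # synthetic measurement of the "true" system
q_post = q_prior + K * (y_meas - y_prior)   # conditioned (updated) ensemble

print("prior mean     :", q_prior.mean())
print("posterior mean :", q_post.mean())
```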

    Tensor approximation methods for stochastic problems

    Get PDF
    Spectral stochastic methods have gained wide acceptance as a tool for the efficient modelling of systems with uncertainties. The advantage of these methods is that they provide not only statistics, but also a direct representation of the solution as a so-called surrogate model, which can be used for very fast sampling. Especially attractive for elliptic stochastic partial differential equations (SPDEs) is the stochastic Galerkin method, since it preserves essential properties of the differential operator. One drawback of the method, however, is that it requires huge amounts of memory, as the solution is represented in a tensor product space of spatial and stochastic basis functions. Different approaches have been investigated to reduce the memory requirements, for example model reduction techniques, subspace iterations that restrict the solution to a manageable subspace, or methods that build up the solution from successive rank-1 updates. In the present thesis, best approximations to the solutions of linear elliptic SPDEs are constructed in low-rank tensor representations. This is achieved by using tensor formats for the input data as well as for the solution and maintaining them throughout the entire iterative solution process, so that the best subsets for representing the solution are computed “on the fly”. As these representations require additional approximations during the solution process, it is essential to monitor the convergence of the solution closely. Furthermore, special issues with preconditioning of the discrete systems and stagnation of the iterative methods need adequate treatment. Since practical usability was one goal of this work, special emphasis has been given to implementation techniques and their description in the necessary detail.
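
    To make the low-rank idea concrete, the following toy sketch keeps the iterate of a simple Richardson iteration for a Kronecker-structured system in factored form U V^T and re-truncates the rank after every step. The matrices, the solver, the rank, and the stopping criterion are illustrative assumptions; the sketch does not reproduce the preconditioned solvers developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def truncate(U, V, rank):
    """Re-compress the low-rank factorisation U @ V.T to the given rank via SVD."""
    Q1, R1 = np.linalg.qr(U)
    Q2, R2 = np.linalg.qr(V)
    W, s, Zt = np.linalg.svd(R1 @ R2.T)
    r = min(rank, len(s))
    return Q1 @ W[:, :r] * s[:r], Q2 @ Zt[:r].T

# Illustrative Kronecker-structured Galerkin system
#   sum_k (K_k (x) G_k) vec(X) = vec(F),  applied in matricised form as sum_k K_k X G_k^T.
n_x, n_xi, rank = 200, 50, 8
K = [np.diag(2.0 + rng.random(n_x)), np.diag(0.1 * rng.random(n_x))]    # spatial factors
G = [np.eye(n_xi), np.diag(rng.standard_normal(n_xi))]                  # stochastic factors
Fu, Fv = rng.standard_normal((n_x, 1)), rng.standard_normal((n_xi, 1))  # rank-1 right-hand side

def apply_op(U, V):
    """Apply the operator to a low-rank iterate X = U V^T; the factor ranks add up."""
    return np.hstack([Kk @ U for Kk in K]), np.hstack([Gk @ V for Gk in G])

# Rank-truncated Richardson iteration: every step is followed by a truncation back to the
# prescribed rank; this extra approximation can make the residual stagnate, which is why
# convergence has to be monitored.
omega = 0.3
Xu, Xv = np.zeros((n_x, 1)), np.zeros((n_xi, 1))
for it in range(200):
    Au, Av = apply_op(Xu, Xv)
    Ru, Rv = np.hstack([Fu, -Au]), np.hstack([Fv, Av])       # residual F - A(X) in low rank
    Xu, Xv = truncate(np.hstack([Xu, omega * Ru]), np.hstack([Xv, Rv]), rank)
    res = np.linalg.norm(Ru @ Rv.T)
    if res < 1e-8:
        break
print("iterations:", it + 1, " final residual:", res)
```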

    A convergent adaptive stochastic Galerkin finite element method with quasi-optimal spatial meshes

    Get PDF
    We analyze a posteriori error estimation and adaptive refinement algorithms for stochastic Galerkin Finite Element methods for countably-parametric elliptic boundary value problems. A residual error estimator is established which separates, in the energy norm, the effects of the gpc-Galerkin discretization in parameter space and of the Finite Element discretization in physical space. It is proved that the adaptive algorithm converges; to this end we establish a contraction property satisfied by its iterates. It is shown that the sequences of triangulations produced by the algorithm in the FE discretization of the active gpc coefficients are asymptotically optimal. Numerical experiments illustrate the theoretical results.
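
    For orientation, the sketch below runs the generic solve-estimate-mark-refine loop with Dörfler marking on a deterministic 1-D toy problem (-u'' = f with P1 elements). The simplified element-residual indicator, the marking parameter and the model problem are assumptions for illustration; the parametric (gpc) enrichment and the actual estimator analysed in the paper are omitted.

```python
import numpy as np

def solve_fem_1d(nodes, f):
    """P1 finite elements for -u'' = f on (0, 1) with homogeneous Dirichlet data."""
    n = len(nodes)
    h = np.diff(nodes)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k in range(n - 1):                                   # element-wise assembly
        A[k:k+2, k:k+2] += (1.0 / h[k]) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        mid = 0.5 * (nodes[k] + nodes[k + 1])
        b[k:k+2] += f(mid) * h[k] / 2.0                      # midpoint rule for the load
    u = np.zeros(n)
    inner = slice(1, n - 1)
    u[inner] = np.linalg.solve(A[inner, inner], b[inner])
    return u

def estimate(nodes, f):
    """Simplified element-residual indicators eta_T ~ h_T * ||f||_{L2(T)} (midpoint rule)."""
    h = np.diff(nodes)
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    return h * np.sqrt(h) * np.abs(f(mids))

def adapt(f, theta=0.5, tol=0.1, max_iter=15):
    """Solve-estimate-mark-refine loop with Doerfler marking and bisection refinement."""
    nodes = np.linspace(0.0, 1.0, 5)
    for _ in range(max_iter):
        u = solve_fem_1d(nodes, f)
        eta = estimate(nodes, f)
        total = np.sqrt(np.sum(eta**2))
        if total < tol:
            break
        # Doerfler marking: smallest set of elements carrying a fraction theta
        # of the total squared estimator.
        order = np.argsort(eta)[::-1]
        cum = np.cumsum(eta[order]**2)
        marked = order[: np.searchsorted(cum, theta * cum[-1]) + 1]
        # bisect the marked elements
        new_nodes = 0.5 * (nodes[marked] + nodes[marked + 1])
        nodes = np.sort(np.concatenate([nodes, new_nodes]))
    return nodes, u, total

f = lambda x: np.where(x < 0.5, 100.0, 1.0)   # rough forcing to trigger local refinement
nodes, u, err = adapt(f)
print(len(nodes), "nodes, estimated error", float(err))
```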

    Developing an acceptable peer support intervention that enables clients, attending a weight management programme, to cascade their learning within their social network

    Get PDF
    Impacting on health and well-being, obesity creates an unmanageable burden on the health service and economy, yet it is preventable and treatable. Establishing peer support as a tool for weight management could extend the reach of interventions and enhance their efficacy. A narrative systematic literature review highlights the value of peer support, yet also shows that some peers are unhelpful. The aim of this research was to develop an intervention enabling clients of a weight management programme to cascade their learning and experiential knowledge to those they know. Introducing a peer support intervention to clients, and clients offering this to peers, requires behaviour changes by lead facilitators and clients. Guided by the theoretical Behaviour Change Wheel (BCW) for designing behaviour change interventions, with Capability, Opportunity, Motivation for Behaviour (COM-B) at its centre, an iterative qualitative approach was undertaken. Using a prospective longitudinal design and maximum-diversity sampling within the population attending three programmes, 21 clients took part in semi-structured interviews, some of them serial; four focus groups were conducted with nine leads. Thematic and interpretive analysis identified key themes. Motivated by altruistic benefits and by seeing their peers’ readiness to change, participants perceived they would be able to offer support indirectly, without formal training or a formal role, although cues for these offers could be missed. These findings add new knowledge to the field of peer support. Acceptable support comprised praise, inclusion in and demonstration of weight-related activities, and encouragement. Practical dietary advice was welcomed, but the ‘norms’ of a social network take precedence over healthy goals. Giving time to peers, and the stress of hearing their problems, were barriers to offering support. Leads perceived that the topic of peer support could be introduced once clients showed readiness to change. Based on theory and the findings, an intervention manual was developed using TIDieR guidance; it requires further testing in the future.

    A convergent adaptive stochastic Galerkin finite element method with quasi-optimal spatial meshes

    No full text
    We analyze a posteriori error estimation and adaptive refinement algorithms for stochastic Galerkin Finite Element methods for countably-parametric elliptic boundary value problems. A residual error estimator is established which separates, in the energy norm, the effects of the gpc-Galerkin discretization in parameter space and of the Finite Element discretization in physical space. It is proved that the adaptive algorithm converges; to this end, a contraction property of its iterates is established. It is shown that the sequences of triangulations produced by the algorithm in the FE discretization of the active gpc coefficients are asymptotically optimal. Numerical experiments illustrate the theoretical results.

    Parametric and Uncertainty Computations with Tensor Product Representations

    Get PDF
    Computational uncertainty quantification in a probabilistic setting is a special case of a parametric problem. Parameter-dependent state vectors lead, via association with a linear operator, to analogues of the covariance, its spectral decomposition, and the associated Karhunen-Loève expansion. From this one obtains a generalised tensor representation. The parameter in question may be a tuple of numbers, a function, a stochastic process, or a random tensor field. The tensor factorisation may be cascaded, leading to tensors of higher degree. When carried out on a discretised level, such factorisations in the form of low-rank approximations lead to very sparse representations of the high-dimensional quantities involved. Updating the uncertainty in response to new information is an important part of uncertainty quantification. Formulated in terms of random variables instead of measures, the Bayesian update is a projection and allows the use of the tensor factorisations also in this case.
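
    As a small illustration of the first step described above, the sketch below computes a discrete Karhunen-Loève expansion: the spectral decomposition of a covariance matrix yields a separated, low-rank representation u(x, ω) ≈ ū(x) + Σ_i √λ_i v_i(x) ξ_i(ω). The grid, the covariance kernel and the correlation length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete Karhunen-Loeve expansion of a random field on a 1-D grid.
n_x = 200
x = np.linspace(0.0, 1.0, n_x)
C = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.2**2)   # squared-exponential covariance

# Spectral decomposition of the covariance gives the KL modes and variances.
lam, V = np.linalg.eigh(C)
lam, V = lam[::-1], V[:, ::-1]                  # sort by decreasing eigenvalue
lam = np.clip(lam, 0.0, None)                   # clip tiny negative round-off values

# Truncate to the rank r that captures 99% of the total variance:
# u(x, w) ~ mean(x) + sum_{i<r} sqrt(lam_i) v_i(x) xi_i(w), a separated (low-rank) format.
r = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99)) + 1
print(f"rank {r} of {n_x} captures 99% of the variance")

# Draw a few realisations directly from the separated representation.
xi = rng.standard_normal((r, 5))                # independent standard normal KL coefficients
realisations = V[:, :r] @ (np.sqrt(lam[:r])[:, None] * xi)
print("realisation matrix shape:", realisations.shape)
```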