
    Space-time adaptive solution of inverse problems with the discrete adjoint method

    Adaptivity in both space and time has become the norm for solving problems modeled by partial differential equations. The size of the discretized problem makes uniformly refined grids computationally prohibitive. Adaptive refinement of meshes and time steps makes it possible to capture the phenomena of interest while keeping the cost of a simulation tractable on current hardware. Many fields in science and engineering require the solution of inverse problems, where parameters of a given model are estimated from available measurement information. In contrast to forward (regular) simulations, inverse problems have not benefited extensively from adaptive solver technology. Previous research in inverse problems has focused mainly on the continuous approach to computing sensitivities and has typically employed fixed time and space meshes in the solution process. Inverse problem solvers that rely exclusively on uniform or static meshes avoid complications such as the differentiation of mesh motion equations or inconsistencies in the sensitivity equations between subdomains with different refinement levels, but this comes at the cost of low computational efficiency. More efficient computations are possible through judicious use of adaptive mesh refinement, adaptive time steps, and the discrete adjoint method. This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time-dependent, adaptive-grid, adaptive-step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. The accuracy of the discrete adjoint sensitivities may be reduced by the intergrid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided for the discontinuous Galerkin (DG) method. The adjoint model development is considerably simplified by decoupling the adaptive mesh refinement mechanism from the forward model solver and by selectively applying automatic differentiation to individual algorithms. In forward models, discontinuous Galerkin discretizations can efficiently handle high orders of accuracy, h/p-refinement, and parallel computation. The analysis reveals that this approach, paired with Runge-Kutta time stepping, is well suited for the adaptive solution of inverse problems. The usefulness of discrete discontinuous Galerkin adjoints is illustrated on a two-dimensional adaptive data assimilation problem.
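
    The core idea of a discrete adjoint can be illustrated with a minimal sketch that is not the paper's code: differentiate the discretized forward time-stepping loop itself, so the gradient of a misfit cost with respect to the initial state is obtained by a backward sweep with the transposed step operator. The linear advection step matrix A, the step count, and the synthetic observations y_obs below are illustrative assumptions.

        import numpy as np

        def forward(u0, A, nsteps):
            """Forward sweep: u_{n+1} = A u_n, storing every state for the adjoint."""
            states = [u0]
            u = u0
            for _ in range(nsteps):
                u = A @ u
                states.append(u)
            return states

        def discrete_adjoint_gradient(states, A, y_obs):
            """Backward sweep: lambda_N = u_N - y_obs, lambda_n = A^T lambda_{n+1};
            the final lambda is the gradient of J(u0) = 0.5*||u_N - y_obs||^2 w.r.t. u0."""
            lam = states[-1] - y_obs
            for _ in range(len(states) - 1):
                lam = A.T @ lam
            return lam

        # tiny usage example on a periodic upwind advection step (assumed parameters)
        n, c = 50, 0.5                                      # grid size, CFL number
        A = (1 - c) * np.eye(n) + c * np.roll(np.eye(n), 1, axis=0)
        u0 = np.exp(-((np.arange(n) - 10.0) / 3.0) ** 2)
        states = forward(u0, A, nsteps=40)
        y_obs = states[-1] + 0.01 * np.random.randn(n)      # synthetic observations
        grad = discrete_adjoint_gradient(states, A, y_obs)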

    Mirror Descent and Convex Optimization Problems With Non-Smooth Inequality Constraints

    We consider the problem of minimizing a convex function over a simple set subject to a convex non-smooth inequality constraint, and describe first-order methods for solving such problems in different settings: smooth or non-smooth objective function; convex or strongly convex objective and constraint; deterministic or randomized information about the objective and constraint. We hope it is convenient for the reader to have the methods for these different settings in one place. The described methods are based on the Mirror Descent algorithm and the switching subgradient scheme. One of our goals is to propose, for the listed settings, a Mirror Descent method with adaptive stepsizes and an adaptive stopping rule, so that neither the stepsize nor the stopping rule requires knowledge of the Lipschitz constant of the objective or the constraint. We also construct a Mirror Descent method for problems whose objective function is not Lipschitz continuous, e.g., a quadratic function. Besides that, we address the problem of recovering the solution of the dual problem.
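
    A rough Euclidean sketch of the switching subgradient idea behind such schemes: step on the objective f when the constraint g is (approximately) satisfied, otherwise step on g, with stepsizes built from the current subgradient norm rather than a known Lipschitz constant. The functions, the tolerance eps, and the fixed iteration budget below are assumptions; the paper's exact stopping rule and general prox setup are not reproduced.

        import numpy as np

        def switching_mirror_descent(x0, f_sub, g, g_sub, eps, iters=1000):
            """Minimize f(x) s.t. g(x) <= 0 (convex, possibly non-smooth), Euclidean prox.
            Returns the average of the 'productive' iterates."""
            x = np.asarray(x0, dtype=float)
            productive = []
            for _ in range(iters):
                if g(x) <= eps:                     # productive step: move on f
                    d = f_sub(x)
                    productive.append(x.copy())
                else:                               # non-productive step: move on g
                    d = g_sub(x)
                h = eps / max(np.dot(d, d), 1e-16)  # adaptive stepsize, no Lipschitz constant
                x = x - h * d
            return np.mean(productive, axis=0) if productive else x

        # usage: minimize ||x - c||_1 subject to the non-smooth constraint ||x||_1 - 1 <= 0
        c = np.array([2.0, -3.0, 0.5])
        sol = switching_mirror_descent(
            np.zeros(3),
            f_sub=lambda x: np.sign(x - c),
            g=lambda x: np.sum(np.abs(x)) - 1.0,
            g_sub=lambda x: np.sign(x),
            eps=1e-2,
        )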

    Scalable Methods for Adaptively Seeding a Social Network

    In recent years, social networking platforms have developed into extraordinary channels for spreading and consuming information. Along with the rise of such infrastructure, there is continuous progress on techniques for spreading information effectively through influential users. In many applications, one is restricted to selecting influencers from a set of users who engaged with the topic being promoted, and due to the structure of social networks, these users often rank low in terms of their influence potential. An alternative is an adaptive approach that selects users in a manner that targets their influential neighbors. The advantage of such an approach is that it leverages the friendship paradox in social networks: while users are often not influential themselves, they often know someone who is. Despite the various complexities of such optimization problems, we show that scalable adaptive seeding is achievable. In particular, we develop algorithms for linear influence models with provable approximation guarantees that can be gracefully parallelized. To show the effectiveness of our methods we collected data from various verticals that social network users follow. For each vertical, we collected data on the users who responded to a certain post as well as their neighbors, and applied our methods to this data. Our experiments show that adaptive seeding is scalable and, importantly, that it obtains dramatic improvements over standard approaches to information dissemination. Comment: Full version of the paper appearing in WWW 201
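
    A simplified two-stage sketch of the adaptive seeding idea (not the paper's algorithm): spend part of the budget k on core users who engaged with the topic, observe which of their neighbors become reachable, then spend the rest on the most influential reachable neighbors. Using node degree as the influence proxy and a realization probability p are assumptions made here for illustration.

        import random

        def adaptive_seed(core, neighbors, degree, k, t, p=0.3, seed=0):
            """core: engaged users; neighbors: dict user -> set of friends;
            degree: dict node -> influence proxy; k: total budget; t: stage-1 budget."""
            rng = random.Random(seed)
            # Stage 1: pick t core users whose best neighbor has the highest degree.
            stage1 = sorted(core, reverse=True,
                            key=lambda u: max((degree[v] for v in neighbors[u]), default=0))[:t]
            # Realization: each neighbor of a seeded core user becomes available w.p. p.
            available = {v for u in stage1 for v in neighbors[u] if rng.random() < p}
            # Stage 2: spend the remaining budget on the most influential available neighbors.
            stage2 = sorted(available, key=lambda v: degree[v], reverse=True)[:k - t]
            return stage1, stage2

        # tiny hypothetical example: core users, their friends, and degrees as influence proxies
        neighbors = {"a": {"x", "y"}, "b": {"y", "z"}, "c": {"w"}}
        degree = {"x": 120, "y": 340, "z": 15, "w": 980, "a": 3, "b": 5, "c": 2}
        print(adaptive_seed(["a", "b", "c"], neighbors, degree, k=3, t=1))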

    Diffusion Adaptation Strategies for Distributed Optimization and Learning over Networks

    We propose an adaptive diffusion mechanism to optimize a global cost function in a distributed manner over a network of nodes. The cost function is assumed to consist of a collection of individual components. Diffusion adaptation allows the nodes to cooperate and diffuse information in real time; it also helps alleviate the effects of stochastic gradient noise and measurement noise through a continuous learning process. We analyze the mean-square-error performance of the algorithm in some detail, including its transient and steady-state behavior. We also apply the diffusion algorithm to two problems: distributed estimation with sparse parameters and distributed localization. Compared to well-studied incremental methods, diffusion methods do not require the use of a cyclic path over the nodes and are robust to node and link failure. Diffusion methods also endow networks with adaptation abilities that enable the individual nodes to continue learning even when the cost function changes with time. Examples involving such dynamic cost functions with moving targets are common in the context of biological networks. Comment: 34 pages, 6 figures, to appear in IEEE Transactions on Signal Processing, 201
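
    A minimal sketch of one standard member of this family, adapt-then-combine (ATC) diffusion LMS: each node takes a local stochastic-gradient step on its own streaming data and then averages the intermediate estimates of its neighbors with a combination matrix A. The network size, step size, uniform combination weights, and data model below are assumptions, not the paper's experimental setup.

        import numpy as np

        rng = np.random.default_rng(0)
        N, M, mu, iters = 10, 4, 0.01, 2000          # nodes, filter length, step size, iterations
        w_true = rng.standard_normal(M)              # common parameter vector to estimate
        A = np.full((N, N), 1.0 / N)                 # fully connected uniform combination matrix (assumed)
        W = np.zeros((N, M))                         # per-node estimates

        for _ in range(iters):
            # adaptation: local LMS step at every node using its own data
            U = rng.standard_normal((N, M))                      # regressors u_k
            d = U @ w_true + 0.1 * rng.standard_normal(N)        # noisy measurements d_k
            psi = W + mu * (d - np.einsum('km,km->k', U, W))[:, None] * U
            # combination: each node fuses its neighbors' intermediate estimates
            W = A @ psi

        print(np.linalg.norm(W - w_true, axis=1))    # per-node estimation error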

    Polynomial Preserving Recovery For Weak Galerkin Methods And Their Applications

    Gradient recovery techniques are widely used to reconstruct a better numerical gradient from a finite element solution, for mesh smoothing, a posteriori error estimation, and adaptive finite element methods. The polynomial preserving recovery (PPR) technique generates a higher order approximation of the gradient on a patch of mesh elements around each mesh vertex. It can be used with different finite element methods for different problems. This dissertation presents recovery techniques for the weak Galerkin methods, as well as applications of gradient recovery to various problems, including elliptic problems, interface problems, and Stokes problems. Our first target is to develop a boundary strategy for the current PPR algorithm. The current accuracy of PPR near boundaries is not as good as in the interior of the domain; it may even be worse than without recovery. Special treatment is needed to improve the accuracy of PPR on the boundary. In this thesis, we present two boundary recovery strategies to resolve the problems caused by boundaries. Numerical experiments indicate that both newly proposed strategies improve on the original PPR. Our second target is to generalize PPR to the weak Galerkin methods. Unlike the standard finite element methods, the weak Galerkin methods use a different set of degrees of freedom. In the generalization of PPR, we obtain recovered gradient information for the numerical solution instead of only the weak gradient information. In the PPR process, we are also able to recover the function value at the nodal points, which produces a globally continuous solution instead of a piecewise continuous function. Our third target is to apply the proposed strategy and WGPPR to interface problems. We treat an interface as a boundary when performing gradient recovery, and the jump condition on the interface is well captured by the function recovery process. In addition, adaptive methods based on a WGPPR recovery-type a posteriori error estimator are proposed and numerically tested in this thesis. Applications to elliptic and interface problems validate the effectiveness and robustness of our algorithm. Furthermore, WGPPR has been applied to 3D problems and the Stokes problem as well. The superconvergence phenomenon is again observed.
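
    A bare-bones 2D sketch of the PPR idea at a single mesh vertex, assuming the patch coordinates and nodal values are given: fit a quadratic polynomial by least squares to the finite element nodal values on the surrounding patch, then take the gradient of that polynomial at the vertex as the recovered gradient. The synthetic patch in the usage example is an assumption for illustration.

        import numpy as np

        def ppr_gradient(xv, yv, xs, ys, vals):
            """Recover grad u at vertex (xv, yv) from patch nodes (xs, ys) with values vals."""
            dx, dy = np.asarray(xs) - xv, np.asarray(ys) - yv        # local coordinates
            # quadratic basis 1, x, y, x^2, xy, y^2 evaluated at the patch nodes
            V = np.column_stack([np.ones_like(dx), dx, dy, dx**2, dx*dy, dy**2])
            coef, *_ = np.linalg.lstsq(V, np.asarray(vals, dtype=float), rcond=None)
            # gradient of the fitted polynomial at the vertex (dx = dy = 0)
            return coef[1], coef[2]

        # usage on a tiny synthetic patch where u(x, y) = x^2 + y, so grad u(0, 0) = (0, 1)
        xs = [0.0, 1.0, -1.0, 0.0, 0.0, 0.7]
        ys = [0.0, 0.0, 0.0, 1.0, -1.0, 0.7]
        vals = [x * x + y for x, y in zip(xs, ys)]
        print(ppr_gradient(0.0, 0.0, xs, ys, vals))   # close to (0.0, 1.0)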

    Complexity Leadership: A Theoretical Perspective

    Complex systems are social networks composed of interactive employees interconnected through collaborative, dynamic ties such as shared goals, perspectives, and needs. Complex systems are largely grounded in “complex systems theory”, which focuses mainly on identifying and developing strategies and behaviours that foster continuous learning, adaptation to new conditions, and creativity in organizations with a dynamic, collaborative management mentality. Complex systems clearly need leaders to manage complexity. Complexity leadership can be defined as the adaptive mechanisms developed by complex organizations in response to the new conditions of the information age, rather than to the technical problems of the industrial age. Complexity leadership is the joint product of three types of leadership: (1) administrative leadership, based on strict control and a pronounced bureaucratic hierarchy; (2) adaptive leadership, fundamentally based on creative problem solving, adaptation to new conditions, and learning; and (3) action-centered leadership, which involves immediate decision-making mechanisms employed in crises and dynamic productivity. The study focuses on complexity leadership within the context of complexity leadership theory.

    Automating the deployment of componentized systems

    Embedded and self-adaptive systems demand continuous adaptation and reconfiguration activities based on changing quality conditions and context information. As a consequence, systems have to be (re)deployed several times, and software components need to be mapped onto new or existing hardware pieces. Determining an optimal deployment for complex systems, often at runtime, remains a well-known challenge. In this paper we highlight the major problems of automatic deployment and present a research plan towards a UML-based solution for the deployment of componentized systems. As a first step towards a solution, we use the UML superstructure to suggest a way to redeploy UML component diagrams based on the inputs and outputs required to enact an automatic deployment process. Comisión Interministerial de Ciencia y Tecnología (CICYT) SETI (TIN2009-07366)
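
    A purely illustrative sketch of the underlying mapping problem, not the paper's UML-based approach: components declare required resources, nodes offer capacities, and a deployment assigns each component to a node that can still host it. All component names, node names, and numbers below are assumptions.

        def deploy(components, nodes):
            """components: dict name -> required capacity; nodes: dict name -> free capacity.
            Returns a component -> node mapping, placing the largest components first (greedy)."""
            free = dict(nodes)
            mapping = {}
            for comp, need in sorted(components.items(), key=lambda kv: -kv[1]):
                target = next((n for n, cap in free.items() if cap >= need), None)
                if target is None:
                    raise RuntimeError(f"no node can host component {comp}")
                mapping[comp] = target
                free[target] -= need
            return mapping

        # usage with hypothetical components and hardware nodes
        print(deploy({"sensor-driver": 2, "planner": 5, "ui": 3}, {"ecu1": 6, "ecu2": 5}))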