
    An approximate theory for value sensitivity

    The software engineering community is on a quest for general and specific theories for the discipline. Increasingly, systems constructed for today's hyper-connected world raise issues of security and privacy, both examples of value concerns. Hence there is a need to articulate a theory of value sensitivity that software engineers can draw upon to evaluate their designs and to embed value outcomes into the systems they develop. This paper proposes an approximate theory of value sensitivity, in recognition that theory building is a journey and this is an interim step. The theory is articulated using the framework proposed by Sjøberg et al. An initial evaluation is provided for both the value sensitivity theory and the framework.

    Analysis of a parallelized nonlinear elliptic boundary value problem solver with application to reacting flows

    A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also to a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is the mesh resolution in one direction, for reasonable n.
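    The O(n) speedup claim can be illustrated with a toy cost model. The sketch below is not the paper's fifteen-parameter cost function; it is a minimal stand-in with two invented constants (t_flop, t_comm) that captures the same trade-off between per-strip computation and interface communication.

```python
# Toy parallel cost model for a stripwise decomposition of an n x n grid
# over p processors.  Each processor owns a strip of ~n/p rows, so per-sweep
# computation scales as n*n/p, while halo exchange with neighbouring strips
# scales with the interface length n.  t_flop and t_comm are illustrative
# constants, not parameters taken from the paper.

def strip_cost(n, p, t_flop=1.0, t_comm=50.0):
    """Estimated time per sweep with p strip subdomains."""
    compute = t_flop * n * n / p
    comm = t_comm * n if p > 1 else 0.0   # no exchange in the serial case
    return compute + comm

def speedup(n, p, **kw):
    """Serial time divided by p-processor time."""
    return strip_cost(n, 1, **kw) / strip_cost(n, p, **kw)
```

    For fixed p the communication term is amortized as n grows, so speedup approaches p; choosing p proportional to n then yields speedup of O(n), consistent with the abstract.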

    On convex lower-level black-box constraints in bilevel optimization with an application to gas market models with chance constraints

    Bilevel optimization is an increasingly important tool for modeling hierarchical decision making. However, the ability to model such settings makes bilevel problems hard to solve in theory and practice. In this paper, we add to the general difficulty of this class of problems by further incorporating convex black-box constraints in the lower level. For this setup, we develop a cutting-plane algorithm that computes approximate bilevel-feasible points. We apply this method to a bilevel model of the European gas market in which we use a joint chance constraint to model uncertain loads. Since the chance constraint is not available in closed form, it fits into the black-box setting studied before. For the applied model, we use further problem-specific insights to derive bounds on the objective value of the bilevel problem. By doing so, we are able to show that we solve the application problem to approximate global optimality. In our numerical case study, we are thus able to evaluate the sensitivity of welfare with respect to the achieved safety level of uncertain load coverage.
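    As a rough illustration of the kind of cutting-plane loop described above (not the authors' algorithm), the sketch below applies Kelley-style linearization cuts to a convex black-box constraint g(x) <= 0, querying the black box only for function values and finite-difference gradients. The objective, constraint, bounds and tolerances are all invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

def g(x):                       # convex black-box constraint (here: a disc)
    return x[0]**2 + x[1]**2 - 1.0

def grad_fd(f, x, h=1e-6):      # forward-difference gradient of the black box
    fx = f(x)
    return np.array([(f(x + h * np.eye(len(x))[i]) - fx) / h
                     for i in range(len(x))])

def cutting_plane(c, bounds, tol=1e-6, max_iter=100):
    """Minimise c.x over {x in box : g(x) <= 0} via outer linearisations."""
    A, b = [], []               # accumulated cuts  a.x <= b
    x = None
    for _ in range(max_iter):
        res = linprog(c, A_ub=np.array(A) if A else None,
                      b_ub=np.array(b) if b else None, bounds=bounds)
        x = res.x
        if g(x) <= tol:         # approximately feasible -> done
            return x
        a = grad_fd(g, x)       # convexity: g(y) >= g(x) + a.(y - x)
        A.append(a)             # so feasibility implies a.y <= a.x - g(x)
        b.append(a @ x - g(x))
    return x

x_star = cutting_plane(c=np.array([-1.0, 0.0]), bounds=[(-2, 2), (-2, 2)])
```

    Each cut is valid for the true feasible set by convexity, so the LP relaxation only ever shrinks toward it; for this example the loop converges to the point (1, 0).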

    Radiative Electroweak Symmetry Breaking in a Little Higgs Model

    We present a new Little Higgs model, motivated by the deconstruction of a five-dimensional gauge-Higgs model. The approximate global symmetry is $SO(5)_0 \times SO(5)_1$, breaking to $SO(5)$, with a gauged subgroup of $[SU(2)_{0L} \times U(1)_{0R}] \times O(4)_1$, breaking to $SU(2)_L \times U(1)_Y$. Radiative corrections produce an additional small vacuum misalignment, breaking the electroweak symmetry down to $U(1)_{EM}$. Novel features of this model are: the only un-eaten pseudo-Goldstone boson in the effective theory is the Higgs boson; the model contains a custodial symmetry, which ensures that $\hat{T} = 0$ at tree level; and the potential for the Higgs boson is generated entirely through one-loop radiative corrections. A small negative mass-squared in the Higgs potential is obtained by a cancellation between the contributions of two heavy partners of the top quark, which is readily achieved over much of the parameter space. We can then obtain both a vacuum expectation value of $v = 246$ GeV and a light Higgs boson mass, which is strongly correlated with the masses of the two heavy top-quark partners. For a global symmetry-breaking scale of $f = 1$ TeV and using a single cutoff for the fermion loops, the Higgs boson mass satisfies $120\ \mathrm{GeV} \lesssim M_H \lesssim 150\ \mathrm{GeV}$ over much of the parameter space. For $f$ raised to 10 TeV, these values increase by about 40 GeV. Effects at the ultraviolet cutoff scale may also raise the predicted values of the Higgs boson mass, but the model still favors $M_H \lesssim 200$ GeV.
    Comment: 32 pages, 10 figures, JHEP style. Version accepted for publication in JHEP. Includes additional discussion of sensitivity to UV effects and fine-tuning, revised Fig. 9, added appendix and additional references.

    Improved Perturbation Method and its Application to the IIB Matrix Model

    We present a new scheme for extracting approximate values in "the improved perturbation method", a resummation technique capable of evaluating a series outside its radius of convergence. We employ the distribution profile of the series weighted by nth-order derivatives with respect to the artificially introduced parameters. With these weightings, the distribution becomes more sensitive to the "plateau" structure in which the consistency condition of the method is satisfied. The scheme works effectively even when the system involves many parameters. We also propose that this scheme be applied to each observable separately and the results analyzed comprehensively. We apply this scheme to the analysis of the IIB matrix model by the improved perturbation method, obtained up to eighth order of perturbation in former works. We consider here the possibility of spontaneous breakdown of Lorentz symmetry, and evaluate the free energy and the anisotropy of the space-time extent. In the present analysis, we find an SO(10)-symmetric vacuum besides the SO(4)- and SO(7)-symmetric vacua that have been observed before. It is also found that there are two distinct SO(4)-symmetric vacua that have almost the same free energy but different space-time extents. From the approximate values of the free energy, we conclude that the SO(4)-symmetric vacua are most preferred among these three types of vacua.
    Comment: 52 pages, published version.

    Approximate Models and Robust Decisions

    Decisions based partly or solely on predictions from probabilistic models may be sensitive to model misspecification. Statisticians are taught from an early stage that "all models are wrong", but little formal guidance exists on how to assess the impact of model approximation on decision making, or how to proceed when optimal actions appear sensitive to model fidelity. This article presents an overview of recent developments across different disciplines to address this. We review diagnostic techniques, including graphical approaches and summary statistics, to help highlight decisions made through minimised expected loss that are sensitive to model misspecification. We then consider formal methods for decision making under model misspecification, by quantifying the stability of optimal actions under perturbations to the model within a neighbourhood of model space. This neighbourhood is defined in one of two ways: either in a strong sense, via an information (Kullback-Leibler) divergence around the approximating model, or via a nonparametric model extension, again centred at the approximating model, in order to "average out" over possible misspecifications. This is presented in the context of recent work in the robust control, macroeconomics and financial mathematics literature. We adopt a Bayesian approach throughout, although the methods are agnostic to this position.
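    For the Kullback-Leibler neighbourhood, the worst-case expected loss admits the well-known convex dual sup over Q with KL(Q||P) <= eps of E_Q[L] = inf over t > 0 of t log E_P[exp(L/t)] + t*eps, which can be estimated from Monte Carlo draws of the loss under the approximating model P. The sketch below is purely illustrative; the loss distribution and radius eps are invented, not taken from the article.

```python
import numpy as np

# Worst-case expected loss over a KL ball of radius eps around the
# approximating model P, via the dual formulation
#   sup_{KL(Q||P) <= eps} E_Q[L] = inf_{t>0} t*log E_P[exp(L/t)] + t*eps,
# with the infimum approximated by a grid search over t and the
# log-moment-generating function estimated from samples of the loss.

def worst_case_loss(losses, eps, ts=np.logspace(-3, 3, 400)):
    L = np.asarray(losses, dtype=float)
    vals = []
    for t in ts:
        m = L.max() / t                              # stabilise the exponent
        log_mgf = m + np.log(np.mean(np.exp(L / t - m)))
        vals.append(t * log_mgf + t * eps)
    return min(vals)

rng = np.random.default_rng(0)
losses = rng.normal(loc=1.0, scale=0.5, size=100_000)  # illustrative draws
nominal = losses.mean()                  # expected loss under the model
robust = worst_case_loss(losses, eps=0.1)
```

    The robust value is always at least the nominal one, and shrinks back to it as the neighbourhood radius eps tends to zero, which is a quick sanity check on the decision's stability.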

    Reconstructing Loads in Nanoplates from Dynamic Data

    It was recently proved that the knowledge of the transverse displacement of a nanoplate in an open subset of its mid-plane, measured over any interval of time, allows for the unique determination of the spatial components (Formula presented.) of the transverse load (Formula presented.), where (Formula presented.) and (Formula presented.) is a known set of linearly independent functions of the time variable. The nanoplate mechanical model is built within the strain gradient linear elasticity theory, according to the Kirchhoff–Love kinematic assumptions. In this paper, we derive a reconstruction algorithm for the above inverse source problem, and we implement a numerical procedure based on a finite element spatial discretization to approximate the loads (Formula presented.). The computations are developed for a uniform rectangular nanoplate clamped at the boundary. The sensitivity of the results with respect to the main parameters that influence the identification is analyzed in detail. The adoption of a regularization scheme based on the singular value decomposition turns out to be decisive for the accuracy and stability of the reconstruction.
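    The role of SVD-based regularization in such inverse source problems can be illustrated on a generic discrete linear problem A g = d via truncated SVD: small singular values, which amplify measurement noise, are discarded before inverting. The forward operator (a Gaussian smoothing kernel), the noise level and the truncation threshold below are illustrative stand-ins, not the paper's finite-element operator or nanoplate data.

```python
import numpy as np

# Truncated-SVD (TSVD) regularisation for a discrete linear inverse problem
# A g = d.  A is a smoothing (Gaussian-blur) forward map, so its singular
# values decay rapidly and a naive solve amplifies noise catastrophically.

def tsvd_solve(A, d, rtol=1e-4):
    """Least-squares solution keeping only singular values > rtol * s_max."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rtol * s[0]                 # drop noise-amplifying modes
    return Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])

n = 60
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
A = h * np.exp(-(x[:, None] - x[None, :])**2 / (2 * 0.05**2))  # blur kernel

g_true = np.sin(np.pi * x)                 # synthetic "load" to recover
rng = np.random.default_rng(1)
d = A @ g_true + 1e-6 * rng.normal(size=n) # blurred, noisy "measurement"

g_naive = np.linalg.solve(A, d)            # unregularised: noise blows up
g_tsvd = tsvd_solve(A, d)                  # regularised reconstruction
```

    The truncation threshold plays the same stabilising role as the regularization scheme in the paper: it trades a small bias in the reconstructed load for a drastic reduction in noise amplification.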