
    Crystal growth and furnace analysis

    A thermal analysis of Hg/Cd/Te solidification in a Bridgman cell is performed using Continuum's VAST code. The energy equation is solved in an axisymmetric, quasi-steady domain for both the molten and solid alloy regions. Alloy composition is calculated with a simplified one-dimensional model to estimate its effect on melt thermal conductivity and, consequently, on the temperature field within the cell. Solidification is assumed to occur at a fixed temperature of 979 K. Simplified boundary conditions are included to model both the radiant and conductive heat exchange between the furnace walls and the alloy. Calculations are performed to show how the steady-state isotherms are affected by the hot and cold furnace temperatures, the boundary condition parameters, and the growth rate, which affects the calculated alloy composition. The Advanced Automatic Directional Solidification Furnace (AADSF), developed by NASA, is also thermally analyzed using the CINDA code. The objective is to determine the performance and the overall power requirements of different furnace designs.
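The quasi-steady interface balance described above can be illustrated with a deliberately minimal one-dimensional sketch (not the axisymmetric VAST/CINDA models): steady conduction through the solid and melt regions with the interface pinned at the fixed solidification temperature of 979 K. All property values below are illustrative placeholders, not HgCdTe data.

```python
# Minimal 1-D analogue of the quasi-steady interface balance: steady
# conduction through solid (cold side) and melt (hot side), with the
# interface pinned at the fixed solidification temperature T_m = 979 K.
# Property values are placeholders, not HgCdTe data.

def interface_position(T_hot, T_cold, k_melt, k_solid, length, T_m=979.0):
    """Interface location x_i measured from the cold wall, found by
    enforcing heat-flux continuity across the interface:
        k_s * (T_m - T_c) / x_i  ==  k_l * (T_h - T_m) / (L - x_i)."""
    a = k_solid * (T_m - T_cold)
    b = k_melt * (T_hot - T_m)
    return a * length / (a + b)

x_i = interface_position(T_hot=1100.0, T_cold=800.0,
                         k_melt=1.0, k_solid=2.0, length=0.1)
```

Moving the hot or cold wall temperature in this toy balance shifts the interface exactly as the isotherm study above varies it in two dimensions.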

    Exploiting network topology for large-scale inference of nonlinear reaction models

    The development of chemical reaction models aids understanding and prediction in areas ranging from biology to electrochemistry and combustion. A systematic approach to building reaction network models uses observational data not only to estimate unknown parameters, but also to learn model structure. Bayesian inference provides a natural approach to this data-driven construction of models. Yet traditional Bayesian model inference methodologies that numerically evaluate the evidence for each model are often infeasible for nonlinear reaction network inference, as the number of plausible models can be combinatorially large. Alternative approaches based on model-space sampling can enable large-scale network inference, but their realization presents many challenges. In this paper, we present new computational methods that make large-scale nonlinear network inference tractable. First, we exploit the topology of networks describing potential interactions among chemical species to design improved "between-model" proposals for reversible-jump Markov chain Monte Carlo. Second, we introduce a sensitivity-based determination of move types which, when combined with network-aware proposals, yields significant additional gains in sampling performance. These algorithms are demonstrated on inference problems drawn from systems biology, with nonlinear differential equation models of species interactions.
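The model-space sampling idea can be sketched in miniature. The toy below toggles individual reactions in and out of a candidate network with a Metropolis accept/reject step; the `score` function is an illustrative stand-in for a log model evidence, and the uniform proposal deliberately omits the paper's network-aware and sensitivity-based move design.

```python
import math
import random

# Toy sketch of model-space sampling over reaction-network structures:
# each "model" is a subset of candidate reactions, and a between-model
# move toggles one reaction in or out. score() is an illustrative
# stand-in for a log model evidence, not the paper's computation.

CANDIDATES = ["A->B", "B->C", "A->C", "C->A"]
TRUE = {"A->B", "B->C"}          # hypothetical "data-generating" network

def score(model):
    # Placeholder log-evidence: rewards overlap with TRUE, penalizes size.
    return 2.0 * len(model & TRUE) - float(len(model))

def step(model, rng):
    proposal = model ^ {rng.choice(CANDIDATES)}   # toggle one reaction
    if math.log(rng.random()) < score(proposal) - score(model):
        return proposal                           # Metropolis accept
    return model

rng = random.Random(0)
model = set()
for _ in range(2000):
    model = step(model, rng)
```

The paper's contribution is precisely what this sketch lacks: proposals biased by network topology and move types chosen by sensitivity, which matter when the candidate set is combinatorially large rather than four reactions.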

    Concurrent π-vector fields and energy β-change

    The present paper deals with an \emph{intrinsic} investigation of the notion of a concurrent $\pi$-vector field on the pullback bundle of a Finsler manifold $(M,L)$. The effect of the existence of a concurrent $\pi$-vector field on some important special Finsler spaces is studied. An intrinsic investigation of a particular $\beta$-change, namely the energy $\beta$-change ($\widetilde{L}^{2}(x,y)=L^{2}(x,y)+B^{2}(x,y)$ with $B:=g(\bar{\zeta},\bar{\eta})$; $\bar{\zeta}$ being a concurrent $\pi$-vector field), is established. The relation between the two Barthel connections $\Gamma$ and $\widetilde{\Gamma}$, corresponding to this change, is found. This relation, together with the fact that the Cartan and the Barthel connections have the same horizontal and vertical projectors, enables us to study the energy $\beta$-change of the fundamental linear connections in Finsler geometry: the Cartan connection, the Berwald connection, the Chern connection and the Hashiguchi connection. Moreover, the change of their curvature tensors is concluded. It should be pointed out that the present work is formulated in a prospective modern coordinate-free form. Comment: 27 pages, LaTeX file. Some typographical errors corrected, some formulas simplified.

    Strategic tensions within the smartphones industry: the case of BlackBerry

    This paper reviews some aspects of corporate strategy in a well-known smartphone provider. Two approaches to strategy are analysed: one concerning the industry and the other related to the organization. A general introduction to the smartphone industry is given, followed by specific background on BlackBerry. Two perspectives are explored: the first discusses the paradox of compliance and choice within the industry, and the second discusses the paradox of control and chaos in BlackBerry. The paper concludes with a brief overview of the company's performance from 2006 to 2012, leading to some recommendations.

    Efficient Localization of Discontinuities in Complex Computational Simulations

    Surrogate models for computational simulations are input-output approximations that allow computationally intensive analyses, such as uncertainty propagation and inference, to be performed efficiently. When a simulation output does not depend smoothly on its inputs, the error and convergence rate of many approximation methods deteriorate substantially. This paper details a method for efficiently localizing discontinuities in the input parameter domain, so that the model output can be approximated as a piecewise smooth function. The approach comprises an initialization phase, which uses polynomial annihilation to assign function values to different regions and thus seed an automated labeling procedure, followed by a refinement phase that adaptively updates a kernel support vector machine representation of the separating surface via active learning. The overall approach avoids structured grids and exploits any available simplicity in the geometry of the separating surface, thus reducing the number of model evaluations required to localize the discontinuity. The method is illustrated on examples of up to eleven dimensions, including algebraic models and ODE/PDE systems, and demonstrates improved scaling and efficiency over other discontinuity localization approaches.
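The refinement idea can be sketched with an off-the-shelf kernel SVM. In this toy, labels come from thresholding the model output directly (a stand-in for the polynomial-annihilation labeling), scikit-learn's `SVC` plays the role of the kernel support vector machine, and the active-learning loop that queries new points near the current boundary is omitted.

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of the refinement phase only: fit a kernel SVM whose decision
# boundary approximates the separating surface of a discontinuous output.
# Labels come from thresholding f directly, a stand-in for the paper's
# polynomial-annihilation labeling; active learning is omitted.

def f(x):
    # Model output with a jump across the circle x0^2 + x1^2 = 0.25.
    return 1.0 if x[0] ** 2 + x[1] ** 2 < 0.25 else -1.0

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(400, 2))
y = np.array([f(row) for row in X])

# The SVM decision boundary approximates the separating surface.
svm = SVC(kernel="rbf", C=100.0).fit(X, y)
```

Because the classifier needs samples only near the jump, adaptively placing new evaluations close to the current decision boundary, as the paper does, is what keeps the model-evaluation count low in higher dimensions.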

    Data-Driven Model Reduction for the Bayesian Solution of Inverse Problems

    One of the major challenges in the Bayesian solution of inverse problems governed by partial differential equations (PDEs) is the computational cost of repeatedly evaluating numerical PDE models, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. This paper proposes a data-driven projection-based model reduction technique to reduce this computational cost. The proposed technique has two distinctive features. First, the model reduction strategy is tailored to inverse problems: the snapshots used to construct the reduced-order model are computed adaptively from the posterior distribution. Posterior exploration and model reduction are thus pursued simultaneously. Second, to avoid repeated evaluations of the full-scale numerical model as in a standard MCMC method, we couple the full-scale model and the reduced-order model together in the MCMC algorithm. This maintains accurate inference while reducing its overall computational cost. In numerical experiments considering steady-state flow in a porous medium, the data-driven reduced-order model achieves better accuracy than a reduced-order model constructed using the classical approach. It also improves posterior sampling efficiency by several orders of magnitude compared to a standard MCMC method.
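A common way to couple a cheap reduced-order model with the full model inside MCMC is delayed acceptance, sketched below on a toy one-dimensional target. Whether this matches the paper's exact coupling is an assumption on our part, and both densities here are analytic stand-ins rather than PDE models.

```python
import math
import random

# Delayed-acceptance sketch: the cheap surrogate screens proposals, and
# the "expensive" full posterior is evaluated only for proposals that
# survive the screen. Both densities are toy stand-ins, not PDE models.

def log_full(x):
    # "Expensive" log-posterior: standard normal.
    return -0.5 * x * x

def log_surrogate(x):
    # Cheap reduced-order approximation with a deliberate slight mismatch.
    return -0.525 * x * x

def da_step(x, rng, step=1.0):
    """One step; returns (new state, whether the full model was evaluated)."""
    y = x + step * (2.0 * rng.random() - 1.0)
    # Stage 1: screen the proposal using only the surrogate.
    if math.log(rng.random()) >= log_surrogate(y) - log_surrogate(x):
        return x, False
    # Stage 2: correct with the full model; the surrogate ratio cancels,
    # so the chain still targets the full posterior exactly.
    a = (log_full(y) - log_full(x)) - (log_surrogate(y) - log_surrogate(x))
    return (y, True) if math.log(rng.random()) < a else (x, True)

rng = random.Random(1)
x, full_evals, samples = 0.0, 0, []
for _ in range(5000):
    x, used_full = da_step(x, rng)
    full_evals += used_full
    samples.append(x)
```

Proposals rejected at stage 1 cost only a surrogate evaluation, which is where the savings come from when the full model is a PDE solve.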

    A continuous analogue of the tensor-train decomposition

    We develop new approximation algorithms and data structures for representing and computing with multivariate functions using the functional tensor-train (FT), a continuous extension of the tensor-train (TT) decomposition. The FT represents functions using a tensor-train ansatz by replacing the three-dimensional TT cores with univariate matrix-valued functions. The main contribution of this paper is a framework to compute the FT that employs adaptive approximations of univariate fibers, and that is not tied to any tensorized discretization. The algorithm can be coupled with any univariate linear or nonlinear approximation procedure. We demonstrate that this approach can generate multivariate function approximations that are several orders of magnitude more accurate, for the same cost, than those based on the conventional approach of compressing the coefficient tensor of a tensor-product basis. Our approach is in the spirit of other continuous computation packages such as Chebfun, and yields an algorithm which requires the computation of "continuous" matrix factorizations such as the LU and QR decompositions of vector-valued functions. To support these developments, we describe continuous versions of an approximate maximum-volume cross approximation algorithm and of a rounding algorithm that re-approximates an FT by one of lower ranks. We demonstrate that our technique improves accuracy and robustness, compared to TT and quantics-TT approaches with fixed parameterizations, of high-dimensional integration, differentiation, and approximation of functions with local features such as discontinuities and other nonlinearities.
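The core evaluation rule of a functional tensor-train can be shown with hand-picked cores: each core is a univariate matrix-valued function, and evaluating the multivariate function multiplies the core matrices in order. The example below encodes f(x, y, z) = sin(x + y) · z exactly with FT ranks (1, 2, 2, 1); the adaptive cross-approximation construction that the paper actually contributes is omitted.

```python
import numpy as np

# Minimal illustration of the functional tensor-train (FT) evaluation
# rule: each core is a univariate matrix-valued function, and evaluating
# the multivariate function chains the core matrices. These cores are
# hand-picked so that f(x, y, z) = sin(x + y) * z is exact with ranks
# (1, 2, 2, 1), using sin(x + y) = sin(x)cos(y) + cos(x)sin(y).

cores = [
    lambda x: np.array([[np.sin(x), np.cos(x)]]),    # 1 x 2 core
    lambda y: np.array([[np.cos(y)], [np.sin(y)]]),  # 2 x 1 core
    lambda z: np.array([[z]]),                       # 1 x 1 core
]

def ft_eval(cores, point):
    """Evaluate the FT at a point by multiplying the core matrices."""
    out = np.eye(1)
    for core, coord in zip(cores, point):
        out = out @ core(coord)
    return float(out[0, 0])
```

In the paper the cores are not hand-picked but built adaptively from univariate fibers, with continuous analogues of LU/QR and maximum-volume cross approximation choosing where to evaluate the function.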