
    Formalizing Mathematical Knowledge as a Biform Theory Graph: A Case Study

    A biform theory is a combination of an axiomatic theory and an algorithmic theory that supports the integration of reasoning and computation. These are ideal for formalizing algorithms that manipulate mathematical expressions. A theory graph is a network of theories connected by meaning-preserving theory morphisms that map the formulas of one theory to the formulas of another theory. Theory graphs are in turn well suited for formalizing mathematical knowledge at the most convenient level of abstraction using the most convenient vocabulary. We are interested in the problem of whether a body of mathematical knowledge can be effectively formalized as a theory graph of biform theories. As a test case, we look at the graph of theories encoding natural number arithmetic. We used two different formalisms to do this, which we describe and compare. The first is realized in CTT_uqe, a version of Church's type theory with quotation and evaluation, and the second is realized in Agda, a dependently typed programming language.
    Comment: 43 pages; published without appendices in: H. Geuvers et al., eds, Intelligent Computer Mathematics (CICM 2017), Lecture Notes in Computer Science, Vol. 10383, pp. 9-24, Springer, 2017.
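    The biform idea, in which one shared syntax carries both an axiomatic theory (formulas to reason about) and an algorithmic theory (procedures that compute), can be illustrated with a minimal sketch. The Python below is not the paper's CTT_uqe or Agda development; the expression datatype, evaluator, axiom checker, and toy "morphism" are invented stand-ins for the corresponding formal notions.

```python
from dataclasses import dataclass

# One shared syntax of natural-number expressions (hypothetical names).
class Expr: ...

@dataclass(frozen=True)
class Zero(Expr): pass

@dataclass(frozen=True)
class Succ(Expr):
    arg: Expr

@dataclass(frozen=True)
class Plus(Expr):
    left: Expr
    right: Expr

def evaluate(e: Expr) -> int:
    """Algorithmic side: compute the value an expression denotes."""
    if isinstance(e, Zero): return 0
    if isinstance(e, Succ): return 1 + evaluate(e.arg)
    if isinstance(e, Plus): return evaluate(e.left) + evaluate(e.right)
    raise TypeError(e)

def axiom_plus_succ(m: Expr, n: Expr) -> bool:
    """Axiomatic side, checked on instances: plus(m, succ(n)) = succ(plus(m, n))."""
    return evaluate(Plus(m, Succ(n))) == evaluate(Succ(Plus(m, n)))

def to_decimal_theory(e: Expr) -> str:
    """A toy meaning-preserving translation into a decimal-literal vocabulary,
    playing the role of a theory morphism."""
    return str(evaluate(e))

two = Succ(Succ(Zero()))
assert axiom_plus_succ(two, two)
assert to_decimal_theory(Plus(two, two)) == "4"
```

    Running it checks an instance of an axiom by computation, which is exactly the interplay between reasoning and evaluation that biform theories make first-class.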

    Simulation of solidification in a Bridgman cell

    Bridgman-type crystal growth techniques are attractive methods for producing homogeneous, high-quality infrared detector and junction device materials. However, crystal imperfections and interface shapes still must be controlled through modification of the temperature and concentration gradients created during solidification. The objective of this investigation was to study the temperature fields generated by various cell and heat-pipe configurations and operating conditions. Continuum's numerical model of the temperature, species concentrations, and velocity fields was used to describe the thermal characteristics of Bridgman cell operation.
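    As a rough illustration of the kind of temperature-field calculation involved (not the coupled model used in the study, which also resolves species concentrations and fluid velocities), the sketch below solves the 1-D heat equation along the ampoule axis between an assumed hot zone and cold zone. Every number in it is invented.

```python
import numpy as np

# All values invented for the sketch; the study's model also couples
# species concentration and velocity fields, omitted here.
alpha = 1e-5               # thermal diffusivity, m^2/s (assumed)
L, n = 0.1, 101            # ampoule length (m) and grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha   # below the explicit stability limit dx^2 / (2*alpha)

T = np.full(n, 900.0)      # initial melt temperature, K (assumed)
T_hot, T_cold = 1100.0, 300.0

for _ in range(20000):
    T[0], T[-1] = T_hot, T_cold    # Dirichlet ends: hot and cold furnace zones
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

# Locate the solidification front as the isotherm of an assumed melting point.
T_melt = 800.0
front = np.interp(T_melt, T[::-1], np.linspace(L, 0.0, n))
print(f"front position = {front * 1000:.1f} mm from the hot end")
```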

    Sequential inverse problems: Bayesian principles and the logistic map example

    Bayesian statistics provides a general framework for solving inverse problems, but it is not without interpretation and implementation problems. This paper discusses difficulties arising from the fact that forward models are always in error to some extent. Using a simple example based on the one-dimensional logistic map, we argue that, when implementation problems are minimal, the Bayesian framework is quite adequate. The Bayesian filter is shown to recover excellent state estimates in the perfect model scenario (PMS) and to distinguish the PMS from the imperfect model scenario (IMS). Through a quantitative comparison of the way in which the observations are assimilated in the PMS and the IMS, we suggest that one can, sometimes, measure the degree of imperfection.
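    A minimal sketch of this kind of sequential Bayesian estimation follows: a bootstrap particle filter (which may differ from the authors' implementation) tracking the logistic map from noisy observations under the perfect model scenario. The map parameter, noise level, and particle count are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
r, obs_sd, T, N = 3.9, 0.05, 50, 2000    # map parameter, obs noise, steps, particles

# Simulate a "true" trajectory and noisy observations (perfect-model scenario).
x = 0.4
truth, obs = [], []
for _ in range(T):
    x = r * x * (1 - x)                  # logistic map x_{t+1} = r x_t (1 - x_t)
    truth.append(x)
    obs.append(x + rng.normal(0, obs_sd))

# Bootstrap filter: propagate, weight by the observation likelihood, resample.
particles = rng.uniform(0, 1, N)
estimates = []
for y in obs:
    particles = r * particles * (1 - particles)          # exact model dynamics
    w = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2)   # Gaussian likelihood
    w /= w.sum()
    estimates.append(np.sum(w * particles))              # posterior-mean estimate
    particles = rng.choice(particles, size=N, p=w)       # multinomial resampling

rmse = np.sqrt(np.mean((np.array(estimates) - np.array(truth)) ** 2))
print(f"filter RMSE {rmse:.4f} vs observation noise {obs_sd}")
```

    With a perfect model the particles collapse onto the true trajectory and the filter error falls well below the observation noise; under an imperfect model the same diagnostic degrades, which is the contrast the paper exploits.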

    Measurement of spray combustion processes

    A free jet configuration was chosen for measuring noncombusting spray fields and hydrocarbon-air spray flames, in an effort to develop computational models of the dynamic interaction between droplets and the gas phase and to verify and refine numerical models of the entire spray combustion process. The development of a spray combustion facility is described, including techniques for laser measurements in spray combustion environments and methods for data acquisition, processing, display, and interpretation.

    Data assimilation using bayesian filters and B-spline geological models

    This paper proposes a new approach to data assimilation, also known as history matching, of oilfield production data by adjusting the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient-based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding for real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporating a level set parameterization in this approach could further address the lack of differentiability with respect to facies type, but its practical implementation rests on assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type into the estimation of continuous B-spline control points. As the filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel, or a two-dimensional closed feature. We find that applying the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.
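    A minimal sketch of one EnSRF analysis step may help fix ideas. It uses the common serial square-root form for a single scalar observation, in which the ensemble mean is updated with the full Kalman gain and the perturbations with a reduced gain; the state vector stands in for B-spline control points, and the ensemble size, observation operator, and observation-error variance are invented rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ctrl, n_ens, r_obs = 8, 40, 0.1**2     # control points, ensemble size, obs variance

# Prior ensemble of B-spline control-point vectors (columns are members).
X = rng.normal(0.5, 0.2, size=(n_ctrl, n_ens))

def H(x):
    # Toy observation operator; a reservoir simulator producing production
    # data would stand here. We just observe the mean control-point height.
    return x.mean()

y_obs = 0.62                             # a single observed value (invented)

x_mean = X.mean(axis=1)
Xp = X - x_mean[:, None]                 # ensemble perturbations
hx = np.array([H(X[:, j]) for j in range(n_ens)])
hxp = hx - hx.mean()

phpt = hxp @ hxp / (n_ens - 1)           # H P H^T, a scalar for one observation
pht = Xp @ hxp / (n_ens - 1)             # P H^T
K = pht / (phpt + r_obs)                 # Kalman gain

# Square-root trick: the mean uses the full gain K, the perturbations the
# reduced gain alpha*K, so the analysis covariance matches the Kalman filter
# without perturbing the observations.
alpha = 1.0 / (1.0 + np.sqrt(r_obs / (phpt + r_obs)))
x_mean_a = x_mean + K * (y_obs - hx.mean())
Xp_a = Xp - alpha * np.outer(K, hxp)
X = x_mean_a[:, None] + Xp_a             # analysis ensemble of control points

print("posterior mean control points:", np.round(x_mean_a, 3))
```

    Processing each scalar observation serially in this way avoids perturbed observations entirely, which is part of the implementation efficiency argued for ensemble filters above.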

    Prediction of transonic flutter for a supercritical wing by modified strip analysis and comparison with experiment

    Use of a supercritical airfoil can adversely affect wing flutter speeds in the transonic range. As adequate theories for three-dimensional unsteady transonic flow are not yet available, the modified strip analysis was used to predict the transonic flutter boundary for the supercritical wing. The steady-state spanwise distributions of section lift-curve slope and aerodynamic center, required as input for the flutter calculations, were obtained from pressure distributions. The calculated flutter boundary agrees with experiment in the subsonic range. In the transonic range, a transonic bucket is calculated which closely resembles the experimental one in both shape and depth, but it occurs at a Mach number about 0.04 lower than the experimental one.
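    The sketch below is not the modified strip analysis; it is the simpler textbook calculation that strip methods extend spanwise: a two-degree-of-freedom pitch-plunge section with quasi-steady aerodynamics, swept in airspeed until an eigenvalue's real part turns positive. The section lift-curve slope a and the aerodynamic-center-to-elastic-axis arm e play the role of the spanwise inputs the paper obtains from measured pressure distributions; every number is invented.

```python
import numpy as np

# Per-unit-span section properties (all invented for illustration).
m, S, Ia = 50.0, 5.0, 4.0          # mass, static unbalance, pitch inertia
Kh, Ka = 1.25e5, 4.0e4             # plunge and pitch stiffnesses
rho, c, a, e = 1.225, 1.0, 2 * np.pi, 0.2   # air density, chord, dCl/dalpha, AC-to-EA arm

M = np.array([[m, S], [S, Ia]])    # inertia matrix for states (h, alpha)
Ks = np.diag([Kh, Ka])             # structural stiffness

def max_growth_rate(V):
    """Largest real part over the eigenvalues of the aeroelastic system at speed V."""
    q = 0.5 * rho * V**2
    # Quasi-steady lift L = q*c*a*(alpha + hdot/V); pitching moment about EA = e*L.
    K_aero = q * c * a * np.array([[0.0, 1.0], [0.0, -e]])
    C_aero = (q * c * a / V) * np.array([[1.0, 0.0], [-e, 0.0]])
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, Ks + K_aero), -np.linalg.solve(M, C_aero)]])
    return np.linalg.eigvals(A).real.max()

for V in np.arange(5.0, 400.0, 1.0):
    if max_growth_rate(V) > 0.0:
        print(f"instability onset near V = {V:.0f} m/s for these invented numbers")
        break
else:
    print("no instability below 400 m/s for these parameters")
```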

    Adding value and meaning to coheating tests

    Purpose: The coheating test is the standard method of measuring the heat loss coefficient of a building, but to be useful the test requires careful and thoughtful execution. Testing should take place in the context of additional investigations in order to achieve a good understanding of the building and a qualitative and (if possible) quantitative understanding of the reasons for any performance shortfall. The paper aims to discuss these issues.
    Design/methodology/approach: Leeds Metropolitan University has more than 20 years of experience in coheating testing. This experience is drawn upon to discuss practical factors which can affect the outcome, together with supporting tests and investigations which are often necessary in order to fully understand the results.
    Findings: If testing is approached using coheating as part of a suite of investigations, a much deeper understanding of the test building emerges. In some cases it is possible to identify and quantify the contributions of different factors which result in an overall performance shortfall.
    Practical implications: Although it is not practicable to use a fully investigative approach for large-scale routine quality assurance, it is extremely useful for purposes such as validating other testing procedures, in-depth study of prototypes, or detailed investigations where problems are known to exist.
    Social implications: Successful building performance testing is a vital tool for achieving energy-saving targets.
    Originality/value: The approach discussed clarifies some of the technical pitfalls which may be encountered in the execution of coheating tests and points to ways in which the maximum value can be extracted from the test period, leading to a meaningful analysis of the building's overall thermal performance.
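    The core of the coheating analysis can be sketched in a few lines: regress daily mean heat input against the daily mean inside-outside temperature difference, with a solar term, so that the coefficient on the temperature difference is the heat loss coefficient in W/K. The data below are invented; a real test uses daily aggregates from several weeks at an elevated, steady internal temperature, together with the supporting investigations the paper describes.

```python
import numpy as np

# Invented daily means: electrical heat input Q (W), inside-outside
# temperature difference dT (K), south-facade solar irradiance S (W/m^2).
Q  = np.array([2070., 1730., 2260., 1510., 2450., 1880., 2170., 1655.])
dT = np.array([10.2, 9.1, 11.0, 8.4, 11.8, 9.6, 10.7, 8.9])
S  = np.array([40., 95., 20., 130., 10., 70., 35., 110.])

# Model: Q = HLC*dT - R*S, where HLC is the heat loss coefficient (W/K)
# and R an effective solar aperture (m^2), a Siviour-style solar correction.
A = np.column_stack([dT, -S])
(hlc, r_sol), *_ = np.linalg.lstsq(A, Q, rcond=None)
print(f"HLC = {hlc:.0f} W/K, solar aperture = {r_sol:.1f} m^2")
```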