
    Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals

    An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpectedly high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load series by load series basis. The linear correlation coefficient is used to quantify the correlations. It is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed for the given balance calibration data set. An unexpectedly high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of a residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of a load series exceeds 0.25 % of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance are used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors as long as the balance calibration data set contains repeat load series that do not suffer from load alignment problems.
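    The three detection conditions can be sketched as follows. This is an illustrative reconstruction from the abstract, not the paper's implementation; the synthetic load series and the capacity value are assumptions.

```python
import numpy as np

def flag_high_correlation(residuals, loads, capacity, intentionally_applied=True):
    """Flag a residual/load pair of one load series per the three stated conditions."""
    r = np.corrcoef(residuals, loads)[0, 1]                 # linear correlation coefficient
    cond1 = abs(r) > 0.95                                   # (i) strong linear correlation
    cond2 = np.max(np.abs(residuals)) > 0.0025 * capacity   # (ii) residuals exceed 0.25 % of capacity
    cond3 = intentionally_applied                           # (iii) load component intentionally applied
    return bool(cond1 and cond2 and cond3)

# Hypothetical load series: residuals that grow linearly with the applied load
# (e.g. from a load alignment error) are flagged.
loads = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])
residuals = 0.004 * loads + np.array([0.1, -0.2, 0.15, -0.1, 0.05])
print(flag_high_correlation(residuals, loads, capacity=2000.0))
```

    Small, uncorrelated residuals fail conditions (i) and (ii) and are not flagged, which is what distinguishes an alignment problem from ordinary regression scatter.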

    Regression Analysis and Calibration Recommendations for the Characterization of Balance Temperature Effects

    Analysis and use of temperature-dependent wind tunnel strain-gage balance calibration data are discussed in the paper. First, three different methods are presented and compared that may be used to process temperature-dependent strain-gage balance data. The first method uses an extended set of independent variables in order to process the data and predict balance loads. The second method applies an extended load iteration equation during the analysis of balance calibration data. The third method uses temperature-dependent sensitivities for the data analysis. Physical interpretations of the most important temperature-dependent regression model terms are provided that relate temperature compensation imperfections and the temperature-dependent nature of the gage factor to sets of regression model terms. Finally, balance calibration recommendations are listed so that temperature-dependent calibration data can be obtained and successfully processed using the reviewed analysis methods.
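    The first method, extending the set of independent variables with temperature terms, can be illustrated with a small regression sketch. The synthetic data, the reference temperature of 25 °C, and the chosen model terms (load, temperature, load-temperature cross term) are assumptions for illustration, not the paper's regression model.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.uniform(-1000.0, 1000.0, 200)   # applied load (arbitrary units)
T = rng.uniform(15.0, 45.0, 200)        # balance temperature, deg C

# Synthetic gage output: the sensitivity drifts linearly with temperature,
# mimicking an imperfect temperature compensation of the gage factor.
output = 2.0e-3 * F + 1.0e-6 * F * (T - 25.0) + 5.0e-4 * (T - 25.0)

# Ordinary least squares with the extended variable set [1, F, T-25, F*(T-25)].
X = np.column_stack([np.ones_like(F), F, T - 25.0, F * (T - 25.0)])
coef, *_ = np.linalg.lstsq(X, output, rcond=None)
print(coef)   # recovers the assumed coefficients up to rounding
```

    The cross term F*(T-25) is the kind of regression model term the paper associates with a temperature-dependent gage sensitivity.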

    A New Load Residual Threshold Definition for the Evaluation of Wind Tunnel Strain-Gage Balance Data

    A new definition of a threshold for the detection of load residual outliers of wind tunnel strain-gage balance data was developed. The new threshold is defined as the product between the inverse of the absolute value of the primary gage sensitivity and an empirical limit of the electrical outputs of a strain-gage. The empirical limit of the outputs is 2.5 microV/V for either balance calibration or check load residuals. A reduced limit of 0.5 microV/V is recommended for the evaluation of differences between repeat load points because, by design, the calculation of these differences removes errors in the residuals that are associated with the regression analysis of the data itself. The definition of the new threshold and different methods for the determination of the primary gage sensitivity are discussed. In addition, calibration data of a six-component force balance and a five-component semi-span balance are used to illustrate the application of the proposed new threshold definition to different types of strain-gage balances. During the discussion of the force balance example it is also explained how the estimated maximum expected output of a balance gage can be used to better understand results of the application of the new threshold definition.
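    The threshold definition reduces to a single division, shown below with a hypothetical sensitivity value (the paper's balance data are not reproduced here).

```python
def load_residual_threshold(primary_gage_sensitivity, output_limit_microV_per_V):
    """Threshold in load units: output limit [microV/V] divided by |sensitivity| [microV/V per load unit]."""
    return output_limit_microV_per_V / abs(primary_gage_sensitivity)

sensitivity = 2.0  # microV/V per lbf (hypothetical value for illustration)
print(load_residual_threshold(sensitivity, 2.5))  # calibration/check load residuals -> 1.25 lbf
print(load_residual_threshold(sensitivity, 0.5))  # repeat load point differences   -> 0.25 lbf
```

    A more sensitive gage (larger sensitivity magnitude) therefore gets a tighter load residual threshold for the same electrical output limit.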

    Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach

    A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of intentionally loaded load components or gages of a data point, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance are used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
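    The weighting scheme can be sketched as follows. The 1/n weighting rule and the toy two-component data below are assumptions for illustration; the paper defines its own weighting factors from the count of intentionally loaded components or gages.

```python
import numpy as np

def weights_from_load_counts(n_loaded):
    """Assumed rule: weight shrinks as more load components are intentionally loaded."""
    n = np.asarray(n_loaded, dtype=float)
    return 1.0 / n   # single-component points get weight 1, combined loadings less

def weighted_lstsq(X, y, w):
    """Weighted least squares via row scaling with sqrt of the weights."""
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef

# Toy calibration: four single-component points and two combined loadings.
F1 = np.array([100.0, 200.0, 0.0, 0.0, 150.0, 250.0])
F2 = np.array([0.0, 0.0, 100.0, 200.0, 150.0, 250.0])
n_loaded = np.array([1, 1, 1, 1, 2, 2])
y = 3.0 * F1 + 0.5 * F2                 # synthetic gage output
X = np.column_stack([F1, F2])
coef = weighted_lstsq(X, y, weights_from_load_counts(n_loaded))
print(coef)   # primary sensitivities, dominated by the single-component points
```

    Down-weighting the combined loadings is what increases the influence of the single-component points on the fitted primary sensitivities.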

    An analysis of uncertainties and skill in forecasts of winter storm losses

    This paper describes an approach to derive probabilistic predictions of local winter storm damage occurrences from a global medium-range ensemble prediction system (EPS). Predictions of storm damage occurrences are subject to large uncertainty due to meteorological forecast uncertainty (typically addressed by means of ensemble predictions) and uncertainties in modelling weather impacts. The latter uncertainty arises from the fact that local vulnerabilities are not known in sufficient detail to allow for a deterministic prediction of damages, even if the forecast gust wind speed contains no uncertainty. Thus, to estimate the damage model uncertainty, a statistical model based on logistic regression analysis is employed, relating meteorological analyses to historical damage records. A quantification of the two individual contributions (meteorological and damage model uncertainty) to the total forecast uncertainty is achieved by neglecting individual uncertainty sources and analysing the resulting predictions. Results show an increase in forecast skill measured by means of a reduced Brier score if both meteorological and damage model uncertainties are taken into account. It is demonstrated that skilful predictions at the district level (dividing the area of Germany into 439 administrative districts) are possible at lead times of several days. Skill is increased through the application of a proper ensemble calibration method, extending the range of lead times for which skilful damage predictions can be made.
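    The two ingredients of the approach, a logistic damage model and the Brier score, can be sketched in a few lines. The regression coefficients and gust values below are hypothetical, not fitted to the paper's damage records.

```python
import numpy as np

def damage_probability(gust, intercept=-8.0, slope=0.25):
    """Logistic model relating gust speed [m/s] to a damage occurrence probability."""
    return 1.0 / (1.0 + np.exp(-(intercept + slope * gust)))

def brier_score(forecast_prob, observed):
    """Mean squared difference between forecast probability and the 0/1 outcome."""
    return np.mean((forecast_prob - observed) ** 2)

# Combine meteorological uncertainty (spread across ensemble members) with the
# damage model by averaging the damage probabilities over the ensemble.
ensemble_gusts = np.array([22.0, 28.0, 31.0, 35.0, 26.0])   # one district, m/s
p_damage = damage_probability(ensemble_gusts).mean()
print(p_damage, brier_score(np.array([p_damage]), np.array([1.0])))
```

    A lower Brier score against observed damage occurrences is the skill measure referred to in the abstract; a perfect probabilistic forecast scores zero.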

    Proving equivalence between imperative and MapReduce implementations using program transformations

    Distributed programs are often formulated in popular functional frameworks like MapReduce, Spark and Thrill, but writing efficient algorithms for such frameworks is usually a non-trivial task. As the costs of running faulty algorithms at scale can be severe, it is highly desirable to verify their correctness. We propose to employ existing imperative reference implementations as specifications for MapReduce implementations. To this end, we present a novel verification approach in which equivalence between an imperative and a MapReduce implementation is established by a series of program transformations. In this paper, we present how the equivalence framework can be used to prove equivalence between an imperative implementation of the PageRank algorithm and its MapReduce variant. The eight transformation steps are presented and explained individually.
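    A toy example conveys the kind of equivalence being proved: an imperative loop serving as the reference specification, and a map/reduce formulation of the same computation. The degree-counting example is an assumption for illustration; the paper's case study is PageRank.

```python
from functools import reduce

def imperative_degree_sums(edges):
    """Imperative reference: out-degree per node, accumulated in a loop."""
    counts = {}
    for src, _dst in edges:
        counts[src] = counts.get(src, 0) + 1
    return counts

def mapreduce_degree_sums(edges):
    """Same computation as a map phase (emit (src, 1)) followed by a reduce-by-key."""
    pairs = map(lambda e: (e[0], 1), edges)   # map phase
    def reducer(acc, pair):                   # reduce phase
        key, value = pair
        acc[key] = acc.get(key, 0) + value
        return acc
    return reduce(reducer, pairs, {})

edges = [("a", "b"), ("a", "c"), ("b", "c")]
print(imperative_degree_sums(edges) == mapreduce_degree_sums(edges))  # True
```

    Establishing that the two formulations agree on all inputs, rather than on one test case, is exactly what the transformation-based proof provides.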

    Rugate filter for light-trapping in solar cells

    We suggest a design for a coating that could be applied on top of any solar cell having at least one diffusing surface. This coating acts as an angle and wavelength selective filter, which increases the average path length and absorptance at long wavelengths without altering the solar cell performance at short wavelengths. The filter design is based on a continuous variation of the refractive index in order to minimize undesired reflection losses. Numerical procedures are used to optimize the filter for a 10 μm thick monocrystalline silicon solar cell, which lifts the efficiency above the Auger limit for unconcentrated illumination. The feasibility of fabricating such filters, given a finite available refractive index range, is also discussed.
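    The defining feature of a rugate filter, a continuous (typically sinusoidal) refractive index profile rather than discrete layers, can be sketched as below. The index range and period are hypothetical placeholders, not the paper's optimized design.

```python
import numpy as np

def rugate_index_profile(z, n_low=1.5, n_high=2.5, period=0.2):
    """Continuous refractive index as a function of depth z (same length unit as period)."""
    n_avg = 0.5 * (n_low + n_high)
    n_amp = 0.5 * (n_high - n_low)
    return n_avg + n_amp * np.sin(2.0 * np.pi * z / period)

z = np.linspace(0.0, 2.0, 1000)   # depth samples through the coating
n = rugate_index_profile(z)
print(n.min(), n.max())           # profile stays inside the available index range
```

    The smooth index variation avoids the abrupt interfaces of a discrete multilayer stack, which is what suppresses the undesired reflection losses mentioned above.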

    Local Density of States at Metal-Semiconductor Interfaces: An Atomic Scale Study

    We investigate low temperature grown, abrupt, epitaxial, nonintermixed, defect-free n-type and p-type Fe/GaAs(110) interfaces by cross-sectional scanning tunneling microscopy and spectroscopy with atomic resolution. The probed local density of states shows that a model of the ideal metal-semiconductor interface requires a combination of metal-induced gap states and bond polarization at the interface, which is nicely corroborated by density functional calculations. A three-dimensional finite element model of the space charge region yields a precise value for the Schottky barrier height.