
    An error-controlled methodology for approximate hierarchical symbolic analysis

    Get PDF
    Limitations of existing approaches to symbolic analysis of large analog circuits are discussed. To overcome them, a new methodology for hierarchical symbolic analysis is introduced. The combination of a hierarchical modeling technique with approximation strategies, comprising circuit reduction, graph-based symbolic solution of circuit equations, and matrix-based error control, provides optimum results in terms of speed and quality of results.

    European Commission ESPRIT 21812; Comisión Interministerial de Ciencia y Tecnología TIC97-058
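As a tiny illustration of the kind of expression such a methodology manipulates (this is not the paper's algorithm, just a generic symbolic nodal-analysis example using SymPy), an exact symbolic transfer function can be obtained from a circuit's nodal equations; approximation strategies would then prune terms from expressions like this under error control:

```python
import sympy as sp

# Single-node RC low-pass filter driven by V_in: solve the nodal equation
# symbolically to get the exact transfer function H(s) = V_out / V_in.
s, R, C, Vin = sp.symbols('s R C V_in')
Vout = sp.symbols('V_out')

# Nodal (KCL) equation at the output node: (Vout - Vin)/R + s*C*Vout = 0
eq = sp.Eq((Vout - Vin) / R + s * C * Vout, 0)
H = sp.simplify(sp.solve(eq, Vout)[0] / Vin)
print(H)   # the familiar 1/(1 + s*R*C)
```

For large circuits such exact expressions grow combinatorially, which is what motivates hierarchical decomposition and error-controlled approximation.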

    Aggregated functional data model for Near-Infrared Spectroscopy calibration and prediction

    Full text link
    Calibration and prediction for NIR spectroscopy data are performed based on a functional interpretation of the Beer-Lambert formula. Considering that, for each chemical sample, the resulting spectrum is a continuous curve obtained as the summation of overlapping absorption spectra from each analyte plus a Gaussian error, we assume that each individual spectrum can be expanded as a linear combination of B-spline basis functions. Calibration is then performed using two procedures for estimating the individual analyte curves: basis smoothing and smoothing splines. Prediction is done by minimizing the squared error of prediction. To assess the variance of the predicted values, we use a leave-one-out jackknife technique. Departures from the standard error model are discussed through a simulation study; in particular, how correlated errors impact the calibration step and, consequently, the prediction of the analytes' concentrations. Finally, the performance of our methodology is demonstrated through the analysis of two publicly available datasets.

    Comment: 27 pages, 7 figures, 7 tables
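The leave-one-out jackknife for prediction variance can be sketched as follows (a minimal illustration with synthetic data standing in for B-spline scores of real NIR spectra; the design matrix, coefficients, and noise level here are all assumptions, not the paper's data):

```python
import numpy as np

# Hypothetical linear calibration y = X @ beta fitted by least squares,
# with a leave-one-out jackknife to assess the variance of a prediction.
rng = np.random.default_rng(0)
n, p = 30, 4
X = rng.normal(size=(n, p))            # stand-in for B-spline basis scores
beta_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

def fit_predict(X_train, y_train, x_new):
    beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return x_new @ beta

x_new = rng.normal(size=p)             # a "new sample" to predict

# Jackknife: refit with each calibration sample left out in turn.
preds = np.array([
    fit_predict(np.delete(X, i, axis=0), np.delete(y, i), x_new)
    for i in range(n)
])
pred_full = fit_predict(X, y, x_new)

# Standard jackknife variance estimate of the predicted value.
var_jack = (n - 1) / n * np.sum((preds - preds.mean()) ** 2)
print(pred_full, var_jack)
```

The jackknife variance grows when individual calibration samples are influential, which is exactly the sensitivity the abstract's simulation study probes under correlated errors.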

    Reference-Plane Invariant Method for Measuring Electromagnetic Parameters of Materials

    Full text link
    This paper presents a simple and effective wideband method for the determination of material properties, such as the complex index of refraction and the complex permittivity and permeability. The method is explicit (non-iterative) and reference-plane invariant: it uses a certain combination of scattering parameters in conjunction with group-velocity data. This technique can be used to characterize both dielectric and magnetic materials. The proposed method is verified experimentally over the 2 to 18 GHz frequency range on polytetrafluoroethylene and polyvinylchloride samples. A comprehensive error and stability analysis reveals that, similar to other methods based on transmission/reflection measurement, the uncertainties are larger at low frequencies and at the Fabry-Perot resonances.

    Comment: 12 pages, 21 figures
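One ingredient of such techniques, using group-delay data to resolve the phase ambiguity of a transmission measurement, can be sketched in a few lines (an assumed minimal setup with a dispersionless non-magnetic slab and synthetic data; this is not the paper's full extraction algorithm):

```python
import numpy as np

# Extract the real refractive index of a slab from the wrapped transmission
# phase, using group delay (derivative of unwrapped phase) to fix the branch.
c = 299_792_458.0            # speed of light, m/s
L = 0.01                     # slab thickness, m (assumed)
n_true = 2.1                 # index used to synthesize the "measured" data

f = np.linspace(2e9, 18e9, 401)                  # 2-18 GHz sweep
phase = -2 * np.pi * f * (n_true - 1) * L / c    # excess transmission phase
meas_phase = np.angle(np.exp(1j * phase))        # wrapped, as an instrument reports it

unwrapped = np.unwrap(meas_phase)                # remove 2*pi jumps
tau_g = -np.gradient(unwrapped, f) / (2 * np.pi) # excess group delay, s
n_est = 1 + c * np.mean(tau_g) / L
print(n_est)  # recovers ~2.1 for this dispersionless slab
```

In a real measurement the group delay is noisier than the phase itself, which is one reason uncertainties grow at low frequencies and near Fabry-Perot resonances, as the abstract's error analysis reports.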

    Gravity field determination and error assessment techniques

    Get PDF
    Linear estimation theory, along with a new technique to compute relative data weights, was applied to the determination of the Earth's geopotential field and other geophysical model parameters using a combination of ground-based satellite tracking data, satellite altimetry data, and surface gravimetry data. The relative data weights for the inhomogeneous data sets are estimated simultaneously with the gravity field and other geophysical and orbit parameters in a least-squares approach to produce the University of Texas gravity field models. New techniques to calibrate the formal covariance matrix of the geopotential solution were developed to obtain a reliable gravity field error estimate. Several techniques, including orbit residual analysis, surface gravity anomaly residual analysis, subset gravity solution comparisons, and consider covariance analysis, were applied to investigate the reliability of the calibration.
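The idea of estimating relative data weights alongside the parameters can be sketched as an iterated weighted least squares (a toy illustration with two synthetic data sets of very different noise levels; the iteration scheme and data are assumptions, not the actual University of Texas processing):

```python
import numpy as np

# Combine two heterogeneous data sets in one least-squares solution,
# iteratively re-estimating each set's relative weight from its residual
# variance (a simple variance-component-style iteration).
rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0])

A1 = rng.normal(size=(50, 2)); y1 = A1 @ x_true + rng.normal(scale=0.1, size=50)
A2 = rng.normal(size=(50, 2)); y2 = A2 @ x_true + rng.normal(scale=1.0, size=50)

w1 = w2 = 1.0                          # start from equal weights
for _ in range(10):
    A = np.vstack([np.sqrt(w1) * A1, np.sqrt(w2) * A2])
    y = np.concatenate([np.sqrt(w1) * y1, np.sqrt(w2) * y2])
    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    # Update each weight as the inverse residual variance of its data set.
    w1 = 1.0 / np.mean((y1 - A1 @ x_hat) ** 2)
    w2 = 1.0 / np.mean((y2 - A2 @ x_hat) ** 2)

print(x_hat, w1 / w2)  # the cleaner data set ends up weighted far more heavily
```

The converged weight ratio tracks the inverse ratio of the two noise variances, which is the behavior one wants when combining tracking, altimetry, and gravimetry data of very different quality.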

    Efficient computation of TM- and TE-polarized leaky modes in multilayered circular waveguides

    Get PDF
    In combination with the perfectly matched layer (PML) paradigm, eigenmode expansion techniques have become increasingly important in the analysis and design of cylindrical and planar waveguides for photonics applications. To achieve high accuracy, these techniques rely on the determination of many modes of the modal spectrum of the waveguide under consideration. In this paper, we focus on the efficient computation of TM- and TE-polarized leaky modes of multilayered cylindrical waveguides. First, quasi-static estimates are derived for the propagation constants of these modes. Second, after an adaptive linear error correction, these estimates are used as starting points in an advanced Newton iteration scheme. To prove the validity of the computation technique, it is applied to technologically important cases: vertical-cavity surface-emitting lasers and a monomode fiber.
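The second step, polishing a coarse estimate of a complex propagation constant with Newton iteration, can be sketched generically (the dispersion function below is a toy with a known complex root, not the waveguides' actual characteristic equation; the paper's quasi-static estimates would supply the starting value):

```python
import numpy as np

# Newton iteration on a complex-valued dispersion function f(beta) = 0,
# started from a coarse initial estimate, with a central-difference derivative.
def newton_complex(f, beta0, tol=1e-12, max_iter=50, h=1e-8):
    beta = complex(beta0)
    for _ in range(max_iter):
        df = (f(beta + h) - f(beta - h)) / (2 * h)
        step = f(beta) / df
        beta -= step
        if abs(step) < tol:
            break
    return beta

# Toy dispersion function with a known complex root at 1.5 - 0.2j;
# a leaky mode's propagation constant carries a loss term like this.
root_true = 1.5 - 0.2j
f = lambda beta: (beta - root_true) * (beta + 3.0)

beta = newton_complex(f, 1.4)   # coarse "quasi-static" starting value
print(beta)
```

The quality of the starting estimate matters: Newton converges quadratically near the root but can wander to a different mode's root if started too far away, which is why the adaptive linear correction of the estimates precedes the iteration.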

    Bench-to-bedside review: The importance of the precision of the reference technique in method comparison studies – with specific reference to the measurement of cardiac output

    Get PDF
    Bland-Altman analysis is used for assessing agreement between two measurements of the same clinical variable. In the field of cardiac output monitoring, its results, in terms of bias and limits of agreement, are often difficult to interpret, leading clinicians to use a cutoff of 30% in the percentage error in order to decide whether a new technique may be considered a good alternative. This percentage error of ± 30% arises from the assumption that the commonly used reference technique, intermittent thermodilution, has a precision of ± 20% or less. The combination of two precisions of ± 20% equates to a total error of ± 28.3%, which is commonly rounded up to ± 30%. Thus, finding a percentage error of less than ± 30% should equate to the new tested technique having an error similar to the reference, which therefore should be acceptable. In a worked example in this paper, we discuss the limitations of this approach, in particular with regard to the situation in which the reference technique is either more or less precise than would normally be expected. This can lead to inappropriate conclusions being drawn from data acquired in validation studies of new monitoring technologies. We conclude that it is not acceptable to present comparison studies quoting percentage error as an acceptability criterion without reporting the precision of the reference technique.
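The arithmetic behind the ± 28.3% figure is simply the root sum of squares of the two precisions:

```python
import math

# If the reference and the test technique each have a precision of +/-20%,
# the expected percentage error of the comparison combines in quadrature.
ref_precision = 0.20
test_precision = 0.20
combined = math.sqrt(ref_precision**2 + test_precision**2)
print(round(combined * 100, 1))  # 28.3 -> commonly rounded up to 30%
```

The review's point follows directly from this formula: if the reference precision differs from the assumed ± 20%, the ± 30% cutoff no longer corresponds to "as precise as the reference", so the reference precision must be reported alongside the percentage error.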