
    The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows

    The Gauss--Newton with approximated tensors (GNAT) method is a nonlinear model reduction method that operates on fully discretized computational models. It achieves dimension reduction by a Petrov--Galerkin projection associated with residual minimization; it delivers computational efficiency by a hyper-reduction procedure based on the 'gappy POD' technique. Originally presented in Ref. [1], where it was applied to implicit nonlinear structural-dynamics models, this method is further developed here and applied to the solution of a benchmark turbulent viscous flow problem. To begin, this paper develops global state-space error bounds that justify the method's design and highlight its advantages in terms of minimizing components of these error bounds. Next, the paper introduces a 'sample mesh' concept that enables a distributed, computationally efficient implementation of the GNAT method in finite-volume-based computational-fluid-dynamics (CFD) codes. The suitability of GNAT for parameterized problems is highlighted with the solution of an academic problem featuring moving discontinuities. Finally, the capability of this method to reduce by orders of magnitude the core-hours required for large-scale CFD computations, while preserving accuracy, is demonstrated with the simulation of turbulent flow over the Ahmed body. For an instance of this benchmark problem with over 17 million degrees of freedom, GNAT outperforms several other nonlinear model-reduction methods, reduces the required computational resources by more than two orders of magnitude, and delivers a solution that differs by less than 1% from its high-dimensional counterpart.
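The 'gappy POD' reconstruction at the heart of GNAT's hyper-reduction can be sketched in a few lines. This is a toy stand-in with random data and hypothetical dimensions, not the paper's CFD implementation: given a POD basis and state values at a small set of sampled mesh entries, the generalized coordinates are found by least squares and the full state is reconstructed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix and POD basis (a toy stand-in for the fully
# discretized state; sizes are illustrative).
n, m, k = 200, 30, 5                    # state dim, snapshots, basis size
snapshots = rng.standard_normal((n, k)) @ rng.standard_normal((k, m))
phi = np.linalg.svd(snapshots, full_matrices=False)[0][:, :k]  # POD basis

# "Sample mesh": only a small subset of entries is ever evaluated.
sample = rng.choice(n, size=25, replace=False)

def gappy_pod_reconstruct(x_sampled, phi, sample):
    """Least-squares fit of the generalized coordinates from sampled
    entries only, then reconstruction of the full state."""
    coords, *_ = np.linalg.lstsq(phi[sample, :], x_sampled, rcond=None)
    return phi @ coords

x_true = snapshots[:, 0]
x_rec = gappy_pod_reconstruct(x_true[sample], phi, sample)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

Because the test state lies in the span of the basis, 25 sampled entries suffice to recover all 5 coordinates essentially exactly; in practice the sample size trades cost against robustness.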

    An experimental study of nonlinear dynamic system identification

    A technique for robust identification of nonlinear dynamic systems is developed and illustrated using both simulations and analog experiments. The technique is based on the Minimum Model Error optimal estimation approach. A detailed literature review is included, in which fundamental differences between the current approach and previous work are described. The most significant feature of the current work is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches, which usually require detailed assumptions about the nonlinearities. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
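The idea of recovering a nonlinearity without assuming its form can be illustrated with a generic least-squares sketch. This is not the Minimum Model Error estimator itself; the Duffing-type system, the step size, and the candidate basis are all illustrative assumptions. A system whose cubic term is treated as "unknown" is simulated, and the model-error residual left by the assumed linear model is regressed on a library of candidate terms.

```python
import numpy as np

# Simulate a Duffing-type oscillator: x'' + 0.2 x' + x + x^3 = 0
# (forward Euler; adequate for a sketch at this step size).
dt, n = 0.01, 5000
x, v = 1.0, 0.0
xs, vs, accs = [], [], []
for _ in range(n):
    a = -0.2 * v - x - x ** 3
    xs.append(x); vs.append(v); accs.append(a)
    x += dt * v
    v += dt * a
xs, vs, accs = map(np.array, (xs, vs, accs))

# Identify the unmodeled dynamics without assuming their form: regress
# the residual acceleration (after the assumed linear model) on a
# generic basis of candidate nonlinear terms.
residual = accs + 0.2 * vs + xs          # true answer: -x^3
basis = np.column_stack([xs, xs**2, xs**3, vs, vs**2])
coef, *_ = np.linalg.lstsq(basis, residual, rcond=None)
print(np.round(coef, 3))                 # the x^3 coefficient comes out near -1
```

The regression isolates the cubic term from the data alone; the candidate basis is generic rather than tailored to the true nonlinearity.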

    Least-squares inversion for density-matrix reconstruction

    We propose a method for reconstruction of the density matrix from measurable time-dependent (probability) distributions of physical quantities. The applicability of the method, which is based on least-squares inversion, is very universal compared with other methods. It can be used to reconstruct quantum states of various systems, such as harmonic and anharmonic oscillators, including molecular vibrations in vibronic transitions and damped motion. It also enables one to take into account various specific features of experiments, such as limited sets of data and data smearing owing to limited resolution. To illustrate the method, we consider a Morse oscillator and give a comparison with other state-reconstruction methods suggested recently. Comment: 16 pages, REVTeX, 6 PS figures included.
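Least-squares inversion can be sketched for the simplest case, a single qubit, which stands in here for the oscillator states treated in the paper (the measurement directions and true state are illustrative): outcome probabilities are linear in the state parameters, so an overdetermined linear system yields the density matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pauli matrices; a qubit state is rho = (I + r . sigma) / 2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
r_true = np.array([0.3, 0.1, -0.5])      # illustrative Bloch vector

# Simulated data: probabilities of the +1 outcome along 12 measurement
# directions, p = (1 + n . r) / 2 -- an overcomplete, noise-free set.
dirs = rng.standard_normal((12, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
probs = 0.5 * (1 + dirs @ r_true)

# Least-squares inversion: the map r -> p is linear, so invert it.
r_rec, *_ = np.linalg.lstsq(0.5 * dirs, probs - 0.5, rcond=None)
rho_rec = 0.5 * (np.eye(2) + r_rec[0] * sx + r_rec[1] * sy + r_rec[2] * sz)
print(np.round(r_rec, 3))
```

With noisy or limited data the same least-squares system simply becomes overdetermined and inconsistent, and the inversion returns the best fit, which is the flexibility the abstract emphasizes.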

    Estimation of secondary soil properties by fusion of laboratory and on-line measured vis-NIR spectra

    Visible and near infrared (vis-NIR) diffuse reflectance spectroscopy has made invaluable contributions to the accurate estimation of soil properties having direct and indirect spectral responses in NIR spectroscopy, with measurements made in the laboratory, in situ, or using on-line (while the sensor is moving) platforms. Measurement accuracy varies with measurement type; for example, accuracy is higher for laboratory than for on-line modes. On-line measurement accuracy deteriorates further for secondary (having indirect spectral response) soil properties. Therefore, the aim of this study is to improve on-line measurement accuracy of secondary properties by fusion of laboratory and on-line scanned spectra. Six arable fields were scanned using an on-line sensing platform coupled with a vis-NIR spectrophotometer (CompactSpec by Tec5 Technology for spectroscopy, Germany), with a spectral range of 305-1700 nm. A total of 138 soil samples were collected and used to develop five calibration models: (i) standard, using 100 laboratory scanned samples; (ii) hybrid-1, using 75 laboratory and 25 on-line samples; (iii) hybrid-2, using 50 laboratory and 50 on-line samples; (iv) hybrid-3, using 25 laboratory and 75 on-line samples; and (v) real-time, using 100 on-line samples. Partial least squares regression (PLSR) models were developed for soil pH, available potassium (K), magnesium (Mg), calcium (Ca), and sodium (Na), and the quality of the models was validated using an independent prediction dataset (38 samples). Validation results showed that the standard models with laboratory scanned spectra provided poor to moderate accuracy for on-line prediction, and that the hybrid-3 and real-time models provided the best prediction results, although the hybrid-2 model with 50% on-line spectra provided equally good results for all properties except pH and Na. These results suggest that either the real-time model built exclusively from on-line spectra or a hybrid model fusing 50% (except for pH and Na) to 75% on-line scanned spectra allows significant improvement of on-line prediction accuracy for secondary soil properties using vis-NIR spectroscopy.

    On the Reliability of Cross Correlation Function Lag Determinations in Active Galactic Nuclei

    Many AGN exhibit a highly variable luminosity. Some AGN also show a pronounced time delay between variations seen in their optical continuum and in their emission lines. In effect, the emission lines are light echoes of the continuum. This light travel-time delay provides a characteristic radius of the region producing the emission lines. The cross correlation function (CCF) is the standard tool used to measure the time lag between the continuum and line variations. For the few well-sampled AGN, the lag ranges from 1-100 days, depending upon which line is used and the luminosity of the AGN. In the best sampled AGN, NGC 5548, the H_beta lag shows year-to-year changes, ranging from about 8.7 days to about 22.9 days over a span of 8 years. In this paper it is demonstrated that, in the context of AGN variability studies, the lag estimate using the CCF is biased too low and subject to a large variance. Thus the year-to-year changes of the measured lag in NGC 5548 do not necessarily imply changes in the AGN structure. The bias and large variance are consequences of finite duration sampling and the dominance of long timescale trends in the light curves, not of noise or irregular sampling. Lag estimates can be substantially improved by removing low frequency power from the light curves prior to computing the CCF. Comment: To appear in the PASP, vol 111, 1999 Nov; 37 pages; 10 figures.
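The detrending remedy can be demonstrated on toy light curves with a known echo delay. Everything here is illustrative (a smoothed-noise continuum, an exact 20-day echo, a linear secular trend standing in for low-frequency power): the lag is read off as the peak of the CCF before and after removing the trend.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy light curves: a smooth "continuum" and an "emission line" that
# echoes it with a known lag, both contaminated by a slow trend whose
# variance dominates the intrinsic wiggles.
t = np.arange(400)                       # daily sampling
lag_true = 20
wiggle = np.convolve(rng.standard_normal(600), np.ones(30) / 30, "same")
continuum = wiggle[100:500]
line = wiggle[100 - lag_true:500 - lag_true]
trend = 0.02 * t
continuum_obs = continuum + trend
line_obs = line + trend

def ccf_lag(a, b, max_lag=60):
    """Lag at the peak of the cross-correlation of a against b."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    vals = [np.mean(a[max(0, -l):len(a) - max(0, l)] *
                    b[max(0, l):len(b) - max(0, -l)]) for l in lags]
    return int(lags[np.argmax(vals)])

# Remove low-frequency power (here, a linear fit) before the CCF.
det_c = continuum_obs - np.polyval(np.polyfit(t, continuum_obs, 1), t)
det_l = line_obs - np.polyval(np.polyfit(t, line_obs, 1), t)
print(ccf_lag(continuum_obs, line_obs), ccf_lag(det_c, det_l))
```

The raw CCF is dominated by the shared trend, while the detrended CCF peaks at the true 20-day echo, the effect the paper quantifies for realistic AGN sampling.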

    Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks

    We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose a novel approach that models future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. Future frame synthesis is challenging, as it involves low- and high-level image and motion understanding. We propose a novel network structure, namely a Cross Convolutional Network, to aid in synthesizing future frames; this network structure encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world videos. We also show that our model can be applied to tasks such as visual analogy-making, and present an analysis of the learned network representations. Comment: The first two authors contributed equally to this work.
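The core cross-convolution operation, encoding motion as kernels that are convolved with image feature maps, can be illustrated minimally. This is not the network itself: a hand-written shift kernel stands in for a kernel the model would sample from its learned motion distribution, and a single image stands in for the encoder's feature maps.

```python
import numpy as np
from scipy.signal import convolve2d

# A frame with a single bright "object". In the full model, feature maps
# come from the image encoder and kernels from sampling the motion code.
frame = np.zeros((8, 8))
frame[3, 3] = 1.0

# A 3x3 kernel encoding "shift one pixel to the right".
kernel = np.zeros((3, 3))
kernel[1, 2] = 1.0

# Cross convolution: applying the motion kernel to the feature map
# synthesizes the next frame; different sampled kernels would yield
# different plausible futures, which is where the probabilistic part enters.
next_frame = convolve2d(frame, kernel, mode="same")
```

Per-map kernels let different image regions or objects move differently, which is what distinguishes this from a single global warp.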

    Toward High-Precision Astrometry with WFPC2. I. Deriving an Accurate PSF

    The first step toward doing high-precision astrometry is the measurement of individual stars in individual images, a step that is fraught with dangers when the images are undersampled. The key to avoiding systematic positional error in undersampled images is to determine an extremely accurate point-spread function (PSF). We apply the concept of the effective PSF, and show that in images that consist of pixels it is the ePSF, rather than the often-used instrumental PSF, that embodies the information from which accurate star positions and magnitudes can be derived. We show how, in a rich star field, one can use the information from dithered exposures to derive an extremely accurate effective PSF by iterating between the PSF itself and the star positions that we measure with it. We also give a simple but effective procedure for representing spatial variations of the HST PSF. With such attention to the PSF, we find that we are able to measure the position of a single reasonably bright star in a single image with a precision of 0.02 pixel (2 mas in WF frames, 1 mas in PC), and with a systematic accuracy better than 0.002 pixel (0.2 mas in WF, 0.1 mas in PC), so that multiple observations can reliably be combined to improve the accuracy by √N. Comment: 33 pp. text + 15 figs.; accepted by PASP.
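Once an accurate ePSF is in hand, fitting a star position reduces to a small least-squares problem. This 1-D sketch uses a Gaussian as a stand-in ePSF (the real ePSF is derived empirically from dithered exposures, and the pixel grid and star parameters here are illustrative): for each trial sub-pixel position, the best-fit flux follows linearly, and the position is found by minimizing the pixel residuals.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Stand-in effective PSF: the value a pixel records for a star offset by
# x from the pixel center (a Gaussian here, purely for illustration).
def epsf(x, sigma=1.2):
    return np.exp(-0.5 * (x / sigma) ** 2)

pixels = np.arange(-5, 6)                # undersampled detector pixels
x_true = 0.27                            # sub-pixel star position
data = 100.0 * epsf(pixels - x_true)     # noiseless pixel samples

def fit_position(data, pixels):
    """Fit the star position by matching pixel samples of the ePSF:
    at each trial shift the optimal flux is a linear least-squares fit,
    so only the 1-D position needs a nonlinear search."""
    def cost(x0):
        model = epsf(pixels - x0)
        amp = (model @ data) / (model @ model)
        return np.sum((data - amp * model) ** 2)
    return minimize_scalar(cost, bounds=(-1, 1), method="bounded").x

print(fit_position(data, pixels))
```

In the paper's scheme this fit and the ePSF construction alternate: better positions sharpen the empirical ePSF, which in turn yields better positions.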