
    Identification of flexible structures for robust control

    Documentation is provided of the authors' experience with modeling and identification of an experimental flexible structure for the purpose of control design, with the primary aim being to motivate some important research directions in this area. A multi-input/multi-output (MIMO) model of the structure is generated using the finite element method. This model is inadequate for control design, due to its large variation from the experimental data. Chebyshev polynomials are employed to fit the data with single-input/multi-output (SIMO) transfer function models. Combining these SIMO models leads to a MIMO model with more modes than the original finite element model. To find a physically motivated model, an ad hoc model reduction technique which uses a priori knowledge of the structure is developed. The ad hoc approach is compared with balanced realization model reduction to determine its benefits. Descriptions of the errors between the model and experimental data are formulated for robust control design. Plots of selected transfer function models and experimental data are included.
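
    The kind of curve fit described above can be illustrated with a small, self-contained sketch: fitting a Chebyshev series to the log-magnitude of a single measured frequency response. The synthetic lightly damped mode, the frequency grid, and the polynomial degree are invented for illustration; this is not the authors' SIMO transfer-function fitting procedure.

```python
# Sketch: Chebyshev-series fit to the log-magnitude of a synthetic frequency
# response (a single lightly damped mode). Grid, mode parameters, and degree
# are illustrative assumptions.
import numpy as np
from numpy.polynomial import Chebyshev

freq = np.linspace(1.0, 100.0, 400)          # Hz, assumed measurement grid
wn, zeta = 40.0, 0.02                        # assumed natural frequency, damping
mag = 1.0 / np.sqrt((1.0 - (freq / wn) ** 2) ** 2 + (2.0 * zeta * freq / wn) ** 2)

# Chebyshev.fit maps the frequency interval onto [-1, 1] internally, which
# keeps the high-degree fit well conditioned.
fit = Chebyshev.fit(freq, np.log10(mag), deg=30)

resid = np.max(np.abs(fit(freq) - np.log10(mag)))
print(f"max |log-magnitude residual| over the grid: {resid:.2e}")
```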

    An Improved Method for 21cm Foreground Removal

    21 cm tomography is expected to be difficult in part because of serious foreground contamination. Previous studies have found that line-of-sight approaches are capable of cleaning foregrounds to an acceptable level on large spatial scales, but not on small spatial scales. In this paper, we introduce a Fourier-space formalism for describing the line-of-sight methods, and use it to introduce an improved method for 21 cm foreground cleaning. Heuristically, this method involves fitting foregrounds in Fourier space using weighted polynomial fits, with each pixel weighted according to its information content. We show that the new method reproduces the old one on large angular scales, and gives marked improvements on small scales at essentially no extra computational cost. Comment: 6 pages, 5 figures; replaced to match the accepted MNRAS version.
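
    The paper's method fits foregrounds in Fourier space with per-pixel information weights; the sketch below shows a simplified real-space analogue of that idea, an inverse-noise-weighted polynomial fit along the frequency direction for each pixel. The toy data cube, noise levels, and polynomial order are assumptions made purely for illustration.

```python
# Sketch: inverse-noise-weighted polynomial fit along frequency for each pixel,
# subtracting the smooth component as the foreground estimate. All data are toys.
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_pix = 64, 100
freqs = np.linspace(140.0, 170.0, n_freq)              # MHz, assumed band
x = (freqs - freqs.mean()) / freqs.ptp()               # centred, scaled axis

# Toy sky: smooth power-law-like foreground + faint fluctuating signal + noise.
foreground = 1e3 * (freqs / 150.0)[:, None] ** -2.6 * rng.uniform(0.5, 2.0, n_pix)
signal = 1e-2 * rng.standard_normal((n_freq, n_pix))
sigma = 1e-2 * rng.uniform(0.5, 1.5, (n_freq, n_pix))  # assumed per-channel noise
data = foreground + signal + sigma * rng.standard_normal((n_freq, n_pix))

cleaned = np.empty_like(data)
for p in range(n_pix):
    # Weight each channel by 1/sigma, a simple stand-in for weighting by
    # information content.
    coeffs = np.polyfit(x, data[:, p], deg=3, w=1.0 / sigma[:, p])
    cleaned[:, p] = data[:, p] - np.polyval(coeffs, x)

print("rms before cleaning:", data.std(), " after:", cleaned.std())
```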

    The Panchromatic High-Resolution Spectroscopic Survey of Local Group Star Clusters - I. General Data Reduction Procedures for the VLT/X-shooter UVB and VIS arm

    Our dataset contains spectroscopic observations of 29 globular clusters in the Magellanic Clouds and the Milky Way performed with VLT/X-shooter. Here we present detailed data reduction procedures for the VLT/X-shooter UVB and VIS arm. These are not restricted to our particular dataset, but are generally applicable to different kinds of X-shooter data without major limitation on the astronomical object of interest. ESO's X-shooter pipeline (v1.5.0) performs well and reliably for the wavelength calibration and the associated rectification procedure, yet we find several weaknesses in the reduction cascade that are addressed with additional calibration steps, such as bad pixel interpolation, flat fielding, and slit illumination corrections. Furthermore, the instrumental PSF is analytically modeled and used to reconstruct flux losses at slit transit and to optimally extract point sources. Regular observations of spectrophotometric standard stars allow us to detect instrumental variability, which needs to be understood if a reliable absolute flux calibration is desired. A cascade of additional custom calibration steps is presented that allows for an absolute flux calibration uncertainty of less than ten percent under virtually every observational setup, provided that the signal-to-noise ratio is sufficiently high. The optimal extraction increases the signal-to-noise ratio typically by a factor of 1.5, while simultaneously correcting for the resulting flux losses. The wavelength calibration is found to be accurate to an uncertainty level of approximately 0.02 Angstrom. We find that most of the X-shooter systematics can be reliably modeled and corrected for. This offers the possibility of comparing observations from different nights, telescope pointings, and instrumental setups, thereby facilitating a robust statistical analysis of large datasets. Comment: 22 pages, 18 figures; accepted for publication in Astronomy & Astrophysics. V2 contains a minor change in the abstract; we note that we did not test X-shooter pipeline versions 2.0 or later. V3 contains an updated reference.
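
    As an illustration of the profile-weighted ("optimal") extraction step mentioned above, the sketch below applies Horne-style weights to a toy rectified 2D spectrum with an assumed Gaussian spatial PSF and a simple noise model; it is not the authors' pipeline code.

```python
# Sketch: Horne-style optimal extraction of a point source from a toy rectified
# 2D spectrum, assuming a Gaussian spatial PSF and a simple variance model.
import numpy as np

rng = np.random.default_rng(1)
n_wave, n_spatial = 200, 21
y = np.arange(n_spatial) - n_spatial // 2

true_flux = 100.0 * (1.0 + 0.3 * np.sin(np.linspace(0.0, 6.0, n_wave)))
sigma_psf = 4.0 / 2.3548                              # assumed 4-pixel FWHM seeing
profile = np.exp(-0.5 * (y / sigma_psf) ** 2)
profile /= profile.sum()                              # normalised spatial profile

read_noise = 3.0
frame = true_flux[:, None] * profile[None, :]
frame += read_noise * rng.standard_normal(frame.shape)
var = frame.clip(min=0.0) + read_noise ** 2           # crude Poisson + read noise

# Optimal (profile-over-variance) weights versus a plain boxcar sum.
w = profile[None, :] / var
optimal = (w * frame).sum(axis=1) / (w * profile[None, :]).sum(axis=1)
boxcar = frame.sum(axis=1)

print("rms error, boxcar :", np.std(boxcar - true_flux))
print("rms error, optimal:", np.std(optimal - true_flux))
```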

    A Panoply of Cepheid Light Curve Templates

    We have generated accurate V and I template light curves using a combination of Fourier decomposition and principal component analysis for a large sample of Cepheid light curves. Unlike previous studies, we include short period Cepheids and stars pulsating in the first overtone mode in our analysis. Extensive Monte Carlo simulations show that our templates can be used to precisely measure Cepheid magnitudes and periods, even in cases where there are few observational epochs. These templates are ideal for characterizing serendipitously discovered Cepheids and can be used in conjunction with surveys such as Pan-STARRS and LSST, where the observational sampling may not be optimized for Cepheids. Comment: 12 pages, 14 figures; accepted for publication in AJ; fixed embarrassing typo.
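
    A minimal sketch of the "Fourier decomposition plus PCA" template idea: fit a low-order Fourier series to each folded light curve, then run PCA (via an SVD) on the coefficient matrix to obtain a small set of template shapes. The simulated light curves, Fourier order, and sample size below are assumptions, not the paper's data or template set.

```python
# Sketch: Fourier coefficients per light curve, then PCA via SVD on the
# coefficient matrix. Simulated curves and all parameters are toys.
import numpy as np

rng = np.random.default_rng(2)
n_stars, n_phase, order = 50, 100, 4
phase = np.linspace(0.0, 1.0, n_phase, endpoint=False)

# Fourier design matrix: [1, cos(2*pi*k*phase), sin(2*pi*k*phase), ...].
cols = [np.ones(n_phase)]
for k in range(1, order + 1):
    cols += [np.cos(2 * np.pi * k * phase), np.sin(2 * np.pi * k * phase)]
design = np.column_stack(cols)                       # (n_phase, 2*order+1)

# Sawtooth-ish toy light curves with varying phase offset, skew, and noise.
curves = np.array([
    -np.abs(((phase + rng.uniform(0.0, 1.0)) % 1.0) - rng.uniform(0.3, 0.7))
    + 0.02 * rng.standard_normal(n_phase)
    for _ in range(n_stars)
])

# Least-squares Fourier coefficients for every star, shape (n_stars, 2*order+1).
coeffs, *_ = np.linalg.lstsq(design, curves.T, rcond=None)
coeffs = coeffs.T

# PCA on mean-subtracted coefficients; leading components define the templates.
centered = coeffs - coeffs.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
templates = design @ vt[:2].T                        # first two template shapes
print("template grid shape:", templates.shape)
print("variance captured by first 2 PCs:", (s[:2] ** 2).sum() / (s ** 2).sum())
```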

    Accelerating Computation of the Nonlinear Mass by an Order of Magnitude

    The nonlinear mass is a characteristic scale in halo formation that has wide-ranging applications across cosmology. Naively, computing it requires repeated numerical integration to calculate the variance of the power spectrum on different scales and determine which scales exceed the threshold for nonlinear collapse. We accelerate this calculation by working in configuration space and approximating the correlation function as a polynomial at $r \le 5\,h^{-1}$ Mpc. This enables an analytic rather than numerical solution, accurate across a variety of cosmologies to 0.1-1% (depending on redshift) and 10-20 times faster than the naive numerical method. We also present a further acceleration (40-80 times faster than the naive method) in which we determine the polynomial coefficients using a Taylor expansion in the cosmological parameters rather than re-fitting a polynomial to the correlation function. Our acceleration greatly reduces the cost of repeated calculation of the nonlinear mass. This will be useful for MCMC analyses that constrain cosmological parameters from the highly nonlinear regime, e.g. with data from upcoming surveys.
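
    For context, the "naive" calculation that the paper accelerates looks roughly like the sketch below: integrate the power spectrum to obtain sigma(R) and solve sigma(R) = delta_c for the nonlinear scale. The toy power spectrum, its amplitude, Omega_m = 0.3, and delta_c = 1.686 are standard illustrative choices; the paper's polynomial and Taylor-expansion accelerations are not reproduced here.

```python
# Sketch: the "naive" nonlinear-mass calculation -- compute sigma(R) by
# integrating a toy power spectrum, then solve sigma(R) = delta_c.
import numpy as np
from scipy.optimize import brentq

ks = np.logspace(-4, 2, 4000)                        # k grid in h/Mpc (assumed)

def tophat_window(kR):
    """Fourier transform of a spherical top-hat of radius R."""
    return 3.0 * (np.sin(kR) - kR * np.cos(kR)) / kR ** 3

def power_spectrum(k):
    """Toy smooth linear power spectrum in (Mpc/h)^3; shape and amplitude assumed."""
    return 2.0e4 * (k / 0.05) ** 0.96 / (1.0 + (k / 0.2) ** 3.4)

def sigma(R):
    """rms linear overdensity in spheres of radius R [Mpc/h]."""
    integrand = ks ** 2 * power_spectrum(ks) * tophat_window(ks * R) ** 2
    return np.sqrt(np.trapz(integrand, ks) / (2.0 * np.pi ** 2))

delta_c = 1.686                                      # spherical-collapse threshold
R_nl = brentq(lambda R: sigma(R) - delta_c, 0.01, 50.0)

rho_m = 0.3 * 2.775e11                               # Omega_m * rho_crit [h^2 Msun/Mpc^3]
M_nl = 4.0 / 3.0 * np.pi * rho_m * R_nl ** 3
print(f"R_nl = {R_nl:.2f} Mpc/h  ->  M_nl = {M_nl:.3e} Msun/h")
```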

    On the Reliability of Cross Correlation Function Lag Determinations in Active Galactic Nuclei

    Many AGN exhibit a highly variable luminosity. Some AGN also show a pronounced time delay between variations seen in their optical continuum and in their emission lines. In effect, the emission lines are light echoes of the continuum. This light travel-time delay provides a characteristic radius of the region producing the emission lines. The cross correlation function (CCF) is the standard tool used to measure the time lag between the continuum and line variations. For the few well-sampled AGN, the lag ranges from 1 to 100 days, depending upon which line is used and the luminosity of the AGN. In the best sampled AGN, NGC 5548, the H$\beta$ lag shows year-to-year changes, ranging from about 8.7 days to about 22.9 days over a span of 8 years. In this paper it is demonstrated that, in the context of AGN variability studies, the lag estimate using the CCF is biased too low and subject to a large variance. Thus the year-to-year changes of the measured lag in NGC 5548 do not necessarily imply changes in the AGN structure. The bias and large variance are consequences of finite duration sampling and the dominance of long timescale trends in the light curves, not of noise or irregular sampling. Lag estimates can be substantially improved by removing low frequency power from the light curves prior to computing the CCF. Comment: To appear in PASP, vol. 111, November 1999; 37 pages; 10 figures.
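
    The point about long-timescale trends can be illustrated with a toy experiment: estimate a lag with a discrete CCF before and after removing a linear trend from both light curves. The simulated random-walk continuum, the 15-day lag, the trend, and the noise levels are all assumptions for illustration.

```python
# Sketch: CCF peak lag with and without removing a linear trend first.
# The light curves, lag, trend, and noise are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
n, true_lag = 400, 15                                 # samples, assumed 1-day cadence
t = np.arange(n)

# Random-walk "continuum" with a slow trend, and a line curve echoing it 15 days later.
full = np.cumsum(rng.standard_normal(n + true_lag)) + 0.05 * np.arange(n + true_lag)
cont = full[true_lag:]
line = full[:n] + 0.5 * rng.standard_normal(n)

def ccf_peak_lag(a, b, max_lag=60):
    """Lag (in samples) that maximises the normalised discrete CCF."""
    m = len(a)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.mean(a[max(0, -l):m - max(0, l)] * b[max(0, l):m - max(0, -l)])
          for l in lags]
    return lags[int(np.argmax(cc))]

def remove_trend(x):
    """Subtract a linear fit: a crude way to suppress low-frequency power."""
    return x - np.polyval(np.polyfit(t, x, 1), t)

print("raw CCF lag       :", ccf_peak_lag(cont, line), "days")
print("detrended CCF lag :", ccf_peak_lag(remove_trend(cont), remove_trend(line)), "days")
```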

    Twenty-one centimeter tomography with foregrounds

    Twenty-one centimeter tomography is emerging as a powerful tool to explore the end of the cosmic dark ages and the reionization epoch, but it will only be as good as our ability to accurately model and remove astrophysical foreground contamination. Previous treatments of this problem have focused on the angular structure of the signal and foregrounds and on what can be achieved with limited spectral resolution (bandwidths in the 1 MHz range). In this paper we introduce and evaluate a "blind" method to extract the multifrequency 21 cm signal by taking advantage of the smooth frequency structure of the Galactic and extragalactic foregrounds. We find that 21 cm tomography is typically limited by foregrounds on scales $k \ll 1\,h/\mathrm{Mpc}$ and limited by noise on scales $k \gg 1\,h/\mathrm{Mpc}$, provided that the experimental bandwidth can be made substantially smaller than 0.1 MHz. Our results show that this approach is quite promising even for scenarios with rather extreme contamination from point sources and diffuse Galactic emission, which bodes well for upcoming experiments such as LOFAR, MWA, PAST, and SKA. Comment: 10 pages, 6 figures. Revised version including various cases with high noise level; major conclusions unchanged. Accepted for publication in ApJ.
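
    One simple way to exploit the spectral smoothness described above (a common choice in the literature, though not necessarily this paper's exact recipe) is to fit a low-order polynomial to log-temperature versus log-frequency along each line of sight and subtract it. The toy data cube below, including its foreground model and noise levels, is an assumption for illustration.

```python
# Sketch: per-pixel cubic fit in log(T)-log(nu) space, subtracting the smooth
# part as the foreground model. The toy cube is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(4)
n_freq, n_pix = 128, 256
nu = np.linspace(120.0, 180.0, n_freq)                # MHz, assumed band
log_nu = np.log(nu / nu.mean())

# Smooth synchrotron-like foregrounds with spatially varying spectral index,
# plus a weak uncorrelated 21 cm-like signal and thermal noise.
amp = rng.uniform(50.0, 500.0, n_pix)
beta = rng.uniform(-2.8, -2.4, n_pix)
foreground = amp * (nu[:, None] / 150.0) ** beta
signal = 1e-3 * rng.standard_normal((n_freq, n_pix))
noise = 1e-3 * rng.standard_normal((n_freq, n_pix))
cube = foreground + signal + noise

# Fit a cubic in log-log space for every pixel at once and subtract it.
coeffs = np.polynomial.polynomial.polyfit(log_nu, np.log(cube), deg=3)
smooth = np.exp(np.polynomial.polynomial.polyval(log_nu, coeffs)).T
residual = cube - smooth

print("foreground rms:", foreground.std(), " residual rms:", residual.std())
```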

    The Clustering of the SDSS DR7 Main Galaxy Sample I: A 4 per cent Distance Measure at z=0.15

    We create a sample of spectroscopically identified galaxies with $z < 0.2$ from the Sloan Digital Sky Survey (SDSS) Data Release 7, covering 6813 deg$^2$. Galaxies are chosen to sample the highest mass haloes, with an effective bias of 1.5, allowing us to construct 1000 mock galaxy catalogs (described in Paper II), which we use to estimate statistical errors and test our methods. We use an estimate of the gravitational potential to "reconstruct" the linear density fluctuations, enhancing the Baryon Acoustic Oscillation (BAO) signal in the measured correlation function and power spectrum. Fitting to these measurements, we determine $D_V(z_{\rm eff}=0.15) = (664\pm25)(r_d/r_{d,{\rm fid}})$ Mpc; this is a better than 4 per cent distance measurement. This "fills the gap" in the BAO distance ladder between previously measured local and higher-redshift measurements, and affords significant improvement in constraining the properties of dark energy. Combining our measurement with other BAO measurements from BOSS and 6dFGS galaxy samples provides a 15 per cent improvement in the determination of the equation of state of dark energy and the value of the Hubble parameter at $z=0$ ($H_0$). Our measurement is fully consistent with the Planck results and the $\Lambda$CDM concordance cosmology, but increases the tension between Planck+BAO $H_0$ determinations and direct $H_0$ measurements. Comment: Accepted by MNRAS; distance likelihood is available in source file.
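
    A small sketch of how a D_V measurement like this one can enter a likelihood: compute D_V(z = 0.15) for a flat LambdaCDM model and form a Gaussian chi-square term against the quoted (664 +/- 25)(r_d/r_d,fid) Mpc constraint. The cosmological parameters and the simplifying assumption r_d = r_d,fid are illustrative, not the paper's fiducial choices.

```python
# Sketch: D_V(z) in flat LCDM compared with the quoted measurement, treating
# r_d = r_d,fid. Cosmological parameters below are assumed, not the paper's.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458                                   # speed of light [km/s]

def hubble(z, h0=67.7, om=0.31):
    """H(z) in km/s/Mpc for a flat LCDM model (assumed parameters)."""
    return h0 * np.sqrt(om * (1.0 + z) ** 3 + (1.0 - om))

def d_v(z):
    """Spherically averaged BAO distance D_V(z) in Mpc."""
    d_m, _ = quad(lambda zp: C_KM_S / hubble(zp), 0.0, z)   # comoving distance
    return (d_m ** 2 * C_KM_S * z / hubble(z)) ** (1.0 / 3.0)

z_eff, dv_meas, dv_err = 0.15, 664.0, 25.0            # numbers from the abstract
dv_model = d_v(z_eff)
chi2 = ((dv_model - dv_meas) / dv_err) ** 2           # Gaussian likelihood term

print(f"D_V(0.15) model = {dv_model:.1f} Mpc, chi^2 against measurement = {chi2:.2f}")
```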

    Functional Regression

    Functional data analysis (FDA) involves the analysis of data whose ideal units of observation are functions defined on some continuous domain, and the observed data consist of a sample of functions taken from some population, sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the development of this field, which has accelerated in the past 10 years to become one of the fastest growing areas of statistics, fueled by the growing number of applications yielding this type of data. One unique characteristic of FDA is the need to combine information both across and within functions, which Ramsay and Silverman called replication and regularization, respectively. This article will focus on functional regression, the area of FDA that has received the most attention in applications and methodological development. First will be an introduction to basis functions, key building blocks for regularization in functional regression methods, followed by an overview of functional regression methods, split into three types: [1] functional predictor regression (scalar-on-function), [2] functional response regression (function-on-scalar) and [3] function-on-function regression. For each, the role of replication and regularization will be discussed and the methodological development described in a roughly chronological manner, at times deviating from the historical timeline to group together similar methods. The primary focus is on modeling and methodology, highlighting the modeling structures that have been developed and the various regularization approaches employed. At the end is a brief discussion describing potential areas of future development in this field.
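
    A minimal sketch of scalar-on-function regression (type [1] above): expand the coefficient function beta(t) in a small basis, reduce each functional predictor to a vector of basis scores, and fit by penalised least squares with a ridge penalty playing the role of regularization. The simulated curves, the Fourier basis, and the penalty value are assumptions for illustration and are not tied to any particular method discussed in the article.

```python
# Sketch: scalar-on-function regression via basis expansion + ridge penalty.
# Simulated data, Fourier basis, and penalty are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
n_obs, n_grid, n_basis = 200, 101, 7
t = np.linspace(0.0, 1.0, n_grid)
dt = t[1] - t[0]

# Basis for the coefficient function beta(t): a constant plus sines and cosines.
basis = [np.ones_like(t)]
for k in range(1, (n_basis - 1) // 2 + 1):
    basis += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
Phi = np.column_stack(basis)                          # (n_grid, n_basis)

# Simulated functional predictors x_i(t) and scalar responses y_i.
X_curves = np.cumsum(rng.standard_normal((n_obs, n_grid)), axis=1) * np.sqrt(dt)
beta_true = np.sin(2 * np.pi * t)                     # "true" coefficient function
y = X_curves @ beta_true * dt + 0.1 * rng.standard_normal(n_obs)

# Basis scores: Z[i, j] approximates the integral of x_i(t) * phi_j(t) dt.
Z = X_curves @ Phi * dt                               # (n_obs, n_basis)

# Ridge-penalised least squares across the basis coefficients.
lam = 1e-3
coef = np.linalg.solve(Z.T @ Z + lam * np.eye(n_basis), Z.T @ y)
beta_hat = Phi @ coef                                 # estimated beta(t) on the grid

rmse = np.sqrt(np.mean((beta_hat - beta_true) ** 2))
print(f"RMS error of the estimated coefficient function: {rmse:.3f}")
```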