
    The Conceptualisation, Measurement, and Coding of Education in German and Cross-National Surveys (Version 2.0)

    This contribution provides an overview of the theoretical conceptualisation, empirical operationalisation, and the measurement and coding of education in national and international survey research. In this context, the term "education" refers to the level of education attained by an individual, which must be distinguished from concepts such as competencies, performance at school, and educational disciplines. Because education is often included in statistical models in a merely routine way, emphasis is placed on the connection between the theoretical concept, the indicator, the measurement instrument, and the variable. In doing so, the contribution draws on long-standing social science research on educational returns and educational inequality. A distinction is made between linear, ordinal, and categorical concepts of education, which have emerged from different theoretical approaches and which, to some extent, impose different requirements on data collection and coding. Indeed, there is no consensus on how education should be conceptualised, measured, compared across countries, and statistically modelled. The contribution therefore does not recommend a "one size fits all" education variable that would be appropriate for all studies. Rather, it endeavours to encourage readers to make an informed decision on the measurement of education in their respective research context and to support them in doing so.
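    A minimal sketch of the three coding concepts applied to one hypothetical survey item; the category labels and the years-of-schooling values below are illustrative assumptions, not an official ISCED or CASMIN crosswalk:

    ```python
    # One raw survey answer ("highest qualification") coded three ways.
    import pandas as pd

    raw = pd.Series(["lower secondary", "upper secondary", "university degree",
                     "upper secondary"], name="qualification")

    # Linear concept: approximate years of schooling (assumed values)
    years_map = {"lower secondary": 9, "upper secondary": 12, "university degree": 17}
    edu_linear = raw.map(years_map)

    # Ordinal concept: ranked levels (an ISCED-like ordering)
    level_map = {"lower secondary": 1, "upper secondary": 2, "university degree": 3}
    edu_ordinal = raw.map(level_map).astype("category")

    # Categorical concept: unordered dummy variables for regression models
    edu_dummies = pd.get_dummies(raw, prefix="edu")
    ```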

    Photometric Redshifts with Surface Brightness Priors

    We use galaxy surface brightness as prior information to improve photometric redshift (photo-z) estimation. We apply our template-based photo-z method to imaging data from the ground-based VVDS survey and the space-based GOODS field from HST, and use spectroscopic redshifts to test our photometric redshifts for different galaxy types and redshifts. We find that the surface brightness prior eliminates a large fraction of outliers by lifting the degeneracy between the Lyman and 4000 Angstrom breaks. Bias and scatter are improved by about a factor of 2 with the prior for both the ground and space data. Ongoing and planned surveys from the ground and space will benefit, provided that care is taken in measurements of galaxy sizes and in the application of the prior. We discuss the image quality and signal-to-noise requirements that enable the surface brightness prior to be successfully applied. Comment: 15 pages, 13 figures, matches published version
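    The mechanics of combining a template-fitting likelihood with such a prior can be illustrated on a redshift grid: the posterior is the likelihood multiplied by the prior, so a surface-brightness prior that favours one redshift range suppresses the degenerate peak. A toy sketch, with Gaussian stand-ins for the real likelihood and prior:

    ```python
    import numpy as np

    z = np.linspace(0.0, 3.0, 601)

    # Toy bimodal likelihood: confusing the Lyman and 4000 Angstrom breaks
    # produces a spurious high-redshift peak
    like = (np.exp(-0.5 * ((z - 0.4) / 0.05) ** 2)
            + 0.8 * np.exp(-0.5 * ((z - 2.6) / 0.08) ** 2))

    # Toy surface-brightness prior favouring the low-redshift solution
    prior = np.exp(-0.5 * ((z - 0.5) / 0.3) ** 2)

    post = like * prior
    post /= post.sum() * (z[1] - z[0])   # normalize to a probability density
    z_best = z[np.argmax(post)]          # the prior suppresses the z~2.6 peak
    print(z_best)
    ```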

    T-PHOT: A new code for PSF-matched, prior-based, multiwavelength extragalactic deconfusion photometry

    We present T-PHOT, publicly available software aimed at extracting accurate photometry from low-resolution images of deep extragalactic fields, where the blending of sources can be a serious problem for the accurate and unbiased measurement of fluxes and colours. T-PHOT has been developed within the ASTRODEEP project and can be considered the successor to TFIT, providing significant improvements over it and other similar codes. T-PHOT gathers data from a high-resolution image of a region of the sky and uses it to obtain priors for the photometric analysis of a lower-resolution image of the same field. It can handle different types of datasets as input priors: i) a list of objects that will be used to obtain cutouts from the real high-resolution image; ii) a set of analytical models; iii) a list of unresolved, point-like sources, useful e.g. for far-infrared wavelength domains. We show that T-PHOT yields accurate estimates of fluxes within the intrinsic uncertainties of the method when systematic errors are taken into account (which is possible thanks to a flagging code included in the output). T-PHOT is many times faster than similar codes like TFIT and CONVPHOT (up to hundreds of times, depending on the problem and the method adopted), whilst at the same time being more robust and more versatile. This makes it an optimal choice for the analysis of large datasets. In addition, we show how the use of different settings and methods significantly enhances the performance. Given its versatility and robustness, T-PHOT can be considered the preferred choice for combined photometric analysis of current and forthcoming extragalactic optical to far-infrared imaging surveys. [abridged] Comment: 23 pages, 20 figures, 2 tables
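    The core linear-algebra step behind prior-based deconfusion photometry can be sketched as follows. This is a toy illustration in the spirit of T-PHOT/TFIT, not the actual T-PHOT code: each high-resolution prior is convolved to the low-resolution PSF, and the fluxes are the linear least-squares solution over the blended pixels.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def fit_fluxes(lowres_img, hires_templates, matching_kernel):
        """lowres_img: 2D array; hires_templates: unit-flux cutouts already
        placed on the low-res pixel grid; matching_kernel: high-to-low PSF kernel."""
        cols = []
        for tmpl in hires_templates:
            smoothed = fftconvolve(tmpl, matching_kernel, mode="same")
            cols.append(smoothed.ravel())
        A = np.column_stack(cols)                    # one column per source
        fluxes, *_ = np.linalg.lstsq(A, lowres_img.ravel(), rcond=None)
        return fluxes

    # Toy usage: two overlapping Gaussian blobs on a 32x32 grid
    yy, xx = np.mgrid[:32, :32]
    blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 8.0)
    t1, t2 = blob(14, 16), blob(19, 16)
    kernel = np.ones((3, 3)) / 9.0
    truth = (2.0 * fftconvolve(t1, kernel, mode="same")
             + 0.5 * fftconvolve(t2, kernel, mode="same"))
    print(fit_fluxes(truth, [t1, t2], kernel))       # recovers ~ [2.0, 0.5]
    ```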

    Application of the Iterated Weighted Least-Squares Fit to counting experiments

    Least-squares fits are an important tool in many data analysis applications. In this paper, we review theoretical results relevant to their application to data from counting experiments. Using a simple example, we illustrate the well-known fact that commonly used variants of the least-squares fit applied to Poisson-distributed data produce biased estimates. The bias can be overcome with an iterated weighted least-squares method, which produces results identical to the maximum-likelihood method. For linear models, the iterated weighted least-squares method converges faster than the equivalent maximum-likelihood method and does not require problem-specific starting values, which may be a practical advantage. The equivalence of the two methods also holds for binomially distributed data. We further show that the unbinned maximum-likelihood method can be derived as a limiting case of the iterated least-squares fit when the bin width goes to zero, which demonstrates a deep connection between the two methods. Comment: Accepted by NIM
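    A minimal sketch of the iterated scheme for Poisson counts with a linear model mu = A @ theta: the weights are recomputed from the model prediction at each iteration, which removes the bias of the fixed-weight variants and converges to the maximum-likelihood estimate. A toy example, not the paper's code:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 20)
    A = np.column_stack([np.ones_like(x), x])     # design matrix: mu = a + b*x
    theta_true = np.array([5.0, 10.0])
    n = rng.poisson(A @ theta_true)               # observed counts per bin

    theta = np.linalg.lstsq(A, n, rcond=None)[0]  # unweighted starting values
    for _ in range(20):
        mu = np.clip(A @ theta, 1e-9, None)       # predicted counts, kept positive
        W = 1.0 / mu                              # Poisson weights from the model
        # Weighted normal equations: (A^T W A) theta = A^T W n
        theta_new = np.linalg.solve(A.T @ (A * W[:, None]), A.T @ (W * n))
        if np.allclose(theta_new, theta, rtol=1e-10):
            break
        theta = theta_new
    print(theta)                                  # matches the ML estimate
    ```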

    SparsePak: A Formatted Fiber Field-Unit for The WIYN Telescope Bench Spectrograph. II. On-Sky Performance

    We present a performance analysis of SparsePak and the WIYN Bench Spectrograph for precision studies of stellar and ionized gas kinematics of external galaxies. We focus on spectrograph configurations with echelle and low-order gratings yielding spectral resolutions of ~10000 between 500 and 900 nm. These configurations are of general relevance to the spectrograph performance. Benchmarks include spectral resolution, sampling, vignetting, scattered light, and an estimate of the system absolute throughput. Comparisons are made to other existing fiber feeds on the WIYN Bench Spectrograph. Vignetting and relative throughput are found to agree with a geometric model of the optical system. An aperture-correction protocol for spectrophotometric standard-star calibrations has been established using independent WIYN imaging data and the unique capabilities of the SparsePak fiber array. The WIYN point-spread function is well fit by a Moffat profile with a constant power-law outer slope of index -4.4. We use SparsePak commissioning data to debunk a long-standing myth concerning sky subtraction with fibers: by properly treating the multi-fiber data as a "long-slit", it is possible to achieve precision sky subtraction with signal-to-noise performance as good as or better than conventional long-slit spectroscopy. No beam-switching is required, and hence the method is efficient. Finally, we give several examples of science measurements which SparsePak now makes routine. These include Hα velocity fields of low-surface-brightness disks, gas and stellar velocity fields of nearly face-on disks, and stellar absorption-line profiles of galaxy disks at spectral resolutions of ~24,000. Comment: To appear in ApJSupp (Feb 2005); 19 pages text; 7 tables; 27 figures (embedded); high-resolution version at http://www.astro.wisc.edu/~mab/publications/spkII_pre.pdf
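    For reference, the Moffat profile mentioned above, I(r) proportional to [1 + (r/alpha)^2]^(-beta), falls off as r^(-2*beta) at large radius, so the quoted outer slope of -4.4 corresponds to beta of about 2.2 (an inference from the standard Moffat form, not a value stated here). A short sketch:

    ```python
    import numpy as np

    def moffat(r, alpha, beta, i0=1.0):
        """Radial Moffat profile; alpha sets the core width, beta the wings."""
        return i0 * (1.0 + (r / alpha) ** 2) ** (-beta)

    r = np.linspace(0.0, 10.0, 200)
    psf = moffat(r, alpha=1.5, beta=2.2)   # outer slope approaches r**-4.4
    ```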

    Comparing Image Quality in Phase Contrast sub-μ X-Ray Tomography -- A Round-Robin Study

    How should image quality be evaluated and compared across different sub-micrometer (sub-μ) CT scans? A simple test phantom made of polymer microbeads is used for recording projection images as well as 13 CT scans in a number of commercial and non-commercial scanners. From the resulting CT images, signal and noise power spectra are modeled to estimate volume signal-to-noise ratios (3D SNR spectra). Using the same CT images, a time- and shape-independent transfer function (MTF) is computed for each scan, including phase contrast effects and image blur (MTF_blur). The SNR spectra and MTF of the CT scans are compared to 2D SNR spectra of the projection images. In contrast to 2D SNR, volume SNR can be normalized with respect to the object's power spectrum, yielding detection effectiveness (DE), a new measure that reveals how technical differences as well as operator choices strongly influence scan quality for a given measurement time. Using DE, both source-based and detector-based sub-μ CT scanners can be studied and their scan quality compared. Future application of this work requires a particular scan acquisition scheme that allows 3D signal-to-noise ratios to be measured directly, making the model fit for 3D noise power spectra obsolete.
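    A schematic of the frequency-resolved comparison described above: signal and noise power spectra yield an SNR spectrum, which can then be normalized by the object's own power spectrum. The arrays and spectra below are stand-ins for illustration, not the paper's estimator:

    ```python
    import numpy as np

    def radial_power_spectrum(img):
        """Azimuthally averaged power spectrum of a 2D image."""
        p = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        cy, cx = np.array(p.shape) // 2
        y, x = np.indices(p.shape)
        k = np.hypot(y - cy, x - cx).astype(int).ravel()
        counts = np.bincount(k)
        return np.bincount(k, weights=p.ravel()) / np.maximum(counts, 1)

    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[:128, :128]
    signal = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 200.0)  # stand-in object
    noise = rng.normal(scale=0.01, size=signal.shape)            # stand-in noise

    snr_k = np.sqrt(radial_power_spectrum(signal) / radial_power_spectrum(noise))
    # Detection effectiveness (DE) would further normalize snr_k by the
    # test object's own power spectrum, removing the object dependence.
    ```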

    Constraining the Mass Profiles of Stellar Systems: Schwarzschild Modeling of Discrete Velocity Datasets

    (ABRIDGED) We present a new Schwarzschild orbit-superposition code designed to model discrete datasets composed of velocities of individual kinematic tracers in a dynamical system. This constitutes an extension of previous implementations that can only address continuous data in the form of (the moments of) velocity distributions, thus avoiding potentially important losses of information due to data binning. Furthermore, the code can handle any combination of available velocity components, i.e., only line-of-sight velocities, only proper motions, or a combination of both. It can also handle a combination of discrete and continuous data. The code finds the distribution function (DF, a function of the three integrals of motion E, Lz, and I3) that best reproduces the available kinematic and photometric observations in a given axisymmetric gravitational potential. The fully numerical approach ensures considerable freedom in the form of the DF f(E,Lz,I3). This allows a very general modeling of the orbital structure, thus avoiding restrictive assumptions about the degree of (an)isotropy of the orbits. We describe the implementation of the discrete code and present a series of tests of its performance based on the modeling of simulated datasets generated from a known DF. We find that the discrete Schwarzschild code recovers the original orbital structure, M/L ratios, and inclination of the input datasets to satisfactory accuracy, as quantified by various statistics. The code will be valuable, e.g., for modeling stellar motions in Galactic globular clusters, and those of individual stars, planetary nebulae, or globular clusters in nearby galaxies. This can shed new light on the total mass distributions of these systems, with central black holes and dark matter halos being of particular interest. Comment: ApJ, in press; 51 pages, 11 figures; manuscript revised following comments by referee
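    The orbit-superposition step common to all Schwarzschild codes reduces to finding non-negative orbit weights that reproduce the observables. A toy sketch of that step (not this paper's discrete-likelihood machinery):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    n_obs, n_orbits = 50, 200
    orbit_lib = rng.random((n_obs, n_orbits))   # each column: one orbit's
                                                # predicted observables
    w_true = np.zeros(n_orbits)
    w_true[rng.choice(n_orbits, 20)] = rng.random(20)
    data = orbit_lib @ w_true                   # synthetic observations

    # Non-negative least squares: weights must be >= 0 to represent a
    # physical distribution function. The fit reproduces the data, though
    # not necessarily w_true, since the problem is underdetermined.
    weights, resid = nnls(orbit_lib, data)
    print(resid)                                # residual norm, close to zero
    ```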

    The Hyper Suprime-Cam Software Pipeline

    In this paper, we describe the optical imaging data processing pipeline developed for the Subaru Telescope's Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope's Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs, as well as low-level detrending and image characterization. Comment: 39 pages, 21 figures, 2 tables. Submitted to Publications of the Astronomical Society of Japan
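    A schematic of the two processing tiers described above. Every function here is a trivial placeholder to show the data flow; none of these names come from the actual HSC/LSST pipeline API:

    ```python
    # Low-level tier: per-exposure detrending and characterization
    def detrend(raw):                 # placeholder for bias/flat/defect correction
        return raw

    def characterize(image):          # placeholder for PSF/WCS/photometric calibration
        return {"psf": None, "wcs": None, "zeropoint": 27.0}

    # High-level tier: coaddition and catalog generation
    def coadd(images):                # placeholder for warp-and-stack onto a sky grid
        return sum(images) / len(images)

    def build_catalog(coadd_image):   # placeholder for detection and measurement
        return [{"flux": coadd_image}]

    visits = [1.0, 1.2, 0.9]          # stand-in "exposures"
    calibrated = [detrend(v) for v in visits]
    catalog = build_catalog(coadd(calibrated))
    ```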