941 research outputs found
Weak Gravitational Lensing by a Sample of X-Ray Luminous Clusters of Galaxies -- II. Comparison with Virial Masses
Dynamical velocity dispersion and mass estimates are given for a sample of five
X-ray luminous rich clusters of galaxies at intermediate redshifts (z~0.3)
drawn from a sample of 39 clusters for which we have obtained gravitational
lens mass estimates. The velocity dispersions are determined from between 9 and
20 redshifts measured with the LDSS spectrograph of the William Herschel
Telescope, and virial radii are determined from imaging using the UH8K mosaic
CCD camera on the University of Hawaii 2.24m telescope.
Including clusters with velocity dispersions taken from the literature, we
have velocity dispersion estimates for 12 clusters in our gravitational lensing
sample. For this sample we compare the dynamical velocity dispersion estimates
with our estimates of the velocity dispersions made from gravitational lensing
by fitting a singular isothermal sphere profile to the observed tangential weak
lensing distortion as a function of radius. In all but two clusters, we find a
good agreement between the velocity dispersion estimates based on spectroscopy
and on weak lensing.
Comment: 9 pages, 4 figures, accepted for publication in ApJ. Version in emulateapj format with only minor changes.
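The lensing estimate of the velocity dispersion described above can be sketched numerically. In this minimal illustration (not the authors' pipeline), the singular-isothermal-sphere tangential shear is gamma_t(theta) = theta_E/(2 theta), with Einstein radius theta_E = 4 pi (sigma_v/c)^2 D_ls/D_s; since the model scales linearly with sigma_v^2, the best-fit velocity dispersion has a closed form. The function names and the fixed distance ratio are illustrative assumptions:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def sis_tangential_shear(theta_arcmin, sigma_v, d_ls_over_d_s):
    """Tangential shear of a singular isothermal sphere:
    gamma_t(theta) = theta_E / (2 theta), with Einstein radius
    theta_E = 4 pi (sigma_v / c)^2 * (D_ls / D_s) in radians."""
    theta_e_rad = 4.0 * np.pi * (sigma_v / C_KM_S) ** 2 * d_ls_over_d_s
    theta_e_arcmin = np.degrees(theta_e_rad) * 60.0
    return theta_e_arcmin / (2.0 * theta_arcmin)

def fit_sigma_v(theta_arcmin, gamma_obs, d_ls_over_d_s):
    """Least-squares estimate of sigma_v. Because gamma_t is proportional
    to sigma_v^2, the best-fit amplitude of a unit-sigma_v model is found
    in closed form, and sigma_v is its square root."""
    model_unit = sis_tangential_shear(theta_arcmin, 1.0, d_ls_over_d_s)
    amp = np.sum(gamma_obs * model_unit) / np.sum(model_unit ** 2)
    return np.sqrt(amp)  # km/s, since the template used sigma_v = 1 km/s
```

In practice each radial bin would carry its own shape-noise weight; the unweighted sum here keeps the closed-form structure visible.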
Engaging students in scenario-based assessment for final exams
We present our approaches to enhancing the authenticity of final exams across large first-year, first-semester biology units with cohort sizes of 300 to 1200 students. Historically, exams were used primarily as an instrument for assessing knowledge retention, with limited provision of feedback to students. The necessity of shifting to online learning during the height of the COVID-19 pandemic provided a challenging yet opportune moment to transform our final examinations into an authentic learning experience for undergraduate biology students. We placed a large focus on integrating scenario-based questions into the final exam, thereby assessing students’ ability to apply knowledge to real-world contexts. To enhance engagement with the assessment, we also provided personalised feedback for each student. With additional challenges around access to artificial intelligence and academic integrity, we share our experiences returning to in-person final examinations and evaluate the relevance and benefits of scenario-based questions for student assessment and learning. We also share our feed-forward initiatives to prepare students for an examination format different from what most students would have experienced in their secondary schooling.
Reconstruction Analysis of Galaxy Redshift Surveys: A Hybrid Reconstruction Method
In reconstruction analysis of galaxy redshift surveys, one works backwards
from the observed galaxy distribution to the primordial density field in the
same region, then evolves the primordial fluctuations forward in time with an
N-body code. This incorporates assumptions about the cosmological parameters,
the properties of primordial fluctuations, and the biasing relation between
galaxies and mass. These can be tested by comparing the reconstruction to the
observed galaxy distribution, and to peculiar velocity data. This paper
presents a hybrid reconstruction method that combines the "Gaussianization"
technique of Weinberg (1992) with the dynamical schemes of Nusser & Dekel (1992)
and Gramann (1993). We test the method on N-body simulations and on N-body mock
catalogs that mimic the depth and geometry of the Point Source Catalog Redshift
Survey and the Optical Redshift Survey. This method is more accurate than
Gaussianization or dynamical reconstruction alone. Matching the observed
morphology of clustering can limit the bias factor b, independent of Omega.
Matching the cluster velocity dispersions and z-space distortions of the
correlation function xi(s,mu) constrains the parameter beta=Omega^{0.6}/b.
Relative to linear or quasi-linear approximations, a fully non-linear
reconstruction makes more accurate predictions of xi(s,mu) for a given beta,
thus reducing the systematic biases of beta measurements and offering further
scope for breaking the degeneracy between Omega and b. It also circumvents the
cosmic variance noise that limits conventional analyses of xi(s,mu). It can
also improve the determination of Omega and b from joint analyses of redshift
& peculiar velocity surveys as it predicts the fully non-linear peculiar
velocity distribution at each point in z-space.
Comment: 72 pages including 33 figures, submitted to Ap
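The "Gaussianization" step of Weinberg (1992) is, at heart, a monotonic rank-order mapping of the observed density field onto a Gaussian one-point distribution. A minimal sketch of that step alone (illustrative only, not the paper's code) might look like:

```python
import numpy as np
from scipy.stats import norm

def gaussianize(field):
    """Monotonic rank-order mapping of a density field onto a Gaussian
    of zero mean and unit variance, preserving the spatial ordering of
    the values (the 'Gaussianization' step)."""
    flat = field.ravel()
    ranks = np.argsort(np.argsort(flat))       # each value's rank, 0..N-1
    # map ranks to Gaussian quantiles; the half-offset avoids +-infinity
    quantiles = (ranks + 0.5) / flat.size
    return norm.ppf(quantiles).reshape(field.shape)
```

The full hybrid method would then feed the Gaussianized field into a dynamical scheme to run the fluctuations back in time; only the monotonic remapping is shown here.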
Two-staging a comeback: A review of two-stage exams from 1996 to 2022
BACKGROUND
Two-stage examinations are an alternative to a traditional examination, where an individual examination is followed by a group examination, often on the same questions. With pandemic remote learning leading to a re-assessment of examination formats, we investigated previous research on two-stage exams to understand how these assessments have been delivered and received by students, and we make suggestions based on this research and our own experience for how to deliver these exams in a large-cohort introductory biology unit. This research was published in the International Journal of Innovation in Science and Mathematics Education (IJISME; Lee et al., 2022).
AIMS
We aimed to investigate trends in how two-stage exams were set, their discipline context, student performance and the student experience in studies published in the last ~25 years.
DESIGN AND METHODS
We performed a narrative literature review of research papers involving the use of two-stage examinations in STEM, from 1996 to 2022. From the 39 included studies we extracted data about the discipline, the weighting and timing of the group component, the type of questions asked, how groups were formed and the cohort size. We also extracted data on students’ responses: whether scores were higher in the group component, whether the exam improved understanding or retention, whether students favoured the format and whether stress was alleviated.
RESULTS
Trends were identified, with most surveyed exams using multiple-choice questions that were the same in the individual and the group component. Student feedback was very positive, and group component marks were almost always higher than individual component marks. However, results varied on improved understanding and reduction in stress, and few studies tested these factors.
CONCLUSIONS
Two-stage exams are well received by students, and group exams increase performance relative to individual exams. Further research is needed into measurable beneficial effects from the format. We provide our suggestions for implementing these examinations in a large introductory biology unit.
REFERENCE
Lee, T. R. C., Pye, M., Lilje, O., Nguyen, H. D., Hockey, S., de Bruyn, M. and van den Berg, F. T. (2022). Two-stage examinations in STEM: A narrative literature review. International Journal of Innovation in Science and Mathematics Education, 30(5), 73-90.
CMB component separation by parameter estimation
We propose a solution to the CMB component separation problem based on
standard parameter estimation techniques. We assume a parametric spectral model
for each signal component, and fit the corresponding parameters pixel by pixel
in a two-stage process. First we fit for the full parameter set (e.g.,
component amplitudes and spectral indices) in low-resolution and high
signal-to-noise ratio maps using MCMC, obtaining both best-fit values for each
parameter, and the associated uncertainty. The goodness-of-fit is evaluated by
a chi^2 statistic. Then we fix all non-linear parameters at their
low-resolution best-fit values, and solve analytically for high-resolution
component amplitude maps. This likelihood approach has many advantages: The
fitted model may be chosen freely, and the method is therefore completely
general; all assumptions are transparent; no restrictions on spatial variations
of foreground properties are imposed; the results may be rigorously monitored
by goodness-of-fit tests; and, most importantly, we obtain reliable error
estimates on all estimated quantities. We apply the method to simulated Planck
and six-year WMAP data based on realistic models, and show that separation at
the muK level is indeed possible in these cases. We also outline how the
foreground uncertainties may be rigorously propagated through to the CMB power
spectrum and cosmological parameters using a Gibbs sampling technique.
Comment: 20 pages, 10 figures, submitted to ApJ. For a high-resolution version, see http://www.astro.uio.no/~hke/docs/eriksen_et_al_fgfit.p
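The first stage of the fit can be illustrated with a toy version for a single pixel: a flat (thermodynamic) CMB amplitude plus one power-law foreground. Here a brute-force grid over the spectral index stands in for the MCMC of the paper, exploiting the fact that for a fixed index the amplitudes enter linearly; the function name, frequencies, and two-component model are illustrative assumptions:

```python
import numpy as np

def fit_pixel(freqs_ghz, data, noise_rms, betas=np.linspace(-4.0, -1.0, 61)):
    """Per-pixel two-component spectral fit: a constant CMB amplitude plus
    a power-law foreground A_fg * (nu / nu_0)^beta. For each trial beta the
    amplitudes are linear parameters, so we solve a weighted least-squares
    system and keep the beta that minimises chi^2.
    noise_rms must be an array, one rms per frequency channel."""
    nu0 = freqs_ghz[0]
    w = 1.0 / noise_rms ** 2
    best = None
    for beta in betas:
        # design matrix: column 0 = CMB (flat), column 1 = foreground
        A = np.column_stack([np.ones_like(freqs_ghz),
                             (freqs_ghz / nu0) ** beta])
        amps = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * data))
        chi2 = np.sum(w * (data - A @ amps) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, amps[0], amps[1], beta)
    return best  # (chi2, cmb_amplitude, foreground_amplitude, beta)
```

The second stage of the method would then fix beta at its low-resolution best-fit value, leaving only the linear solve to be repeated on the high-resolution maps.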
Statistical Determination of Bulk Flow Motions
We present here a new parameterization for the bulk motions of galaxies and
clusters (in the linear regime) that can be measured statistically from the
shape and amplitude of the two-dimensional two-point correlation function. We
further propose the one-dimensional velocity dispersion (v_p) of the bulk flow
as a complementary measure of redshift-space distortions, which is
model-independent and not dependent on the normalisation method. As a
demonstration, we have applied our new methodology to the C4 cluster catalogue
constructed from Data Release Three (DR3) of the Sloan Digital Sky Survey. We
find v_p=270^{+433}km/s (also consistent with v_p=0) for this cluster sample
(at z=0.1), which is in agreement with that predicted for a WMAP5-normalised
LCDM model (i.e., v_p(LCDM)=203 km/s). This measurement does not lend support to
recent claims of excessive bulk motions (\simeq1000 km/s) which appear in
conflict with LCDM, although our large statistical error cannot rule them out.
From the measured coherent evolution of v_p, we develop a technique to
reconstruct the perturbed potential, as well as to estimate the unbiased matter
density fluctuations and scale-independent bias.
Comment: 8 pages, 5 figures
On the cosmological mass function theory
This paper provides, on the one hand, a review of the theory of the
cosmological mass function from a theoretical point of view, starting from the
seminal paper of Press & Schechter (1974) and extending to the latest
developments (Del Popolo & Gambera (1998, 1999), Sheth & Tormen 1999 (ST),
Sheth, Mo & Tormen 2001 (ST1), Jenkins et al. 2001 (J01), Sheth & Tormen 2002
(ST2), Del Popolo 2002a, Yagi et al. 2004 (YNY)), and, on the other hand, some
improvements on the multiplicity function models in the literature. ...
Comment: Astronomy Reports, in print
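For reference, the Press & Schechter and Sheth & Tormen multiplicity functions discussed above can be written down directly. This is a sketch using the commonly quoted ST parameters; note that normalisation conventions for f(nu) vary between papers:

```python
import numpy as np

def f_ps(nu):
    """Press & Schechter (1974) multiplicity function,
    with nu = delta_c / sigma(M)."""
    return np.sqrt(2.0 / np.pi) * nu * np.exp(-nu ** 2 / 2.0)

def f_st(nu, A=0.3222, a=0.707, p=0.3):
    """Sheth & Tormen (1999) multiplicity function with the
    commonly used parameter values (A, a, p)."""
    anu2 = a * nu ** 2
    return (A * np.sqrt(2.0 * anu2 / np.pi)
            * (1.0 + anu2 ** (-p)) * np.exp(-anu2 / 2.0))
```

Setting A = 1/2, a = 1, p = 0 recovers the Press & Schechter form up to normalisation; the ST modification boosts the high-mass tail and suppresses intermediate masses.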
The abundance of lensing protoclusters
Weak gravitational lensing provides a potentially powerful method for the
detection of clusters. In addition to cluster candidates, a large number of
objects with possibly no optical or X-ray component have been detected in
shear-selected samples. We develop an analytic model to investigate the claim
of Weinberg & Kamionkowski (2002) that unvirialised protoclusters account for a
significant number of these so-called "dark" lenses. In our model, a
protocluster consists of a small virialised region surrounded by in-falling
matter. We find that, in order for a protocluster to simultaneously escape
X-ray detection and create a detectable weak lensing signal, it must have a
small virial mass (~10^{13} M_sun) and large total mass (~10^{15} M_sun), with
a relatively flat density profile outside of the virial radius. Such objects
would be characterized by rising tangential shear profiles well beyond the
virial radius. We use a semi-analytic approach based on the excursion set
formalism to estimate the abundance of lensing protoclusters with a low
probability of X-ray detection. We find that they are extremely rare,
accounting for less than 0.4 per cent of the total lenses in a survey with
background galaxy density n = 30 arcmin^{-2} and an intrinsic ellipticity
dispersion of 0.3. We conclude that lensing protoclusters with undetectable
X-Ray luminosities are too rare to account for a significant number of dark
lenses.
Comment: 18 pages, 10 figures, version accepted by MNRAS (minor changes in response to referee).
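The quoted survey parameters translate into a simple back-of-the-envelope detection significance: with n = 30 galaxies arcmin^{-2} and intrinsic ellipticity dispersion 0.3, the noise on the mean tangential shear in an annulus is roughly sigma_e/sqrt(N). The following sketch (an illustrative estimate, not the paper's semi-analytic calculation) adds annuli in quadrature:

```python
import numpy as np

def shear_snr(bin_edges_arcmin, gamma_t, n_gal=30.0, sigma_e=0.3):
    """Rough detection significance of a tangential shear profile.
    bin_edges_arcmin: annulus edges (length = len(gamma_t) + 1);
    gamma_t: mean tangential shear in each annulus.
    The noise per annulus is sigma_e / sqrt(N_gal), and the per-annulus
    S/N values are combined in quadrature."""
    r1, r2 = bin_edges_arcmin[:-1], bin_edges_arcmin[1:]
    n_per_bin = n_gal * np.pi * (r2 ** 2 - r1 ** 2)  # galaxies per annulus
    snr_bins = gamma_t * np.sqrt(n_per_bin) / sigma_e
    return np.sqrt(np.sum(snr_bins ** 2))
```

A protocluster's slowly falling shear profile keeps contributing signal at large radii, which is why the rising cumulative S/N beyond the virial radius is the distinguishing signature mentioned above.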