Incorporating measurement error in n=1 psychological autoregressive modeling
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data biases the autoregressive parameters. We discuss two models that take measurement error into account: an autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models under both a Bayesian and a frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
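The attenuation described above can be seen in a few lines of simulation. This is an illustrative sketch, not the paper's study design: a latent AR(1) process is observed with additive white noise (the AR+WN structure), and the naive lag-1 autocorrelation of the observed series underestimates the true autoregressive parameter. The parameter values are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent AR(1) process observed with additive white measurement noise
# (the AR+WN structure); phi and the noise levels are illustrative.
n, phi = 2000, 0.6
innov_sd, meas_sd = 1.0, 1.0
latent = np.zeros(n)
for t in range(1, n):
    latent[t] = phi * latent[t - 1] + rng.normal(0.0, innov_sd)
observed = latent + rng.normal(0.0, meas_sd, n)

def lag1_autocorr(x):
    x = x - x.mean()
    return float((x[:-1] * x[1:]).sum() / (x * x).sum())

# Theory: the naive estimate shrinks toward zero by the reliability factor
# var(latent) / (var(latent) + meas_sd**2), roughly 0.61 for these values.
print(lag1_autocorr(latent))    # close to phi = 0.6
print(lag1_autocorr(observed))  # noticeably attenuated toward zero
```

The shrinkage factor is exactly the share of observed variance that is not measurement error, which is why the 30-50% error variance found in the mood data translates into substantial underestimation.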
Full Stokes polarimetric observations with a single-dish radio-telescope
The study of the linear and circular polarization in AGN allows one to gain
detailed information about the properties of the magnetic fields in these
objects. However, especially the observation of circular polarization (CP) with
single-dish radio-telescopes is usually difficult because of the weak signals
to be expected. Normally CP is derived as the (small) difference of two large
numbers (LHC and RHC); hence an accurate calibration is absolutely necessary.
Our aim is to improve the calibration accuracy to include the Stokes parameter
V in the common single-dish polarimetric measurements, allowing a full Stokes
study of the source under examination. A detailed study, up to the 2nd order,
of the Mueller matrix elements in terms of cross-talk components allows us to
reach the accuracy necessary to study circular polarization. The new
calibration method has been applied to data taken at the 100-m Effelsberg
radio-telescope during regular test observations of extragalactic sources at
2.8, 3.6, 6 and 11 cm. The D-terms in phase and amplitude appear very stable
with time and the few known values of circular polarization have been
confirmed. It is shown that, whenever a classical receiver and a multiplying
polarimeter are available, the proposed calibration scheme allows one to
include Stokes V in standard single-dish polarimetric observations as
the difference of two native circular outputs.
Comment: 10 pages, to be published in A&A.
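The "small difference of two large numbers" problem can be illustrated with a toy computation on the two circular outputs. The convention below (RR* = I + V, LL* = I − V, RL* = Q + iU, up to normalization) is one common choice, and all numbers are illustrative; this is not the Effelsberg pipeline, which additionally applies the Mueller-matrix D-term corrections described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target Stokes parameters of a hypothetical weakly polarized source.
I_true, Q_true, U_true, V_true = 1.0, 0.03, 0.02, 0.005

# Coherency matrix in the circular basis, one common convention:
#   <R R*> = I + V,  <L L*> = I - V,  <R L*> = Q + iU.
coh = np.array([[I_true + V_true, Q_true + 1j * U_true],
                [Q_true - 1j * U_true, I_true - V_true]])

# Draw correlated R/L voltage streams with this coherency.
n = 1_000_000
Lchol = np.linalg.cholesky(coh / 2.0)
z = rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))  # unit-coherency noise
v = Lchol @ z
R, L = v[0], v[1]

RR = np.mean(np.abs(R) ** 2)
LL = np.mean(np.abs(L) ** 2)
RL = np.mean(R * np.conj(L))

I = (RR + LL) / 2
V = (RR - LL) / 2        # the small difference of two large numbers
Q = RL.real
U = RL.imag
print(I, Q, U, V)
```

Because V is of order 0.5% of I here, even a small multiplicative gain error between the RR and LL channels would swamp it, which is why the abstract's second-order D-term calibration is needed in practice.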
Capillary pressure of van der Waals liquid nanodrops
The dependence of the surface tension on a nanodrop radius is important for
the new-phase formation process. It is demonstrated that the famous Tolman
formula is not unique and the size dependence of the surface tension can
differ between systems. The analysis is based on a relationship between
the surface tension and disjoining pressure in nanodrops. It is shown that the
van der Waals interactions do not affect the new-phase formation thermodynamics
since the effects of the disjoining pressure and of the size-dependent component of the
surface tension cancel each other.
Comment: The paper is dedicated to the 80th anniversary of A.I. Rusanov.
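For reference, the standard first-order form of the Tolman formula the abstract refers to, together with the Laplace capillary pressure (textbook background, not the paper's generalized result):

```latex
\sigma(R) \simeq \sigma_{\infty}\left(1 - \frac{2\delta}{R}\right),
\qquad
P_{\mathrm{cap}} = \frac{2\,\sigma(R)}{R},
```

where $\sigma_{\infty}$ is the planar surface tension and $\delta$ the Tolman length. The abstract's point is precisely that this form is not unique: the effective size dependence can differ between systems once the disjoining pressure is accounted for.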
Polarization leakage in epoch of reionization windows – I. Low Frequency Array observations of the 3C196 field
Detection of the 21-cm signal coming from the epoch of reionization (EoR) is challenging especially because, even after removing the foregrounds, the residual Stokes I maps contain leakage from polarized emission that can mimic the signal. Here, we discuss the instrumental polarization of LOFAR and present realistic simulations of the leakage between Stokes parameters. From the LOFAR observations of polarized emission in the 3C196 field, we have quantified the level of polarization leakage caused by the nominal model beam of LOFAR, and compared it with the EoR signal using power spectrum analysis. We found that at 134–166 MHz, within the central 4° of the field, the (Q,U)→I leakage power is lower than the EoR signal at k < 0.3 Mpc⁻¹. The leakage was found to be localized around a Faraday depth of 0, and the rms of the leakage as a fraction of the rms of the polarized emission was shown to vary between 0.2–0.3%, both of which could be utilized in the removal of leakage. Moreover, we could define an 'EoR window' in terms of the polarization leakage in the cylindrical power spectrum above the PSF-induced wedge and below k∥ ∼ 0.5 Mpc⁻¹, and the window extended up to k∥ ∼ 1 Mpc⁻¹ at all k⊥ when 70% of the leakage had been removed. These LOFAR results show that even a modest polarimetric calibration over a field of view of ≲4° in future arrays like the SKA will ensure that the polarization leakage remains well below the expected EoR signal at scales of 0.02–1 Mpc⁻¹.
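A toy version of such an "EoR window" selection in the cylindrical (k⊥, k∥) plane can be sketched as follows; the wedge slope is an illustrative value we chose, not a number from the paper.

```python
import numpy as np

# Toy 'EoR window' selection in the cylindrical (k_perp, k_par) plane:
# keep modes above the foreground/PSF wedge and above the region where
# leakage power concentrates (below k_par ~ 0.5 / Mpc, per the abstract).
# The wedge slope is an illustrative value, not a number from the paper.
def in_eor_window(k_perp, k_par, wedge_slope=3.5, k_par_min=0.5):
    return (k_par > wedge_slope * k_perp) & (k_par > k_par_min)

k_perp = np.array([0.01, 0.05, 0.10])   # Mpc^-1
k_par = np.array([0.60, 0.10, 1.00])    # Mpc^-1
print(in_eor_window(k_perp, k_par))     # first and third modes qualify
```

Removing a fraction of the leakage, as in the 70% case quoted above, effectively lowers `k_par_min` and enlarges the usable window.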
A very brief description of LOFAR - the Low Frequency Array
LOFAR (Low Frequency Array) is an innovative radio telescope optimized for
the frequency range 30-240 MHz. The telescope is realized as a phased aperture
array without any moving parts. Digital beam forming allows the telescope to
point to any part of the sky within a second. Transient buffering makes
retrospective imaging of explosive short-term events possible. The scientific
focus of LOFAR will initially be on four key science projects (KSPs): 1)
detection of the formation of the very first stars and galaxies in the universe
during the so-called epoch of reionization by measuring the power spectrum of
the neutral hydrogen 21-cm line (Shaver et al. 1999) on the ~5' scale; 2)
low-frequency surveys of the sky, with large numbers of new sources expected; 3)
all-sky monitoring and detection of transient radio sources such as gamma-ray
bursts, x-ray binaries, and exo-planets (Farrell et al. 2004); and 4) radio
detection of ultra-high energy cosmic rays and neutrinos (Falcke & Gorham 2003)
allowing for the first time access to particles beyond 10^21 eV (Scholten et
al. 2006). Apart from the KSPs, open access for smaller projects is also
planned. Here we give a brief description of the telescope.
Comment: 2 pages, IAU GA 2006, Highlights of Astronomy, Volume 14, K.A. van der Hucht, ed.
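The software pointing described above rests on standard narrowband phased-array beamforming, which can be sketched in a few lines. The 4×4 element layout and single 150 MHz tone below are illustrative choices, not an actual LOFAR station configuration.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Narrowband digital beamforming: per-element phase weights steer a fixed
# dipole array toward a chosen direction purely in software.
def direction_vector(az_rad, el_rad):
    return np.array([np.cos(el_rad) * np.cos(az_rad),
                     np.cos(el_rad) * np.sin(az_rad),
                     np.sin(el_rad)])

def steering_weights(positions, freq_hz, az_rad, el_rad):
    delays = positions @ direction_vector(az_rad, el_rad) / C  # s per element
    return np.exp(-2j * np.pi * freq_hz * delays)

# Illustrative 4x4 array with 5 m spacing, steered to (az, el) = (0.3, 1.0) rad.
pos = 5.0 * np.array([[x, y, 0.0] for x in range(4) for y in range(4)])
w = steering_weights(pos, 150e6, 0.3, 1.0)

# A unit plane wave from the steered direction sums fully coherently...
wave = np.exp(2j * np.pi * 150e6 * (pos @ direction_vector(0.3, 1.0)) / C)
on_axis = np.abs(np.sum(w * wave)) / len(w)

# ...while one from another direction is attenuated by the array response.
wave_off = np.exp(2j * np.pi * 150e6 * (pos @ direction_vector(1.2, 0.4)) / C)
off_axis = np.abs(np.sum(w * wave_off)) / len(w)
print(on_axis, off_axis)
```

Re-pointing is just recomputing `w`, which is why an aperture array with no moving parts can slew anywhere on the sky within a second.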
Polarized point sources in the LOFAR Two-meter Sky Survey: A preliminary catalog
The polarization properties of radio sources at very low frequencies have not been well characterized. We have surveyed a region covering h45m–15h30m in right ascension and 45°–57° in declination (570 square degrees) and produced a catalog of 92 polarized radio sources at 150 MHz at 4.3′ resolution and 1 mJy rms sensitivity, which is the largest catalog of polarized sources at such low frequencies. We estimate a lower limit to the polarized source surface density at 150 MHz, with our resolution and sensitivity, of one source per 6.2 square degrees. We find that our Faraday depth measurements agree with previous measurements and have significantly smaller errors. Most of our sources show significant depolarization compared to 1.4 GHz, but there is a small population of sources with low depolarization, indicating that their polarized emission is highly localized in Faraday depth. We predict that an extension of this work to the full LoTSS data would detect at least 3400 polarized sources using the same methods, and probably considerably more with improved data processing.
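Faraday depth measurements of this kind are commonly made via RM synthesis: the complex polarization sampled across λ² is transformed to Faraday depth and the peak located. A minimal sketch with an illustrative frequency setup and source; the survey's actual pipeline and parameters are not reproduced here.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Minimal RM synthesis sketch: locate a source's Faraday depth as the
# peak of the Faraday dispersion function. Band and source are toy values.
freqs = np.linspace(120e6, 168e6, 64)           # LOFAR-like band, Hz
lam2 = (C / freqs) ** 2                         # wavelength squared, m^2

phi_true = 12.0                                 # Faraday depth, rad m^-2
P = 0.05 * np.exp(2j * phi_true * lam2)         # complex polarization Q + iU

phi_grid = np.arange(-100.0, 100.5, 0.5)        # trial Faraday depths
F = np.array([np.mean(P * np.exp(-2j * phi * lam2)) for phi in phi_grid])
phi_peak = phi_grid[np.argmax(np.abs(F))]
print(phi_peak)  # recovers phi_true = 12.0
```

The wide λ² coverage at these low frequencies is what gives the narrow Faraday depth response, and hence the "significantly smaller errors" noted above.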
First LOFAR observations at very low frequencies of cluster-scale non-thermal emission: the case of Abell 2256
Abell 2256 is one of the best known examples of a galaxy cluster hosting
large-scale diffuse radio emission that is unrelated to individual galaxies. It
contains both a giant radio halo and a relic, as well as a number of head-tail
sources and smaller diffuse steep-spectrum radio sources. The origin of radio
halos and relics is still being debated, but over the last years it has become
clear that the presence of these radio sources is closely related to galaxy
cluster merger events. Here we present the results from the first LOFAR Low
band antenna (LBA) observations of Abell 2256 between 18 and 67 MHz. To our
knowledge, the image presented in this paper at 63 MHz is the deepest ever
obtained at frequencies below 100 MHz in general. Both the radio halo and the
giant relic are detected in the image at 63 MHz, and the diffuse radio emission
remains visible at frequencies as low as 20 MHz. The observations confirm the
presence of a previously claimed ultra-steep spectrum source to the west of the
cluster center with a spectral index of -2.3 ± 0.4 between 63 and 153 MHz.
The steep spectrum suggests that this source is an old part of a head-tail
radio source in the cluster. For the radio relic we find an integrated spectral
index of -0.81 ± 0.03, after removing the flux contribution from the other
sources. This is relatively flat which could indicate that the efficiency of
particle acceleration at the shock substantially changed in the last ~0.1
Gyr due to an increase of the shock Mach number. In an alternative scenario,
particles are re-accelerated by some mechanism in the downstream region of the
shock, resulting in the relatively flat integrated radio spectrum. In the radio
halo region we find indications of low-frequency spectral steepening which may
suggest that relativistic particles are accelerated in a rather inhomogeneous
turbulent region.
Comment: 13 pages, 13 figures, accepted for publication in A&A on April 12, 2012.
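The spectral indices quoted above are two-point power-law slopes, with the convention S ∝ ν^α. A minimal sketch; the 63 MHz flux density below is hypothetical, chosen only to reproduce the relic's α = -0.81 between 63 and 153 MHz.

```python
import math

# Two-point spectral index alpha with the convention S ∝ nu^alpha
# (negative alpha = steep spectrum, as used in the abstract).
def spectral_index(s1, nu1, s2, nu2):
    return math.log(s2 / s1) / math.log(nu2 / nu1)

s_63 = 10.0                                # Jy, hypothetical flux at 63 MHz
s_153 = s_63 * (153 / 63) ** -0.81         # implied flux at 153 MHz
alpha = spectral_index(s_63, 63e6, s_153, 153e6)
print(round(alpha, 2))  # -0.81
```

With this convention the ultra-steep source (α ≈ -2.3) fades rapidly toward higher frequencies, which is why it only stands out in low-frequency LBA images.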
The Gaussian graphical model in cross-sectional and time-series data
We discuss the Gaussian graphical model (GGM; an undirected network of
partial correlation coefficients) and detail its utility as an exploratory data
analysis tool. The GGM shows which variables predict one another, allows for
sparse modeling of covariance structures, and may highlight potential causal
relationships between observed variables. We describe its utility in three kinds of
psychological datasets: datasets in which consecutive cases are assumed
independent (e.g., cross-sectional data), temporally ordered datasets (e.g., n
= 1 time series), and a mixture of the two (e.g., n > 1 time series). In
time-series analysis, the GGM can be used to model the residual structure of a
vector-autoregression analysis (VAR), also termed graphical VAR. Two network
models can then be obtained: a temporal network and a contemporaneous network.
When analyzing data from multiple subjects, a GGM can also be formed on the
covariance structure of stationary means: the between-subjects network. We
discuss the interpretation of these models and propose estimation methods to
obtain these networks, which we implement in the R packages graphicalVAR and
mlVAR. The methods are showcased in two empirical examples, and simulation
studies on these methods are included in the supplementary materials.
Comment: Accepted pending revision in Multivariate Behavioral Research.
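The core GGM computation, standardizing the precision (inverse covariance) matrix into partial correlations, can be sketched directly. This is an unregularized illustration on simulated data, not the lasso-based estimation used in graphicalVAR/mlVAR.

```python
import numpy as np

rng = np.random.default_rng(2)

# GGM edge weights: partial correlations from the precision matrix K,
#   pcor_ij = -K_ij / sqrt(K_ii * K_jj)  for i != j.
def partial_correlations(data):
    K = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(K))
    pcor = -K / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# Chain X -> Y -> Z: X and Z correlate marginally, yet their partial
# correlation given Y is near zero, so the GGM draws no X-Z edge.
n = 50_000
x = rng.normal(size=n)
y = 0.7 * x + rng.normal(size=n)
z = 0.7 * y + rng.normal(size=n)
pcor = partial_correlations(np.column_stack([x, y, z]))
print(np.round(pcor, 2))
```

The vanishing X-Z entry illustrates the exploratory use described above: edges indicate conditional (not marginal) dependence, which is what makes the GGM suggestive of potential causal pathways.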
The Intrinsic Resolution Limit in the Atomic Force Microscope: Implications for Heights of Nano-Scale Features
Background: Accurate mechanical characterization by the atomic force microscope at the highest spatial resolution requires that topography is deconvoluted from indentation. The measured height of nanoscale features in the atomic force microscope (AFM) is almost always smaller than the true value, which is often explained away as sample deformation, the formation of salt deposits and/or dehydration. We show that the real height of nano-objects cannot be obtained directly: a result arising as a consequence of the local probe-sample geometry.
Methods and Findings: We have modeled the tip-surface-sample interaction as the sum of the interaction between the tip and the surface and the tip and the sample. We find that the dynamics of the AFM cannot differentiate between differences in force resulting from 1) the chemical and/or mechanical characteristics of the surface or 2) a step in topography due to the size of the sample; once the size of a feature becomes smaller than the effective area of interaction between the AFM tip and sample, the measured height is compromised. This general result is a major contributor to loss of height and can amount to up to ∼90% for nanoscale features. In particular, these very large values in height loss may occur even when there is no sample deformation, and, more generally, height loss does not correlate with sample deformation. DNA and IgG antibodies have been used as model samples where experimental height measurements are shown to closely match the predicted phenomena.
Conclusions: Being able to measure the true height of single nanoscale features is paramount in many nanotechnology applications, since phenomena and properties at the nanoscale critically depend on dimensions. Our approach allows accurate predictions of the true height of nanoscale objects and will lead to reliable mechanical characterization at the highest spatial resolution.
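The height-loss argument can be illustrated with a one-dimensional toy model in the spirit of the abstract's additive tip-surface plus tip-sample decomposition. The inverse-power force law, the setpoint, and the relative interaction strengths are our illustrative choices, not the paper's actual model.

```python
# Toy 1D constant-force AFM scan. The feedback holds the total tip force
# at a setpoint. Over a nano-object, the weak tip-object force adds to a
# still-dominant tip-substrate force, so the tip lifts by much less than
# the object's true height. Force law and constants are illustrative.

def total_force(z, h_obj, eps):
    """Repulsive force on the tip at height z above the substrate."""
    f = 1.0 / z ** 4                    # tip-substrate interaction
    if eps > 0.0:
        f += eps / (z - h_obj) ** 4     # weak tip-nano-object interaction
    return f

def feedback_height(h_obj, eps, f_set=1.0):
    """Tip height where the force equals the setpoint (bisection)."""
    lo, hi = h_obj + 1e-9, 10.0         # force decreases monotonically in z
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if total_force(mid, h_obj, eps) > f_set:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

z_bare = feedback_height(0.0, 0.0)      # over the bare substrate
z_obj = feedback_height(0.5, 0.05)      # over an object of true height 0.5
measured = z_obj - z_bare
print(measured)                          # well below the true height of 0.5
```

Note that the large apparent height loss arises with no sample deformation anywhere in the model, consistent with the abstract's point that height loss need not correlate with deformation.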