Developments in information technology and their implications for psychological research: Disruptive or diffusive change?
The notion of technology-induced disruptive change has generally been applied within academia to teaching and learning. Less explored is the disruption to research that occurs as mainstream technology develops. This article examines the effects of technological change on research in psychology, focussing in particular on the development of web-based empirical research procedures over the past 15 years or so. I discuss the history, challenges and potential of these developments, and put forward some qualified suggestions for the future directions that technology will allow research in psychology to take.
The warm circumstellar envelope and wind of the G9 IIb star HR 6902
IUE observations of the eclipsing binary system HR 6902 obtained at various
epochs spread over four years indicate the presence of warm circumstellar
material enveloping the G9 IIb primary. The spectra show Si IV and C IV
absorption up to a distance of 3.3 giant radii (R_g). Line ratio diagnostics
yield an electron temperature of ~78,000 K, which appears to be constant over
the observed height range.
Applying a least-squares absorption-line analysis, we derive column
densities as a function of height. We find that the inner envelope (< 3 R_g) of
the bright giant is consistent with a hydrostatic density distribution. The
derived line broadening velocity of ~70 km s^{-1} is sufficient to provide
turbulent pressure support for the required scale height. However, an improved
agreement with observations over the whole height regime including the emission
line region is obtained with an outflow model. We demonstrate that the common
beta power-law as well as a P \propto rho wind yield appropriate fit models.
Adopting a continuous mass outflow we obtain a mass-loss rate of M_loss = (0.8 -
3.4)*10^{-11} M_{sun} yr^{-1}, depending on the particular wind model.
Comment: 11 pages, 8 figures, submitted to Astronomy & Astrophysics main journal
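The following is a minimal, self-contained sketch (not taken from the paper) of how a beta power-law velocity profile combines with the continuity equation Mdot = 4 pi r^2 rho(r) v(r) to give a mass-loss rate; the stellar radius, terminal velocity, beta exponent and base density below are placeholder assumptions, not the values fitted for HR 6902.

    import numpy as np

    # Placeholder stellar/wind parameters -- illustrative only, not the HR 6902 fit.
    M_SUN_G = 1.989e33        # solar mass in g
    YEAR_S = 3.156e7          # year in s
    R_SUN_CM = 6.957e10       # solar radius in cm

    R_g = 30 * R_SUN_CM       # assumed giant radius
    v_inf = 70e5              # assumed terminal velocity, cm/s (~70 km/s)
    beta = 1.0                # assumed beta exponent
    rho_0 = 1e-17             # assumed density at the wind base, g/cm^3

    def v_wind(r, b=0.99):
        # Beta power-law velocity profile: v(r) = v_inf * (1 - b*R_g/r)**beta
        return v_inf * (1.0 - b * R_g / r) ** beta

    def mdot(r, rho):
        # Mass-loss rate from the continuity equation: Mdot = 4 pi r^2 rho v
        return 4.0 * np.pi * r**2 * rho * v_wind(r)

    r = 2.0 * R_g
    rho = rho_0 * (R_g / r) ** 2      # toy density fall-off, for illustration only
    print(mdot(r, rho) / M_SUN_G * YEAR_S, "M_sun/yr")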
Adobe Flash as a medium for online experimentation: a test of reaction time measurement capabilities
Adobe Flash can be used to run complex psychological experiments over the Web. We examined the reliability of using Flash to measure reaction times (RTs) using a simple binary-choice task implemented both in Flash and in a Linux-based system known to record RTs with millisecond accuracy. Twenty-four participants were tested in the laboratory using both implementations; they also completed the Flash version on computers of their own choice outside the lab. RTs from the trials run on Flash outside the lab were approximately 20 msec slower than those from trials run on Flash in the lab, which in turn were approximately 10 msec slower than RTs from the trials run on the Linux-based system (baseline condition). RT SDs were similar in all conditions, suggesting that although Flash may overestimate RTs slightly, it does not appear to add significant noise to the data recorded.
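As a minimal sketch (not the authors' analysis code) of the within-subjects comparison described above, the snippet below contrasts paired per-participant mean RTs and their SDs across two measurement conditions; the arrays and condition names are invented for illustration.

    import numpy as np
    from scipy import stats

    # Hypothetical per-participant mean RTs (ms) for two measurement conditions,
    # e.g. a millisecond-accurate baseline system vs. a Flash implementation.
    rt_baseline = np.array([412, 398, 455, 430, 389, 441, 417, 403])
    rt_flash    = np.array([423, 405, 468, 439, 401, 450, 429, 410])

    # Paired comparison of means: does the Flash condition overestimate RTs?
    t, p = stats.ttest_rel(rt_flash, rt_baseline)
    print(f"mean difference = {np.mean(rt_flash - rt_baseline):.1f} ms, "
          f"t = {t:.2f}, p = {p:.3f}")

    # Similar SDs across conditions would suggest added lag but little added noise.
    print(f"SD baseline = {rt_baseline.std(ddof=1):.1f} ms, "
          f"SD Flash = {rt_flash.std(ddof=1):.1f} ms")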
Using Adobe Flash Lite on mobile phones for psychological research: reaction time measurement reliability and inter-device variability
Mobile telephones have significant potential for use in psychological research, possessing unique characteristics—not least their ubiquity—that may make them useful tools for psychologists. We examined whether it is possible to measure reaction times (RTs) accurately using Adobe Flash Lite on mobile phones. We ran simple and choice RT experiments on two widely available mobile phones, a Nokia 6110 Navigator and a Sony Ericsson W810i, using a wireless application protocol (WAP) connection to access the Internet from the devices. RTs were compared within subjects with those obtained using a Linux-based millisecond-accurate measurement system. Results show that measured RTs were significantly longer on mobile devices, and that overall RTs and the distribution of RTs varied across devices.
The Fluctuating Intergalactic Radiation Field at Redshifts z = 2.3-2.9 from He II and H I Absorption towards HE 2347-4342
We provide an in-depth analysis of the He II and H I absorption in the
intergalactic medium (IGM) at redshifts z = 2.3-2.9 toward HE 2347-4342, using
spectra from the Far Ultraviolet Spectroscopic Explorer (FUSE) and the
Ultraviolet-Visual Echelle Spectrograph (UVES) on the VLT. Following
up on our earlier study (Kriss et al. 2001, Science, 293, 1112), we focus here
on two major topics: (1) small-scale variability (Delta z = 10^-3) in the ratio
eta = N(He II)/N(H I); and (2) an observed correlation of high-eta absorbers
(soft radiation fields) with voids in the (H I) Ly-alpha distribution. These
effects may reflect fluctuations in the ionizing sources on scales of 1 Mpc,
together with radiative transfer through a filamentary IGM whose opacity
variations control the penetration of 1-5 Ryd radiation over 30-40 Mpc
distances. Owing to photon statistics and backgrounds, we can measure optical
depths over the ranges 0.1 < tau(HeII) < 2.3 and 0.02 < tau(HI) < 3.9, and
reliably determine values of eta = 4 tau(HeII)/tau(HI) over the range 0.1 to
460. Values of eta = 20-200 are consistent with models of photoionization by
quasars with observed spectral indices alpha_s = 0-3. Values of eta > 200 may
require additional contributions from starburst galaxies, heavily filtered
quasar radiation, or density variations. Regions with eta < 30 may indicate the
presence of local hard sources. We find that eta is higher in "void" regions,
where H I is weak or undetected and 80% of the path length has eta > 100. These
voids may be ionized by soft sources (dwarf starbursts) or by QSO radiation
softened by escape from the AGN cores or transfer through the "cosmic web". The
apparent differences in ionizing spectra may help to explain the 1.45 Gyr lag
between the reionization epochs, z(HI) = 6.2 +/- 0.2 and z(HeII) = 2.8 +/- 0.2.
Comment: 27 pages, 7 figures, to appear in Ap
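The snippet below is a minimal sketch (not from the paper) of the working relation quoted above, eta = N(He II)/N(H I) ~= 4 tau(HeII)/tau(HI); the sample optical depths are invented but lie inside the measurable ranges given in the abstract.

    import numpy as np

    # Invented optical depths within the quoted measurable ranges
    # (0.1 < tau(HeII) < 2.3, 0.02 < tau(HI) < 3.9).
    tau_heii = np.array([0.50, 1.00, 2.00])
    tau_hi   = np.array([0.10, 0.05, 0.02])

    # eta ~= 4 * tau(HeII) / tau(HI); per the abstract, eta = 20-200 is consistent
    # with quasar photoionization, while eta > 200 may need additional soft sources.
    eta = 4.0 * tau_heii / tau_hi
    print(eta)   # -> [ 20.  80. 400.]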
The QSO evolution derived from the HBQS and other complete QSO surveys
An ESO Key programme dedicated to a Homogeneous Bright QSO Survey (HBQS) has
been completed. 327 QSOs (Mb<-23, 0.3<z<2.2) have been selected over 555 deg^2
with 15<B<18.75. For B<16.4 the QSO surface density turns out to be a factor
of 2.2 higher than that measured by the PG survey, corresponding to a surface
density of 0.013 +/- 0.006 deg^{-2}. If the Edinburgh QSO Survey is included, an
overdensity of a factor 2.5 is observed, corresponding to a density of
0.016 +/- 0.005 deg^{-2}. In order to derive the QSO optical luminosity function
(LF) we used Monte Carlo simulations that take into account the selection
criteria, photometric errors and QSO spectral slope distribution. The LF can be
represented by a Pure Luminosity Evolution (L(z) \propto (1+z)^k) of a
two-power-law form, both for q_0=0.5 and q_0=0.1. For q_0=0.5, k=3.26, slower
than the previous estimate of k=3.45 by Boyle (1992). A flatter slope
beta=-3.72 for the bright part of the LF is also required. The observed
overdensity of bright QSOs is concentrated at z<0.6. We find that in the
range 0.3<z<0.6 the
luminosity function is flatter than observed at higher redshifts. In this
redshift range, for Mb<-25, 32 QSOs are observed instead of 19 expected from
our best-fit PLE model. This feature requires a luminosity-dependent luminosity
evolution in order to satisfactorily represent the data in the whole 0.3<z<2.2
interval.
Comment: Invited talk in "Wide Field Spectroscopy" (20-24 May 1996, Athens),
eds. M. Kontizas et al. 6 pages and 3 eps figures, LaTeX file, uses epfs.sty
and crckapb.sty (included)
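As a minimal sketch (not the survey's code) of the model described above, the snippet below evaluates a two-power-law luminosity function with Pure Luminosity Evolution, L*(z) \propto (1+z)^k, using the quoted k=3.26 and bright-end slope beta=-3.72; the normalisation, z=0 break magnitude and faint-end slope are placeholder assumptions.

    import numpy as np

    k = 3.26             # evolution index quoted above (q_0 = 0.5)
    alpha_bright = -3.72 # bright-end slope quoted above
    beta_faint = -1.5    # assumed faint-end slope (placeholder)
    M_star0 = -22.5      # assumed break magnitude at z = 0 (placeholder)
    phi_star = 1e-6      # assumed normalisation, Mpc^-3 mag^-1 (placeholder)

    def phi(M, z):
        # Two-power-law LF with PLE: the break brightens with redshift as
        # M*(z) = M*(0) - 2.5 * k * log10(1 + z).
        M_star = M_star0 - 2.5 * k * np.log10(1.0 + z)
        dm = M - M_star
        return phi_star / (10.0 ** (0.4 * dm * (alpha_bright + 1))
                           + 10.0 ** (0.4 * dm * (beta_faint + 1)))

    # At fixed magnitude the space density rises with redshift under PLE.
    print(phi(-25.0, 0.5), phi(-25.0, 2.0))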
Alternative Weighting Schemes for ELMo Embeddings
ELMo embeddings (Peters et al., 2018) had a huge impact on the NLP community,
and many recent publications use these embeddings to boost the performance of
downstream NLP tasks. However, integrating ELMo embeddings into existing NLP
architectures is not straightforward. In contrast to traditional word
embeddings, like GloVe or word2vec embeddings, the bi-directional language
model of ELMo produces three 1024 dimensional vectors per token in a sentence.
Peters et al. proposed to learn a task-specific weighting of these three
vectors for downstream tasks. However, this proposed weighting scheme is not
feasible for certain tasks, and, as we will show, it does not necessarily yield
optimal performance. We evaluate different methods that combine the three
vectors from the language model in order to achieve the best possible
performance in downstream NLP tasks. We notice that the third layer of the
published language model often decreases the performance. By learning a
weighted average of only the first two layers, we are able to improve the
performance for many datasets. Due to the reduced complexity of the language
model, we obtain a training speed-up of 19-44% for the downstream tasks.
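A minimal PyTorch sketch (not the authors' implementation) of learning a task-specific weighted average over a subset of ELMo layer outputs, here only the first two of the three 1024-dimensional vectors per token, as discussed above; tensor shapes and the module name are assumptions.

    import torch
    import torch.nn as nn

    class LayerMix(nn.Module):
        # Learns softmax-normalised scalar weights over the selected layers plus
        # a global scale, and returns their weighted average per token.
        def __init__(self, num_layers: int = 2):
            super().__init__()
            self.scalars = nn.Parameter(torch.zeros(num_layers))  # one weight per layer
            self.gamma = nn.Parameter(torch.ones(1))              # global scaling factor

        def forward(self, layer_outputs: torch.Tensor) -> torch.Tensor:
            # layer_outputs: (num_layers, batch, seq_len, 1024)
            w = torch.softmax(self.scalars, dim=0)
            mixed = (w.view(-1, 1, 1, 1) * layer_outputs).sum(dim=0)
            return self.gamma * mixed

    # Toy usage: three ELMo layers are available, but only the first two are mixed.
    elmo_layers = torch.randn(3, 8, 20, 1024)   # fake activations for illustration
    mix = LayerMix(num_layers=2)
    token_embeddings = mix(elmo_layers[:2])     # drop the third layer
    print(token_embeddings.shape)               # torch.Size([8, 20, 1024])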
- …