
    A bias-free test of human temporal bisection: Evidence against bisection at the arithmetic mean

    The temporal bisection procedure has been used to assess theories of time perception. A problem with using the procedure to measure the perceived midpoint of two durations is that the spacing of the probe durations affects the value of the bisection point: linear spacing yields longer bisection points, closer to the arithmetic mean of the trained durations, than logarithmic spacing. In three experiments, the influence of the probe duration distribution was avoided by presenting a single probe duration at either the arithmetic or the geometric mean of the trained durations. The number of participants who categorised the arithmetic mean as long was significantly larger than the number who categorised it as short, whereas the numbers who categorised the geometric mean as short or as long did not differ significantly. This was true for trained durations of 0.4s vs. 1.6s (Experiments 1-3), 0.2s vs. 3.2s (Experiment 2) and 0.4s vs. 6.4s (Experiment 3). In Experiment 4, the probe trial distribution effect was replicated with logarithmically and linearly distributed probe durations, demonstrating that bisection occurs close to the arithmetic mean with linearly spaced probe durations. The results provide evidence against bisection at the arithmetic mean when probe spacing bias is avoided and are instead consistent with logarithmic encoding of time, or with a comparison rule based on relative rather than absolute differences. [Abstract copyright: © 2023. Published by Elsevier B.V.]
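
    The contrast between the two candidate bisection points is simple arithmetic: for trained durations S and L, the arithmetic mean is (S + L)/2 and the geometric mean is sqrt(S * L). A minimal sketch (Python; duration pairs taken from the abstract) makes the probe values concrete:

        from math import sqrt

        # Trained duration pairs (seconds) used across Experiments 1-3.
        pairs = [(0.4, 1.6), (0.2, 3.2), (0.4, 6.4)]

        for short, long_ in pairs:
            am = (short + long_) / 2    # arithmetic-mean probe
            gm = sqrt(short * long_)    # geometric-mean probe
            print(f"{short}s vs {long_}s: AM = {am:.2f}s, GM = {gm:.2f}s")

    Since the geometric mean is always shorter than the arithmetic mean, categorising the arithmetic-mean probe as "long" is what logarithmic encoding of time would predict.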

    Assessment of the calibration of gamma spectrometry systems in forest environments

    A Monte Carlo simulation was used to develop a model of the response of a portable gamma spectrometry system in forest environments. The model was used to evaluate the corrections needed for measurements of 137Cs activity per unit area calibrated assuming an open-field geometry; these were shown to be less than 20% for most forest environments. The model was also used to assess the impact of activity in the canopy on ground-level measurements. For similar activity per unit area in the lower parts of the canopy as on the ground, 10-25% of the ground-based measurement would be due to activity in the canopy, depending on the depth profile in the soil. The model verifies that an optional collimator cap can be used to assess activity in the canopy by repeat survey.
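
    As a rough illustration of the quoted figures, the canopy term can be treated as a fraction of the calibrated ground-level reading and subtracted. This is a sketch only: the reading below is hypothetical, and the 10-25% range applies under the abstract's assumption of similar activity per unit area in the lower canopy as on the ground.

        # Hypothetical calibrated 137Cs reading, kBq/m^2.
        measured = 100.0

        # Canopy contribution fractions quoted in the abstract; the
        # applicable value depends on the depth profile in the soil.
        for canopy_fraction in (0.10, 0.25):
            ground_only = measured * (1.0 - canopy_fraction)
            print(f"canopy {canopy_fraction:.0%}: "
                  f"ground deposit ~ {ground_only:.0f} kBq/m^2")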

    Review of nuclear data for naturally occurring radionuclides applied to environmental applications

    Accurate nuclear data, commonly taken from evaluated libraries, are essential in many applications, allowing confidence in derived parameters. An approach, not previously reported, is proposed to assess the confidence with which these data can be used: nuclear data presented by different evaluations are compared, variations between evaluations are taken as an indication of potential inaccuracies in the nuclear data or the evaluation procedure, and the relevant primary literature is then reviewed more fully. Applying this approach to naturally occurring radionuclides has identified eight radionuclides for which the evaluations differ significantly. Where recommended data are supported by a single set of high-precision measurements, independent verification of those measurements will increase confidence in the accuracy of the data (214Bi and 214Pb). Further measurements should be conducted where the decay schemes are incomplete (228Ac and 228Ra). For 40K, the mean beta energy in all the evaluations has been calculated using an incorrect shape factor, and the log ft values and branching ratios have been calculated using an inappropriate program. Precise measurements of beta spectra will allow the use of experimentally derived shape factors for the calculation of mean beta energies (40K and 210Bi). Parameters used for infinite matrix dose rate and geothermal heat production calculations have been derived from the data discussed here.
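
    The sensitivity of the mean beta energy to the assumed shape factor can be illustrated by integrating a beta spectrum with and without a shape-factor weight. The sketch below is illustrative only: the Fermi function is omitted, and a generic unique-forbidden factor stands in for the correct third-forbidden unique form of the 40K decay.

        import numpy as np

        E0 = 1.311                    # 40K beta endpoint energy, MeV
        me = 0.511                    # electron rest mass, MeV
        E = np.linspace(1e-4, E0 - 1e-4, 10_000)   # kinetic energy, MeV

        W = E / me + 1.0              # total energy, units of m_e c^2
        W0 = E0 / me + 1.0
        p = np.sqrt(W**2 - 1.0)       # momentum, units of m_e c

        def mean_energy(C):
            N = p * W * (W0 - W)**2 * C       # unnormalised spectrum
            return (E * N).sum() / N.sum()    # uniform grid: weights cancel

        allowed = np.ones_like(E)             # C(E) = 1 (allowed shape)
        unique = p**2 + (W0 - W)**2           # generic unique-forbidden C(E)
        print(f"allowed shape:          {mean_energy(allowed):.3f} MeV")
        print(f"unique-forbidden shape: {mean_energy(unique):.3f} MeV")

    The two means differ noticeably, which is the abstract's point: an incorrect shape factor propagates directly into the tabulated mean beta energy.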

    Localized vs distributed deformation associated with the linkage history of an active normal fault, Whakatane Graben, New Zealand

    The deformation associated with an active normal fault is investigated at a high temporal resolution (c. 10^4 yr). The Rangitaiki Fault (Whakatane Graben, New Zealand) and its adjacent faults accommodated an overall extension of ~0.83% oriented at ~N324°E over the past 17 kyr. This is consistent along strike, but the pattern of faulting that accommodates this strain defines two different spatial domains. To the SW, one domain is characterized by a few large faults, with >80% of strain localized onto geometrically and kinematically linked segments of the main fault. This produces marked heterogeneity in the spatial distribution of strain across the graben. In contrast, to the NE, a domain of distributed faulting is characterized by numerous small faults contributing to the overall deformation, with only ~35% of strain localized onto the Rangitaiki Fault. The transition from distributed to localized deformation is attributed to an increase in linkage maturity of the Rangitaiki Fault. Progressive strain localization has been ongoing within the network over the last 17 kyr, with localization of fault activity increasing by ~12%, indicating this process occurs over kyr time periods that reflect only a few earthquake events.
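
    The localization figures can be read as the fraction of summed fault displacement carried by the main fault along a transect. A minimal sketch, with hypothetical heave values chosen only to reproduce the quoted percentages:

        # Heave (horizontal extension) per fault along a transect, metres.
        # Values are hypothetical; only the calculation is the point.
        sw_heaves = [8.5, 0.6, 0.5, 0.4]                 # few large faults
        ne_heaves = [3.5, 1.5, 1.3, 1.2, 1.0, 0.8, 0.7]  # distributed faulting

        def localization(heaves):
            """Fraction of total extension on the largest fault."""
            return max(heaves) / sum(heaves)

        print(f"SW domain: {localization(sw_heaves):.0%} on main fault")
        print(f"NE domain: {localization(ne_heaves):.0%} on main fault")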

    Tackling concentrated worklessness: integrating governance and policy across and within spatial scales

    Spatial concentrations of worklessness remained a key characteristic of labour markets in advanced industrial economies, even during the period of decline in aggregate levels of unemployment and economic inactivity evident from the late 1990s to the economic downturn in 2008. The failure of certain localities to benefit from wider improvements in regional and national labour markets points to a lack of effectiveness in adopted policy approaches, not least in the governance arrangements and policy delivery mechanisms that seek to integrate residents of deprived areas into wider local labour markets. Through analysis of practice in the British context, we explore the difficulties of integrating economic and social policy agendas within and across spatial scales to tackle problems of concentrated worklessness. We present analysis of selected case studies aimed at reducing localised worklessness and identify the possibilities and constraints for effective action given existing governance arrangements and policy priorities to promote economic competitiveness and inclusion.

    Continual trials spontaneous recognition tasks in mice: reducing animal numbers and improving our understanding of the mechanisms underlying memory

    Spontaneous recognition tasks are widely used as laboratory measures of memory in animals but give rise to high levels of behavioural noise, leading to a lack of reliability. Previous work has shown that a modification of the procedure to allow continual trials testing (in which many trials are run within a single session) decreases behavioural noise and thus significantly reduces the number of rats required to retain statistical power. Here we demonstrate for the first time that this improved method of testing extends to mice, increasing the overall power of the approach. Moreover, our results show that the continual trials approach provides the additional benefit of heightened sensitivity and thus gives greater insight into the mechanisms at play. Standard (C57) and transgenic Alzheimer model (TASTPM) mice were tested at both 7 and 10 months of age in both object recognition (OR) and object location (OL) spontaneous recognition tasks using the continual trials methodology. Both C57 and TASTPM mice showed age-dependent changes in OR performance. While C57 mice also showed age-related changes in OL performance, TASTPM mice were unable to perform OL at either age. Significantly, we demonstrate that the difference in OL performance between C57 and TASTPM animals is a result of proactive interference rather than an absolute inability to recognise object-location combinations. We argue that these continual trials approaches provide improved reliability and better interpretation of the memory ability of mice, as well as a significant reduction in overall animal use.
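
    The animal-number saving follows from standard power arithmetic: the required group size scales with the square of the noise-to-effect ratio, so halving behavioural noise roughly quarters the sample. A sketch using the two-sample normal-approximation formula, n per group ~ 2 * (z_(1-alpha/2) + z_(1-beta))^2 * (sigma/delta)^2; the effect size and noise values below are hypothetical, not taken from the study:

        from statistics import NormalDist

        def n_per_group(delta, sigma, alpha=0.05, power=0.80):
            """Two-sample normal-approximation sample size per group."""
            z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
            return 2 * (z * sigma / delta) ** 2

        delta = 0.15                 # hypothetical group difference in discrimination ratio
        for sigma in (0.20, 0.10):   # single-trial vs continual-trials noise (hypothetical)
            print(f"sigma = {sigma:.2f}: n ~ {n_per_group(delta, sigma):.1f} per group")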

    Abundance profiles and cool cores in galaxy groups

    Using data from the Two-Dimensional XMM-Newton Group Survey (2dXGS), we have examined the abundance profile properties of both cool core (CC) and non cool core (NCC) galaxy groups. The ten NCC systems in our sample represent a population which to date has been poorly studied in the group regime. Fitting the abundance profiles as a linear function of log radius, we find steep abundance gradients in CC systems, with a slope of -0.54+/-0.07. In contrast, NCC groups have profiles consistent with uniform metallicity. Many CC groups show a central abundance dip or plateau, and we find evidence for an anticorrelation between the core abundance gradient and the 1.4 GHz radio power of the brightest group galaxy (BGG) in CC systems. This may indicate the effect of AGN-driven mixing within the central ~0.1 r_500. It is not possible to discern whether such behaviour is present in the NCC groups, owing to the small and diverse sample with the requisite radio data. The lack of strong abundance gradients in NCC groups, coupled with their lack of a cool core and evidence for enhanced substructure, leads us to favour merging as the mechanism for disrupting cool cores, although we cannot rule out disruption by a major AGN outburst. Given the implied timescales, the disruptive event must have occurred within the past few Gyr in most NCC groups.
    Comment: 15 pages, 12 figures, accepted for publication in MNRAS
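
    The profile parameterisation is a straight-line fit of abundance against log radius. A minimal sketch with synthetic data (generated here to have a gradient near the quoted -0.54; none of the points are from the survey):

        import numpy as np

        rng = np.random.default_rng(0)
        r = np.logspace(-2, 0, 12)              # radii in units of r_500
        Z_true = 0.3 - 0.54 * np.log10(r)       # illustrative profile, solar units
        Z_obs = Z_true + rng.normal(0.0, 0.05, r.size)

        slope, intercept = np.polyfit(np.log10(r), Z_obs, 1)
        print(f"fitted gradient dZ/dlog10(r) = {slope:.2f} solar units per dex")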