
    Stop-Signal Reaction Time Correlates With a Compensatory Balance Response

    Background: Response inhibition involves suppressing automatic but unwanted actions, which allows for behavioral flexibility. This capacity could theoretically contribute to fall prevention, especially in the cluttered environments we face daily. Although much has been learned from cognitive psychology regarding response inhibition, it is unclear whether such findings translate to the intensified challenge of coordinating balance recovery reactions.
    Research question: Is the ability to stop a prepotent response preserved when comparing performance on a standard test of response inhibition versus a reactive balance test in which compensatory steps must occasionally be suppressed?
    Methods: Twelve young adults completed a stop signal task and a reactive balance test separately. The stop signal task evaluates an individual’s ability to quickly suppress a visually-cued button press upon hearing a ‘stop’ tone, and provides a measure of the speed of response inhibition called the Stop Signal Reaction Time (SSRT). Reactive balance was tested by releasing participants from a supported lean position in situations where the environment was changed during visual occlusion. Once vision was restored, participants were required either to step to regain balance following cable release (70% of trials) or to suppress a step if an obstacle was present (30% of trials). The early muscle response of the stepping leg was compared between the ‘step blocked’ and ‘step allowed’ trials to quantify step suppression.
    Results: SSRT was correlated with muscle activation of the stepping leg when sufficient time was provided to view the response environment (400 ms). Individuals with faster SSRTs exhibited comparably less leg muscle activity when a step was blocked, signifying a superior ability to inhibit an unwanted step.
    Significance: Performance on a standardized test of response inhibition is related to performance on a reactive balance test where automated stepping responses must occasionally be inhibited. This highlights a generalizable neural mechanism for stopping action across different behavioral contexts.
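
    SSRT itself is not directly observable, so it is estimated from behavior. The sketch below illustrates the standard integration method (the nth go reaction time minus the mean stop-signal delay); the data, seed, and function name are purely illustrative, and the abstract does not state which estimation variant the authors used.

```python
import numpy as np

def estimate_ssrt(go_rts, stop_signal_delays, responded_on_stop):
    """Estimate SSRT with the integration method (a standard approach;
    the paper may have used a different variant)."""
    p_respond = np.mean(responded_on_stop)        # P(respond | stop signal)
    go_sorted = np.sort(go_rts)                   # go RT distribution
    # nth go RT whose percentile equals P(respond | stop)
    idx = int(np.ceil(p_respond * len(go_sorted))) - 1
    nth_rt = go_sorted[max(idx, 0)]
    return nth_rt - np.mean(stop_signal_delays)   # SSRT = nth RT - mean SSD

# Hypothetical data, for illustration only
go_rts = np.random.default_rng(0).normal(450, 60, 200)    # ms
ssds = np.full(60, 250.0)                                  # ms
responded = np.random.default_rng(1).random(60) < 0.5      # ~50% failed stops
print(f"SSRT ≈ {estimate_ssrt(go_rts, ssds, responded):.0f} ms")
```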

    Solution rheology of mesquite gum in comparison with gum arabic

    ©1995. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/ This document is the Accepted version of a Published Work that appeared in final form in Carbohydrate Polymers. To access the final edited and published work see https://doi.org/10.1016/0144-8617(95)00031-2
    Commercial samples of mesquite gum and food-grade gum arabic were purified by filtration, alcohol precipitation, and extensive dialysis, and their rheological properties were characterised over the full range of concentrations at which solutions could be prepared (up to ~50% w/w). Both gave typical solution-like mechanical spectra, with close Cox-Merz superposition of η(γ̇) and η*(ω) and only slight shear thinning at the highest accessible concentrations, and (ln η_rel)/c varied linearly with log c from below 2% w/w to above 50%. The intrinsic viscosity of mesquite gum ([η] ≈ 0.11 dl g⁻¹) was appreciably lower than that of gum arabic ([η] ≈ 0.19 dl g⁻¹ in 0.1 M NaCl at 20°C), and was independent of ionic strength above I ≈ 0.05, indicating a compact structure capable of only limited contraction. Departures from dilute-solution behaviour (η ~ c^1.4) occurred at c[η] ≈ 1 for both materials, with a progressive increase in concentration dependence at higher space-occupancy, behaviour typical of soft, deformable particles rather than of interpenetrating macromolecules. The increase in viscosity with increasing concentration was steeper for mesquite, consistent with evidence from size-exclusion chromatography and dynamic light scattering that the larger (and presumably more deformable) 'wattle blossom' component of gum arabic was absent from the mesquite gum sample.
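
    For readers outside rheology, an intrinsic viscosity such as the [η] ≈ 0.11 dl g⁻¹ quoted above is usually obtained by extrapolating the reduced viscosity η_sp/c to zero concentration. The sketch below shows that Huggins-type extrapolation on invented data points chosen only to land near the quoted value; it is not the authors' fitting procedure.

```python
import numpy as np

# Hypothetical dilute-solution data (concentration in g/dl, relative viscosity)
c = np.array([0.2, 0.4, 0.6, 0.8, 1.0])            # g/dl
eta_rel = np.array([1.023, 1.047, 1.072, 1.098, 1.125])

eta_sp = eta_rel - 1.0                              # specific viscosity
reduced = eta_sp / c                                # Huggins plot ordinate, dl/g

# Linear extrapolation of eta_sp/c to c = 0 gives the intrinsic viscosity [eta]
slope, intercept = np.polyfit(c, reduced, 1)
print(f"[eta] ≈ {intercept:.3f} dl/g (Huggins slope {slope:.3f})")
```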

    Diffusive propagation of cosmic rays from supernova remnants in the Galaxy. II: anisotropy

    We investigate the effects of stochasticity in the spatial and temporal distribution of supernova remnants on the anisotropy of cosmic rays observed at Earth. The calculations are carried out for different choices of the diffusion coefficient D(E) for propagation in the Galaxy. The propagation and spallation of nuclei are taken into account. At high energies we assume that $D(E)\sim(E/Z)^{\delta}$, with $\delta=1/3$ and $\delta=0.6$ being the reference scenarios. The large scale distribution of supernova remnants in the Galaxy is modeled following the distribution of pulsars, with and without accounting for the spiral structure of the Galaxy. Our calculations allow us to determine the contribution to anisotropy resulting from both the large scale distribution of SNRs in the Galaxy and the random distribution of the nearest remnants. The naive expectation that the anisotropy amplitude scales as D(E) is shown to be an oversimplification which is not reflected in the predicted anisotropy for any realistic distribution of the sources. The fluctuations in the anisotropy pattern are dominated by nearby sources, so that predicting or explaining the observed anisotropy amplitude and phase becomes close to impossible. We find however that the very weak energy dependence of the anisotropy amplitude below $10^{5}$ GeV, and its rise at higher energies, can best be explained if the diffusion coefficient is $D(E)\sim E^{1/3}$. Faster diffusion, for instance with $\delta=0.6$, leads in general to an exceedingly large anisotropy amplitude. The spiral structure introduces interesting trends in the energy dependence of the anisotropy pattern, which qualitatively reflect the trend seen in the data. For large values of the halo size we find that the anisotropy becomes dominated by the large scale regular structure of the source distribution, leading indeed to a monotonic increase of $\delta_A$ with energy.
    Comment: 21 pages, to appear in JCAP
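
    The "naive expectation" criticized above follows from the textbook diffusive dipole anisotropy, δ_A ≈ 3D(E)|∇N|/(cN), which scales linearly with D(E) when the gradient term is held fixed. The snippet below only illustrates how that scaling differs between δ = 1/3 and δ = 0.6; the normalization D0 and the gradient length are placeholder values, not parameters from the paper.

```python
import numpy as np

C_LIGHT = 3.0e10                 # cm/s
KPC = 3.086e21                   # cm

def dipole_anisotropy(E_GeV, delta, D0=3e28, E0=3.0, grad_scale_kpc=5.0):
    """Naive diffusive dipole: delta_A ~ 3 D(E) |grad N| / (c N).
    D0 [cm^2/s at E0 GeV], gradient scale and normalization are placeholders."""
    D = D0 * (E_GeV / E0) ** delta
    return 3.0 * D / (C_LIGHT * grad_scale_kpc * KPC)

for E in (1e3, 1e5, 1e7):        # GeV
    a13 = dipole_anisotropy(E, 1.0 / 3.0)
    a06 = dipole_anisotropy(E, 0.6)
    print(f"E = {E:.0e} GeV: delta_A ~ {a13:.1e} (delta=1/3) vs {a06:.1e} (delta=0.6)")
```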

    Diffusive propagation of cosmic rays from supernova remnants in the Galaxy. I: spectrum and chemical composition

    In this paper we investigate the effect of stochasticity in the spatial and temporal distribution of supernova remnants on the spectrum and chemical composition of cosmic rays observed at Earth. The calculations are carried out for different choices of the diffusion coefficient D(E) experienced by cosmic rays during propagation in the Galaxy. In particular, at high energies we assume that $D(E)\sim E^{\delta}$, with $\delta=1/3$ and $\delta=0.6$ being the reference scenarios. The large scale distribution of supernova remnants in the Galaxy is modeled following the distribution of pulsars, with and without accounting for the spiral structure of the Galaxy. We find that the stochastic fluctuations induced by the spatial and temporal distribution of supernovae, together with the effect of spallation of nuclei, lead to mild but noticeable violations of the simple, leaky-box-inspired rule that the spectrum observed at Earth is $N(E)\propto E^{-\alpha}$ with $\alpha=\gamma+\delta$, where $\gamma$ is the slope of the cosmic ray injection spectrum at the sources. Spallation of nuclei, even with the small rates appropriate for He, may account for slight differences in spectral slopes between different nuclei, providing a possible explanation for the recent CREAM observations. For $\delta=1/3$ we find that the slopes of the proton and helium spectra are $\sim 2.67$ and $\sim 2.6$, respectively, at energies above 1 TeV (to be compared with the measured values of $2.66\pm 0.02$ and $2.58\pm 0.02$). For $\delta=0.6$ the hardening of the He spectrum is not observed. We also comment on the effect of time dependence of the escape of cosmic rays from supernova remnants, and of a possible clustering of the sources in superbubbles. In a second paper we will discuss the implications of these different scenarios for the anisotropy of cosmic rays.
    Comment: 28 pages, to appear in JCAP
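
    As a quick consistency check on the leaky-box-inspired rule α = γ + δ quoted above, the snippet below inverts it to show which injection slopes γ the two diffusion scenarios would require to reproduce the measured proton slope. This is plain arithmetic, not a reproduction of the paper's stochastic calculation.

```python
# Leaky-box-inspired relation: observed slope alpha = injection slope gamma + delta
measured_alpha_p = 2.66          # measured proton slope quoted in the abstract

for delta in (1.0 / 3.0, 0.6):
    gamma = measured_alpha_p - delta
    print(f"delta = {delta:.2f}  ->  required injection slope gamma ≈ {gamma:.2f}")
# delta = 0.33 -> gamma ≈ 2.33; delta = 0.60 -> gamma ≈ 2.06
```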

    Nonclassical correlations of phase noise and photon number in quantum nondemolition measurements

    The continuous transition from a low resolution quantum nondemolition measurement of light field intensity to a precise measurement of photon number is described using a generalized measurement postulate. In the intermediate regime, quantization appears as a weak modulation of measurement probability. In this regime, the measurement result is strongly correlated with the amount of phase decoherence introduced by the measurement interaction. In particular, the accidental observation of half integer photon numbers preserves phase coherence in the light field, while the accidental observation of quantized values increases decoherence. The quantum mechanical nature of this correlation is discussed and the implications for the general interpretation of quantization are considered.
    Comment: 16 pages, 5 figures, final version to be published in Phys. Rev. A, Clarifications of the nature of the measurement result and the noise added in section I
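
    To make the "weak modulation of measurement probability" concrete, here is a toy model (not the authors' formalism): a photon-number readout with Gaussian resolution σ applied to a Poissonian field. For σ well below one photon the readout distribution is sharply peaked at integers; for σ of order one photon the peaks wash out, which is the intermediate regime discussed above. The toy reproduces only the readout-probability modulation, not the correlation with phase decoherence analyzed in the paper.

```python
import numpy as np
from scipy.stats import poisson, norm

nbar = 4.0                                   # mean photon number of the field (toy choice)
n = np.arange(0, 40)
p_n = poisson.pmf(n, nbar)                   # photon-number distribution of the field

def readout_density(m, sigma):
    """P(m) = sum_n p(n) N(m; n, sigma): photon-number readout with resolution sigma."""
    return float(np.sum(p_n * norm.pdf(m, loc=n, scale=sigma)))

for sigma in (0.2, 1.0):                     # precise vs low-resolution QND readout
    at_integer = readout_density(4.0, sigma)       # 'quantized' outcome
    at_half = readout_density(4.5, sigma)          # 'half-integer' outcome
    print(f"sigma = {sigma}: P(m=4)/P(m=4.5) ≈ {at_integer / at_half:.1f}")
```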

    SARS-CoV-2 testing in North Carolina: Racial, ethnic, and geographic disparities

    SARS-CoV-2 testing data in North Carolina during the first three months of the state's COVID-19 pandemic were analyzed to determine if there were disparities among intersecting axes of identity, including race, Latinx ethnicity, age, urban-rural residence, and residence in a medically underserved area. Demographic and residential data were used to reconstruct patterns of testing metrics (including tests per capita, positive tests per capita, and the test positivity rate, which is an indicator of testing sufficiency) across race-ethnicity groups and urban-rural populations separately. Across the entire sample, 13.1% (38,750 of 295,642) of tests were positive. Within racial-ethnic groups, 11.5% of all tests were positive among non-Latinx (NL) Whites, 22.0% for NL Blacks, and 66.5% for people of Latinx ethnicity. The test positivity rate was higher among people living in rural areas across all racial-ethnic groups. These results suggest that in the first three months of the COVID-19 pandemic, access to COVID-19 testing in North Carolina was not evenly distributed across racial-ethnic groups, especially for Latinx, NL Black and other historically marginalized populations, and that further disparities existed within these groups by gender, age, urban-rural status, and residence in a medically underserved area.
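
    The headline figures above are simple ratios of counts. The snippet below reproduces the overall positivity rate from the numbers quoted in the abstract and applies the same calculation to per-group counts; the per-group denominators are placeholders chosen only to mirror the quoted percentages, not the study's data.

```python
# Overall test positivity rate from the counts quoted in the abstract
positives, tests = 38_750, 295_642
print(f"Overall positivity: {100 * positives / tests:.1f}%")   # ~13.1%

# Same calculation per group; counts below are placeholders, not study data
group_counts = {
    "NL White": (11_500, 100_000),
    "NL Black": (8_800, 40_000),
    "Latinx":   (13_300, 20_000),
}
for group, (pos, tot) in group_counts.items():
    print(f"{group}: {100 * pos / tot:.1f}% positivity")
```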

    The Strong CP Problem and Axions

    I describe how the QCD vacuum structure, necessary to resolve the $U(1)_A$ problem, predicts the presence of a P, T and CP violating term proportional to the vacuum angle $\bar{\theta}$. To agree with experimental bounds, however, this parameter must be very small ($\bar{\theta} \leq 10^{-9}$). After briefly discussing some other possible solutions to this so-called strong CP problem, I concentrate on the chiral solution proposed by Peccei and Quinn, which has associated with it a light pseudoscalar particle, the axion. I discuss in detail the properties and dynamics of axions, focusing particularly on invisible axion models where axions are very light, very weakly coupled and very long-lived. Astrophysical and cosmological bounds on invisible axions are also briefly touched upon.
    Comment: 14 pages, to appear in the Lecture Notes in Physics volume on Axions (Springer Verlag)
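
    For reference, the P-, T- and CP-violating term referred to above is the QCD θ-term, and the Peccei-Quinn mechanism promotes θ̄ to a dynamical field. The expressions below are written from standard textbook conventions rather than taken from this particular lecture:

```latex
\mathcal{L}_{\theta} = \bar{\theta}\,\frac{g_s^{2}}{32\pi^{2}}\,
    G^{a}_{\mu\nu}\tilde{G}^{a\,\mu\nu},
\qquad
\text{Peccei--Quinn: } \bar{\theta} \;\to\; \bar{\theta} + \frac{a(x)}{f_a}.
```

    Minimization of the induced axion potential then drives ⟨a⟩ → −f_a θ̄, which is how the axion dynamically removes the strong CP violation.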

    Quantum measurement as driven phase transition: An exactly solvable model

    A model of quantum measurement is proposed, which aims to describe statistical mechanical aspects of this phenomenon, starting from a purely Hamiltonian formulation. The macroscopic measurement apparatus is modeled as an ideal Bose gas, the order parameter of which, that is, the amplitude of the condensate, is the pointer variable. It is shown that properties of irreversibility and ergodicity breaking, which are inherent in the model apparatus, ensure the appearance of definite results of the measurement, and provide a dynamical realization of wave-function reduction or collapse. The measurement process takes place in two steps: First, the reduction of the state of the tested system occurs over a time of order $\hbar/(T N^{1/4})$, where $T$ is the temperature of the apparatus and $N$ is the number of its degrees of freedom. This decoherence process is governed by the apparatus-system interaction. During the second step classical correlations are established between the apparatus and the tested system over the much longer time-scale of equilibration of the apparatus. The influence of the parameters of the model on non-ideality of the measurement is discussed. Schrödinger kittens, EPR setups and information transfer are analyzed.
    Comment: 35 pages, revtex
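
    To get a feel for the reduction time ħ/(T N^{1/4}) quoted above, here is a back-of-envelope evaluation with Boltzmann's constant reinstated; the temperature and particle number are illustrative choices, not values from the paper.

```python
HBAR = 1.054_571_8e-34      # J s
K_B = 1.380_649e-23         # J / K

def reduction_time(T_kelvin, N):
    """Order-of-magnitude estimate of the reduction time ~ hbar / (k_B T N^(1/4))."""
    return HBAR / (K_B * T_kelvin * N ** 0.25)

# Illustrative apparatus: a 1 K condensate with ~10^23 degrees of freedom
print(f"t_reduction ~ {reduction_time(1.0, 1e23):.1e} s")   # ~1e-17 s
```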

    Signature of small rings in the Raman spectra of normal and compressed amorphous silica: A combined classical and ab initio study

    We calculate the parallel (VV) and perpendicular (VH) polarized Raman spectra of amorphous silica. Model SiO2 glasses, uncompressed and compressed, were generated by a combination of classical and ab initio molecular-dynamics simulations and their dynamical matrices were computed within the framework of the density functional theory. The Raman scattering intensities were determined using the bond-polarizability model and good agreement with experimental spectra was found. We confirm that the modes associated with the fourfold and threefold rings produce most of the Raman intensity of the D1 and D2 peaks, respectively, in the VV Raman spectra. Modifications of the Raman spectra upon compression are found to be in agreement with experimental data. We show that the modes associated with the fourfold rings still exist upon compression but do not produce a strong Raman intensity, whereas the ones associated with the threefold rings do. This result strongly suggests that the area under the D1 and D2 peaks is not directly proportional to the concentration of small rings in amorphous SiO2.
    Comment: 21 pages, 8 figures. Phys. Rev. B, in press
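
    The VV/VH split used above comes from the standard isotropic averaging of each mode's polarizability-derivative tensor, which is exactly the quantity the bond-polarizability model supplies. The sketch below evaluates the usual invariants (I_VV ∝ 45a² + 4γ², I_VH ∝ 3γ²) for a single made-up mode tensor; it is not derived from the authors' glass models.

```python
import numpy as np

def vv_vh_activities(alpha_prime):
    """Raman activities from a symmetric 3x3 polarizability-derivative tensor (one mode).
    I_VV ~ 45 a^2 + 4 g^2, I_VH ~ 3 g^2 (standard isotropic-average invariants)."""
    a = np.trace(alpha_prime) / 3.0                  # mean (isotropic) part
    d = alpha_prime - a * np.eye(3)                  # traceless deviator
    g2 = 1.5 * np.sum(d * d)                         # anisotropy invariant gamma^2
    return 45.0 * a**2 + 4.0 * g2, 3.0 * g2

# Made-up symmetric tensor standing in for one vibrational mode
alpha_prime = np.array([[1.0, 0.1, 0.0],
                        [0.1, 0.8, 0.0],
                        [0.0, 0.0, 0.6]])
ivv, ivh = vv_vh_activities(alpha_prime)
print(f"I_VV : I_VH ≈ {ivv:.2f} : {ivh:.2f} (depolarization ratio {ivh / ivv:.3f})")
```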

    Parity proofs of the Bell-Kochen-Specker theorem based on the 600-cell

    The set of 60 real rays in four dimensions derived from the vertices of a 600-cell is shown to possess numerous subsets of rays and bases that provide basis-critical parity proofs of the Bell-Kochen-Specker (BKS) theorem (a basis-critical proof is one that fails if even a single basis is deleted from it). The proofs vary considerably in size, with the smallest having 26 rays and 13 bases and the largest 60 rays and 41 bases. There are at least 90 basic types of proofs, with each coming in a number of geometrically distinct varieties. The replicas of all the proofs under the symmetries of the 600-cell yield a total of almost a hundred million parity proofs of the BKS theorem. The proofs are all very transparent and take no more than simple counting to verify. A few of the proofs are exhibited, both in tabular form and in the form of MMP hypergraphs that assist in their visualization. A survey of the proofs is given, simple procedures for generating some of them are described and their applications are discussed. It is shown that all four-dimensional parity proofs of the BKS theorem can be turned into experimental disproofs of noncontextuality.
    Comment: 19 pages, 11 tables, 3 figures. Email address of first author has been corrected. Ref.[5] has been corrected, as has an error in Fig.3. Formatting error in Sec.4 has been corrected and the placement of tables and figures has been improved. A new paragraph has been added to Sec.4 and another new paragraph to the end of the Appendix
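
    The "simple counting" mentioned above is the parity argument: if a candidate set contains an odd number of bases while every ray occurs in an even number of them, no {0,1} assignment can give exactly one 1 per basis, because summing over bases yields an odd total while summing the even per-ray counts yields an even one. The checker below implements that counting; the toy input only exercises the bookkeeping and is not one of the 600-cell proofs, whose rays and bases are tabulated in the paper.

```python
from collections import Counter

def is_parity_proof(bases):
    """Check the parity conditions: an odd number of bases, with every ray
    appearing in an even number of them. If both hold, assigning one '1' per
    basis would make the sum over bases odd, while summing each ray's value
    times its even count is even -- a contradiction, so no noncontextual
    {0,1} assignment exists."""
    ray_counts = Counter(ray for basis in bases for ray in basis)
    odd_number_of_bases = len(bases) % 2 == 1
    every_ray_even = all(count % 2 == 0 for count in ray_counts.values())
    return odd_number_of_bases and every_ray_even

# Toy labels only -- exercises the counting, not a genuine set of orthogonal bases
toy_bases = [
    ("r1", "r2", "r3", "r4"),
    ("r1", "r2", "r5", "r6"),
    ("r3", "r4", "r5", "r6"),
]
print(is_parity_proof(toy_bases))   # True: 3 bases (odd), each ray appears twice
```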