Agreement of a Novel Vertical Jump System to Measure Vertical Jump Height: Brower Vertical Jump and Vertec Vertical Jump Systems
Topics in Exercise Science and Kinesiology Volume 2: Issue 1, Article 12, 2021. Validity refers to the ability of a device to measure what it was intended to measure. The purpose of this study was therefore to assess the validity and reliability of a novel vertical jump height tool designed by Brower Timing Systems (Salt Lake City, UT). The Brower vertical jump system was compared to the Vertec jump tester. A convenience sample (n = 67) of college students performed three maximum countermovement jumps, with the average score being recorded. Data were collected simultaneously for both devices. Results showed a strong and statistically significant correlation between the Vertec vertical jump tester and the Brower vertical jump system (r = 0.971, p < 0.001). A paired t-test showed no significant difference (p = 0.170, t = 1.386) between the two systems. An analysis of equivalence was also performed with alpha set at 0.05 and upper and lower bounds set at +/- 0.5. The observed effect was statistically not different from zero and statistically equivalent to zero. Based on the statistical analysis, it can be concluded that the Vertec and Brower vertical jump height systems are highly correlated and equivalent. The Brower system can therefore be an option for assessing vertical jump height; in particular, it may be useful in high-throughput field environments, such as testing teams or larger groups, while still providing valid data.
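For readers who want to reproduce this kind of agreement analysis, the sketch below combines the three steps described above (Pearson correlation, paired t-test, and a two one-sided tests equivalence procedure with +/- 0.5 bounds) using standard Python libraries. The data are simulated and all variable names are assumptions for illustration; this is not the study's dataset or code.

# Illustrative sketch of the agreement analysis described above (simulated data).
# Assumes paired jump-height measurements from the Vertec and Brower systems.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
vertec = rng.normal(24.0, 3.0, size=67)          # hypothetical Vertec heights (inches)
brower = vertec + rng.normal(0.0, 0.7, size=67)  # hypothetical Brower heights

# 1. Pearson correlation between the two systems
r, p_corr = stats.pearsonr(vertec, brower)

# 2. Paired t-test for a mean difference between systems
t_stat, p_paired = stats.ttest_rel(vertec, brower)

# 3. Equivalence test (TOST) with bounds of +/- 0.5
diff = brower - vertec
n = diff.size
se = diff.std(ddof=1) / np.sqrt(n)
t_low = (diff.mean() - (-0.5)) / se   # test against the lower bound -0.5
t_high = (diff.mean() - 0.5) / se     # test against the upper bound +0.5
p_tost = max(1 - stats.t.cdf(t_low, df=n - 1), stats.t.cdf(t_high, df=n - 1))

print(f"r = {r:.3f}, paired t p = {p_paired:.3f}, TOST p = {p_tost:.3f}")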
Use and Abuse of the Fisher Information Matrix in the Assessment of Gravitational-Wave Parameter-Estimation Prospects
The Fisher-matrix formalism is used routinely in the literature on
gravitational-wave detection to characterize the parameter-estimation
performance of gravitational-wave measurements, given parametrized models of
the waveforms, and assuming detector noise of known colored Gaussian
distribution. Unfortunately, the Fisher matrix can be a poor predictor of the
amount of information obtained from typical observations, especially for
waveforms with several parameters and relatively low expected signal-to-noise
ratios (SNR), or for waveforms depending weakly on one or more parameters, when
their priors are not taken into proper consideration. In this paper I discuss
these pitfalls; show how they occur, even for relatively strong signals, with a
commonly used template family for binary-inspiral waveforms; and describe
practical recipes to recognize them and cope with them.
Specifically, I answer the following questions: (i) What is the significance
of (quasi-)singular Fisher matrices, and how must we deal with them? (ii) When
is it necessary to take into account prior probability distributions for the
source parameters? (iii) When is the signal-to-noise ratio high enough to
believe the Fisher-matrix result? In addition, I provide general expressions
for the higher-order, beyond-Fisher-matrix terms in the 1/SNR expansions for
the expected parameter accuracies.
Comment: 24 pages, 3 figures; previously known as "A User Manual for the Fisher Information Matrix"; final, corrected PRD version.
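As a concrete illustration of the formalism being critiqued, the sketch below computes a Fisher matrix numerically for a toy parametrized signal in white Gaussian noise and inverts it to obtain the usual Cramér-Rao error estimates, flagging near-singular cases via the condition number. It is a deliberately simplified stand-in (white rather than colored noise, a damped sinusoid rather than a binary-inspiral template); the signal model, parameter values, and function names are assumptions for illustration.

# Toy Fisher-matrix estimate for a parametrized signal in white Gaussian noise.
# Simplified stand-in for the gravitational-wave case; all values illustrative.
import numpy as np

def signal(theta, t):
    """Damped sinusoid h(t; A, f, tau) as a stand-in for a waveform model."""
    amp, freq, tau = theta
    return amp * np.exp(-t / tau) * np.sin(2 * np.pi * freq * t)

def fisher_matrix(theta, t, sigma, eps=1e-6):
    """F_ij = sum_t (dh/dtheta_i)(dh/dtheta_j) / sigma^2, via central differences."""
    derivs = []
    for i in range(len(theta)):
        dp = np.array(theta, dtype=float)
        dm = np.array(theta, dtype=float)
        h = eps * max(abs(theta[i]), 1.0)
        dp[i] += h
        dm[i] -= h
        derivs.append((signal(dp, t) - signal(dm, t)) / (2 * h))
    D = np.vstack(derivs)
    return D @ D.T / sigma**2

t = np.linspace(0.0, 1.0, 4096)
theta_true = (1.0, 50.0, 0.3)   # amplitude, frequency [Hz], damping time [s]
sigma = 0.5                     # noise standard deviation per sample

F = fisher_matrix(theta_true, t, sigma)
cond = np.linalg.cond(F)        # large condition number -> (quasi-)singular Fisher matrix
cov = np.linalg.inv(F)          # Cramer-Rao estimate of the parameter covariance
print("condition number:", cond)
print("1-sigma errors:", np.sqrt(np.diag(cov)))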
Structural basis for antibiotic transport and inhibition in PepT2
The uptake and elimination of beta-lactam antibiotics in the human body are facilitated by the proton-coupled peptide transporters PepT1 (SLC15A1) and PepT2 (SLC15A2). The mechanism by which SLC15 family transporters recognize and discriminate between different drug classes and dietary peptides remains unclear, hampering efforts to improve antibiotic pharmacokinetics through targeted drug design and delivery. Here, we present cryo-EM structures of the proton-coupled peptide transporter PepT2 from Rattus norvegicus in complex with the widely used beta-lactam antibiotics cefadroxil, amoxicillin and cloxacillin. Our structures, combined with pharmacophore mapping, molecular dynamics simulations and biochemical assays, establish the mechanism of beta-lactam antibiotic recognition and the important role of protonation in drug binding and transport.
Addressing challenges of heterogeneous tumor treatment through bispecific protein-mediated pretargeted drug delivery
Tumors are frequently characterized by genomically and phenotypically distinct cancer cell subpopulations within the same tumor or between tumor lesions, a phenomenon termed tumor heterogeneity. These diverse cancer cell populations pose a major challenge to targeted delivery of diagnostic and/or therapeutic agents, as the conventional approach of conjugating individual ligands to nanoparticles is often unable to facilitate intracellular delivery to the full spectrum of cancer cells present in a given tumor lesion or patient. As a result, many cancers are only partially suppressed, leading to eventual tumor regrowth and/or the development of drug-resistant tumors. Pretargeting (multistep targeting) approaches involving the administration of (1) a cocktail of bispecific proteins that can collectively bind to the entirety of a mixed tumor population, followed by (2) nanoparticles containing therapeutic and/or diagnostic agents that can bind to the bispecific proteins accumulated on the surface of target cells, offer the potential to overcome many of the challenges associated with drug delivery to heterogeneous tumors. Despite its considerable success in improving the efficacy of radioimmunotherapy, the pretargeting strategy remains underexplored for a majority of nanoparticle therapeutic applications, especially for targeted delivery to heterogeneous tumors. In this review, we will present concepts in tumor heterogeneity, the shortcomings of conventional targeted systems, lessons learned from pretargeted radioimmunotherapy, and important considerations for harnessing the pretargeting strategy to improve nanoparticle delivery to heterogeneous tumors.
Perturbative nonequilibrium dynamics of phase transitions in an expanding universe
A complete set of Feynman rules is derived, which permits a perturbative
description of the nonequilibrium dynamics of a symmetry-breaking phase
transition in $\lambda\phi^4$ theory in an expanding universe. In contrast to a
naive expansion in powers of the coupling constant, this approximation scheme
provides for (a) a description of the nonequilibrium state in terms of its own
finite-width quasiparticle excitations, thus correctly incorporating
dissipative effects in low-order calculations, and (b) the emergence from a
symmetric initial state of a final state exhibiting the properties of
spontaneous symmetry breaking, while maintaining the constraint
$\langle\phi\rangle = 0$. Earlier work on dissipative perturbation theory and spontaneous symmetry
breaking in Minkowski spacetime is reviewed. The central problem addressed is
the construction of a perturbative approximation scheme which treats the
initial symmetric state in terms of the original field $\phi$, while the state that
emerges at later times is treated in terms of a second field, linearly related
to $\phi$. The connection between early and late times involves an infinite
sequence of composite propagators. Explicit one-loop calculations are given of
the gap equations that determine quasiparticle masses and of the equation of
motion for the expectation value of this second field, and the renormalization of these equations is
described. The perturbation series needed to describe the symmetric and
broken-symmetry states are not equivalent, and this leads to ambiguities
intrinsic to any perturbative approach. These ambiguities are discussed in
detail and a systematic procedure for matching the two approximations is
described.
Comment: 22 pages, using RevTeX. 6 figures. Submitted to Physical Review.
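For orientation, the "gap equations that determine quasiparticle masses" mentioned above can be illustrated by the textbook Hartree-type gap equation for $\lambda\phi^4$ theory in flat, equilibrium conditions (up to convention-dependent factors); the form below is a generic illustration, not the nonequilibrium, expanding-universe equations derived in the paper.

% Schematic Hartree-type gap equation for the quasiparticle mass M in
% \lambda\phi^4 theory with occupation numbers n(\omega_k); flat-spacetime,
% equilibrium illustration only, not the paper's result.
M^2 \;=\; m^2 \;+\; \frac{\lambda}{2}\int \frac{d^3k}{(2\pi)^3}\,
          \frac{1 + 2\,n(\omega_k)}{2\,\omega_k},
\qquad \omega_k = \sqrt{k^2 + M^2}.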
The Higgs-Yukawa Model in Curved Spacetime
The Higgs-Yukawa model in curved spacetime (renormalizable in the usual
sense) is considered near the critical point, employing the $1/N$ expansion
and renormalization group techniques. By making use of the equivalence of this
model with the standard NJL model, the effective potential in the linear
curvature approach is calculated and the dynamically generated fermionic mass
is found. A numerical study of chiral symmetry breaking by curvature effects is
presented.
Comment: LaTeX, 9 pages, 1 uu-figure.
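The "linear curvature approach" referred to above amounts, generically, to expanding the effective potential to first order in the scalar curvature R. The form below is a schematic illustration of that idea (with \sigma a mean or composite field), not the paper's explicit expression:

% Generic linear-curvature expansion of an effective potential; V_0 is the
% flat-space part and V_1 the first curvature correction. Schematic only.
V(\sigma, R) \;=\; V_0(\sigma) \;+\; R\,V_1(\sigma) \;+\; O(R^2),
\qquad
\left.\frac{\partial V}{\partial\sigma}\right|_{\sigma = \sigma_{\min}} = 0,

with the dynamically generated fermionic mass determined by the curvature-shifted minimum \sigma_{\min}.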
Why humans kill animals and why we cannot avoid it
Killing animals has been a ubiquitous human behaviour throughout history, yet it is becoming increasingly controversial and criticised in some parts of contemporary human society. Here we review 10 primary reasons why humans kill animals, discuss the necessity (or not) of these forms of killing, and describe the global ecological context for human killing of animals. Humans historically and currently kill animals either directly or indirectly for the following reasons: (1) wild harvest or food acquisition, (2) human health and safety, (3) agriculture and aquaculture, (4) urbanisation and industrialisation, (5) invasive, overabundant or nuisance wildlife control, (6) threatened species conservation, (7) recreation, sport or entertainment, (8) mercy or compassion, (9) cultural and religious practice, and (10) research, education and testing. While the necessity of some forms of animal killing is debatable and further depends on individual values, we emphasise that several of these forms of animal killing are a necessary component of our inescapable involvement in a single, functioning, finite, global food web. We conclude that humans (and all other animals) cannot live in a way that does not require animal killing either directly or indirectly, but humans can modify some of these killing behaviours in ways that improve the welfare of animals while they are alive, or reduce animal suffering whenever they must be killed. We encourage a constructive dialogue that (1) accepts and permits human participation in one enormous global food web dependent on animal killing and (2) focuses on animal welfare and environmental sustainability. Doing so will improve the lives of both wild and domestic animals to a greater extent than efforts to avoid, prohibit or vilify human animal-killing behaviour.
Keywords: Animal ethics; Conservation biology; Culling; Factory farming.
Convergence of the Optimized Delta Expansion for the Connected Vacuum Amplitude -- Anharmonic Oscillator
The convergence of the linear $\delta$ expansion for the connected generating
functional of the quantum anharmonic oscillator is proved. Using an
order-dependent scaling for the variational parameter, we show that
the expansion converges to the exact result, with an error that decreases
as the order of the expansion grows.
Comment: LaTeX, 14 pages, 4 figures.
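For context, the linear delta expansion named in the title interpolates between a solvable trial Hamiltonian and the full anharmonic oscillator. The schematic form below illustrates the construction, with \Omega the variational parameter; the specific order-dependent scaling proved to give convergence in the paper is not reproduced here:

% Linear delta expansion: expand in delta around a solvable H_0(Omega),
% truncate at order N, set delta = 1, and fix Omega by an order-dependent
% prescription Omega = Omega_N. Schematic illustration only.
H_\delta \;=\; H_0(\Omega) \;+\; \delta\,\bigl[H - H_0(\Omega)\bigr],
\qquad
S_N(\Omega)\Big|_{\delta=1} \;=\; \sum_{n=0}^{N}
\frac{1}{n!}\left.\frac{\partial^n S(\delta,\Omega)}{\partial\delta^n}\right|_{\delta=0},
\qquad \Omega \to \Omega_N.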
Can Asymptotic Series Resolve the Problems of Inflation?
We discuss a cosmological scenario in which inflation is driven by a
potential which is motivated by an effective Lagrangian approach to gravity. We
exploit the recent arguments [ARZ] that an effective Lagrangian
which, by definition, contains operators of arbitrary dimensionality is in
general not a convergent but rather an asymptotic series with factorially
growing coefficients. This behavior of the effective Lagrangian might be
responsible for the resolution of the cosmological constant problem. We argue
that the same behavior of the potential gives a natural realization of the
inflationary scenario.
Comment: 12 pages, uses LaTeX.
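As background for the argument above, a factorially divergent series of this kind has an optimal truncation order beyond which adding terms makes the approximation worse, leaving an exponentially small irreducible remainder. The generic statement (not the specific effective Lagrangian of [ARZ]) is:

% Generic behaviour of an asymptotic series with factorially growing
% coefficients c_n: optimal truncation near N_* ~ 1/g leaves an error of
% order exp(-1/g). Illustration only.
S(g) \;\sim\; \sum_{n=0}^{N} c_n\,g^{n}, \qquad c_n \sim n!\,,
\qquad
\left| S(g) - \sum_{n=0}^{N_*} c_n g^{n} \right| \;\sim\; e^{-1/g}
\quad \text{for } N_* \sim 1/g.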