Measurement of overall insecticidal effects in experimental hut trials
BACKGROUND: The 'overall insecticidal effect' is a key measure used to evaluate public health pesticides for indoor use in experimental hut trials. It depends on the proportion of mosquitoes that are killed out of those that enter the treated hut, intrinsic mortality in the control hut, and the ratio of mosquitoes entering the treatment hut to those entering the control hut. This paper critically examines the way the effect is defined, and discusses how it can be used to infer effectiveness of intervention programmes.
FINDINGS: The overall insecticidal effect, as defined by the World Health Organization in 2006, can be negative when deterrence from entering the treated hut is high, even if all mosquitoes that enter are killed, wrongly suggesting that the insecticide enhances mosquito survival. Also in the absence of deterrence, even if the insecticide kills all mosquitoes in the treatment hut, the insecticidal effect is less than 100%, unless intrinsic mortality is nil. A proposed alternative definition for the measurement of the overall insecticidal effect has the desirable range of 0 to 1 (100%), provided mortality among non-repelled mosquitoes in the treated hut is less than the corresponding mortality in the control hut. This definition can be built upon to formulate the coverage-dependent insecticidal effectiveness of an intervention programme. Coverage-dependent population protection against feeding can be formulated similarly.
CONCLUSIONS: This paper shows that the 2006 recommended quantity for measuring the overall insecticidal effect is problematic, and proposes an alternative quantity with more desirable properties.
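The pathologies described above are easy to reproduce numerically. The abstract does not reproduce the formulas, so the sketch below is an assumption for illustration: the 2006-style quantity is written in a commonly used form, (Kt − Kc)/Tc, and the 0–100% alternative as an Abbott-style correction of hut mortality.

```python
def overall_effect_2006(entered_treated, killed_treated,
                        entered_control, killed_control):
    """2006-style overall insecticidal effect, here taken as
    (Kt - Kc) / Tc in percent (illustrative form, an assumption)."""
    return 100.0 * (killed_treated - killed_control) / entered_control

def abbott_style_effect(entered_treated, killed_treated,
                        entered_control, killed_control):
    """Alternative with range 0-100%: mortality among entrants in the
    treated hut, corrected for intrinsic (control) mortality."""
    p_t = killed_treated / entered_treated
    p_c = killed_control / entered_control
    return 100.0 * (p_t - p_c) / (1.0 - p_c)

# Strong deterrence: only 10 mosquitoes enter the treated hut and all are
# killed; 100 enter the control hut and 20 die of intrinsic causes.
print(overall_effect_2006(10, 10, 100, 20))   # negative despite a 100% kill rate
print(abbott_style_effect(10, 10, 100, 20))   # 100%

# No deterrence, all entrants killed: the 2006-style effect is still below
# 100% because intrinsic mortality is non-zero.
print(overall_effect_2006(100, 100, 100, 20))  # 80%
```

The first call illustrates the negative-value problem; the Abbott-style quantity stays in the 0–100% range in both scenarios.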
Reducing combinatorial uncertainties: A new technique based on MT2 variables
We propose a new method to resolve combinatorial ambiguities in hadron collider events involving two invisible particles in the final state. The method is based on the kinematic variable MT2 and on the MT2-assisted-on-shell reconstruction of invisible momenta, which are reformulated as `test' variables Ti of the correct combination against the incorrect ones. We show how the efficiency of a single Ti in providing the correct answer can be systematically improved by combining the different Ti and/or by introducing cuts on suitable, combination-insensitive kinematic variables. We illustrate the whole approach in the specific example of top anti-top production, followed by a leptonic decay of the W on both sides. By construction, however, the method is also directly applicable to many topologies of interest for new physics, in particular events producing a pair of undetected particles that are potential dark-matter candidates. Finally, we emphasize that the method lends itself to several generalizations, which we outline in the last sections of the paper.
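As a minimal illustration of the MT2 variable underlying the method: for two visible systems and a missing transverse momentum, MT2 is the minimum over all splittings of the missing momentum of the larger of the two transverse masses. The brute-force grid minimiser and the event numbers below are invented for the example; this is not the authors' implementation.

```python
import math

def transverse_mass(m_vis, p_vis, chi, q):
    """mT of a visible system (mass m_vis, transverse momentum p_vis)
    paired with an invisible particle of test mass chi and transverse
    momentum q, using mT^2 = m^2 + chi^2 + 2(ET_vis ET_inv - p.q)."""
    et_vis = math.hypot(m_vis, math.hypot(p_vis[0], p_vis[1]))
    et_inv = math.hypot(chi, math.hypot(q[0], q[1]))
    mt2_sq = m_vis**2 + chi**2 + 2.0 * (et_vis * et_inv
                                        - p_vis[0] * q[0] - p_vis[1] * q[1])
    return math.sqrt(max(0.0, mt2_sq))

def mt2(m_a, p_a, m_b, p_b, met, chi, grid=60, span=200.0):
    """Brute-force MT2: minimise max(mT_a, mT_b) over all splittings
    q_a + q_b = met of the missing transverse momentum.  A coarse grid
    scan, for illustration only (real analyses use dedicated minimisers)."""
    best = float('inf')
    for i in range(grid + 1):
        for j in range(grid + 1):
            qx = -span + 2.0 * span * i / grid
            qy = -span + 2.0 * span * j / grid
            q_a = (qx, qy)
            q_b = (met[0] - qx, met[1] - qy)
            best = min(best, max(transverse_mass(m_a, p_a, chi, q_a),
                                 transverse_mass(m_b, p_b, chi, q_b)))
    return best

# Invented event: two visible systems of mass 5, a test mass chi = 50.
print(mt2(5.0, (30.0, 20.0), 5.0, (-10.0, 40.0), (25.0, -15.0), 50.0))
```

By construction the result never falls below m_vis + chi, which is the property that makes MT2 useful as a test variable against wrong combinations.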
Ghrelin causes hyperphagia and obesity in rats.
Ghrelin, a circulating growth hormone-releasing peptide derived from the stomach, stimulates food intake. The lowest systemically effective orexigenic dose of ghrelin was investigated and the resulting plasma ghrelin concentration was compared with that during fasting. The lowest dose of ghrelin that produced a significant stimulation of feeding after intraperitoneal injection was 1 nmol. The plasma ghrelin concentration after intraperitoneal injection of 1 nmol of ghrelin (2.83 ± 0.13 pmol/ml at 60 min postinjection) was not significantly different from that occurring after a 24-h fast (2.79 ± 0.32 pmol/ml). After microinjection into defined hypothalamic sites, ghrelin (30 pmol) stimulated food intake most markedly in the arcuate nucleus (Arc) (0–1 h food intake, 427 ± 43% of control; P <
General analysis of signals with two leptons and missing energy at the Large Hadron Collider
A signal of two leptons and missing energy is challenging to analyze at the Large Hadron Collider (LHC) since it offers only a few kinematic handles. This signature generally arises from pair production of heavy charged particles which each decay into a lepton and a weakly interacting stable particle. Here this class of processes is analyzed with minimal model assumptions by considering all possible combinations of spin 0, 1/2 or 1, and of weak iso-singlets, -doublets or -triplets for the new particles. Adding to existing work on mass and spin measurements, two new variables for spin determination and an asymmetry for the determination of the couplings of the new particles are introduced. It is shown that these observables allow one to independently determine the spin and the couplings of the new particles, except for a few cases that turn out to be indistinguishable at the LHC. These findings are corroborated by the results of an alternative analysis strategy based on an automated likelihood test.
Does ‘bigger’ mean ‘better’? Pitfalls and shortcuts associated with big data for social research
‘Big data is here to stay.’ This key statement has a double value: it is an assumption as well as the reason why a theoretical reflection is needed. Big data is gaining visibility and success even in the social sciences, bridging the divide between the humanities and computer science. This contribution first outlines some considerations on the presence, and likely persistence, of Big data as a socio-technical assemblage, and then develops the intriguing opportunities for social research that arise from the interaction between research practices and technological development. However, despite the promissory rhetoric fostered by several scholars since Big data emerged as a labelled concept, some risks lie just around the corner. Claims about the methodological power of ever-bigger datasets, and about increasing speed in analysis and data collection, are creating real hype in social research, and particular attention is needed to avoid certain pitfalls. These risks are analysed with respect to the validity of research results obtained through Big data. After this pars destruens, the contribution closes with a pars construens: building on the preceding critiques, a mixed-methods research design is described as a general proposal, with the objective of stimulating a debate on the integration of Big data into complex research projects.
Toxicity of Three Insecticides to Lysiphlebus fabarum, a Parasitoid of the Black Bean Aphid, Aphis fabae
The toxicity of three insecticides to Lysiphlebus fabarum (Marshall) (Hymenoptera: Braconidae: Aphidiinae), a parasitoid of Aphis fabae Scopoli (Hemiptera: Aphididae), was investigated using IOBC/wprs protocols. Abamectin 1.8 EC, imidacloprid 350 SC, and pymetrozine 25 WP were tested under laboratory conditions at recommended field rates. Immature stages of the parasitoid were exposed to the materials by briefly dipping mummified aphids into insecticide solutions/suspensions or water (controls). Abamectin, imidacloprid, and pymetrozine caused 44.8, 58.5, and 14.5% mortality of mummies, respectively. The insecticides were also applied to broad bean foliage until run-off using a hand sprayer, and the contact toxicity of residues was investigated after 1-, 5-, 16- and 30-day periods of outdoor weathering by caging adult wasps on treated plants for 24 h. One-day-old residues of abamectin, imidacloprid, and pymetrozine produced 52.5, 90.0 and 57.0% mortality, respectively, and 5-day-old residues produced 28.1, 77.0 and 18.6% mortality. Sixteen-day-old residues produced 8.8, 22.4 and 13.6% mortality, whereas 30-day-old residues produced 0.0, 3.2 and 1.1% mortality, respectively. On the basis of these results, abamectin and pymetrozine were classified as short-lived compounds (Class A) and imidacloprid as a slightly persistent compound (Class B).
Z boson production in Pb+Pb collisions at √sNN = 5.02 TeV measured by the ATLAS experiment
The production yield of Z bosons is measured in the electron and muon decay channels in Pb+Pb collisions at √sNN = 5.02 TeV with the ATLAS detector. Data from the 2015 LHC run corresponding to an integrated luminosity of 0.49 nb−1 are used for the analysis. The Z boson yield, normalised by the total number of minimum-bias events and the mean nuclear thickness function, is measured as a function of dilepton rapidity and event centrality. The measurements in Pb+Pb collisions are compared with similar measurements made in proton-proton collisions at the same centre-of-mass energy. The nuclear modification factor is found to be consistent with unity for all centrality intervals. The results are compared with theoretical predictions obtained at next-to-leading order using nucleon and nuclear parton distribution functions. The normalised Z boson yields in Pb+Pb collisions lie 1-3σ above the predictions. The nuclear modification factor measured as a function of rapidity agrees with unity and is consistent with a next-to-leading-order QCD calculation including the isospin effect.
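The normalisation described in the abstract can be sketched as simple arithmetic: the Z yield per minimum-bias event is divided by the mean nuclear thickness function ⟨T_AA⟩ to give a quantity with units of a cross-section, and the ratio of that to the pp cross-section is the nuclear modification factor. All numerical inputs below are invented for illustration; only the arithmetic follows the abstract.

```python
NB_TO_MB = 1.0e-6  # 1 nb = 1e-6 mb

def normalised_yield_mb(n_z, n_min_bias_events, mean_taa_inv_mb):
    """Z yield per minimum-bias event, divided by the mean nuclear
    thickness function <T_AA> (given in mb^-1); the result is in mb."""
    return n_z / (n_min_bias_events * mean_taa_inv_mb)

def nuclear_modification_factor(n_z, n_min_bias_events,
                                mean_taa_inv_mb, sigma_pp_nb):
    """R_AA: normalised Pb+Pb yield over the pp cross-section (in nb)."""
    return (normalised_yield_mb(n_z, n_min_bias_events, mean_taa_inv_mb)
            / (sigma_pp_nb * NB_TO_MB))

# Invented inputs chosen so that R_AA comes out at unity by construction:
sigma_pp_nb = 1.0     # hypothetical pp cross-section
taa = 5.6             # hypothetical <T_AA> in mb^-1
n_events = 1.0e8      # hypothetical number of minimum-bias events
n_z = n_events * taa * sigma_pp_nb * NB_TO_MB
print(nuclear_modification_factor(n_z, n_events, taa, sigma_pp_nb))  # ~1
```

An R_AA consistent with unity, as reported in the abstract, means the per-nucleon-collision yield in Pb+Pb matches the pp expectation.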
Search for flavour-changing neutral currents in processes with one top quark and a photon using 81 fb−1 of pp collisions at √s = 13 TeV with the ATLAS experiment
A search for flavour-changing neutral current (FCNC) events via the coupling of a top quark, a photon, and an up or charm quark is presented using 81 fb−1 of proton–proton collision data taken at a centre-of-mass energy of 13 TeV with the ATLAS detector at the LHC. Events with a photon, an electron or muon, a b-tagged jet, and missing transverse momentum are selected. A neural network based on kinematic variables differentiates between events from signal and background processes. The data are consistent with the background-only hypothesis, and limits are set on the strength of the tqγ coupling in an effective field theory. These are also interpreted as 95% CL upper limits on the cross section for FCNC tγ production via a left-handed (right-handed) tuγ coupling of 36 fb (78 fb) and on the branching ratio for t→γu of 2.8×10−5 (6.1×10−5). In addition, they are interpreted as 95% CL upper limits on the cross section for FCNC tγ production via a left-handed (right-handed) tcγ coupling of 40 fb (33 fb) and on the branching ratio for t→γc of 22×10−5 (18×10−5).