
    Collating and validating indigenous and local knowledge to apply multiple knowledge systems to an environmental challenge: A case-study of pollinators in India

    There is an important role for indigenous and local knowledge in a Multiple Evidence Base for making decisions about the use and management of biodiversity. This is important both to ensure that the knowledge base is complete (comprising both scientific and local knowledge) and to facilitate participation in the decision-making process. We present a novel method to gather evidence in which we used a peer-to-peer validation process among farmers that we suggest is analogous to scientific peer review. We used a case-study approach to trial the process, focussing on pollinator decline in India. Pollinator decline is a critical challenge for which there is a growing evidence base; however, this is not the case worldwide. In the state of Orissa, India, there are no validated scientific studies that record historical pollinator abundance; local knowledge can therefore contribute substantially and may indeed be the principal component of the available knowledge base. Our aim was to collate and validate local knowledge in preparation for integration with scientific knowledge from other regions, for the purpose of producing a Multiple Evidence Base to develop conservation strategies for pollinators. Farmers reported that vegetable crop yields were declining in many areas of Orissa and that the abundance of important insect crop pollinators had declined sharply across the study area in the last 10–25 years, particularly Apis cerana, Amegilla sp. and Xylocopa sp. Key pollinators for commonly grown crops were identified; both Apis cerana and Xylocopa sp. were ranked highly as pollinators by farmer participants. Crop yield declines were attributed to soil quality, water management, pests, climate change, overuse of chemical inputs and lack of agronomic expertise. Pollinator declines were attributed to the quantity and number of pesticides used. Farmers suggested that fewer pesticides, more natural habitat and the introduction of hives would support pollinator populations. This process of knowledge creation was supported by participants, and led to this paper being co-authored by both scientists and farmers.

    The VLT-FLAMES Tarantula Survey: XXX. Red stragglers in the clusters Hodge 301 and SL 639

    Aims: We estimate physical parameters for the late-type massive stars observed as part of the VLT-FLAMES Tarantula Survey (VFTS) in the 30 Doradus region of the Large Magellanic Cloud (LMC). Methods: The observational sample comprises 20 candidate red supergiants (RSGs), which are the reddest ((B − V) > 1 mag) and brightest (V < 16 mag) objects in the VFTS. We use optical and near-infrared (near-IR) photometry to estimate their temperatures and luminosities, and introduce the luminosity–age diagram to estimate their ages. Results: We derive physical parameters for our targets, including temperatures from a new calibration of (J − Ks)0 colour for luminous cool stars in the LMC, luminosities from their J-band magnitudes (thence radii), and ages from comparisons with current evolutionary models. We show that interstellar extinction is a significant factor for our targets, highlighting the need to take it into account in the analysis of the physical parameters of RSGs. We find that some of the candidate RSGs could be massive AGB stars. The apparent ages of the RSGs in the Hodge 301 and SL 639 clusters show a significant spread (12–24 Myr). We also apply our approach to the RSG population of the relatively nearby NGC 2100 cluster, finding a similarly large spread. Conclusions: We argue that the effects of mass transfer in binaries may lead to more massive and luminous RSGs (which we call "red stragglers") than expected from single-star evolution, and that the true cluster ages correspond to the upper limit of the estimated RSG ages. In this way, the RSGs can serve as a new and potentially reliable age tracer in young star clusters. The corresponding analysis yields ages of 24^{+5}_{-3} Myr for Hodge 301, 22^{+6}_{-5} Myr for SL 639, and 23^{+4}_{-2} Myr for NGC 2100.
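
    As illustrative context for the "thence radii" step, the standard relations are a bolometric correction applied to the J-band magnitude followed by the Stefan-Boltzmann law; the survey's specific (J − Ks)0-to-temperature calibration coefficients are not reproduced here:

    M_{\mathrm{bol}} = m_J - \mu - A_J + BC_J, \qquad L/L_\odot = 10^{\,0.4\,(M_{\mathrm{bol},\odot} - M_{\mathrm{bol}})}, \qquad R = \sqrt{\frac{L}{4\pi\sigma T_{\mathrm{eff}}^{4}}}

    where \mu is the LMC distance modulus, A_J the J-band extinction, and BC_J the J-band bolometric correction.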

    Identification and reconstruction of low-energy electrons in the ProtoDUNE-SP detector

    Measurements of electrons from \nu_e interactions are crucial for the Deep Underground Neutrino Experiment (DUNE) neutrino oscillation program, as well as for searches for physics beyond the standard model, supernova neutrino detection, and solar neutrino measurements. This article describes the selection and reconstruction of low-energy (Michel) electrons in the ProtoDUNE-SP detector. ProtoDUNE-SP is one of the prototypes for the DUNE far detector, built and operated at CERN as a charged-particle test-beam experiment. A sample of low-energy electrons produced by the decay of cosmic muons is selected with a purity of 95%. This sample is used to calibrate the low-energy electron energy scale with two techniques. The first uses calibration constants derived from measured and simulated cosmic-ray muon events. The second uses the theoretically well-understood Michel electron energy spectrum to convert reconstructed charge to electron energy. In addition, the effects of the detector response on the low-energy electron energy scale and its resolution, including readout-electronics threshold effects, are quantified. Finally, the relation between the theoretical and reconstructed low-energy electron energy spectra is derived and the energy resolution is characterized. The low-energy electron selection presented here accounts for about 75% of the total electron deposited energy. After adding back the lost energy using a Monte Carlo simulation, the energy resolution improves from about 40% to 25% at 50 MeV. These results are used to validate the expected capabilities of the DUNE far detector to reconstruct low-energy electrons.
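
    For context, the Michel spectrum underlying the second calibration technique is, at tree level and neglecting the electron mass and radiative corrections,

    \frac{dN}{dx} \propto x^{2}\,(3 - 2x), \qquad x = \frac{E}{E_{\max}}, \qquad E_{\max} \simeq \frac{m_\mu}{2} \approx 52.8\ \mathrm{MeV}

    The spectrum rises quadratically and cuts off sharply at about 52.8 MeV, which is what makes its shape and endpoint a useful reference for fixing the charge-to-energy scale.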

    Low-Energy Physics in Neutrino LArTPCs

    In this white paper, we outline some of the scientific opportunities and challenges related to detection and reconstruction of low-energy (less than 100 MeV) signatures in liquid argon time-projection chamber (LArTPC) detectors. Key takeaways are summarized as follows.
    1) LArTPCs have unique sensitivity to a range of physics and astrophysics signatures via detection of event features at and below the few-tens-of-MeV range.
    2) Low-energy signatures are an integral part of GeV-scale accelerator neutrino interaction final states, and their reconstruction can enhance the oscillation physics sensitivities of LArTPC experiments.
    3) BSM signals from accelerator and natural sources also generate diverse signatures in the low-energy range, and reconstruction of these signatures can increase the breadth of BSM scenarios accessible in LArTPC-based searches.
    4) Neutrino interaction cross sections and other nuclear physics processes in argon relevant to sub-hundred-MeV LArTPC signatures are poorly understood; improved theory and experimental measurements are needed. Pion decay-at-rest sources and charged-particle and neutron test beams are ideal facilities for experimentally improving this understanding.
    5) There are specific calibration needs in the low-energy range, as well as specific needs for control and understanding of radiological and cosmogenic backgrounds.
    6) Novel ideas for future LArTPC technology that enhance low-energy capabilities should be explored, including novel charge enhancement and readout systems, enhanced photon detection, low-radioactivity argon, and xenon doping.
    7) Low-energy signatures, whether steady-state or part of a supernova burst or larger GeV-scale event topology, have specific triggering, DAQ, and reconstruction requirements that must be addressed outside the scope of conventional GeV-scale data collection and analysis pathways.

    Impact of cross-section uncertainties on supernova neutrino spectral parameter fitting in the Deep Underground Neutrino Experiment

    A primary goal of the upcoming Deep Underground Neutrino Experiment (DUNE) is to measure the \mathcal{O}(10) MeV neutrinos produced by a Galactic core-collapse supernova if one should occur during the lifetime of the experiment. The liquid-argon-based detectors planned for DUNE are expected to be uniquely sensitive to the \nu_e component of the supernova flux, enabling a wide variety of physics and astrophysics measurements. A key requirement for a correct interpretation of these measurements is a good understanding of the energy-dependent total cross section \sigma(E_\nu) for charged-current \nu_e absorption on argon. In the context of a simulated extraction of supernova \nu_e spectral parameters from a toy analysis, we investigate the impact of \sigma(E_\nu) modeling uncertainties on DUNE's supernova neutrino physics sensitivity for the first time. We find that the currently large theoretical uncertainties on \sigma(E_\nu) must be substantially reduced before the \nu_e flux parameters can be extracted reliably: in the absence of external constraints, a measurement of the integrated neutrino luminosity with less than 10% bias with DUNE requires \sigma(E_\nu) to be known to about 5%. The neutrino spectral shape parameters can be known to better than 10% for a 20% uncertainty on the cross-section scale, although they will be sensitive to uncertainties on the shape of \sigma(E_\nu). A direct measurement of low-energy \nu_e-argon scattering would be invaluable for improving the theoretical precision to the needed level.
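
    For context, supernova neutrino spectral fits of this kind commonly parametrize the flux with the quasi-thermal "pinched" (alpha-fit) form, with mean energy \langle E_\nu \rangle, pinching parameter \alpha, and a normalization tied to the total emitted energy; that the paper uses exactly this form is an assumption here:

    f(E_\nu) = \frac{(\alpha+1)^{\alpha+1}}{\langle E_\nu \rangle\,\Gamma(\alpha+1)} \left(\frac{E_\nu}{\langle E_\nu \rangle}\right)^{\alpha} \exp\!\left[-(\alpha+1)\,\frac{E_\nu}{\langle E_\nu \rangle}\right]

    In such a fit, an unknown overall scale of \sigma(E_\nu) is degenerate with the flux normalization, which is why the luminosity measurement is the quantity most directly limited by the cross-section uncertainty.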

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

    Individuals' responses to economic cycles: Organizational relevance and a multilevel theoretical integration


    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented as an end-to-end set of GPU-optimized algorithms, written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speedup of four orders of magnitude compared with the equivalent CPU version: the simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
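
    As a minimal sketch of the Python-to-CUDA workflow described above (Numba's @cuda.jit compiling a Python function into a GPU kernel, launched over a 2D grid), consider the toy example below. The kernel name, array layout, and exponential response model are illustrative assumptions, not the actual simulator's physics:

    import math
    import numpy as np
    from numba import cuda

    @cuda.jit
    def toy_induced_current(charges, distances, currents):
        # One GPU thread per (energy deposit, pixel) pair.
        i, j = cuda.grid(2)
        if i < currents.shape[0] and j < currents.shape[1]:
            # Toy response (assumption): induced current falls off
            # exponentially with the deposit-to-pixel distance.
            currents[i, j] = charges[i] * math.exp(-distances[i, j])

    n_deposits, n_pixels = 256, 1000
    charges = np.random.rand(n_deposits).astype(np.float32)
    distances = np.random.rand(n_deposits, n_pixels).astype(np.float32)
    currents = np.zeros((n_deposits, n_pixels), dtype=np.float32)

    # Launch a 2D grid covering every (deposit, pixel) pair; Numba copies
    # the NumPy arrays to the device and back automatically.
    threads_per_block = (16, 16)
    blocks = ((n_deposits + 15) // 16, (n_pixels + 15) // 16)
    toy_induced_current[blocks, threads_per_block](charges, distances, currents)

    Because every (deposit, pixel) pair is independent, the computation maps naturally onto one GPU thread each, which is the source of the large speedup reported above.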

    Murine TMJ Loading Causes Increased Proliferation and Chondrocyte Maturation

    The purpose of this study was to examine the effects of forced mouth opening on murine mandibular condylar head remodeling. We hypothesized that forced mouth opening would cause an anabolic response in the mandibular condylar cartilage. Six-week-old female C57BL/6 mice were divided into 3 groups: (1) control, (2) 0.25 N of forced mouth opening, and (3) 0.50 N of forced mouth opening. Gene expression, micro-CT, and proliferation were analyzed. Forced mouth opening at 0.50 N caused a significant increase in mRNA expression of Pthrp, Sox9, and Collagen2a1, a significant increase in proliferation, and a significant increase in trabecular spacing in the subchondral bone, whereas 0.25 N did not cause any significant changes in any of the parameters examined. Forced mouth opening thus causes an increase in the expression of chondrocyte maturation markers and an increase in subchondral trabecular spacing.
