
    Organic and conventional tomato cropping systems.

    Among the several alternative agricultural systems that have been developed, organic agriculture has attracted increasing interest. The objective of this paper was to compare organic (OS) and conventional (CS) tomato cropping systems for the varieties Débora and Santa Clara through an interdisciplinary study. The experiment was set up in a randomized block design with six replicates, in a dystrophic Ultisol, with plots measuring 25 × 17 m. Cropping procedures followed the practices recommended to local conventional or organic growers. Fertilization in the OS was done with organic compost, single superphosphate, and dolomitic lime (5 L, 60 g, and 60 g per pit), and plants were sprayed twice a week with biofertilizer. Fertilization in the CS was done with 200 g of 4-14-8 (NPK) per pit and, after planting, 30 g N, 33 g K, and 10.5 g P per pit; from 52 days after planting onward, plants were sprayed once a week with foliar fertilizer. In the CS, a blend of insecticides, fungicides, and miticides was sprayed twice a week after planting. In the OS, extracts of black pepper, garlic, and Eucalyptus, as well as Bordeaux mixture and biofertilizer, were applied twice a week to control diseases and pests. Tomato spotted wilt was the most important disease in the OS, resulting in reduced plant development, number of flower clusters, and yield. In the CS, the disease was kept under control, and the population of thrips, the virus vector, occurred at lower levels than in the OS. The variety Santa Clara presented a greater incidence of the viral disease and for this reason performed more poorly than 'Débora', especially in the OS. Occurrence of Liriomyza spp. was significantly smaller in the OS, possibly because of the greater frequency of Chrysoperla. The CS had a smaller incidence of the leaf spots caused by Septoria lycopersici and Xanthomonas vesicatoria; however, early blight and fruit rot caused by Alternaria solani occurred in larger numbers. No differences were observed with regard to the communities of fungi and bacteria in the phylloplane, or to the occurrence of weeds.

    Analyzing the role of industrial sector's electricity consumption, prices, and GDP: A modified empirical evidence from Pakistan

    Electricity usage plays a vital role in driving economic growth, and the industrial sector is a key component of overall energy demand, closely tied to the economy. The study aims to contribute in two ways. First, a Vector Error Correction Model (VECM) is estimated for electricity consumption in Pakistan over 1970-2018 to find the relationship between electricity consumption, price, and real gross domestic product. Second, a dynamic variance decomposition technique is applied to decompose the overall impact of an unexpected shock on each variable. The empirical analysis shows that the factors are co-integrated. The results also indicate a long-run relationship between electricity consumption, price, and real gross domestic product in the industrial sector. Further, the VECM responses are confirmed by the variance decomposition method. The findings confirm the potential of the industrial sector. We propose that a formalized and proper assurance of electricity needs and demands at a reasonable price can boost the local industry's confidence and attract foreign investors. Moreover, a strong governance structure should be extended to the public sector to ensure policies that prioritize the distribution of energy to businesses for development.
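The long-run relationship the VECM captures rests on cointegration: each series is nonstationary on its own, but a linear combination of them is stationary, so shocks pull the system back toward equilibrium. A minimal numpy-only sketch of that idea, using synthetic series and an assumed coefficient (not the paper's Pakistani data or its actual VECM estimation):

```python
import numpy as np

# Synthetic illustration of cointegration (the premise behind a VECM).
# "trend" stands in for a nonstationary driver such as real GDP;
# "consumption" shares its stochastic trend, so the two are cointegrated.
rng = np.random.default_rng(1)
n = 5000
trend = np.cumsum(rng.normal(size=n))            # common random walk
consumption = 2.0 * trend + rng.normal(size=n)   # cointegrated with trend

# OLS slope of consumption on trend: the cointegrating coefficient.
beta = np.dot(trend, consumption) / np.dot(trend, trend)

# Residuals of the cointegrating regression should be stationary
# (bounded), even though both input series wander without bound.
residuals = consumption - beta * trend
```

In a full VECM these residuals would enter as the error-correction term; here they simply demonstrate that a stationary combination of the nonstationary series exists.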

    An improved method for measuring muon energy using the truncated mean of dE/dx

    The measurement of muon energy is critical for many analyses in large Cherenkov detectors, particularly those that involve separating extraterrestrial neutrinos from the atmospheric neutrino background. Muon energy has traditionally been determined by measuring the specific energy loss (dE/dx) along the muon's path and relating the dE/dx to the muon energy. Because high-energy muons (E_mu > 1 TeV) lose energy randomly, the spread in dE/dx values is quite large, leading to a typical energy resolution of 0.29 in log10(E_mu) for a muon observed over a 1 km path length in the IceCube detector. In this paper, we present an improved method that uses a truncated mean and other techniques to determine the muon energy. The muon track is divided into separate segments with individual dE/dx values. The elimination of segments with the highest dE/dx results in an overall dE/dx that is more closely correlated with the muon energy. This method results in an energy resolution of 0.22 in log10(E_mu), a 26% improvement. This technique is applicable to any large water or ice detector and potentially to large scintillator or liquid argon detectors. Comment: 12 pages, 16 figures
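The core idea, dropping the highest-dE/dx segments before averaging so that rare large stochastic losses do not inflate the estimate, can be sketched in a few lines. The cut fraction below is an illustrative parameter, not a value taken from the paper:

```python
import numpy as np

def truncated_mean_dedx(dedx_segments, cut_fraction=0.4):
    """Truncated mean of per-segment dE/dx values along a muon track.

    The highest `cut_fraction` of segments is discarded before averaging,
    suppressing the tail from rare large stochastic energy losses.
    `cut_fraction` here is an illustrative choice, not the paper's value.
    """
    vals = np.sort(np.asarray(dedx_segments, dtype=float))
    n_keep = max(1, int(round(len(vals) * (1.0 - cut_fraction))))
    return vals[:n_keep].mean()

# One large stochastic loss dominates the plain mean but not the
# truncated mean, which tracks the typical (ionization-like) loss.
segments = [1.0] * 9 + [50.0]
plain = np.mean(segments)                 # pulled up by the outlier
truncated = truncated_mean_dedx(segments) # close to the typical 1.0
```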

    All-particle cosmic ray energy spectrum measured with 26 IceTop stations

    We report on a measurement of the cosmic ray energy spectrum with the IceTop air shower array, the surface component of the IceCube Neutrino Observatory at the South Pole. The data used in this analysis were taken between June and October 2007, with 26 surface stations operational at that time, corresponding to about one third of the final array. The fiducial area used in this analysis was 0.122 km^2. The analysis investigated the energy spectrum from 1 to 100 PeV measured for three different zenith angle ranges between 0° and 46°. Because of the isotropy of cosmic rays in this energy range, the spectra from all zenith angle intervals have to agree. The cosmic-ray energy spectrum was determined under different assumptions on the primary mass composition. Good agreement of the spectra in the three zenith angle ranges was found for the assumption of pure proton and for a simple two-component model. For zenith angles θ < 30°, where the mass dependence is smallest, the knee in the cosmic ray energy spectrum was observed between 3.5 and 4.32 PeV, depending on the composition assumption. Spectral indices above the knee range from -3.08 to -3.11, depending on the primary mass composition assumption. Moreover, an indication of a flattening of the spectrum above 22 PeV was observed. Comment: 38 pages, 17 figures
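A spectral index like the -3.08 to -3.11 quoted above is the exponent γ of a power-law flux dN/dE ∝ E^γ, which appears as the slope of a straight line in log-log space. A minimal sketch with a synthetic spectrum (values chosen for the example, not IceTop data):

```python
import numpy as np

# Synthetic power-law spectrum above the knee; gamma_true is an
# illustrative value in the range quoted in the abstract.
gamma_true = -3.1
energies = np.logspace(0.5, 2.0, 20)   # energies in PeV
flux = energies ** gamma_true          # dN/dE proportional to E^gamma

# In log-log space a power law is linear with slope gamma, so a
# first-degree fit recovers the spectral index.
slope, intercept = np.polyfit(np.log10(energies), np.log10(flux), 1)
```

Real spectrum fits weight each bin by its statistical uncertainty; the noiseless fit here only illustrates what the index means.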

    Detecting a stochastic gravitational wave background with the Laser Interferometer Space Antenna

    The random superposition of many weak sources will produce a stochastic background of gravitational waves that may dominate the response of the LISA (Laser Interferometer Space Antenna) gravitational wave observatory. Unless something can be done to distinguish between a stochastic background and detector noise, the two will combine to form an effective noise floor for the detector. Two methods have been proposed to solve this problem. The first is to cross-correlate the output of two independent interferometers. The second is an ingenious scheme for monitoring the instrument noise by operating LISA as a Sagnac interferometer. Here we derive the optimal orbital alignment for cross-correlating a pair of LISA detectors, and provide the first analytic derivation of the Sagnac sensitivity curve. Comment: 9 pages, 11 figures; significant changes to the noise estimate
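The cross-correlation method works because the stochastic background appears coherently in both detectors while the instrument noise does not, so averaging the product of the two outputs suppresses the independent noise. A toy numpy sketch with synthetic white signals (real LISA analysis involves frequency-dependent transfer functions this ignores):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
background = rng.normal(size=n)           # common stochastic "background"
noise_a = rng.normal(scale=5.0, size=n)   # independent instrument noise A
noise_b = rng.normal(scale=5.0, size=n)   # independent instrument noise B

s_a = background + noise_a                # output of detector A
s_b = background + noise_b                # output of detector B

# Zero-lag cross-correlation: the independent noise averages toward
# zero, leaving roughly the background variance (here ~1). The
# auto-correlation of a single detector retains the full noise (~26).
cross = np.mean(s_a * s_b)
auto_a = np.mean(s_a * s_a)
```

Even though the per-detector noise variance is 25 times the background's, the cross-correlation isolates the common component, which is the essence of distinguishing a stochastic background from detector noise.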

    Lorentz breaking Effective Field Theory and observational tests

    Analogue models of gravity have provided an experimentally realizable test field for our ideas on quantum field theory in curved spacetimes, but they have also inspired the investigation of possible departures from exact Lorentz invariance at microscopic scales. In this role they have joined, and sometimes anticipated, several quantum gravity models characterized by Lorentz breaking phenomenology. A crucial difference between these speculations and others associated with quantum gravity scenarios is the possibility of carrying out observational and experimental tests, which have nowadays led to a broad range of constraints on departures from Lorentz invariance. We review here the effective field theory approach to Lorentz breaking in the matter sector, present the constraints provided by the available observations, and finally discuss the implications of the persisting uncertainty on the composition of the ultra high energy cosmic rays for the constraints on the higher order, analogue gravity inspired, Lorentz violations. Comment: 47 pages, 4 figures. Lecture Notes for the IX SIGRAV School on "Analogue Gravity", Como (Italy), May 2011. V3: typo corrected, references added

    Multimessenger astronomy with the Einstein Telescope

    Gravitational waves (GWs) are expected to play a crucial role in the development of multimessenger astrophysics. The combination of GW observations with other astrophysical triggers, such as those from gamma-ray and X-ray satellites, optical/radio telescopes, and neutrino detectors, allows us to decipher science that would otherwise be inaccessible. In this paper, we provide a broad review, from the multimessenger perspective, of the science reach offered by third generation interferometric GW detectors and by the Einstein Telescope (ET) in particular. We focus on cosmic transients and base our estimates on the results obtained by ET's predecessors GEO, LIGO, and Virgo. Comment: 26 pages, 3 figures. Special issue of GRG on the Einstein Telescope. Minor corrections included

    Graph Neural Networks for low-energy event classification & reconstruction in IceCube

    IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in the analysis of data from IceCube. Reconstructing and classifying events is a challenge due to the irregular detector geometry, inhomogeneous scattering and absorption of light in the ice and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, it is possible to represent IceCube events as point cloud graphs and use a Graph Neural Network (GNN) as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction and interaction vertex. Based on simulation, we provide a comparison in the 1 GeV–100 GeV energy range to the current state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed background rate, compared to current IceCube methods. Alternatively, the GNN offers a reduction of the background (i.e. false positive) rate by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%–20% compared to current maximum likelihood techniques in the energy range of 1 GeV–30 GeV. The GNN, when run on a GPU, is capable of processing IceCube events at a rate nearly double the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low energy neutrinos in online searches for transient events. Peer Reviewed
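The "point cloud graph" representation means each detector hit becomes a node and edges connect nearby hits, after which a GNN passes messages along those edges. A hedged numpy-only sketch of the graph-construction step via k-nearest neighbours, with synthetic hit positions (not IceCube's actual pipeline or its GNN architecture):

```python
import numpy as np

def knn_edges(positions, k=1):
    """Connect each hit (node) to its k nearest neighbours.

    positions: (N, 3) array of hit coordinates.
    Returns (src, dst) index arrays, one directed edge per pair,
    which is the edge-list form a GNN would consume.
    """
    pos = np.asarray(positions, dtype=float)
    # Pairwise Euclidean distances between all hits.
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]    # k nearest per node
    src = np.repeat(np.arange(len(pos)), k)
    dst = nbrs.ravel()
    return src, dst

# Four synthetic hits along a track-like line; k=1 links each hit
# to its closest neighbour.
hits = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [3.0, 0.0, 0.0], [10.0, 0.0, 0.0]]
src, dst = knn_edges(hits, k=1)
```

Per-hit features (charge, time) would be attached to the nodes; the GNN then learns from both the features and this connectivity.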

    Neutrino oscillation studies with IceCube-DeepCore

    IceCube, a gigaton-scale neutrino detector located at the South Pole, was primarily designed to search for astrophysical neutrinos with energies of PeV and higher. This goal has been achieved with the detection of the highest energy neutrinos to date. At the other end of the energy spectrum, the DeepCore extension lowers the energy threshold of the detector to approximately 10 GeV and opens the door for oscillation studies using atmospheric neutrinos. An analysis of the disappearance of these neutrinos has been completed, with results complementary to those of dedicated oscillation experiments. Following a review of the detector principle and performance, the method used to make these calculations, as well as the results, is detailed. Finally, the future prospects of IceCube-DeepCore and the next generation of neutrino experiments at the South Pole (IceCube-Gen2, specifically the PINGU sub-detector) are briefly discussed.