21 research outputs found

    Suitability of pesticide risk indicators for less developed countries: a comparison

    Pesticide risk indicators provide simple support for assessing the environmental and health risks of pesticide use, and can therefore inform policies that foster a sustainable interaction of agriculture with the environment. Owing to their relative simplicity, indicators may be particularly useful under conditions of limited data availability and resources, such as in Less Developed Countries (LDCs). However, indicator complexity varies considerably, in particular between indicators that rely on an exposure–toxicity ratio (ETR) and those that do not. In addition, pesticide risk indicators are usually developed for Western contexts, which may lead to inaccurate risk estimates in LDCs. This study investigated the appropriateness of seven pesticide risk indicators for use in LDCs, with reference to smallholder agriculture in Colombia. Seven farm-level indicators, three of which rely on an ETR (POCER, EPRIP, PIRI) and four on a non-ETR approach (EIQ, PestScreen, OHRI, Dosemeci et al., 2002), were calculated and then compared by means of the Spearman rank correlation test. The indicators were also compared with respect to key indicator characteristics, i.e. user-friendliness and ability to represent the system under study. The comparison of the indicators in terms of total environmental risk suggests that indicators not relying on an ETR approach cannot be used as a reliable proxy for the more complex ETR indicators. ETR indicators, when user-friendly, show a comparative advantage over non-ETR indicators in combining the need for a relatively simple tool usable in contexts of limited data availability and resources with the need for a reliable estimate of environmental risk. Non-ETR indicators remain useful and accessible tools for discriminating between pesticides prior to application. Concerning human health risk, simpler algorithms appear more appropriate for use in LDCs; however, further research on health risk indicators and their validation under LDC conditions is needed.
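    The abstract contrasts ETR-based indicators with non-ETR ones and compares their farm-level rankings using the Spearman rank correlation test. The sketch below is illustrative only and is not taken from the study: the ETR formula is the generic exposure-over-toxicity ratio, and the indicator scores are hypothetical values invented to show how such a rank comparison could be run.

    ```python
    # Illustrative sketch only; the scores below are hypothetical, not data from the study.
    from scipy.stats import spearmanr

    def exposure_toxicity_ratio(predicted_exposure: float, toxicity_endpoint: float) -> float:
        """Generic ETR: predicted exposure divided by a toxicity endpoint
        (e.g. an EC50); higher values indicate higher risk."""
        return predicted_exposure / toxicity_endpoint

    # Hypothetical total-risk scores for the same farms from an ETR-based
    # indicator and from a non-ETR indicator.
    etr_scores = [0.8, 2.4, 0.3, 5.1, 1.7]
    non_etr_scores = [12.0, 30.5, 8.2, 25.0, 40.1]

    # A high rank correlation would suggest the simpler indicator ranks farms
    # similarly to the ETR-based one; the study found this was not the case.
    rho, p_value = spearmanr(etr_scores, non_etr_scores)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
    ```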

    A record of Holocene glacial and oceanographic variability in Neny Fjord, Antarctic Peninsula

    Analyses of a 12 m marine sediment core from Neny Fjord, Marguerite Bay, Antarctic Peninsula (68.2571°S, 66.9617°W), yield a high-resolution record of Holocene climate variability. The sediments preserve signals of past glacial and marine environments and offer a unique insight into atmospheric and oceanic forcings on the western Antarctic Peninsula climate. Dating of basal material reveals that deglaciation of the fjord occurred prior to 9040 cal. yr BP, providing a minimum constraint on the timing of deglaciation close to the southern Antarctic Peninsula ice divide. Continuous deposition of ice-distal sediments and seasonally open-water diatoms indicates that the site has not been overridden by glacier ice during the Holocene. A sand-rich facies offers the only evidence of a localized glacier advance, during the mid-Holocene. Statistical analysis of diatom assemblage data reveals several climatic episodes of varying magnitude and duration. These include an early-Holocene warm period (~9000 to ~7000 cal. yr BP), potentially associated with an influx of Circumpolar Deep Water onto the continental shelf and coinciding with widespread glacial retreat and the Holocene collapse of the George VI Ice Shelf. The mid-Holocene (~7000 to ~2800 cal. yr BP) sediments are characterized by diatom assemblages indicative of less pervasive sea-ice cover and prolonged growing seasons, with evidence of increased meltwater discharge from ~4000 cal. yr BP. The youngest sediments (~2800 cal. yr BP to present) record the widely documented ‘neoglacial’ period, followed by an abrupt reversal and climate amelioration sometime after ~200 cal. yr BP.