    Engaging local communities in aquatic resources research and activities: a technical manual

    This document is part of a series of 5 technical manuals produced by the Challenge Program Project CP34 “Improved fisheries productivity and management in tropical reservoirs”. The objective of this technical manual is to relay the field experience of a group of scientists who have worked extensively in small fisheries in sub-Saharan Africa and Asia, and to lay out a series of simple and pragmatic pointers on how to establish and run initiatives for community catch assessment. The manual relies in particular on practical experience gained implementing Project 34 of the Challenge Programme on Water and Food: Improved Fisheries Productivity and Management in Tropical Reservoirs. (PDF contains 26 pages.)

    A simulation-based assessment of the bias produced when using averages from small DHS clusters as contextual variables in multilevel models

    There is much interest these days in the importance of community institutions and resources for individual mortality and fertility. DHS data may seem to be a valuable source for such multilevel analysis. For example, researchers may consider including in their models the average education within the sample (cluster) of approximately 25 women interviewed in each primary sampling unit (PSU). However, this is only a proxy for the theoretically more interesting average among all women in the PSU, and, in principle, the estimated effect of the sample mean may differ markedly from the effect of the latter variable. Fortunately, simulation experiments show that the bias actually is fairly small - less than 14% - when education effects on first birth timing are estimated from DHS surveys in sub-Saharan Africa. If other data are used, or if the focus is turned to other independent variables than education, the bias may, of course, be very different. In some situations, it may be even smaller; in others, it may be unacceptably large. That depends on the size of the clusters, and on how the independent variables are distributed within and across communities. Some general advice is provided.

    Keywords: average, bias, clustering, contextual, DHS, measurement error, multilevel, simulation, size, small
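    The attenuation at work here can be sketched with a short simulation (all numbers below are hypothetical, not taken from the DHS analysis): the cluster sample mean equals the true PSU mean plus sampling error, so regressing a community-level outcome on the sample mean shrinks the estimated slope by roughly the error's share of the proxy's variance.

```python
import random

random.seed(1)

J, n = 500, 25        # number of PSUs and women sampled per cluster
beta_true = 1.0       # assumed effect of the TRUE community mean education

xs, ys = [], []
for _ in range(J):
    mu = random.gauss(8.0, 2.0)                  # true mean education in the PSU
    sample = [random.gauss(mu, 3.0) for _ in range(n)]
    xbar = sum(sample) / n                       # observed cluster average (the proxy)
    y = beta_true * mu + random.gauss(0.0, 1.0)  # outcome driven by the true mean
    xs.append(xbar)
    ys.append(y)

# OLS slope of y on the proxy: attenuated relative to beta_true.
mx, my = sum(xs) / J, sum(ys) / J
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
bias = 1.0 - slope / beta_true
print(f"estimated slope: {slope:.3f}  (attenuation ~{100 * bias:.1f}%)")
```

    With 25 women per cluster and these assumed variances, the reliability of the proxy is 4 / (4 + 9/25) ≈ 0.92, so the slope is attenuated by roughly 8% - the same order as the bias reported in the abstract.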

    Heterogeneity of Research Results: A New Perspective From Which to Assess and Promote Progress in Psychological Science

    Heterogeneity emerges when multiple close or conceptual replications on the same subject produce results that vary more than expected given sampling error. Here we argue that unexplained heterogeneity reflects a lack of coherence between the concepts applied and data observed, and therefore a lack of understanding of the subject matter. Typical levels of heterogeneity thus offer a useful but neglected perspective on the levels of understanding achieved in psychological science. Focusing on continuous outcome variables, we surveyed heterogeneity in 150 meta-analyses from cognitive, organizational, and social psychology and 57 multiple close replications. Heterogeneity proved to be very high in meta-analyses, with powerful moderators being conspicuously absent. Population effects in the average meta-analysis vary from small to very large for reasons that are typically not understood. In contrast, heterogeneity was moderate in close replications. A newly identified relationship between heterogeneity and effect size allowed us to make predictions about expected heterogeneity levels. We discuss important implications for the formulation and evaluation of theories in psychology. On the basis of insights from the history and philosophy of science, we argue that the reduction of heterogeneity is important for progress in psychology and its practical applications, and we suggest changes to our collective research practice toward this end.
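    For continuous outcomes, heterogeneity of this kind is conventionally summarized by Cochran's Q, the between-study variance τ², and I². A minimal sketch with made-up effect sizes, using the standard DerSimonian-Laird formulas (not the authors' pipeline):

```python
# Cochran's Q, I^2, and tau^2 (DerSimonian-Laird) for k studies.
effects = [0.30, 0.45, 0.10, 0.60, 0.25]    # illustrative standardized mean differences
variances = [0.02, 0.03, 0.02, 0.05, 0.04]  # their sampling variances

w = [1.0 / v for v in variances]            # fixed-effect (inverse-variance) weights
mu_fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)

Q = sum(wi * (e - mu_fe) ** 2 for wi, e in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)

tau2 = max(0.0, (Q - df) / c)               # between-study variance
I2 = max(0.0, (Q - df) / Q) * 100.0         # % of variation beyond sampling error

print(f"Q={Q:.2f} on {df} df, tau^2={tau2:.4f}, I^2={I2:.1f}%")
```

    I² near 0 means the replications vary no more than sampling error predicts; the high heterogeneity the abstract reports for meta-analyses corresponds to large I² and τ².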

    Large Angle Satellite Attitude Maneuvers

    Two methods are proposed for performing large angle reorientation maneuvers. The first method is based upon Euler's rotation theorem; an arbitrary reorientation is ideally accomplished by rotating the spacecraft about a line which is fixed in both the body and in space. This scheme has been found to be best suited for the case in which the initial and desired attitude states have small angular velocities. The second scheme is more general in that a general class of transition trajectories is introduced which, in principle, allows transfer between arbitrary orientation and angular velocity states. The method generates transition maneuvers in which the uncontrolled (free) initial and final states are matched in orientation and angular velocity. The forced transition trajectory is obtained by using a weighted average of the unforced forward integration of the initial state and the unforced backward integration of the desired state. The current effort centers on practical validation of this second class of maneuvers. Of particular concern is enforcement of given control system constraints and methods for suboptimization by proper selection of maneuver initiation and termination times. Analogous reorientation strategies which force smooth transition in angular momentum and/or rotational energy are under consideration.
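    The second scheme lends itself to a compact sketch. For a torque-free single-axis example (states and duration below are hypothetical, not from the paper), the transition trajectory is a weighted average of the unforced forward integration of the initial state and the unforced backward integration of the desired state; a cubic blending weight with zero slope at both ends makes orientation and angular velocity match at t = 0 and t = T.

```python
# Free (torque-free) single-axis attitude: theta(t) = theta0 + omega0 * t.
theta_i, omega_i = 0.0, 0.05       # initial orientation (rad) and rate (rad/s)
theta_f, omega_f = 1.2, -0.02      # desired orientation and rate
T = 60.0                           # maneuver duration (s)

def s(t):
    """Cubic blending weight: s(0)=0, s(T)=1, s'(0)=s'(T)=0."""
    u = t / T
    return 3.0 * u ** 2 - 2.0 * u ** 3

def theta(t):
    fwd = theta_i + omega_i * t        # unforced forward integration
    bwd = theta_f + omega_f * (t - T)  # unforced backward integration
    return (1.0 - s(t)) * fwd + s(t) * bwd

# Boundary check: the blend reproduces both free states exactly.
print(f"theta(0)={theta(0.0):.3f}, theta(T)={theta(T):.3f}")  # -> 0.000 and 1.200
```

    Because the weight and its derivative vanish at the endpoints, the maneuver departs and arrives exactly on the two free trajectories; the control torque is whatever produces the second derivative of the blended θ(t).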

    Why do we need 14C inter-comparisons?: The Glasgow 14C inter-comparison series, a reflection over 30 years

    Radiocarbon measurement is a well-established, routinely used, yet complex series of inter-linked procedures. The degree of sample pre-treatment varies considerably depending on the material, the methods of processing pre-treated material vary across laboratories, and the detection of 14C at low levels remains challenging. As in any complex measurement process, the questions of quality assurance and quality control become paramount, both internally, i.e. within a laboratory, and externally, across laboratories. The issue of comparability of measurements (and thus bias, accuracy and precision of measurement) from the diverse laboratories is one that has been the focus of considerable attention for some time, both within the 14C community and the wider user communities. In the early years of the technique, when there was only a small number of laboratories in existence, inter-comparisons would function on an ad hoc basis, usually involving small numbers of laboratories (e.g. Otlet et al., 1980). However, as more laboratories were set up and the detection methods were further developed (e.g. new AMS facilities), the need for more systematic work was recognised. The international efforts to create a global calibration curve also require the use of data generated by different laboratories at different times, so that evidence of laboratory offsets is needed to inform curve formation. As a result of these factors, but also as part of general good laboratory practice, including laboratory benchmarking and quality assurance, the 14C community has undertaken a wide-scale, far-reaching and evolving programme of global inter-comparisons, to the benefit of laboratories and users alike. This paper looks at some of that history and considers what has been achieved in the past 30 years.

    How can we model subsurface stormflow at the catchment scale if we cannot measure it?

    Subsurface stormflow (SSF) can be a dominant run‐off generation process in humid mountainous catchments (e.g., Bachmair & Weiler, 2011; Blume & van Meerveld, 2015; Chifflard, Didszun, & Zepp, 2008). Generally, SSF develops in structured soils where bedrock or a less permeable soil layer is overlaid by a more permeable soil layer and vertically percolating water is deflected, at least partially, in a lateral downslope direction due to the slope inclination. SSF can also occur when groundwater levels rise into more permeable soil layers and water flows laterally through the more permeable layers to the stream (“transmissivity feedback mechanism”; Bishop, Grip, & O'Neill, 1990). The different existing terms for SSF in the hydrological literature, such as shallow subsurface run‐off, interflow, lateral flow, or soil water flow, reflect the different underlying process concepts developed in various experimental studies in different environments by using different experimental approaches at different spatial and temporal scales (Weiler, McDonnell, Tromp‐van Meerveld, & Uchida, 2005). Intersite comparisons and the extraction of general rules for SSF generation and its controlling factors are still lacking, which hampers the development of appropriate approaches for modelling SSF. But appropriate prediction of SSF is essential due to its clear influence on run‐off generation at the catchment scale (e.g., Chifflard et al., 2010; Zillgens, Merz, Kirnbauer, & Tilch, 2005), on the formation of floods (e.g., Markart et al., 2013, 2015) and on the transport of nutrients or pollutants from the hillslopes into surface water bodies (Zhao, Tang, Zhao, Wang, & Tang, 2013).
However, a precise simulation of SSF in models requires an accurate process understanding, including knowledge about water pathways, residence times, magnitude of water fluxes, or the spatial origin of SSF within a given catchment, because such factors determine the transport of subsurface water and solutes to the stream. But due to its occurrence in the subsurface and its spatial and temporal variability, determining and quantifying the processes generating SSF is a challenging task, as they cannot be observed directly. Therefore, it is logical to ask whether we can really model SSF correctly if we cannot measure it well enough on the scale of interest (Figure 1). This commentary reflects critically on whether current experimental concepts and modelling approaches are sufficient to predict the contribution of SSF to the run‐off at the catchment scale. This applies in particular to the underlying processes, controlling factors, modelling approaches, research gaps, and innovative strategies to trace SSF across different scales.
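    As a deliberately simplified illustration of one common way SSF is idealized in models, a two-layer bucket can represent the transmissivity feedback: lateral flow switches on only once storage rises above the less permeable layer into the more permeable soil. The threshold, rate constant, and storm sequence below are hypothetical.

```python
# Minimal two-layer bucket sketch of subsurface stormflow (SSF).
threshold = 40.0   # mm of storage at which water enters the more permeable layer
k_lateral = 0.3    # 1/day, lateral drainage rate of the permeable layer
storage = 20.0     # mm, initial storage above the impeding layer

rain = [0, 5, 30, 45, 10, 0, 0, 0]   # mm/day, a hypothetical storm sequence
ssf = []
for p in rain:
    storage += p                             # vertical recharge from rainfall
    excess = max(0.0, storage - threshold)   # water held in the permeable layer
    q = k_lateral * excess                   # lateral (downslope) flux to the stream
    storage -= q
    ssf.append(q)

print([round(q, 1) for q in ssf])
```

    The point of the sketch is only the threshold behaviour: no SSF at all until storage crosses the impeding layer, then a flashy, recession-like response - exactly the kind of process that is hard to infer from surface observations alone.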

    Stellar dust production and composition in the Magellanic Clouds

    The dust reservoir in the interstellar medium of a galaxy is constantly being replenished by dust formed in the stellar winds of evolved stars. Due to their proximity, the nearby irregular dwarf galaxies known as the Magellanic Clouds provide an opportunity to obtain a global picture of the dust production in galaxies. The Small and Large Magellanic Clouds have been mapped with the Spitzer Space Telescope from 3.6 to 160 μm, and these wavelengths are especially suitable to study thermal dust emission. In addition, a large number of individual evolved stars have been targeted for 5-40 μm spectroscopy, revealing the mineralogy of these sources. Here I present an overview of the work done on determining the total dust production rate in the Large and Small Magellanic Clouds, as well as a first attempt at revealing the global composition of the freshly produced stardust. (Accepted for publication in Earth, Planets & Space.)

    The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions

    Training of neural networks for automated diagnosis of pigmented skin lesions is hampered by the small size and lack of diversity of available datasets of dermatoscopic images. We tackle this problem by releasing the HAM10000 ("Human Against Machine with 10000 training images") dataset. We collected dermatoscopic images from different populations, acquired and stored by different modalities. Given this diversity, we had to apply different acquisition and cleaning methods and developed semi-automatic workflows utilizing specifically trained neural networks. The final dataset consists of 10015 dermatoscopic images which are released as a training set for academic machine learning purposes and are publicly available through the ISIC archive. This benchmark dataset can be used for machine learning and for comparisons with human experts. Cases include a representative collection of all important diagnostic categories in the realm of pigmented lesions. More than 50% of lesions have been confirmed by pathology, while the ground truth for the rest of the cases was either follow-up, expert consensus, or confirmation by in vivo confocal microscopy.