
    Automatic real-time interpolation of radiation hazards: prototype and system architecture considerations

    Detecting and monitoring the development of radioactive releases in the atmosphere is important. In many European countries monitoring networks have been established to perform this task. In the Netherlands the National Radioactivity Monitoring network (NRM) was installed. Currently, point maps are used to interpret the data from the NRM. Automatically generating maps in real time would improve the interpretation of the data by giving the user a clear overview of the present radiological situation and providing an estimate of the radioactivity level at unmeasured locations. In this paper we present a prototype system that automatically generates real-time maps of radioactivity levels and presents the results in an interoperable way through a Web Map Service. The system is a first step towards an emergency management system and is suited primarily for data without large outliers. The automatic interpolation is done using universal kriging in combination with an automatic variogram fitting procedure. The focus is on mathematical and operational issues and on architectural considerations for improving the interoperability and portability of the prototype system.
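    The interpolation step described above can be sketched in a few lines of R. This is a minimal illustration assuming the gstat and automap packages and synthetic monitoring data; it is not the NRM prototype's actual code, and the trend formula, station locations and grid are placeholders.

```r
# Minimal sketch: automatic variogram fitting + universal kriging,
# assuming the gstat and automap packages (not the NRM system's code).
library(sp)
library(gstat)
library(automap)

# Illustrative stand-in for gamma dose-rate measurements at monitoring stations
set.seed(42)
obs <- data.frame(x = runif(100, 0, 100), y = runif(100, 0, 100))
obs$dose <- 50 + 0.1 * obs$x + rnorm(100, sd = 5)  # trend + noise
coordinates(obs) <- ~ x + y

# Prediction grid covering the monitored area
grd <- expand.grid(x = seq(0, 100, by = 2), y = seq(0, 100, by = 2))
coordinates(grd) <- ~ x + y
gridded(grd) <- TRUE

# Automatic variogram fitting: autofitVariogram tries several models and
# picks the best fit, so no manual tuning is needed for real-time operation
vgm_fit <- autofitVariogram(dose ~ x + y, obs)

# Universal kriging: a linear spatial trend in the coordinates plus a
# spatially correlated residual described by the fitted variogram
uk <- krige(dose ~ x + y, obs, grd, model = vgm_fit$var_model)

spplot(uk["var1.pred"], main = "Interpolated dose rate (illustrative)")
```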

    Quality of life, big data and the power of statistics

    The digital era has opened up new possibilities for data-driven research. This paper discusses big data challenges in environmental monitoring and reflects on the use of statistical methods in tackling these challenges for improving the quality of life in cities.

    Do Monetary Incentives Influence Users’ Behavior in Participatory Sensing?

    Participatory sensing combines the powerful sensing capabilities of current mobile devices with the mobility and intelligence of human beings, and as such has the potential to collect various types of information at a high spatial and temporal resolution. Success, however, relies entirely on the willingness and motivation of the users to carry out sensing tasks, and thus it is essential to incentivize the users’ active participation. In this article, we first present an open, generic participatory sensing framework (Citizense) which aims to make participatory sensing more accessible, flexible and transparent. Within the context of this framework we adopt three monetary incentive mechanisms which prioritize fairness for the users while maintaining simplicity and portability: fixed micro-payment, variable micro-payment and lottery. This incentive-enabled framework is then deployed in a large-scale, real-world case study, where 230 participants were exposed to 44 different sensing campaigns. By randomly distributing incentive mechanisms among participants and a subset of campaigns, we study the behavior of the overall population as well as the behavior of different subgroups divided by demographic information with respect to the various incentive mechanisms. As a result of our study, we can conclude that (1) in general, monetary incentives work to improve the participation rate; and (2) for the overall population, a descending order of effectiveness of the incentive mechanisms can be established: fixed micro-payment first, then lottery-style payout and finally variable micro-payment. These two conclusions hold for all the demographic subgroups, even though different internal distances between the incentive mechanisms are observed for different subgroups. Finally, a negative correlation between age and participation rate was found: older participants contribute less compared to their younger peers.
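    The three mechanisms differ only in how a reward is computed per completed task. A toy sketch in R, with all amounts, lottery odds and the quality factor hypothetical rather than taken from the Citizense deployment:

```r
# Toy sketch of the three payout rules; amounts and odds are hypothetical,
# not taken from the Citizense study.
payout <- function(mechanism, n_tasks, quality = 1) {
  switch(mechanism,
    fixed    = n_tasks * 0.50,                      # flat rate per task
    variable = n_tasks * 0.50 * quality,            # scaled by assessed quality
    lottery  = sum(runif(n_tasks) < 0.05) * 10.00,  # each task is a ticket
    stop("unknown mechanism")
  )
}

# Random assignment of mechanisms to participants, as in the study design
participants <- sprintf("p%03d", 1:230)
assignment <- sample(c("fixed", "variable", "lottery"),
                     length(participants), replace = TRUE)

set.seed(1)
payout("fixed", n_tasks = 12)          # deterministic reward
payout("variable", 12, quality = 0.8)  # depends on contribution quality
payout("lottery", 12)                  # small chance of a larger prize
```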

    Comparing adaptive and fixed bandwidth-based kernel density estimates in spatial cancer epidemiology

    Background: Monitoring spatial disease risk (e.g. identifying risk areas) is of great relevance in public health research, especially in cancer epidemiology. A common strategy uses case-control studies and estimates a spatial relative risk function (sRRF) via kernel density estimation (KDE). This study was set up to evaluate the sRRF estimation methods, comparing fixed with adaptive bandwidth-based KDE, in their ability to detect ‘risk areas’ with case data from a population-based cancer registry. Methods: The sRRF was estimated within a defined area, using locational information on incident cancer cases and on a spatial sample of controls, drawn from a high-resolution population grid known to underestimate the resident population in urban centers. The spatial extents of these areas with underestimated resident population were quantified with population reference data and used in this study as ‘true risk areas’. Sensitivity and specificity analyses were conducted by spatial overlay of the ‘true risk areas’ and the significant (α = .05) p-contour lines obtained from the sRRF. Results: We observed that the fixed bandwidth-based sRRF was distinguished by a conservative behavior in identifying these urban ‘risk areas’, that is, a reduced sensitivity but increased specificity due to oversmoothing, as compared to the adaptive risk estimator. In contrast, the latter appeared more competitive through variance stabilization, resulting in a higher sensitivity, while the specificity was equal to that of the fixed risk estimator. Halving the originally determined bandwidths led to a simultaneous improvement in sensitivity and specificity for the adaptive sRRF, while the specificity was reduced for the fixed estimator. Conclusion: The fixed risk estimator combines an oversmoothing tendency in urban areas with an overestimation of the risk in rural areas. The use of an adaptive bandwidth regime attenuated this pattern, but led in general to a higher false positive rate because, in our study design, the majority of true risk areas were located in urban areas. However, there is a strong need for further optimization of bandwidth selection methods, especially for the adaptive sRRF.
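    A minimal sketch of the fixed-versus-adaptive comparison, assuming the R packages spatstat and sparr (a standard implementation of this estimator, not necessarily the study's own code) and synthetic case-control data:

```r
# Spatial relative risk function with fixed vs adaptive bandwidths,
# assuming the sparr package; the point patterns are synthetic.
library(spatstat)
library(sparr)

set.seed(7)
win <- owin(c(0, 1), c(0, 1))
cases    <- rpoispp(150, win = win)  # stand-in for geocoded cancer cases
controls <- rpoispp(300, win = win)  # stand-in for population-grid controls

h0 <- OS(superimpose(cases, controls))  # common global/pilot bandwidth

# Fixed-bandwidth estimate: one smoothing level everywhere
# (tends to oversmooth dense urban areas)
rr_fixed <- risk(cases, controls, h0 = h0, adapt = FALSE, tolerate = TRUE)

# Adaptive estimate: bandwidths shrink where points are dense,
# stabilising the variance in sparsely populated rural areas
rr_adapt <- risk(cases, controls, h0 = h0, adapt = TRUE, tolerate = TRUE)

# tolerate = TRUE adds asymptotic p-value contours, analogous to the
# significant p-contour lines overlaid on the 'true risk areas' above
plot(rr_adapt)
```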

    Uncertainty propagation between web services – a case study using the eHabitat WPS to identify unique ecosystems

    Protected areas (PAs) are designed to protect ecosystems and their associated species against anthropogenic threats. When assessing the importance of the current network of PAs, and when considering new areas which should be protected, one of the criteria is the uniqueness of the ecosystems found inside the existing or planned PA when compared to other parks. As a helping tool for park managers and potential funders, eHabitat has been designed using the Open Geospatial Consortium’s (OGC) Web Processing Service (WPS) interface specification. It allows end-users to compute, using different data and models, the likelihood of finding ecosystems with similar properties, as well as the potential changes these areas are exposed to under different climate change scenarios. The most important input parameters, typically thematic geospatial “indicator layers” characterizing the ecosystem, are provided to the WPS as references using standardised web services or catalogues. This allows for a virtually infinite number of combinations to describe these ecosystems. However, the layers used can range from geophysical data captured through remote sensing to socio-economic indicators. eHabitat is therefore exposed to a broad range of types and levels of uncertainty. Assessing these uncertainties, and further propagating them when the service is included in a chain of model services, is a key aspect in the context of the Model Web. The use of the Uncertainty Markup Language (UncertML), developed within the UncertWeb project to promote interoperability between data and models with quantified uncertainty, will be presented, together with different approaches for encoding and visualising uncertainty information. Retrieving feedback on intermediate processing results, such as input layer histograms or variability based on the supplied uncertainty information, will also be discussed.
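    The propagation idea can be illustrated with a simple Monte Carlo chain in R: inputs with quantified uncertainty are sampled, pushed through two chained "model services", and the resulting ensemble is summarised. The distributions and toy models below are illustrative stand-ins, not the eHabitat WPS or its UncertML encodings:

```r
# Monte Carlo uncertainty propagation through a chain of two toy
# "model services"; all distributions and models are illustrative.
set.seed(123)
n <- 10000

# Service A input: an indicator-layer value with quantified uncertainty,
# e.g. described in UncertML as a Gaussian (mean 0.6, sd 0.1)
ndvi <- rnorm(n, mean = 0.6, sd = 0.1)

# Service A: habitat similarity model (illustrative)
similarity <- pmin(pmax(1 - abs(ndvi - 0.55) / 0.3, 0), 1)

# Service B: consumes A's output and adds its own parametric uncertainty
climate_shift <- rnorm(n, mean = 0.1, sd = 0.05)
future_similarity <- pmin(pmax(similarity - climate_shift, 0), 1)

# The chained output is a realisation ensemble; summarise it as the
# quantified uncertainty handed to the next service or to the user
quantile(future_similarity, c(0.025, 0.5, 0.975))
hist(future_similarity, main = "Propagated uncertainty (illustrative)")
```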

    Detecting cancer clusters in a regional population with local cluster tests and Bayesian smoothing methods: a simulation study

    Background: There is a rising public and political demand for prospective cancer cluster monitoring. But there is little empirical evidence on the performance of established cluster detection tests under conditions of small and heterogeneous sample sizes and varying spatial scales, such as is the case for most existing population-based cancer registries. Therefore, this simulation study aims to evaluate different cluster detection methods, implemented in the open source environment R, in their ability to identify clusters of lung cancer using real-life data from an epidemiological cancer registry in Germany. Methods: Risk surfaces were constructed with two different spatial cluster types, representing a relative risk of RR = 2.0 or RR = 4.0 in relation to the overall background incidence of lung cancer, separately for men and women. Lung cancer cases were sampled from this risk surface as geocodes using an inhomogeneous Poisson process. The realisations of the cancer cases were analysed within a small spatial scale (census tracts, N = 1983) and within an aggregated large spatial scale (communities, N = 78). Subsequently, they were submitted to the cluster detection methods. The test accuracy for cluster location was determined in terms of detection rates (DR), false-positive (FP) rates and positive predictive values. The Bayesian smoothing models were evaluated using ROC curves. Results: With a moderate risk increase (RR = 2.0), local cluster tests showed better DR (> 0.90 for both spatial aggregation scales) and lower FP rates (both < 0.05) than the Bayesian smoothing methods. When the cluster RR was raised four-fold, the local cluster tests showed better DR with lower FP rates only at the small spatial scale. At the large spatial scale, the Bayesian smoothing methods, especially those implementing a spatial neighbourhood, showed a substantially lower FP rate than the cluster tests. However, the risk increases at this scale were mostly diluted by data aggregation. Conclusion: High-resolution spatial scales seem more appropriate as a data basis for cancer cluster testing and monitoring than the commonly used aggregated scales. We suggest the development of a two-stage approach that combines methods with high detection rates as a first-line screening with methods of higher predictive ability at the second stage.
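    The sampling step of such a simulation can be sketched with the spatstat package in R: a risk surface with a known cluster drives an inhomogeneous Poisson process, and the same realisation is then aggregated to a fine and a coarse lattice. All values below are illustrative, not the registry data:

```r
# Sampling cases from a risk surface via an inhomogeneous Poisson process,
# then aggregating to two spatial scales; values are illustrative.
library(spatstat)
set.seed(99)

win <- owin(c(0, 1), c(0, 1))
base_rate <- 200  # background intensity (cases per unit area)

# Risk surface: RR = 2.0 inside a circular cluster, 1.0 elsewhere
lambda <- function(x, y) {
  in_cluster <- (x - 0.7)^2 + (y - 0.3)^2 < 0.1^2
  base_rate * ifelse(in_cluster, 2.0, 1.0)
}

cases <- rpoispp(lambda, lmax = base_rate * 2, win = win)  # geocoded cases

# Aggregate the same realisation to a fine and a coarse lattice,
# mimicking census tracts vs. communities
tracts      <- quadratcount(cases, nx = 40, ny = 40)
communities <- quadratcount(cases, nx = 8,  ny = 8)

# Coarse aggregation dilutes the risk increase, as the study observes
max(tracts) / mean(tracts)
max(communities) / mean(communities)
```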

    Bayesian networks for raster data (BayNeRD): plausible reasoning from observations

    This paper describes the basic functioning and implementation of a computer-aided Bayesian Network (BN) method that is able to incorporate experts’ knowledge for the benefit of remote sensing applications and other raster data analyses: Bayesian Network for Raster Data (BayNeRD). Using a case study of soybean mapping in Mato Grosso State, Brazil, BayNeRD was tested to evaluate its capability to support the understanding of a complex phenomenon through plausible reasoning based on data observation. Observations of Crop Enhanced Index (CEI) values for the current and previous crop years, soil type, terrain slope, and distance to the nearest road and water body were used to calculate the probability of soybean presence for the entire Mato Grosso State, showing strong adherence to the official data. CEI values were the most influential variables in the calculated probability of soybean presence, underscoring the potential of remote sensing as a source of data. Moreover, an overall accuracy of over 91% confirmed the high accuracy of the thematic map derived from the calculated probability values. BayNeRD allows the expert to model the relationships among several observed variables, outputs variable importance information, handles incomplete and disparate forms of data, and offers a basis for plausible reasoning from observations. The BayNeRD algorithm has been implemented in R software and can be found on the internet.
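    The per-pixel update behind such a raster Bayesian network reduces to Bayes' rule over discretized evidence layers. A minimal sketch in base R, with hypothetical probability tables and a conditional-independence assumption between layers; these are not BayNeRD's learned values:

```r
# Per-pixel Bayesian update over discretized raster evidence; the prior
# and conditional probability tables below are hypothetical.
prior_soy <- 0.30  # P(soybean) over the study area

# P(evidence class | soybean status), e.g. elicited from experts
# or estimated from reference data
p_cei   <- list(soy = c(low = 0.1, high = 0.9), not = c(low = 0.7, high = 0.3))
p_slope <- list(soy = c(flat = 0.8, steep = 0.2), not = c(flat = 0.5, steep = 0.5))

posterior_soy <- function(cei_class, slope_class) {
  like_soy <- p_cei$soy[cei_class] * p_slope$soy[slope_class]
  like_not <- p_cei$not[cei_class] * p_slope$not[slope_class]
  unname(like_soy * prior_soy /
         (like_soy * prior_soy + like_not * (1 - prior_soy)))
}

# Apply to every pixel of small illustrative rasters
cei   <- matrix(c("high", "high", "low", "low"),   2, 2)
slope <- matrix(c("flat", "steep", "flat", "steep"), 2, 2)
prob_map <- matrix(mapply(posterior_soy, cei, slope), 2, 2)
prob_map  # per-pixel P(soybean | observations)
```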