
    Values and uncertainties in climate prediction, revisited

    Philosophers continue to debate both the actual and the ideal roles of values in science. Recently, Eric Winsberg has offered a novel, model-based challenge to those who argue that the internal workings of science can and should be kept free from the influence of social values. He contends that model-based assignments of probability to hypotheses about future climate change are unavoidably influenced by social values. I raise two objections to Winsberg’s argument, neither of which can wholly undermine its conclusion but each of which suggests that his argument exaggerates the influence of social values on estimates of uncertainty in climate prediction. I then show how a more traditional challenge to the value-free ideal seems tailor-made for the climate context.

    Multifractal nature of plume structure in high Rayleigh number convection

    The geometrically different plan forms of near-wall plume structure in turbulent natural convection, visualised by driving the convection with concentration differences across a membrane, are shown to have a common multifractal spectrum of singularities for Rayleigh numbers in the range $10^{10}$-$10^{11}$ at a Schmidt number of 602. The scaling is seen over a length-scale range of $2^5$ and is independent of the Rayleigh number, the flux, the strength and nature of the large-scale flow, and the aspect ratio. Similar scaling is observed for the plume structures obtained in the presence of a weak flow across the membrane. This common nontrivial spatial scaling is proposed to be due to the same underlying generating process of the near-wall plume structures. Comment: 11 pages, 16 figures. Accepted in Journal of Fluid Mechanics. Revised version; added two figures and related discussion at the referee's suggestion.
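    For reference, the multifractal spectrum of singularities mentioned above is conventionally defined through the scaling of coarse-grained moments of a measure; the choice of measure (here, presumably, the plume content per box) is an assumption not stated in the abstract. A minimal statement of the standard box-counting formalism:

    % Standard multifractal (box-counting) formalism, textbook form, not taken from the paper.
    % \mu_i(\epsilon) is the normalized measure in box i of size \epsilon covering the plume field.
    \begin{align}
      Z_q(\epsilon) &= \sum_i \mu_i(\epsilon)^{\,q} \sim \epsilon^{\tau(q)} , \\
      \alpha(q)     &= \frac{\mathrm{d}\tau}{\mathrm{d}q} , \\
      f(\alpha)     &= q\,\alpha(q) - \tau(q) .
    \end{align}

    The abstract's claim is then that the curve $f(\alpha)$ estimated this way collapses onto a common shape across the quoted range of Rayleigh numbers, fluxes, large-scale flow conditions, and aspect ratios.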

    A hierarchical image segmentation algorithm based on an observation scale

    Hierarchical image segmentation provides a region-oriented scale-space, i.e., a set of image segmentations at different detail levels in which the segmentations at finer levels are nested with respect to those at coarser levels. Most image segmentation algorithms, such as region-merging algorithms, rely on a merging criterion that does not lead to a hierarchy; in addition, tuning their parameters can be difficult. In this work, we propose a hierarchical graph-based image segmentation relying on a criterion popularized by Felzenszwalb and Huttenlocher. Quantitative and qualitative assessments of the method on the Berkeley image database show the efficiency, ease of use, and robustness of our method.
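    The abstract does not spell out the hierarchical construction, but the merging rule it builds on is the Felzenszwalb-Huttenlocher criterion: two components are merged when the lightest edge between them is no heavier than the smaller of the two components' internal difference plus a scale term k/|C|. Below is a minimal, non-hierarchical sketch of that criterion on a 4-connected intensity graph; the grid construction, edge weights, and parameter k are illustrative assumptions, and the hierarchy described above would be obtained by tracking merges as the observation scale varies.

    import numpy as np

    def fh_segment(img, k=300.0):
        """Greedy graph-based merging in the spirit of Felzenszwalb-Huttenlocher.
        img: 2D float array; k: scale parameter (larger k -> coarser regions)."""
        h, w = img.shape
        # Build 4-connected edges weighted by absolute intensity difference.
        edges = []
        for y in range(h):
            for x in range(w):
                i = y * w + x
                if x + 1 < w:
                    edges.append((abs(img[y, x] - img[y, x + 1]), i, i + 1))
                if y + 1 < h:
                    edges.append((abs(img[y, x] - img[y + 1, x]), i, i + w))
        edges.sort()

        parent = list(range(h * w))
        size = [1] * (h * w)
        internal = [0.0] * (h * w)       # max edge weight inside each component

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a

        for wgt, a, b in edges:          # process edges by increasing weight
            ra, rb = find(a), find(b)
            if ra == rb:
                continue
            # FH criterion: merge if the connecting edge is not heavier than
            # either component's internal difference plus its scale term k/|C|.
            if wgt <= min(internal[ra] + k / size[ra], internal[rb] + k / size[rb]):
                parent[rb] = ra
                size[ra] += size[rb]
                internal[ra] = wgt       # edges are sorted, so wgt is the new max
            # (a hierarchy would be obtained by recording merges over a range of k)
        labels = np.array([find(i) for i in range(h * w)]).reshape(h, w)
        return labels

    # toy usage on a synthetic two-region image
    demo = np.zeros((20, 20)); demo[:, 10:] = 10.0
    print(len(np.unique(fh_segment(demo, k=50.0))))   # expect a small number of regions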

    Extracting Hierarchies of Search Tasks & Subtasks via a Bayesian Nonparametric Approach

    Many search queries originate from some real-world information need or task. To improve the search experience of end users, it is important to have accurate representations of these tasks. As a result, a significant amount of research has been devoted to extracting proper task representations, so that search systems can help users complete their tasks and provide better query suggestions, recommendations, satisfaction prediction, and task-based personalization. Most existing task extraction methodologies represent tasks as flat structures. However, tasks often have multiple subtasks associated with them, and a more natural representation of tasks is a hierarchy, where each task can be composed of multiple (sub)tasks. To this end, we propose an efficient Bayesian nonparametric model for extracting hierarchies of such tasks and subtasks. We evaluate our method on real-world query log data through both quantitative and crowdsourced experiments, and highlight the importance of considering task/subtask hierarchies. Comment: 10 pages. Accepted at SIGIR 2017 as a full paper.
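    The specific Bayesian nonparametric construction is not described in the abstract. As a generic illustration of the building block such models typically rest on, the sketch below samples task assignments from a Chinese Restaurant Process, which lets the number of tasks grow with the data rather than being fixed in advance; the function name and parameters are illustrative, not the paper's.

    import random

    def crp_assignments(n_queries, alpha=1.0, seed=0):
        """Chinese Restaurant Process: assigns each of n_queries items to a task
        cluster; the number of clusters is unbounded and grows with the data."""
        rng = random.Random(seed)
        counts = []                      # counts[k] = queries already in task k
        assignments = []
        for i in range(n_queries):
            # probability of joining task k is counts[k] / (i + alpha),
            # probability of opening a new task is alpha / (i + alpha)
            weights = counts + [alpha]
            k = rng.choices(range(len(weights)), weights=weights)[0]
            if k == len(counts):
                counts.append(1)         # a new task is created
            else:
                counts[k] += 1
            assignments.append(k)
        return assignments

    print(crp_assignments(20, alpha=2.0))

    In a nested variant of this process, each query would draw a path of assignments (task, then subtask), which is the kind of task/subtask hierarchy targeted above.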

    Doctor of Philosophy

    The goal of this dissertation is to improve flood risk management by enhancing the computational capability of two-dimensional models and incorporating data and parameter uncertainty to more accurately represent flood risk. Computational performance is improved by using a Graphics Processing Unit (GPU) approach, programmed in NVIDIA's Compute Unified Device Architecture (CUDA), to create a new two-dimensional hydrodynamic model, Flood2D-GPU. The model, based on the shallow water equations, is designed to execute simulations faster than the same code programmed serially for a Central Processing Unit (CPU). Testing the code against an identical CPU-based version demonstrated the improved computational efficiency of the GPU-based version (a speedup of more than 80 times). Given the substantial computational efficiency of Flood2D-GPU, a new Monte Carlo based flood risk modeling framework was created. The framework operates by performing many Flood2D-GPU simulations using randomly sampled model parameters and input variables. The Monte Carlo flood risk modeling framework is demonstrated in this dissertation by simulating the flood risk associated with a 1% annual probability flood event on the Swannanoa River in Buncombe County near Asheville, North Carolina. The Monte Carlo approach is able to represent a wide range of possible scenarios, thus identifying areas outside a single simulation's inundation extent that are susceptible to flood hazards. Further, the single-simulation results underestimated the degree of flood hazard for the case study region when compared with the flood hazard map produced by the Monte Carlo approach. The Monte Carlo flood risk modeling framework is also used to determine the relative benefits of flood management alternatives for flood risk reduction. The objective of the analysis is to investigate whether specific annual exceedance probability flood events can be identified that yield greater annualized flood risk reduction than an arbitrarily selected discrete annual probability event. To test this hypothesis, a study was conducted on the Swannanoa River to determine the distribution of annualized risk as a function of annual probability. Simulating flow rates sampled from a continuous flow distribution provided the necessary range of annual probability events. The results showed a variation in annualized risk as a function of annual probability and, as hypothesized, a maximum annualized risk reduction could be identified for a specific annual probability. For the Swannanoa case study, the continuous flow distribution suggested targeting flood proofing to control the 12% exceedance probability event to maximize the reduction of annualized risk. This suggests that the arbitrary use of the 1% exceedance event may not, in some cases, be the most efficient allocation of resources to reduce annualized risk.
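    The GPU solver itself is not reproduced here. The sketch below is only a minimal illustration of the Monte Carlo framework described above: sample uncertain inputs, run the hydrodynamic model for each sample, and aggregate the results into per-cell inundation probabilities. The run_flood_model stub, the sampled distributions, and the depth threshold are placeholder assumptions standing in for Flood2D-GPU and the study's data.

    import numpy as np

    def run_flood_model(peak_flow, roughness, grid_shape=(50, 50)):
        # Placeholder for a Flood2D-GPU run: the real model solves the 2D shallow
        # water equations on the GPU; here a toy function returns a max-depth grid.
        y = np.linspace(0, 1, grid_shape[0])[:, None]   # distance from the channel
        x = np.linspace(0, 1, grid_shape[1])[None, :]   # distance along the channel
        depth = peak_flow * (0.05 + roughness) * np.exp(-4 * y) * (1 + 0.2 * np.sin(3 * x)) - 0.1
        return np.clip(depth, 0.0, None)

    def monte_carlo_flood_risk(n_runs=500, depth_threshold=0.1, seed=42):
        rng = np.random.default_rng(seed)
        inundated = np.zeros((50, 50))
        for _ in range(n_runs):
            # Sample uncertain inputs: peak flow and roughness (illustrative priors).
            peak_flow = rng.lognormal(mean=1.0, sigma=0.5)
            roughness = rng.uniform(0.03, 0.08)
            depth = run_flood_model(peak_flow, roughness)
            inundated += depth > depth_threshold
        return inundated / n_runs    # fraction of runs in which each cell floods

    prob_map = monte_carlo_flood_risk()
    print("cells inundated in >10% of runs:", int((prob_map > 0.10).sum()))

    Annualized risk, as discussed above, would then weight damages from such probability maps across events sampled from the continuous flow distribution.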

    Fears and realisations of employment insecurity

    We investigate the validity of subjective data on expectations of job loss and on the probability of re-employment consequent on job loss, by examining associations between expectations and realisations. We find that subjective expectations data reveal private information about subsequent job loss, and that the expectations data perform better with numerical descriptors than with ordinal verbal descriptors. On average, employees overestimate the chance of losing their job, while they underestimate the difficulty of finding another job as good as the one currently held. We recommend that survey items on employment insecurity be explicit about the risk being investigated and use a cardinal probability scale with discrete numerical descriptors.
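    None of the survey variables or estimators are given in the abstract. The sketch below only illustrates the kind of expectations-versus-realisations check described, on simulated data: compare the mean stated probability of job loss with the realised rate, then look at realised rates by decile of the stated probability to see whether expectations carry predictive information. All distributions and the bias term are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5_000

    # Simulated stated probability of job loss and the realised outcome one wave
    # later; the bias term mimics the over-estimation reported in the abstract.
    true_risk = rng.beta(1.5, 10.0, size=n)
    stated = np.clip(true_risk + rng.normal(0.05, 0.05, size=n), 0, 1)
    realised = rng.random(n) < true_risk

    print("mean stated probability:", round(stated.mean(), 3))
    print("realised job-loss rate :", round(realised.mean(), 3))

    # Calibration by decile of the stated probability: if expectations carry
    # private information, realised rates should rise across deciles.
    deciles = np.digitize(stated, np.quantile(stated, np.linspace(0.1, 0.9, 9)))
    for d in range(10):
        mask = deciles == d
        print(f"decile {d}: stated={stated[mask].mean():.2f} realised={realised[mask].mean():.2f}")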

    Using Expert Models in Human Reliability Analysis - A Dependence Assessment Method Based on Fuzzy Logic

    In human reliability analysis (HRA), dependence analysis refers to assessing the influence of an operator's failure on one task on the failure probabilities of subsequent tasks. A commonly used approach is the Technique for Human Error Rate Prediction (THERP). The assessment of the dependence level in THERP is a highly subjective judgment based on general rules for the influence of five main factors. A frequently used alternative method extends the THERP model with decision trees. Such trees should increase the repeatability of the assessments, but they simplify the relationships among the factors and the dependence level; moreover, the basis for these simplifications and the resulting tree is difficult to trace. The aim of this work is a method for dependence assessment in HRA that captures the rules used by experts to assess dependence levels and incorporates this knowledge into an algorithm and a software tool to be used by HRA analysts. A fuzzy expert system (FES) underlies the method. The method and the associated expert elicitation process are demonstrated with a working model. The expert rules are elicited systematically and converted into a traceable, explicit, and computable model. Anchor situations are provided as guidance for the HRA analyst's judgment of the input factors. The expert model and the FES-based dependence assessment method make the expert rules accessible to the analyst in a usable and repeatable way, with an explicit and traceable basis.
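    The elicited fuzzy rule base itself is not given in the abstract. For context, THERP's standard equations map a dependence level and a nominal human error probability (HEP) to a conditional HEP; the sketch below encodes those equations and shows one illustrative way a fuzzy membership over adjacent dependence levels could be defuzzified into a single value. The membership degrees and the weighted-average defuzzification are assumptions for illustration, not the paper's elicited expert model.

    # THERP conditional human error probability (CHEP) given failure of the
    # preceding task, as a function of the nominal HEP and the dependence level.
    THERP_CHEP = {
        "zero":     lambda hep: hep,
        "low":      lambda hep: (1 + 19 * hep) / 20,
        "moderate": lambda hep: (1 + 6 * hep) / 7,
        "high":     lambda hep: (1 + hep) / 2,
        "complete": lambda hep: 1.0,
    }

    def fuzzy_chep(hep, memberships):
        """Blend the THERP equations with fuzzy membership degrees over the
        dependence levels (illustrative weighted-average defuzzification)."""
        total = sum(memberships.values())
        return sum(m * THERP_CHEP[level](hep) for level, m in memberships.items()) / total

    nominal_hep = 0.003
    # Toy output of a fuzzy expert system: mostly 'low', partly 'moderate' dependence.
    memberships = {"low": 0.7, "moderate": 0.3}
    print(round(fuzzy_chep(nominal_hep, memberships), 4))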

    Mapping Topographic Structure in White Matter Pathways with Level Set Trees

    Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization of these large and complex data sets remains a challenge. We show that level set trees, which provide a concise representation of the hierarchical mode structure of probability density functions, offer a statistically principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N=30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber tracks and for an efficient segmentation of the tracks whose empirical accuracy is comparable to standard nonparametric clustering methods. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output.
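    As a small, self-contained illustration of what a level set tree records (how the connected components of the superlevel sets {x : f(x) >= lambda} of a density estimate appear and split as the level rises), the sketch below sweeps levels over a one-dimensional kernel density estimate of simulated endpoint coordinates; the data, bandwidth, and grid are illustrative, and the work above deals with far higher-dimensional tractography output.

    import numpy as np

    def superlevel_components(f, level):
        """Connected components (as index intervals) of {x : f(x) >= level} on a grid."""
        above = f >= level
        comps, start = [], None
        for i, a in enumerate(above):
            if a and start is None:
                start = i
            if not a and start is not None:
                comps.append((start, i - 1)); start = None
        if start is not None:
            comps.append((start, len(f) - 1))
        return comps

    # Toy bimodal sample standing in for streamline-endpoint coordinates.
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(2, 0.7, 600)])

    # Simple Gaussian kernel density estimate on a grid.
    grid = np.linspace(-5, 5, 500)
    bw = 0.3
    dens = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bw) ** 2).mean(axis=1) / (bw * np.sqrt(2 * np.pi))

    # Sweep the level upward; the number of components traces the tree's branching.
    for level in np.linspace(0.02, dens.max() * 0.95, 6):
        comps = superlevel_components(dens, level)
        print(f"level={level:.3f}  modes={len(comps)}")

    The level set tree is the record of which components persist, split, or vanish as the level increases; in the setting above, streamline endpoints (or entire streamlines via pseudo-densities) replace the simulated sample.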