
    Visualizing dimensionality reduction of systems biology data

    One of the challenges in analyzing high-dimensional expression data is the detection of important biological signals. A common approach is to apply a dimension reduction method, such as principal component analysis. Typically, after applying such a method, the data are projected and visualized in the new coordinate system using scatter plots or profile plots. These methods give good results if the data have properties that become visible in the new coordinate system but were hard to detect in the original one. Often, however, a single method does not suffice to capture all important signals, so several methods addressing different aspects of the data need to be applied. We have developed a framework for linear and non-linear dimension reduction methods within our visual analytics pipeline SpRay. It includes measures that assist the interpretation of the factorization result, and different visualizations of these measures can be combined with functional annotations that support the interpretation of the results. We show applications to high-resolution time series microarray data from the antibiotic-producing organism Streptomyces coelicolor, as well as to microarray data measuring expression in cells with a normal karyotype and cells with trisomies of human chromosomes 13 and 21.
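
    As a minimal sketch of the kind of analysis described above (not the SpRay pipeline itself), the following projects a samples-by-genes expression matrix with PCA and plots the samples in the reduced coordinate system; the data here are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2000))   # placeholder: 100 samples x 2000 genes

# project onto the first two principal components
pca = PCA(n_components=2)
Z = pca.fit_transform(X)

# scatter plot of the samples in the new coordinate system
plt.scatter(Z[:, 0], Z[:, 1], s=10)
plt.xlabel(f"PC1 ({pca.explained_variance_ratio_[0]:.1%} of variance)")
plt.ylabel(f"PC2 ({pca.explained_variance_ratio_[1]:.1%} of variance)")
plt.title("PCA projection of expression profiles")
plt.show()
```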

    Control strategies for road risk mitigation in kinetic traffic modelling

    In this paper we present a Boltzmann-type kinetic approach to the modelling of road traffic, which includes control strategies, at the level of microscopic binary interactions, aimed at mitigating speed-dependent road risk factors. This description is meant to mimic a system of driver-assist vehicles which, by responding locally to the actions of their drivers, can influence the large-scale traffic dynamics, including those related to collective road risk and safety.
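
    The following is a heavily simplified sketch of one Boltzmann-type binary interaction with a driver-assist correction; the interaction rule and all parameters (gamma, u_des, theta) are illustrative assumptions, not the paper's model.

```python
import numpy as np

def binary_interaction(v, w, rng, gamma=0.2, u_des=0.8, theta=0.05):
    """Post-interaction speeds (v', w') for a pair of vehicles.

    v      speed of the controlled (driver-assist) vehicle, scaled to [0, 1]
    w      speed of the vehicle it interacts with
    gamma  strength of the speed adjustment per interaction
    u_des  speed the assist controller nudges the vehicle toward
    theta  variance of a small stochastic fluctuation in driver behaviour
    """
    eta = rng.normal(0.0, np.sqrt(theta))
    # the driver reacts to the other vehicle; the controller adds a
    # correction toward u_des that mitigates speed-dependent risk
    v_new = v + gamma * ((w - v) + (u_des - v)) + eta
    w_new = w + gamma * (v - w)
    return np.clip(v_new, 0.0, 1.0), np.clip(w_new, 0.0, 1.0)

# crude Monte Carlo stand-in for solving the kinetic equation: iterate
# random binary interactions over a population of speeds
rng = np.random.default_rng(1)
speeds = rng.uniform(0.0, 1.0, size=10_000)
for _ in range(50_000):
    i, j = rng.choice(speeds.size, size=2, replace=False)
    speeds[i], speeds[j] = binary_interaction(speeds[i], speeds[j], rng)
print(f"mean speed after interactions: {speeds.mean():.3f}")
```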

    Semi-automatic selection of summary statistics for ABC model choice

    A central statistical goal is to choose between alternative explanatory models of data. In many modern applications, such as population genetics, it is not possible to apply standard methods based on evaluating the likelihood functions of the models, as these are numerically intractable. Approximate Bayesian computation (ABC) is a commonly used alternative in such situations. ABC simulates data x for many parameter values under each model and compares them to the observed data x_obs. More weight is placed on models under which S(x) is close to S(x_obs), where S maps data to a vector of summary statistics. Previous work has shown that the choice of S is crucial to the efficiency and accuracy of ABC. This paper provides a method to select good summary statistics for model choice. It uses a preliminary step: simulating many x values from all models and fitting regressions to them with the model as response. The resulting model weight estimators are used as S in an ABC analysis. Theoretical results are given to justify this as approximating low-dimensional sufficient statistics. A substantive application is presented: choosing between competing coalescent models of demographic growth for Campylobacter jejuni in New Zealand using multi-locus sequence typing data.
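
    A toy sketch of the pilot-regression idea described above, under simplifying assumptions: the two candidate models, the raw features, and the acceptance threshold are stand-ins chosen for illustration, not the coalescent models or statistics from the application.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(model, n=50):
    # toy stand-ins: same location, different tail behaviour
    return rng.normal(0, 1, n) if model == 0 else rng.standard_t(df=3, size=n)

def raw_features(x):
    # candidate statistics the regression is allowed to combine
    return [x.mean(), x.std(), np.abs(x).max(), np.mean(x ** 4)]

# pilot step: simulate from both models, regress the model label on the
# raw features; the fitted model-weight estimator becomes S
labels = rng.integers(0, 2, size=2000)
features = np.array([raw_features(simulate(m)) for m in labels])
reg = LogisticRegression(max_iter=1000).fit(features, labels)

def S(x):
    return reg.predict_proba(np.array([raw_features(x)]))[0, 1]

# rejection ABC for model choice using S as the summary statistic
x_obs = simulate(1)                      # pretend this is the observed data
s_obs = S(x_obs)
accepted = []
for _ in range(5000):
    m = int(rng.integers(0, 2))          # uniform prior over the two models
    if abs(S(simulate(m)) - s_obs) < 0.05:
        accepted.append(m)
print(f"estimated posterior P(model 1 | x_obs) = {np.mean(accepted):.2f}")
```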

    Computing wildfire behaviour metrics from CFD simulation data

    In this article we demonstrate a new post-processing methodology that can be used to analyse CFD wildfire simulation outputs in a model-independent manner. CFD models produce a great deal of quantitative output but require additional post-processing to calculate commonly used wildfire behaviour metrics, and such post-processing has so far been model-specific. Our method takes advantage of the 3D renderings that are a common output of such models and provides a means of calculating important fire metrics, such as rate of spread and flame height, using image processing techniques. This approach can be applied alike to different models and to real-world fire behaviour datasets, thus providing a new framework for model validation. Furthermore, the information obtained is not limited to average values over the complete domain; spatially and temporally explicit metric distributions are provided. This feature supports posterior statistical analyses, ultimately contributing to more detailed and rigorous fire behaviour studies.
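
    A minimal sketch of the image-based idea, assuming rendered frames have already been thresholded into binary fire masks (True = burning pixel); the function names, the direction of spread, and the pixel-to-metre scale are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rate_of_spread(mask_t0, mask_t1, dt_s, m_per_px):
    """Head-fire rate of spread (m/s) from two binary fire masks,
    assuming the fire spreads toward increasing column index."""
    front0 = np.where(mask_t0.any(axis=0))[0].max()  # rightmost burning column
    front1 = np.where(mask_t1.any(axis=0))[0].max()
    return (front1 - front0) * m_per_px / dt_s

def flame_height(mask, m_per_px):
    """Flame height (m) as the vertical extent of burning pixels
    (image row 0 is the top of the frame)."""
    rows = np.where(mask.any(axis=1))[0]
    return (rows.max() - rows.min()) * m_per_px

# usage with two synthetic 100x200 frames, 1 s apart, 0.5 m per pixel
f0 = np.zeros((100, 200), dtype=bool); f0[60:90, 20:50] = True
f1 = np.zeros((100, 200), dtype=bool); f1[50:90, 20:65] = True
print(f"rate of spread: {rate_of_spread(f0, f1, dt_s=1.0, m_per_px=0.5):.2f} m/s")
print(f"flame height:   {flame_height(f1, m_per_px=0.5):.1f} m")
```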

    MANAGING KNOWLEDGE AND DATA FOR A BETTER DECISION IN PUBLIC ADMINISTRATION

    In the current context, society is dominated by the rapid development of computer networks and by the integration, at the organizational level, of the services and facilities offered by the Internet. The success of an organization depends largely on the quality and quantity of the information available to it for making timely decisions that meet current needs. The need for a collaborative environment within the central administration leads to the unification of resources and instruments around the Center of Government, increasing both the quality and the efficiency of decision-making, in particular by reducing the time spent on decisions and upgrading the decision-making act.
    Keywords: administration, strategy, decision, complex systems, management, infrastructure, e-government, information society, government platform.