
    Grid Global Behavior Prediction

    Complexity has always been one of the most important issues in distributed computing. From the first clusters to grid and now cloud computing, dealing correctly and efficiently with system complexity is the key to taking technology a step further. In this sense, global behavior modeling is an innovative methodology aimed at understanding grid behavior. The main objective of this methodology is to synthesize the grid's vast, heterogeneous nature into a simple but powerful behavior model, represented in the form of a single, abstract entity with a global state. Global behavior modeling has proved to be very useful in effectively managing grid complexity, but in many cases deeper knowledge is needed. It generates a descriptive model that could be greatly improved if extended not only to explain behavior but also to predict it. In this paper we present a prediction methodology whose objective is to define the techniques needed to create global behavior prediction models for grid systems. This global behavior prediction can benefit grid management, especially in areas such as fault tolerance and job scheduling. The paper presents experimental results obtained in real scenarios in order to validate this approach.
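
    As a hedged illustration of the kind of prediction the abstract describes, the sketch below fits a first-order autoregressive model to a scalar global-state signal and rolls it forward. The choice of signal (an aggregate load metric) and of model family (AR(1)) are our assumptions; the abstract does not commit to either.

```python
import numpy as np

def fit_ar1(series):
    """Least-squares fit of x[t+1] = a * x[t] + b for a scalar
    global-state signal sampled at regular intervals."""
    x, y = np.asarray(series[:-1]), np.asarray(series[1:])
    a, b = np.polyfit(x, y, 1)  # slope, intercept
    return a, b

def predict_global_state(series, steps=1):
    """Roll the fitted AR(1) model forward `steps` intervals."""
    a, b = fit_ar1(series)
    x = series[-1]
    for _ in range(steps):
        x = a * x + b
    return x

# Example: predict a synthetic global load metric two intervals ahead.
load = [0.50, 0.55, 0.61, 0.58, 0.64, 0.70]
print(predict_global_state(load, steps=2))
```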

    The Rabin index of parity games

    We study the descriptive complexity of parity games by taking into account the coloring of their game graphs whilst ignoring their ownership structure. Colored game graphs are identified if they determine the same winning regions and strategies for all ownership structures of nodes. The Rabin index of a parity game is the minimum of the maximal color, taken over all equivalent coloring functions. We show that deciding whether the Rabin index is at least k is in PTIME for k = 1 but NP-hard for all fixed k > 1. We present an EXPTIME algorithm that computes the Rabin index by simplifying its input coloring function. When simple-cycle detection is replaced with general cycle detection in that algorithm, its output over-approximates the Rabin index in polynomial time. Experimental results show that this approximation yields good values in practice. (Comment: In Proceedings GandALF 2013, arXiv:1307.416)
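
    The over-approximation step the abstract mentions hinges on a cycle check inside a color-bounded subgraph: a node's color only matters for the Rabin index if it is the maximal color on some cycle through that node. The sketch below implements that check and a crude one-sweep lowering rule; the graph encoding and the lowering rule are our assumptions, not the authors' algorithm.

```python
def on_bounded_cycle(graph, colors, v):
    """Check whether v lies on a cycle that stays within the subgraph
    of nodes whose color does not exceed colors[v].
    `graph` maps each node to an iterable of successors."""
    cap = colors[v]
    stack = [w for w in graph[v] if colors[w] <= cap]
    seen = set()
    while stack:
        u = stack.pop()
        if u == v:
            return True  # found a path back to v: v is on such a cycle
        if u in seen:
            continue
        seen.add(u)
        stack.extend(w for w in graph[u] if colors[w] <= cap)
    return False

def relax_once(graph, colors):
    """One sweep of an (assumed) simplification rule: a color that is
    never maximal on any cycle through its node carries no weight for
    the Rabin index, so lower it (here, crudely, to 0)."""
    return {v: (c if c == 0 or on_bounded_cycle(graph, colors, v) else 0)
            for v, c in colors.items()}

# Example: every cycle through node 0 also passes the color-5 node,
# so node 0's color 2 is never maximal on a cycle and gets lowered.
g = {0: [1], 1: [2], 2: [0]}
print(relax_once(g, {0: 2, 1: 0, 2: 5}))  # {0: 0, 1: 0, 2: 5}
```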

    An experimental paradigm for team decision processes

    The study of distributed information processing and decision making is presently hampered by two factors: (1) the inherent complexity of the mathematical formulation of decentralized problems has prevented the development of models that could be used to predict performance in a distributed environment; and (2) the lack of comprehensive scientific empirical data on human team decision making has hindered the development of significant descriptive models. As part of a comprehensive effort to find a new framework for multi-human decision-making problems, a novel experimental research paradigm was developed involving human teams in decision-making tasks. Attempts to construct parts of an integrated model with ideas from queueing networks, team theory, distributed estimation, and decentralized resource management are described.

    Distribution of Mutual Information from Complete and Incomplete Data

    Mutual information is widely used, in a descriptive way, to measure the stochastic dependence of categorical random variables. In order to address questions such as the reliability of the descriptive value, one must consider sample-to-population inferential approaches. This paper deals with the posterior distribution of mutual information, as obtained in a Bayesian framework by a second-order Dirichlet prior distribution. The exact analytical expression for the mean, and analytical approximations for the variance, skewness, and kurtosis are derived. These approximations have a guaranteed accuracy level of order O(1/n^3), where n is the sample size. Leading-order approximations for the mean and the variance are derived in the case of incomplete samples. The derived analytical expressions allow the distribution of mutual information to be approximated reliably and quickly. In fact, the derived expressions can be computed with the same order of complexity needed for descriptive mutual information. This makes the distribution of mutual information a concrete alternative to descriptive mutual information in many applications which would benefit from moving to the inductive side. Some of these prospective applications are discussed, and one of them, namely feature selection, is shown to perform significantly better when inductive mutual information is used. (Comment: 26 pages, LaTeX, 5 figures, 4 tables)
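
    To ground the distinction the abstract draws, here is a minimal sketch contrasting descriptive mutual information with the simplest Bayesian surrogate: evaluating the same formula at the posterior-mean cell probabilities of a symmetric Dirichlet prior. The paper derives the exact posterior mean and higher moments with O(1/n^3) accuracy; the plug-in below omits all correction terms, and the uniform prior parameter alpha is our assumption.

```python
import numpy as np

def mutual_information(p):
    """Descriptive mutual information (in nats) of a joint probability
    table p over two categorical variables."""
    px = p.sum(axis=1, keepdims=True)  # row marginal
    py = p.sum(axis=0, keepdims=True)  # column marginal
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / (px * py)[mask])).sum())

def plug_in_posterior_mi(counts, alpha=1.0):
    """Crude Bayesian surrogate: MI evaluated at the posterior-mean
    cell probabilities (n_ij + alpha) / (n + alpha * r * s) under a
    symmetric Dirichlet(alpha) prior. Only the leading-order term of
    the paper's exact posterior mean."""
    counts = np.asarray(counts, dtype=float)
    p = (counts + alpha) / (counts.sum() + alpha * counts.size)
    return mutual_information(p)

# Example: 2x2 contingency table of observed counts.
n = [[30, 10],
     [ 5, 25]]
print(plug_in_posterior_mi(n))
```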

    Behavioral Effects in Individual Decisions of Network Formation

    Network formation constitutes an important part of many social and economic processes, but relatively little is known about how individuals make their linking decisions. This article provides an experimental investigation of behavioral effects in individual decisions of network formation. Our findings demonstrate that individuals systematically simplify the more complex components of network payoff in their linking decisions. Specifically, they focus on only part of the normative payoff, namely their own direct payoff, and tend to ignore indirect payoff and payoff for others in the network. Additionally, individuals use descriptive behavioral traits of link choice alternatives to guide their choices: they are sensitive to whether an alternative involves link deletion or creation and whether it concerns an isolated or a central node. Furthermore, we find that complexity of one type can moderate individuals’ dealing with a complex feature of another type. These behavioral effects have important implications for researchers and managers working in areas that involve network formation. (JEL: A)
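
    The direct/indirect split the abstract relies on can be made concrete with a toy decomposition. The sketch below follows a distance-based connections model with decay delta and link cost, which is purely our assumption; the article's actual payoff specification is not given in the abstract.

```python
from collections import deque

def bfs_distances(adj, src):
    """Distances from src in an undirected graph {node: set(neighbors)}."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def payoff_parts(adj, i, delta=0.5, cost=0.3):
    """Split node i's payoff into the components the abstract says
    subjects weight very differently: the direct part (benefit from
    neighbors minus link costs) and the indirect part (discounted
    benefit from nodes at distance >= 2)."""
    dist = bfs_distances(adj, i)
    direct = delta * sum(1 for d in dist.values() if d == 1) - cost * len(adj[i])
    indirect = sum(delta ** d for d in dist.values() if d >= 2)
    return direct, indirect

# Example: a line network 0-1-2; node 0 reaches node 2 only indirectly.
g = {0: {1}, 1: {0, 2}, 2: {1}}
print(payoff_parts(g, 0))  # (direct, indirect) = (0.2, 0.25)
```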

    Matching bias in syllogistic reasoning: Evidence for a dual-process account from response times and confidence ratings

    We examined matching bias in syllogistic reasoning by analysing response times, confidence ratings, and individual differences. Roberts’ (2005) “negations paradigm” was used to generate conflict between the surface features of problems and the logical status of conclusions. The experiment replicated matching bias effects in conclusion evaluation (Stupple & Waterhouse, 2009), revealing increased processing times for matching/logic “conflict problems”. Results paralleled chronometric evidence from the belief bias paradigm indicating that logic/belief conflict problems take longer to process than non-conflict problems (Stupple, Ball, Evans, & Kamal-Smith, 2011). Individuals’ response times for conflict problems also showed patterns of association with the degree of overall normative responding. Acceptance rates, response times, metacognitive confidence judgements, and individual differences all converged in supporting dual-process theory. This is noteworthy because dual-process predictions about heuristic/analytic conflict in syllogistic reasoning generalised from the belief bias paradigm to a situation where matching features of conclusions, rather than beliefs, were set in opposition to logic.

    Technical note: “Bit by bit”: a practical and general approach for evaluating model computational complexity vs. model performance

    One of the main objectives of the scientific enterprise is the development of well-performing yet parsimonious models for all natural phenomena and systems. In the 21st century, scientists usually represent their models, hypotheses, and experimental observations using digital computers. Measuring performance and parsimony of computer models is therefore a key theoretical and practical challenge for 21st-century science. “Performance” here refers to a model's ability to reduce predictive uncertainty about an object of interest. “Parsimony” (or complexity) comprises two aspects: descriptive complexity – the size of the model itself, which can be measured by the disk space it occupies – and computational complexity – the model's effort to provide output. Descriptive complexity is related to inference quality and generality; computational complexity is often a practical and economic concern for limited computing resources. In this context, this paper has two distinct but related goals. The first is to propose a practical method of measuring computational complexity using the utility software “Strace”, which counts the total number of memory visits while running a model on a computer. The second goal is to propose the “bit by bit” method, which combines measuring computational complexity by “Strace” and measuring model performance by information loss relative to observations, both in bits. For demonstration, we apply the “bit by bit” method to watershed models representing a wide diversity of modelling strategies (artificial neural network, auto-regressive, process-based, and others). We demonstrate that computational complexity as measured by “Strace” is sensitive to all aspects of a model, such as the size of the model itself, the input data it reads, its numerical scheme, and its time stepping. We further demonstrate that for each model the bit counts for computational complexity exceed those for performance by several orders of magnitude, and that the differences among the models in both computational complexity and performance can be explained by their setup and are in accordance with expectations. We conclude that measuring computational complexity by “Strace” is practical, and it is also general in the sense that it can be applied to any model that can be run on a digital computer. We further conclude that the “bit by bit” approach is general in the sense that it measures two key aspects of a model in the single unit of bits. We suggest that it can be enhanced by additionally measuring a model's descriptive complexity, also in bits.
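
    The two measurements the note combines can each be approximated in a few lines, as sketched below: instrumenting a model run with strace and counting traced events, and scoring performance as information loss in bits. Two assumptions to flag: counting strace log lines counts system calls rather than the memory visits the authors count, and the discretised-forecast format in the scoring function is ours, not the paper's exact scoring setup.

```python
import subprocess
import numpy as np

def strace_event_count(cmd, logfile="trace.log"):
    """Run a model command under strace and count traced events (one
    per log line). ASSUMPTION: this counts system calls, not memory
    visits; it only illustrates the pattern 'instrument the run,
    count events in bulk'."""
    subprocess.run(["strace", "-o", logfile] + cmd, check=True)
    with open(logfile) as f:
        return sum(1 for _ in f)

def information_loss_bits(forecast_probs, observed_bins):
    """Performance in bits: summed surprisal -log2 p(obs) of a
    probabilistic forecast over discretised observations."""
    p = np.array([forecast_probs[t][b] for t, b in enumerate(observed_bins)])
    return float(-np.log2(p).sum())

# Example: two time steps, three bins; observations fell in bins 0 and 2.
f = [[0.7, 0.2, 0.1],
     [0.1, 0.3, 0.6]]
print(information_loss_bits(f, [0, 2]))  # about 1.25 bits
```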