
    “An ethnographic seduction”: how qualitative research and Agent-based models can benefit each other

    We provide a general analytical framework for empirically informed agent-based simulations. This methodology gives present-day agent-based models sound insight into the behavior of social agents, an insight that statistical data often fail to provide, at least at the micro level and for hidden and sensitive populations. In the other direction, simulations can provide qualitative researchers in sociology, anthropology and other fields with valuable tools for: (a) testing the consistency, and pushing the boundaries, of specific theoretical frameworks; (b) replicating and generalizing results; and (c) providing a platform for cross-disciplinary validation of results.

    Singular Gaussian Measures in Detection Theory

    No abstract available.

    Multi-Objective Approaches to Markov Decision Processes with Uncertain Transition Parameters

    Markov decision processes (MDPs) are a popular model for performance analysis and optimization of stochastic systems. The parameters describing the stochastic behavior of an MDP are estimated from empirical observations of a system, so their values are not known precisely. Different types of MDPs with uncertain, imprecise or bounded transition rates or probabilities and rewards exist in the literature. Commonly, the analysis of models with uncertainties amounts to searching for the most robust policy, meaning that the goal is to generate a policy with the greatest lower bound on performance (or, symmetrically, the lowest upper bound on costs). However, hedging against an unlikely worst case may lead to losses in other situations. In general, one is interested in policies that behave well in all situations, which results in a multi-objective view on decision making. In this paper, we consider policies for the expected discounted reward measure of MDPs with uncertain parameters. In particular, the approach is defined for bounded-parameter MDPs (BMDPs) [8]. In this setting the worst, best and average case performances of a policy are analyzed simultaneously, which yields a multi-scenario multi-objective optimization problem. The paper presents and evaluates approaches to compute the pure Pareto optimal policies in the value vector space. Comment: 9 pages, 5 figures, preprint for VALUETOOLS 201
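    The worst-case side of the multi-scenario analysis can be sketched in a few lines. The following is my own toy illustration, not the paper's algorithm: robust value iteration for a made-up two-state bounded-parameter MDP, where each transition probability is an interval [lo, hi] and the worst case pushes as much probability mass as the intervals allow onto the lowest-valued successor states.

```python
def worst_case_dist(intervals, values):
    """Pick the distribution inside the interval bounds that
    minimizes the expected value over successor states."""
    p = [lo for lo, _ in intervals]
    slack = 1.0 - sum(p)  # probability mass still to distribute
    for i in sorted(range(len(intervals)), key=lambda i: values[i]):
        lo, hi = intervals[i]
        extra = min(hi - lo, slack)
        p[i] += extra
        slack -= extra
    return p

def robust_value_iteration(P, R, gamma=0.9, iters=200):
    """P[s][a] = interval bounds on transition probabilities,
    R[s][a] = immediate reward; returns worst-case state values."""
    n = len(R)
    V = [0.0] * n
    for _ in range(iters):
        V = [
            max(
                R[s][a] + gamma * sum(
                    pi * V[j]
                    for j, pi in enumerate(worst_case_dist(P[s][a], V))
                )
                for a in range(len(R[s]))
            )
            for s in range(n)
        ]
    return V

# Toy two-state, two-action BMDP (all numbers invented):
P = [
    [[(0.6, 0.8), (0.2, 0.4)], [(0.1, 0.3), (0.7, 0.9)]],
    [[(0.5, 0.5), (0.5, 0.5)], [(0.0, 0.2), (0.8, 1.0)]],
]
R = [[1.0, 0.5], [0.0, 2.0]]
V_worst = robust_value_iteration(P, R)
```

    Running the same iteration with a best-case (value-maximizing) or midpoint distribution would give the other scenarios the abstract mentions; the Pareto analysis over those vectors is beyond this sketch.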

    Distinguishing coherent atomic processes using wave mixing

    We are able to clearly distinguish the processes responsible for enhanced low-intensity atomic Kerr nonlinearity, namely coherent population trapping and coherent population oscillations, in experiments performed on the Rb D1 line, where one or the other process dominates under appropriate conditions. The potential of this new approach based on wave mixing for probing coherent atomic media is discussed. It allows the new spectral components to be detected with sub-kHz resolution, which is well below the laser linewidth limit. Spatial selectivity and enhanced sensitivity make this method useful for testing dilute cold atomic samples. Comment: 9 pages, 5 figures

    Extended Cognition, The New Mechanists’ Mutual Manipulability Criterion, and The Challenge of Trivial Extendedness

    Many authors have turned their attention to the notion of constitution to determine whether the hypothesis of extended cognition (EC) is true. One common strategy is to make sense of constitution in terms of the new mechanists’ mutual manipulability account (MM). In this paper I will show that MM is insufficient. The Challenge of Trivial Extendedness arises due to the fact that mechanisms for cognitive behaviors are extended in a way that should not count as verifying EC. This challenge can be met by adding a necessary condition: cognitive constituents satisfy MM and they are what I call behavior unspecific.

    Towards Validating Risk Indicators Based on Measurement Theory (Extended version)

    Due to the lack of quantitative information and for cost-efficiency, most risk assessment methods use partially ordered values (e.g. high, medium, low) as risk indicators. In practice it is common to validate risk indicators by asking stakeholders whether they make sense. This way of validation is subjective and thus error-prone. If the metrics are wrong (not meaningful), they may lead system owners to distribute security investments inefficiently. For instance, in an extended enterprise this may mean over-investing in service level agreements or obtaining a contract that provides a lower security level than the system requires. Therefore, when validating risk assessment methods it is important to validate the meaningfulness of the risk indicators they use. In this paper we investigate how to validate the meaningfulness of risk indicators based on measurement theory. Furthermore, to analyze the applicability of measurement theory to risk indicators, we analyze the indicators used by a risk assessment method developed specifically for assessing confidentiality risks in networks of organizations.
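    The measurement-theoretic notion of meaningfulness behind this abstract can be made concrete. The following toy example is my own construction, not taken from the paper: on an ordinal scale, a claim is meaningful only if it stays true under every order-preserving recoding of the scale values, which rules out statements based on means but not those based on medians.

```python
import statistics

ORIGINAL = {"low": 1, "medium": 2, "high": 3}
RECODED = {"low": 1, "medium": 10, "high": 100}  # same order, different numbers

# Hypothetical risk indicator readings for two systems:
system_a = ["low", "low", "high"]
system_b = ["medium", "medium", "medium"]

def mean_risk(scale, xs):
    return statistics.mean(scale[x] for x in xs)

def median_risk(scale, xs):
    return statistics.median(scale[x] for x in xs)

# "A has lower mean risk than B" flips when the scale is recoded,
# so comparing means of ordinal indicators is not meaningful:
mean_claim_original = mean_risk(ORIGINAL, system_a) < mean_risk(ORIGINAL, system_b)
mean_claim_recoded = mean_risk(RECODED, system_a) < mean_risk(RECODED, system_b)

# "A has lower median risk than B" survives the recoding,
# so the median comparison is meaningful on an ordinal scale:
median_claim_original = median_risk(ORIGINAL, system_a) < median_risk(ORIGINAL, system_b)
median_claim_recoded = median_risk(RECODED, system_a) < median_risk(RECODED, system_b)
```

    With the original coding system A's mean (about 1.67) is below B's (2), but after recoding it jumps to 34 versus 10, so the mean-based claim is an artifact of the arbitrary numbers rather than of the ordering.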

    Using genetic algorithms to generate test sequences for complex timed systems

    The generation of test data for state-based specifications is a computationally expensive process. This problem is magnified if we consider that time constraints have to be taken into account to govern the transitions of the studied system. The main goal of this paper is to introduce a complete methodology, supported by tools, that addresses this issue by representing the test data generation problem as an optimisation problem. We use heuristics to generate test cases. In order to assess the suitability of our approach we consider two different case studies: a communication protocol and the scientific application BIPS3D. We give details concerning how the test case generation problem can be presented as a search problem and automated. Genetic algorithms (GAs) and random search are used to generate test data and evaluate the approach. GAs outperform random search and seem to scale well as the problem size increases. It is worth mentioning that we use a very simple fitness function that can be easily adapted for use with other evolutionary search techniques.
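    The search-based setup the abstract describes can be sketched on a toy example. Everything below is my own invention (the paper's case studies, timed model and fitness function are not reproduced): a tiny state machine whose transitions carry simplified timing guards, a fitness function that rewards the deepest state reached, and a minimal GA with crossover and point mutation.

```python
import random

def run(seq):
    """Toy timed protocol: reach state 3 from state 0 via inputs
    (made up); some transitions only fire for certain delays."""
    state = 0
    for inp, delay in seq:
        if state == 0 and inp == "a":
            state = 1
        elif state == 1 and inp == "b" and delay <= 2:  # timing guard
            state = 2
        elif state == 2 and inp == "c" and delay >= 1:  # timing guard
            state = 3
    return state

def fitness(seq):
    return run(seq)  # deepest state reached: higher is better

def random_seq(n=5):
    return [(random.choice("abc"), random.randint(0, 3)) for _ in range(n)]

def ga(pop_size=30, gens=40):
    pop = [random_seq() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, len(p1))    # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = random.randrange(len(child))      # point mutation
            child[i] = (random.choice("abc"), random.randint(0, 3))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga()
```

    A random-search baseline would simply call `random_seq` repeatedly and keep the best candidate; comparing how many fitness evaluations each needs to reach state 3 mirrors the kind of comparison the paper reports.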

    Joint Probabilistic Data Association-Feedback Particle Filter for Multiple Target Tracking Applications

    This paper introduces a novel feedback-control based particle filter for the solution of the filtering problem with data association uncertainty. The particle filter is referred to as the joint probabilistic data association-feedback particle filter (JPDA-FPF). The JPDA-FPF is based on the feedback particle filter introduced in our earlier papers. The remarkable conclusion of our paper is that the JPDA-FPF algorithm retains the innovation error-based feedback structure of the feedback particle filter, even with data association uncertainty in the general nonlinear case. The theoretical results are illustrated with the aid of two numerical example problems drawn from multiple target tracking applications. Comment: In Proc. of the 2012 American Control Conference