
    Mean-Payoff Optimization in Continuous-Time Markov Chains with Parametric Alarms

    Continuous-time Markov chains with alarms (ACTMCs) allow for alarm events that can be non-exponentially distributed. In parametric ACTMCs, the parameters of the alarm-event distributions are not given explicitly and can be subject to parameter synthesis. We present an algorithm solving the ε-optimal parameter synthesis problem for parametric ACTMCs with long-run average optimization objectives. Our approach reduces the problem to finding long-run average optimal strategies in semi-Markov decision processes (semi-MDPs) over a sufficiently fine discretization of the parameter (i.e., action) space. Since the set of actions in the discretized semi-MDP can be very large, a straightforward approach based on explicit action-space construction fails to solve even simple instances of the problem. The presented algorithm instead uses an enhanced policy iteration on symbolic representations of the action space. The soundness of the algorithm is established for parametric ACTMCs whose alarm-event distributions satisfy four mild assumptions, shown to hold in particular for uniform, Dirac, and Weibull distributions, but satisfied by many other distributions as well. An experimental implementation shows that the symbolic technique substantially improves the efficiency of the synthesis algorithm and allows instances of realistic size to be solved. Comment: This article is a full version of a paper accepted to the Conference on Quantitative Evaluation of SysTems (QEST) 201
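The explicit action-space baseline that the abstract contrasts against can be sketched on a toy semi-MDP: enumerate every stationary policy over a discretized action set and compute its long-run average reward (expected reward per unit time under the policy's stationary distribution). The model below is entirely made up for illustration and is not the paper's algorithm; the symbolic policy iteration exists precisely because this enumeration blows up on realistic instances.

```python
import itertools
import numpy as np

# Toy semi-MDP: per state, each action gives (transition probs,
# expected reward, expected holding time). In the paper's setting the
# actions would be discretized alarm-distribution parameter values.
ACTIONS = {
    0: {"a": ([0.5, 0.5], 1.0, 1.0), "b": ([0.0, 1.0], 3.0, 2.0)},
    1: {"c": ([1.0, 0.0], 0.0, 1.0)},
}

def stationary(P):
    """Stationary distribution of an irreducible chain P."""
    n = len(P)
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    return np.linalg.lstsq(A, b, rcond=None)[0]

def gain(policy):
    """Long-run average reward per unit time of a stationary policy."""
    P = np.array([ACTIONS[s][policy[s]][0] for s in sorted(ACTIONS)])
    r = np.array([ACTIONS[s][policy[s]][1] for s in sorted(ACTIONS)])
    t = np.array([ACTIONS[s][policy[s]][2] for s in sorted(ACTIONS)])
    pi = stationary(P)
    return pi @ r / (pi @ t)

def best_policy():
    """Explicit enumeration -- exponential in the number of states."""
    choices = [ACTIONS[s].keys() for s in sorted(ACTIONS)]
    return max(itertools.product(*choices), key=gain)

print(best_policy(), gain(best_policy()))
```

Even on this two-state example the policy space is a Cartesian product over states; under fine discretization of a continuous parameter, the per-state action sets grow large enough that this product becomes intractable.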

    A Bayesian approach to robust identification: application to fault detection

    In the control engineering field, the so-called robust identification techniques deal with the problem of obtaining not only a nominal model of the plant, but also an estimate of the uncertainty associated with the nominal model. Such a model of uncertainty is typically characterized as a region in the parameter space or as an uncertainty band around the frequency response of the nominal model. Uncertainty models have been widely used in the design of robust controllers and, more recently, their use in model-based fault detection procedures has been increasing. In this latter case, consistency between new measurements and the uncertainty region is checked; when an inconsistency is found, the existence of a fault is decided. There are two main approaches to the modeling of model uncertainty: deterministic/worst-case methods and stochastic/probabilistic methods. At present, there are a number of different methods, e.g., model error modeling, set-membership identification, and non-stationary stochastic embedding. In this dissertation we summarize the main procedures and illustrate their results by means of several examples from the literature. As a contribution, we propose a Bayesian methodology to solve the robust identification problem. The approach is highly unifying, since many robust identification techniques can be interpreted as particular cases of the Bayesian framework. Moreover, the methodology can deal with non-linear structures such as those derived from the use of observers. The obtained Bayesian uncertainty models are used to detect faults in a quadruple-tank process and in a three-bladed wind turbine.
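The consistency check described above can be sketched in miniature: infer a posterior over a single gain parameter of a linear plant model, then flag a new measurement as a fault when it falls outside the posterior predictive credible band. This toy example (conjugate Gaussian model, hand-picked data and threshold) only illustrates the idea of checking new measurements against an uncertainty model; it is not the dissertation's methodology.

```python
import numpy as np

# Hypothetical nominal plant: y = theta * x + noise, noise std known.
SIGMA = 0.2           # measurement noise std (assumed known)
TAU = 10.0            # std of the Gaussian prior on theta

# Identification data from the healthy plant (true theta near 2).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

# Conjugate Gaussian posterior over theta.
prec = 1.0 / TAU**2 + np.sum(x**2) / SIGMA**2
mu = (np.sum(x * y) / SIGMA**2) / prec
var = 1.0 / prec

def is_fault(x_new, y_new, k=3.0):
    """Flag a fault when the new measurement is inconsistent with the
    posterior predictive (outside the k-sigma band)."""
    pred_mean = mu * x_new
    pred_std = np.sqrt(x_new**2 * var + SIGMA**2)
    return abs(y_new - pred_mean) > k * pred_std

print(is_fault(5.0, 10.0))  # consistent with the nominal model
print(is_fault(5.0, 12.0))  # inconsistent -> fault decided
```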

    Financial Applications of Conditional Expectations (Finanční aplikace podmíněných očekávání)

    This dissertation examines different financial applications of conditional expectation estimators. In the first application, we provide theoretical motivation for the use of the moving average rule, one of the most popular trading tools among practitioners. In particular, we examine the conditional probability of price increments and study how this probability changes over time. In the second application, we present different approaches to evaluating the presence of arbitrage opportunities in the option market. In particular, we empirically investigate the well-known put-call parity no-arbitrage relation and the state price density. We first measure the violation of put-call parity as the difference in implied volatilities between call and put options. Furthermore, we propose alternative approaches to estimating the state price density under the classical hypotheses of the Black-Scholes model. In the third application, we investigate the implications of using conditional expectation estimators for portfolio theory. First, we focus on the approximation of the conditional expectation within large-scale portfolio selection problems. In this context, we propose a new consistent multivariate kernel estimator to approximate the conditional expectation, and we show how it can be used for the return approximation of large-scale portfolio problems. Moreover, the proposed estimator optimizes the bandwidth selection of kernel-type estimators, solving the classical selection problem. Second, we propose new performance measures based on the conditional expectation that take into account the heavy tails of return distributions. Third, we address the portfolio selection problem from the point of view of different non-satiable investors, namely risk-averse and risk-seeking investors. In particular, using a well-known ordering classification, we first identify different definitions of returns based on the investors' preferences. The new definitions of returns are based on the conditional expected value of random wealth assessed at different times. Finally, for each problem, we propose an empirical application of several admissible portfolio optimization problems using the US stock market.
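A standard baseline for the kernel conditional expectation estimators discussed above is the Nadaraya-Watson estimator, E[Y | X = x] ≈ Σᵢ K((x − xᵢ)/h) yᵢ / Σᵢ K((x − xᵢ)/h). The sketch below (univariate Gaussian kernel, fixed bandwidth) illustrates only the generic estimator; the thesis's multivariate estimator with optimized bandwidth selection is its own construction.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x0, h):
    """Kernel estimate of E[Y | X = x0] with a Gaussian kernel
    and fixed bandwidth h: a weighted average of the y's, with
    weights decaying in the distance of x's from x0."""
    w = np.exp(-0.5 * ((x0 - x_train) / h) ** 2)
    return np.sum(w * y_train) / np.sum(w)

# Sanity check on a noiseless quadratic relation y = x^2:
# the estimate at x0 = 0.5 should be close to 0.25.
xs = np.linspace(0.0, 1.0, 101)
ys = xs**2
print(nadaraya_watson(xs, ys, 0.5, h=0.05))
```

The classical selection problem the abstract mentions is visible here: the answer depends on the hand-picked bandwidth `h`, which trades bias (large `h`) against variance (small `h`).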

    IST Austria Thesis

    This dissertation concerns the automatic verification of probabilistic systems and of programs with arrays, by statistical and logical methods. Although statistical and logical methods are different in nature, we show that they can be successfully combined for system analysis. In the first part of the dissertation we present a new statistical algorithm for the verification of probabilistic systems with respect to unbounded properties, including linear temporal logic. Our algorithm often performs faster than previous approaches, while requiring less information about the system. In addition, our method can be generalized to unbounded quantitative properties such as mean-payoff bounds. In the second part, we introduce two techniques for comparing probabilistic systems. Probabilistic systems are typically compared using the notion of equivalence, which requires the systems to assign equal probability to all behaviors. However, this notion is often too strict: since probabilities are typically only estimated empirically, any imprecision may break the relation between processes. On the one hand, we propose to replace the Boolean notion of equivalence by a quantitative distance of similarity. For this purpose, we introduce a statistical framework for estimating distances between Markov chains based on their simulation runs, and we investigate which distances can be approximated in our framework. On the other hand, we propose to compare systems with respect to a new qualitative logic, which expresses that behaviors occur with probability one or with positive probability. This qualitative analysis is robust with respect to modeling errors and applicable to many domains. In the last part, we present a new quantifier-free logic for integer arrays which allows us to express counting. Counting properties are prevalent in array-manipulating programs; however, they cannot be expressed in the quantified fragments of the theory of arrays. We present a decision procedure for our logic and provide several complexity results.
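One simple instance of estimating a distance between Markov chains from simulation runs is to compare the empirical distributions of finite trajectories: sample length-k runs from each chain and take the total-variation distance between the two empirical trajectory laws. This sketch (two-state chains, fixed horizon) is a generic illustration of the idea, not the specific statistical framework or the distances studied in the thesis.

```python
from collections import Counter
import numpy as np

def sample_runs(P, k, n, rng):
    """Sample n trajectories of length k from the chain with
    transition matrix P, started in state 0."""
    runs = []
    for _ in range(n):
        s, traj = 0, [0]
        for _ in range(k - 1):
            s = rng.choice(len(P), p=P[s])
            traj.append(s)
        runs.append(tuple(traj))
    return runs

def tv_distance(runs_a, runs_b):
    """Total-variation distance between the empirical trajectory laws."""
    ca, cb = Counter(runs_a), Counter(runs_b)
    keys = set(ca) | set(cb)
    return 0.5 * sum(abs(ca[t] / len(runs_a) - cb[t] / len(runs_b)) for t in keys)

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.1, 0.9]])  # sticky chain
B = np.array([[0.5, 0.5], [0.5, 0.5]])  # fair-coin chain
d_same = tv_distance(sample_runs(A, 5, 2000, rng), sample_runs(A, 5, 2000, rng))
d_diff = tv_distance(sample_runs(A, 5, 2000, rng), sample_runs(B, 5, 2000, rng))
print(d_same, d_diff)  # d_same near 0, d_diff clearly positive
```

This also shows why the Boolean notion of equivalence is brittle: two samples of the same chain never yield exactly equal empirical probabilities, yet their estimated distance is small.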

    Contributions to the Security of Machine Learning (Contribuciones a la Seguridad del Aprendizaje Automático)

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Ciencias Matemáticas, defended on 05-11-2020. Machine learning (ML) applications have experienced unprecedented growth over the last two decades. However, the ever-increasing adoption of ML methodologies has revealed important security issues. Among these, vulnerabilities to adversarial examples, data instances crafted to fool ML algorithms, are especially important. Examples abound: it is relatively easy to fool a spam detector simply by misspelling spam words; obfuscation of malware code can make it seem legitimate; and simply adding stickers to a stop sign could make an autonomous vehicle classify it as a merge sign. The consequences could be catastrophic. Indeed, ML is designed to work in stationary and benign environments. However, in certain scenarios, the presence of adversaries that actively manipulate input data to fool ML systems and attain benefits breaks such stationarity requirements: training and operation conditions are no longer identical, undermining one of the fundamental hypotheses of ML. This creates a whole new class of security vulnerabilities that ML systems may face, and a new desirable property: adversarial robustness. If we are to trust operations based on ML outputs, it becomes essential that learning systems be robust to such adversarial manipulations...
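The adversarial-example mechanism described above can be illustrated with the classic fast-gradient-sign perturbation on a toy linear classifier: nudge each input feature by ε in the direction that increases the classifier's loss, and the predicted class flips. The weights, input, and ε below are made up for illustration; real attacks of this kind target trained deep models.

```python
import numpy as np

# Hypothetical trained logistic classifier: P(y=1|x) = sigmoid(w.x + b).
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    return int(w @ x + b > 0.0)

def fgsm(x, y_true, eps):
    """Fast gradient sign method: perturb x by eps in the sign of the
    loss gradient. For logistic loss the gradient w.r.t. x is
    (sigmoid(score) - y_true) * w."""
    score = w @ x + b
    grad = (1.0 / (1.0 + np.exp(-score)) - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 1.0])            # classified as 0 (score = -0.5)
x_adv = fgsm(x, y_true=0, eps=0.3)  # imperceptibly shifted to [1.3, 0.7]
print(predict(x), predict(x_adv))   # the small perturbation flips the class
```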

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The volumes also contain 7 tool papers, 6 tool demo papers, and 9 SV-COMP competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-Comp Tool Competition Papers.

    Distributional constraints on cognitive architecture

    Mental chronometry is a classical paradigm in cognitive psychology that uses response time and accuracy data in perceptual-motor tasks to elucidate the architecture and mechanisms of the cognitive processes underlying human decisions. The redundant signals paradigm investigates response behavior in experimental tasks where an integration of signals is required for successful performance. The common finding is that responses are speeded in the redundant signals condition compared to single signals conditions. At the level of means, this redundant signals effect can be accounted for by several cognitive architectures, which exhibit considerable model mimicry. Jeff Miller formalized the maximum speed-up explainable by separate activations, or race models, in the form of a distributional bound, the race model inequality (RMI). Whenever data violate this bound, race models are excluded as a viable account of the redundant signals effect. The common alternative is a coactivation account, in which the signals are integrated at some stage of processing. Coactivation models, however, have mostly been inferred indirectly and rarely explicated. Where coactivation is explicitly modeled, it is assumed to have a decisional locus. However, there are indications in the literature that coactivation might have at least a partial (if not entire) locus in the nondecisional or motor stage. No studies have compared the fit of these coactivation variants to empirical data in order to test different effect-generating loci.
    Ever since its formulation, the race model inequality has been used as a test to infer the cognitive architecture underlying observers' performance in redundant signals experiments. Subsequent theoretical and empirical analyses of this RMI test revealed several challenges. On the one hand, it is considered a conservative test, as it compares data to the maximum speed-up possible under a race model account. Moreover, simulation studies showed that the base time component can further reduce the power of the test, as violations are filtered out when this component has high variance. On the other hand, another simulation study revealed that the common practice of the RMI test can introduce an estimation bias that effectively facilitates violations and increases the type I error rate of the test. Also, as the RMI bound is usually tested at multiple points of the same data, the inflation of type I errors can become substantial. Due to the lack of overlap in scope and the use of atheoretic, descriptive reaction time models, the degree to which these results can be generalized is limited.
    State-of-the-art models of decision making provide a means to overcome these limitations and to implement both race and coactivation models for large-scale simulation studies. By applying such a model (namely the Ratcliff diffusion model) to the investigation of the redundant signals effect, the present study addresses research questions at different levels. On a conceptual level, it asks at which stage coactivation occurs (a decisional, a nondecisional, or a combined decisional and nondecisional processing stage) and to what extent. To that end, two bimodal detection tasks were conducted. As the reaction time data exhibit violations of the RMI at multiple time points, they provide the basis for a comparative fitting analysis of coactivation model variants representing different loci of the effect. On a test-theoretic level, the present study integrates and extends the scope of previous studies within a coherent simulation framework. The effect of experimental and statistical parameters on the performance of the RMI test (in terms of type I errors, power rates, and biases) is analyzed via Monte Carlo simulations. Specifically, the simulations address the following questions: (i) what is the power of the RMI test; (ii) is there an estimation bias for coactivated data as well, and if so, in what direction; (iii) what is the effect of a highly variable base time component on the estimation bias, type I errors, and power rates; (iv) and are the results of previous simulation studies (at least qualitatively) replicable when current models of decision making are used for reaction time generation. For this purpose, the Ratcliff diffusion model was used to implement race models with a controllable amount of correlation and coactivation models with varying integration strength, with an independently specified base time component.
    The results of the fitting suggest that, for the two bimodal detection tasks, coactivation has a shared decisional and nondecisional locus. In the focused attention experiment the decisional part prevails, whereas in the divided attention task the motor component dominates the redundant signals effect. The simulation study reaffirms the conservativeness of the RMI test, as latent coactivation is frequently missed. An estimation bias was found for coactivated data as well; however, both biases become negligible once more than 10 samples per condition are taken to estimate the respective distribution functions. A highly variable base time component reduces both the type I errors and the power of the test, while not affecting the estimation biases. The outcome of the present study has theoretical and practical implications for the investigation of decisions in a multisignal context. Theoretically, it contributes to the locus question of coactivation and offers evidence for a combined decisional and nondecisional coactivation account. On a practical level, the modular simulation approach developed in the present study enables researchers to further investigate the RMI test within a coherent and theoretically grounded framework. It effectively provides a means to set up the RMI test optimally and thus helps solidify and substantiate its outcomes. On a conceptual level, the present study advocates the application of current formal models of decision making to the mental chronometry paradigm and develops future research questions in the field of the redundant signals paradigm.
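Miller's race model inequality bounds the redundant-condition reaction time CDF by the sum of the single-condition CDFs, F_RC(t) ≤ F_S1(t) + F_S2(t). A minimal empirical check, evaluating empirical CDFs at a grid of probe times, might look as follows; the reaction times here are made up solely to exercise the test, and this sketch ignores the estimation-bias and multiple-testing issues the study analyzes.

```python
import numpy as np

def ecdf(sample, t):
    """Empirical CDF of a reaction time sample, evaluated at t."""
    return np.mean(np.asarray(sample) <= t)

def rmi_violated(rt_red, rt_s1, rt_s2, probes):
    """True if the race model inequality F_RC(t) <= F_S1(t) + F_S2(t)
    is violated at any probe time, excluding race models."""
    return any(
        ecdf(rt_red, t) > ecdf(rt_s1, t) + ecdf(rt_s2, t) for t in probes
    )

rt_s1 = [300, 320, 340, 360, 380]    # single signal 1 (ms)
rt_s2 = [310, 330, 350, 370, 390]    # single signal 2 (ms)
rt_fast = [200, 210, 220, 230, 240]  # redundant, faster than any race allows
probes = range(180, 400, 10)

print(rmi_violated(rt_fast, rt_s1, rt_s2, probes))  # violation -> race models excluded
print(rmi_violated(rt_s1, rt_s1, rt_s2, probes))    # no violation -> race account viable
```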