
    Simplifying the Design of Knowledge-Based Algorithms Using Knowledge Consistency

    Processor knowledge is an important tool in the study of distributed computer systems. It has led to a better understanding of existing algorithms for such systems and to the development of new knowledge-based algorithms. Some of these algorithms use forms of knowledge (e.g., common knowledge) that cannot be achieved in certain systems. This paper considers alternative interpretations of knowledge under which these forms of knowledge can be achieved. It explores consistent knowledge interpretations and shows how they can be used to circumvent the known impossibility results in a number of cases. This may lead to greater applicability of knowledge-based algorithms.
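
    For background, the following is a minimal sketch of the standard possible-worlds definition of processor knowledge that this line of work builds on, with common knowledge approximated as iterated "everyone knows". It is illustrative only and does not implement the paper's consistent knowledge interpretations; the worlds, agents and indistinguishability relations are made up for the example.

```python
# Minimal sketch of the standard possible-worlds semantics of processor
# knowledge, given here only as background. This is NOT the paper's
# consistency construction; worlds, agents and the indistinguishability
# relations below are invented for illustration.

# Each agent maps a world to the set of worlds it cannot distinguish from it.
indist = {
    "a": {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}, "w3": {"w3"}},
    "b": {"w1": {"w1"}, "w2": {"w2", "w3"}, "w3": {"w2", "w3"}},
}

def knows(agent, fact, world):
    """Agent knows `fact` at `world` iff `fact` holds in every world the
    agent considers possible there."""
    return all(fact(w) for w in indist[agent][world])

def everyone_knows(fact, world):
    return all(knows(agent, fact, world) for agent in indist)

def common_knowledge(fact, world, depth=10):
    """Approximate common knowledge by iterating 'everyone knows' up to
    `depth` levels: E(fact), E(E(fact)), ..."""
    level = fact
    for _ in range(depth):
        level = (lambda g: lambda w: everyone_knows(g, w))(level)
        if not level(world):
            return False
    return True

phi = lambda w: w != "w3"           # example fact: "the system is not in w3"
print(knows("a", phi, "w1"))        # True: a only considers w1, w2 possible
print(knows("b", phi, "w2"))        # False: b cannot rule out w3 at w2
print(common_knowledge(phi, "w1"))  # False: indistinguishability chains reach w3
```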

    A robust sequential hypothesis testing method for brake squeal localisation

    This contribution deals with the in situ detection and localisation of brake squeal in an automobile. As brake squeal is emitted from regions known a priori, i.e., near the wheels, the localisation is treated as a hypothesis testing problem. Distributed microphone arrays, situated under the automobile, are used to capture the directional properties of the sound field generated by a squealing brake. The spatial characteristics of the sampled sound field are then used to formulate the hypothesis tests. However, in contrast to standard hypothesis testing approaches of this kind, the propagation environment is complex and time-varying. Coupled with inaccuracies in the knowledge of the sensor and source positions as well as sensor gain mismatches, modelling the sound field is difficult and standard approaches fail in this case. A previously proposed approach implicitly tried to account for such incomplete system knowledge and was based on ad hoc likelihood formulations. The current paper builds upon this approach and proposes a second approach, based on more solid theoretical foundations, that can systematically account for the model uncertainties. Results from tests in a real setting show that the proposed approach is more consistent than the prior state of the art. In both approaches, the tasks of detection and localisation are decoupled for complexity reasons. The localisation (hypothesis testing) is performed after a prior detection of brake squeal and identification of the squeal frequencies. The approaches used for the detection and identification of squeal frequencies are also presented. The paper further briefly addresses some practical issues related to array design and placement.
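
    As a rough illustration of the overall idea (testing a small set of a priori candidate source positions, the wheels, against microphone-array data at a detected squeal frequency), here is a heavily simplified free-field, delay-and-sum sketch. It is not the authors' robust sequential formulation; the array geometry, frequency and candidate positions are invented for the example.

```python
# Heavily simplified sketch: score a few a priori candidate source positions
# (the wheels) against narrowband microphone-array data at a detected squeal
# frequency, using free-field delay-and-sum steering. Not the authors' robust
# sequential test; geometry, frequency and positions are illustrative.
import numpy as np

c = 343.0                                # speed of sound [m/s]
f_squeal = 2500.0                        # squeal frequency [Hz], assumed already detected
mics = np.array([[0.00, 0.00], [0.05, 0.00],
                 [0.00, 0.05], [0.05, 0.05]])        # microphone xy positions [m]
candidates = {                            # hypothesised source positions (the wheels)
    "front_left":  np.array([1.0,  0.8]),
    "front_right": np.array([1.0, -0.8]),
    "rear_left":   np.array([-1.5,  0.8]),
    "rear_right":  np.array([-1.5, -0.8]),
}

def steering_vector(src):
    """Narrowband free-field steering vector for a source at `src`."""
    dist = np.linalg.norm(mics - src, axis=1)
    return np.exp(-2j * np.pi * f_squeal * dist / c) / len(mics)

def simulate_snapshots(src, snr_db=10.0, n_snap=64, seed=0):
    """Fake array data: one narrowband source at `src` plus sensor noise."""
    rng = np.random.default_rng(seed)
    a = steering_vector(src) * len(mics)
    s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
    noise = 10 ** (-snr_db / 20) * (rng.standard_normal((len(mics), n_snap))
                                    + 1j * rng.standard_normal((len(mics), n_snap)))
    return np.outer(a, s) + noise

def localise(X):
    """Score every candidate by beamformer output power (a crude stand-in for
    a per-hypothesis likelihood) and pick the best-scoring wheel."""
    R = X @ X.conj().T / X.shape[1]                   # spatial covariance estimate
    scores = {name: float(np.real(steering_vector(p).conj() @ R @ steering_vector(p)))
              for name, p in candidates.items()}
    return max(scores, key=scores.get), scores

X = simulate_snapshots(candidates["rear_left"])
best, scores = localise(X)
print(best, scores)   # under these idealised assumptions, "rear_left" should score highest
```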

    MODELLING EXPECTATIONS WITH GENEFER - AN ARTIFICIAL INTELLIGENCE APPROACH

    Economic modelling of financial markets means modelling highly complex systems in which expectations can be the dominant driving forces. Therefore it is necessary to focus on how agents form their expectations. We believe that they look for patterns, hypothesize, try, make mistakes, learn and adapt. The agents' bounded rationality leads us to a rule-based approach which we model using Fuzzy Rule-Bases. E.g., if a single agent believes the exchange rate is determined by a set of possible inputs and is asked to put their relationship in words, his answer will probably reveal a fuzzy nature like: "IF the inflation rate in the EURO-Zone is low and the GDP growth rate is larger than in the US THEN the EURO will rise against the USD". 'Low' and 'larger' are fuzzy terms which give a gradual linguistic meaning to crisp intervals in the respective universes of discourse. In order to learn a Fuzzy Rule-Base from examples we introduce Genetic Algorithms and Artificial Neural Networks as learning operators. These examples can either be empirical data or originate from an economic simulation model. The software GENEFER (GEnetic NEural Fuzzy ExplorER) has been developed for designing such a Fuzzy Rule-Base. The design process is modular and comprises Input Identification, Fuzzification, Rule-Base Generating and Rule-Base Tuning. The latter two steps make use of genetic and neural learning algorithms for optimizing the Fuzzy Rule-Base.
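
    The quoted rule can be made concrete with a small sketch of fuzzy membership functions and a Mamdani-style AND. The membership functions and numbers below are illustrative placeholders; in GENEFER they would be generated and tuned by the genetic and neural learning operators rather than written by hand.

```python
# Small sketch of evaluating the kind of fuzzy rule quoted above with a
# Mamdani-style AND. Membership functions and numbers are illustrative
# placeholders, not values produced by GENEFER.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def ramp_up(x, a, b):
    """Right-shoulder membership function: 0 below a, 1 above b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

# Linguistic terms over their universes of discourse.
inflation_is_low   = lambda x: trapezoid(x, -1.0, 0.0, 1.5, 2.5)  # inflation in %
growth_diff_larger = lambda x: ramp_up(x, 0.0, 1.0)               # EU minus US growth, %-points

def rule_euro_rises(inflation, growth_diff):
    """IF inflation is low AND the growth differential is larger THEN the EURO
    rises; the rule fires to the degree of its weaker premise (min)."""
    return min(inflation_is_low(inflation), growth_diff_larger(growth_diff))

# Example: 1.2% EURO-Zone inflation, EU growth 0.8 points above the US.
print(rule_euro_rises(1.2, 0.8))   # 0.8: degree to which "the EURO will rise" is supported
```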

    Dynamic Parameter Allocation in Parameter Servers

    To keep up with increasing dataset sizes and model complexity, distributed training has become a necessity for large machine learning tasks. Parameter servers ease the implementation of distributed parameter management, a key concern in distributed training, but can induce severe communication overhead. To reduce communication overhead, distributed machine learning algorithms use techniques that increase parameter access locality (PAL), achieving up to linear speed-ups. We found, however, that existing parameter servers provide only limited support for PAL techniques and therefore prevent efficient training. In this paper, we explore whether and to what extent PAL techniques can be supported, and whether such support is beneficial. We propose to integrate dynamic parameter allocation into parameter servers, describe an efficient implementation of such a parameter server called Lapse, and experimentally compare its performance to existing parameter servers across a number of machine learning tasks. We found that Lapse provides near-linear scaling and can be orders of magnitude faster than existing parameter servers.
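
    The core idea of dynamic parameter allocation can be sketched with a toy in-memory parameter server in which a parameter's owning node can change, turning upcoming remote accesses into local ones. This is a conceptual illustration only, not Lapse's interface or implementation.

```python
# Toy sketch of dynamic parameter allocation: each parameter has an owning
# node, and the owner can be moved so that a worker about to access the
# parameter repeatedly gets cheap local accesses instead of remote ones.
# Conceptual illustration only, not Lapse's interface or implementation.
import numpy as np

class ToyParameterServer:
    def __init__(self, n_nodes, dim):
        self.n_nodes = n_nodes
        self.dim = dim
        self.store = [dict() for _ in range(n_nodes)]   # per-node parameter shard
        self.owner = {}                                  # key -> node currently owning it

    def _locate(self, key):
        """Find (or lazily create) the node that owns `key`; default placement
        is by a simple deterministic hash partition."""
        if key not in self.owner:
            node = sum(key.encode()) % self.n_nodes
            self.owner[key] = node
            self.store[node][key] = np.zeros(self.dim)
        return self.owner[key]

    def pull(self, node, key):
        """Read a parameter; the access is remote if another node owns it."""
        owner = self._locate(key)
        return self.store[owner][key], owner != node

    def push(self, node, key, delta):
        """Apply an additive update at whichever node owns the key."""
        owner = self._locate(key)
        self.store[owner][key] += delta

    def relocate(self, key, new_node):
        """Dynamic allocation: move the parameter so that upcoming accesses
        from `new_node` become local."""
        old = self._locate(key)
        if old != new_node:
            self.store[new_node][key] = self.store[old].pop(key)
            self.owner[key] = new_node

ps = ToyParameterServer(n_nodes=4, dim=8)
_, remote_before = ps.pull(node=0, key="embedding:42")   # hash places this key on node 3
ps.relocate("embedding:42", new_node=0)                  # worker 0 will train on item 42 next
_, remote_after = ps.pull(node=0, key="embedding:42")
print(remote_before, remote_after)                       # True (remote), then False (local)
```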

    GraphLab: A New Framework for Parallel Machine Learning

    Designing and implementing efficient, provably correct parallel machine learning (ML) algorithms is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive, while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance. We demonstrate the expressiveness of the GraphLab framework by designing and implementing parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and Compressed Sensing. We show that using GraphLab we can achieve excellent parallel performance on large-scale real-world problems.
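
    The abstraction described above can be illustrated with a toy vertex-update program: an update function recomputes a vertex's data from its neighbourhood and reschedules neighbours whose inputs changed. The sketch below is sequential and uses a made-up PageRank-style update; the real framework executes such updates in parallel under a configurable consistency model.

```python
# Toy sketch of a vertex-update program in the style that GraphLab-like
# frameworks expose: an update function reads a vertex's neighbourhood,
# rewrites the vertex data, and reschedules neighbours whose inputs changed.
# Sequential and illustrative only (a PageRank-style update on a made-up graph).
from collections import deque

edges = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # undirected toy graph
rank = {v: 1.0 for v in edges}                          # per-vertex data
damping, tol = 0.85, 1e-4

def update(v):
    """Recompute vertex data from the neighbourhood; return True if it moved
    enough that the neighbours should be rescheduled."""
    new = (1 - damping) + damping * sum(rank[u] / len(edges[u]) for u in edges[v])
    changed = abs(new - rank[v]) > tol
    rank[v] = new
    return changed

# A simple dynamic scheduler: start with every vertex queued; a vertex whose
# value changed puts its neighbours back on the queue, so work concentrates
# where the sparse dependencies still propagate change.
queue, queued = deque(edges), set(edges)
while queue:
    v = queue.popleft()
    queued.discard(v)
    if update(v):
        for u in edges[v]:
            if u not in queued:
                queue.append(u)
                queued.add(u)

print({v: round(r, 3) for v, r in rank.items()})
```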