
    Anytime Point-Based Approximations for Large POMDPs

    The Partially Observable Markov Decision Process (POMDP) has long been recognized as a rich framework for real-world planning and control problems, especially in robotics. However, exact solutions in this framework are typically computationally intractable for all but the smallest problems. A well-known technique for speeding up POMDP solving involves performing value backups at specific belief points, rather than over the entire belief simplex. The efficiency of this approach, however, depends greatly on the selection of points. This paper presents a set of novel techniques for selecting informative belief points which work well in practice. The point selection procedure is combined with point-based value backups to form an effective anytime POMDP algorithm called Point-Based Value Iteration (PBVI). The first aim of this paper is to introduce this algorithm and present a theoretical analysis justifying the choice of belief selection technique. The second aim is to provide a thorough empirical comparison between PBVI and other state-of-the-art POMDP methods, in particular the Perseus algorithm, in an effort to highlight their similarities and differences. Evaluation is performed using both standard POMDP domains and realistic robotic tasks.
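
    The core operation described above is the point-based value backup. The following is a minimal sketch of one such backup at a single belief point, using a dense-matrix POMDP representation; the variable names and representation are illustrative assumptions, not the paper's code.

        import numpy as np

        def point_based_backup(b, alpha_set, T, O, R, gamma):
            """One point-based value backup at belief b (PBVI-style).

            b         : (S,) belief over states
            alpha_set : list of (S,) alpha vectors from the previous iteration
            T[a]      : (S, S) transition matrix P(s'|s,a)
            O[a]      : (S, Z) observation matrix P(z|s',a)
            R[a]      : (S,) immediate reward vector for action a
            gamma     : discount factor
            Returns the best new alpha vector for this belief.
            """
            best_alpha, best_value = None, -np.inf
            n_actions, n_obs = len(T), O[0].shape[1]
            for a in range(n_actions):
                alpha_a = R[a].copy()
                for z in range(n_obs):
                    # back-project each old alpha vector through (a, z)
                    g = [T[a] @ (O[a][:, z] * alpha) for alpha in alpha_set]
                    # keep the back-projection that scores best at this belief
                    alpha_a += gamma * max(g, key=lambda v: v @ b)
                if alpha_a @ b > best_value:
                    best_alpha, best_value = alpha_a, alpha_a @ b
            return best_alpha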

    Bandit-based Random Mutation Hill-Climbing

    The Random Mutation Hill-Climbing algorithm is a direct search technique used mostly in discrete domains. It repeatedly selects a random neighbour of the best-so-far solution and accepts that neighbour if its fitness is at least as good. In this work, we propose a novel method for selecting the neighbour solution using a set of independent multi-armed bandit-style selection units, which results in a bandit-based Random Mutation Hill-Climbing algorithm. The new algorithm significantly outperforms Random Mutation Hill-Climbing on both OneMax (in noise-free and noisy cases) and Royal Road problems (in the noise-free case). The algorithm shows particular promise for discrete optimisation problems where each fitness evaluation is expensive.
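
    As a concrete illustration of the idea, here is a short sketch in which each bit position is treated as a bandit arm and chosen by UCB1; the paper's exact selection units and parameters may differ.

        import math, random

        def bandit_rmhc(fitness, n_bits, budget, c=2.0):
            """Random Mutation Hill-Climbing where a UCB1-style bandit
            picks which bit to flip (illustrative sketch)."""
            x = [random.randint(0, 1) for _ in range(n_bits)]
            fx = fitness(x)
            pulls = [0] * n_bits     # times each bit position was tried
            wins = [0.0] * n_bits    # times flipping that bit improved fitness
            for t in range(1, budget + 1):
                def ucb(i):
                    if pulls[i] == 0:          # try every arm at least once
                        return float('inf')
                    return wins[i] / pulls[i] + math.sqrt(c * math.log(t) / pulls[i])
                i = max(range(n_bits), key=ucb)
                y = x[:]
                y[i] ^= 1                      # flip the selected bit
                fy = fitness(y)
                pulls[i] += 1
                if fy > fx:
                    wins[i] += 1
                if fy >= fx:                   # accept ties, as in plain RMHC
                    x, fx = y, fy
            return x, fx

        # OneMax: fitness is simply the number of ones
        best, score = bandit_rmhc(lambda s: sum(s), n_bits=32, budget=2000)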

    Exploring Multi-Modal Distributions with Nested Sampling

    In performing a Bayesian analysis, two difficult problems often emerge. First, in estimating the parameters of some model for the data, the resulting posterior distribution may be multi-modal or exhibit pronounced (curving) degeneracies. Second, in selecting between a set of competing models, calculation of the Bayesian evidence for each model is computationally expensive using existing methods such as thermodynamic integration. Nested Sampling is a Monte Carlo method targeted at the efficient calculation of the evidence which also produces posterior inferences as a by-product, and therefore provides a means to carry out parameter estimation as well as model selection. The main challenge in implementing Nested Sampling is to sample from a constrained probability distribution. One possible solution to this problem is provided by the Galilean Monte Carlo (GMC) algorithm. We show results of applying Nested Sampling with GMC to some problems which have proven very difficult for standard Markov Chain Monte Carlo (MCMC) and down-hill methods, due to the presence of a large number of local minima and/or pronounced (curving) degeneracies between the parameters. We also discuss the use of Nested Sampling with GMC in Bayesian object detection problems, which are inherently multi-modal and require the evaluation of the Bayesian evidence for distinguishing between true and spurious detections. (Refereed conference proceedings, presented at the 32nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering.)
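
    For orientation, a bare-bones nested sampling loop is sketched below. The difficult step, drawing a new point from the prior subject to L > L*, is done here by naive rejection; GMC replaces exactly this step with a more efficient constrained exploration.

        import numpy as np

        def nested_sampling(log_like, prior_sample, n_live=100, n_iter=2000):
            """Minimal nested sampling estimate of the log-evidence."""
            live = [prior_sample() for _ in range(n_live)]
            live_logL = [log_like(p) for p in live]
            log_Z = -np.inf
            for i in range(1, n_iter + 1):
                worst = int(np.argmin(live_logL))
                logL_star = live_logL[worst]
                # prior volume shrinks geometrically: X_i ~ exp(-i / n_live)
                log_w = -(i - 1) / n_live + np.log1p(-np.exp(-1.0 / n_live))
                log_Z = np.logaddexp(log_Z, log_w + logL_star)
                # replace the worst live point by a prior draw with L > L*;
                # this rejection loop is where GMC would be used instead
                while True:
                    p = prior_sample()
                    if log_like(p) > logL_star:
                        break
                live[worst], live_logL[worst] = p, log_like(p)
            return log_Z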

    Multi-Criteria Service Selection Agent for Federated Cloud

    Federated cloud interconnects small and medium-sized cloud service providers for service enhancement to meet demand spikes. The service bartering technique in the federated cloud enables service providers to exchange their services. Selecting an optimal service provider to share services with is challenging in the cloud federation. Agent-based and Reciprocal Resource Fairness (RRF) based models are used in the federated cloud for service selection. The agent-based model selects the best service provider using Quality of Service (QoS) attributes. The RRF model chooses fair service providers based on each provider's previous service contribution to the federation. However, these models fail to address the free-rider and poor-performer problems during the service provider selection process. To solve this issue, we propose a Multi-Criteria Service Selection (MCSS) algorithm that selects a service provider using QoS, the Performance-Cost Ratio (PCR), and RRF. Comprehensive case studies are conducted to prove the effectiveness of the proposed algorithm, and extensive simulation experiments compare its performance with existing algorithms. The evaluation results demonstrate that MCSS provides 10% higher service selection efficiency than the Cloud Resource Bartering System (CRBS) and 16% higher service selection efficiency than RRF.
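
    The abstract does not give the MCSS scoring formula; the following is a hypothetical sketch of how QoS, PCR, and RRF might be combined while filtering out free riders and poor performers. The weights, the linear form, and the PCR threshold are all assumptions made for illustration.

        def mcss_rank(providers, w_qos=0.4, w_pcr=0.3, w_rrf=0.3, pcr_min=0.2):
            """Rank providers by a weighted multi-criteria score
            (hypothetical combination; not the paper's formula).

            providers: dicts with 'qos', 'pcr', 'rrf' normalised to [0, 1].
            """
            # drop free riders (no RRF contribution) and poor performers
            eligible = [p for p in providers
                        if p['rrf'] > 0 and p['pcr'] >= pcr_min]
            score = lambda p: (w_qos * p['qos'] + w_pcr * p['pcr']
                               + w_rrf * p['rrf'])
            return sorted(eligible, key=score, reverse=True)

        ranked = mcss_rank([
            {'name': 'A', 'qos': 0.9, 'pcr': 0.7, 'rrf': 0.6},
            {'name': 'B', 'qos': 0.8, 'pcr': 0.1, 'rrf': 0.9},  # poor performer
            {'name': 'C', 'qos': 0.7, 'pcr': 0.8, 'rrf': 0.0},  # free rider
        ])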

    Comparison of a novel dominance-based differential evolution method with the state-of-the-art methods for solving multi-objective real-valued optimization problems

    The Differential Evolution (DE) algorithm is a well-known nature-inspired method in the scope of evolutionary computation. This paper adds new features to the DE algorithm and proposes a novel method focusing on a ranking technique. The proposed method, named Dominance-Based Differential Evolution (DBDE), is an improved version of the standard DE algorithm. DBDE changes the selection operator of DE and modifies the crossover and initialization phases to improve performance. Dominance ranks are used in the selection phase of DBDE so that higher-quality solutions can be selected; the dominance rank of a solution X is the number of solutions dominating X. Moreover, vectors called target vectors are used throughout the selection process. The effectiveness and performance of the proposed DBDE method are experimentally evaluated using six well-known benchmarks provided by CEC2009, plus two additional test problems, namely Kursawe and Fonseca & Fleming. The evaluation focuses on specific bi-objective real-valued optimization problems reported in the literature. Likewise, the Inverted Generational Distance (IGD) metric is calculated for the obtained results to measure the performance of the algorithms. To follow the evaluation rules obeyed by all state-of-the-art methods, the fitness evaluation function is called 300,000 times and 30 independent runs of DBDE are carried out. Analysis of the obtained results indicates that, in terms of convergence and robustness, the proposed DBDE algorithm outperforms the majority of state-of-the-art methods reported in the literature.
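
    The dominance-rank definition used in the selection phase is simple to state in code; here is a direct sketch for a minimisation problem (the surrounding DE machinery is omitted).

        import numpy as np

        def dominance_ranks(objectives):
            """Dominance rank of each solution: the number of other
            solutions that dominate it (all objectives minimised).

            objectives: (N, M) array of N solutions with M objectives.
            """
            N = len(objectives)
            ranks = np.zeros(N, dtype=int)
            for i in range(N):
                for j in range(N):
                    if i == j:
                        continue
                    # j dominates i: no worse everywhere, strictly better somewhere
                    if (np.all(objectives[j] <= objectives[i])
                            and np.any(objectives[j] < objectives[i])):
                        ranks[i] += 1
            return ranks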

    Multimodal nested sampling: an efficient and robust alternative to MCMC methods for astronomical data analysis

    In performing a Bayesian analysis of astronomical data, two difficult problems often emerge. First, in estimating the parameters of some model for the data, the resulting posterior distribution may be multimodal or exhibit pronounced (curving) degeneracies, which can cause problems for traditional MCMC sampling methods. Second, in selecting between a set of competing models, calculation of the Bayesian evidence for each model is computationally expensive. The nested sampling method introduced by Skilling (2004) has greatly reduced the computational expense of calculating evidences and also produces posterior inferences as a by-product. This method has been applied successfully in cosmological applications by Mukherjee et al. (2006), but their implementation was efficient only for unimodal distributions without pronounced degeneracies. Shaw et al. (2007) recently introduced a clustered nested sampling method which is significantly more efficient in sampling from multimodal posteriors and also determines the expectation and variance of the final evidence from a single run of the algorithm, hence providing a further increase in efficiency. In this paper, we build on the work of Shaw et al. and present three new methods for sampling and evidence evaluation from distributions that may contain multiple modes and significant degeneracies; we also present an even more efficient technique for estimating the uncertainty on the evaluated evidence. These methods lead to a further substantial improvement in sampling efficiency and robustness, and are applied to toy problems to demonstrate the accuracy and economy of the evidence calculation and parameter estimation. Finally, we discuss the use of these methods in performing Bayesian object detection in astronomical datasets. (14 pages, 11 figures; submitted to MNRAS, with major additions to the previous version in response to the referee's comments.)
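
    A standard single-run estimate of the evidence uncertainty, in the spirit of Skilling (2004), uses the information H; the paper develops a more efficient alternative, but this textbook version shows the quantity being estimated. Inputs are per-iteration values from a completed nested sampling run.

        import numpy as np

        def logZ_with_error(logL, log_w, n_live):
            """Log-evidence and the classic sigma(log Z) ~ sqrt(H / n_live)
            uncertainty estimate, where H is the information (KL divergence
            between posterior and prior), in nats."""
            logL, log_w = np.asarray(logL), np.asarray(log_w)
            log_Z = np.logaddexp.reduce(log_w + logL)
            post = np.exp(log_w + logL - log_Z)   # normalised posterior weights
            H = np.sum(post * (logL - log_Z))     # information in nats
            return log_Z, np.sqrt(max(H, 0.0) / n_live)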

    Time–Frequency Cepstral Features and Heteroscedastic Linear Discriminant Analysis for Language Recognition

    The shifted delta cepstrum (SDC) is a widely used feature extraction technique for language recognition (LRE). With a high context width due to the incorporation of multiple frames, SDC outperforms traditional delta and acceleration feature vectors. However, it also introduces correlation into the concatenated feature vector, which increases redundancy and may degrade the performance of backend classifiers. In this paper, we first propose a time-frequency cepstral (TFC) feature vector, obtained by performing a temporal discrete cosine transform (DCT) on the cepstrum matrix and selecting the transformed elements in a zigzag scan order. Beyond this, we increase discriminability through heteroscedastic linear discriminant analysis (HLDA) on the full cepstrum matrix. By utilizing block-diagonal matrix constraints, the large HLDA problem is reduced to several smaller HLDA problems, creating a block-diagonal HLDA (BDHLDA) algorithm with much lower computational complexity. The BDHLDA method is finally extended to the GMM domain, using the simpler TFC features during re-estimation to provide significantly improved computation speed. Experiments on the NIST 2003 and 2007 LRE evaluation corpora show that TFC is more effective than SDC, and that GMM-based BDHLDA yields a lower equal error rate (EER) and minimum average cost (Cavg) than either the TFC or SDC approaches.
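
    A minimal sketch of TFC feature extraction as described above: a DCT along the time axis of the cepstrum matrix, followed by selection of low-order time-frequency elements. The number of retained elements is an illustrative choice, and a simple anti-diagonal ordering stands in for the zigzag scan.

        import numpy as np
        from scipy.fftpack import dct

        def tfc_features(cepstrum, n_keep=56):
            """Time-frequency cepstral (TFC) features (illustrative sketch).

            cepstrum: (n_frames, n_cep) matrix of cepstral coefficients.
            """
            # temporal DCT: decorrelate coefficients across frames (axis 0)
            C = dct(cepstrum, type=2, norm='ortho', axis=0)
            rows, cols = C.shape
            # zigzag-style scan: visit low-order (time, frequency) indices first
            order = sorted(((r, c) for r in range(rows) for c in range(cols)),
                           key=lambda rc: (rc[0] + rc[1], rc[0]))
            return np.array([C[r, c] for r, c in order[:n_keep]])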