
    Multiple Model Rao-Blackwellized Particle Filter for Manoeuvring Target Tracking

    Particle filters can become quite inefficient when applied to a high-dimensional state space, since a prohibitively large number of samples may be required to approximate the underlying density functions with the desired accuracy. In this paper, a novel multiple model Rao-Blackwellized particle filter (MMRBPF)-based algorithm is proposed for manoeuvring target tracking in a cluttered environment. The advantage of the proposed approach is that Rao-Blackwellization allows the algorithm to be partitioned into target-tracking and model-selection sub-problems, where target tracking can be solved by the probabilistic data association filter and model selection by sequential importance sampling. The analytical relationship between the target state and the model is exploited to improve the efficiency and accuracy of the proposed algorithm. Moreover, to reduce the particle-degeneracy problem, resampling is carried out selectively. Finally, experimental results show that the proposed algorithm has advantages over the conventional IMM-PDAF algorithm in terms of robustness and efficiency. Defence Science Journal, 2009, 59(3), pp. 197-204, DOI: http://dx.doi.org/10.14429/dsj.59.151
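
    As a rough sketch of the selective-resampling idea (illustrative only, not the authors' implementation; the effective-sample-size criterion and the `ess_threshold` parameter are common choices assumed here):

    ```python
    import numpy as np

    def effective_sample_size(weights):
        """Effective sample size N_eff = 1 / sum(w_i^2) for normalized weights."""
        return 1.0 / np.sum(weights ** 2)

    def selective_resample(particles, weights, ess_threshold=0.5):
        """Resample only when N_eff drops below a fraction of the particle
        count; this limits particle degeneracy without resampling at every
        step (which would needlessly erode sample diversity)."""
        n = len(weights)
        if effective_sample_size(weights) < ess_threshold * n:
            idx = np.random.choice(n, size=n, p=weights)   # multinomial resampling
            particles = particles[idx]
            weights = np.full(n, 1.0 / n)                  # reset to uniform
        return particles, weights
    ```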

    Semantic Information G Theory and Logical Bayesian Inference for Machine Learning

    An important problem in machine learning is that when the label number n > 2, it is very difficult to construct and optimize a group of learning functions, and we wish the optimized learning functions to remain useful when the prior distribution P(x) (where x is an instance) changes. To resolve this problem, the semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms together form a systematic solution. A semantic channel in the G theory consists of a group of truth functions or membership functions. In comparison with the likelihood functions, Bayesian posteriors, and logistic functions used by popular methods, membership functions can be more conveniently used as learning functions without the above problem. In LBI, every label's learning is independent. For multilabel learning, we can directly obtain a group of optimized membership functions from a sufficiently large labeled sample, without preparing different samples for different labels. A group of Channel Matching (CM) algorithms are developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions on a two-dimensional feature space, 2-3 iterations can make the mutual information between the three classes and three labels surpass 99% of the MMI for most initial partitions. For mixture models, the Expectation-Maximization (EM) algorithm is improved into the CM-EM algorithm, which can outperform the EM algorithm when mixture ratios are imbalanced or local convergence occurs. The CM iteration algorithm needs to be combined with neural networks for MMI classification on high-dimensional feature spaces. LBI needs further study for the unification of statistics and logic.
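
    For context, the CM-EM algorithm is positioned against the standard EM iteration for mixture models; a minimal sketch of that conventional baseline (plain EM for a one-dimensional Gaussian mixture, not the CM-EM variant itself) is:

    ```python
    import numpy as np
    from scipy.stats import norm

    def em_gmm_1d(x, k, iters=100):
        """Standard EM for a 1-D Gaussian mixture: the baseline that the
        paper's CM-EM variant is compared against (this is plain EM)."""
        # Crude initialization: spread the means, share the sample std.
        mu = np.linspace(x.min(), x.max(), k)
        sigma = np.full(k, x.std())
        pi = np.full(k, 1.0 / k)
        for _ in range(iters):
            # E-step: posterior responsibility of each component per point.
            dens = np.array([p * norm.pdf(x, m, s)
                             for p, m, s in zip(pi, mu, sigma)])
            resp = dens / dens.sum(axis=0)
            # M-step: re-estimate mixture ratios, means, and variances.
            nk = resp.sum(axis=1)
            pi = nk / len(x)
            mu = (resp @ x) / nk
            sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        return pi, mu, sigma
    ```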

    Connectionist Inference Models

    The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of the different representations and systems are outlined, and conclusions are drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
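
    Variable binding is the recurring technical hurdle in this literature; one classic distributed scheme, tensor-product binding, can be sketched in a few lines (a purely illustrative toy, not a system from the survey):

    ```python
    import numpy as np

    # Toy tensor-product variable binding: a role vector and a filler
    # vector are bound via their outer product, and the filler is
    # recovered by contracting the tensor with the role again.
    rng = np.random.default_rng(0)
    role = rng.normal(size=8)          # e.g. the "agent" slot of a rule
    filler = rng.normal(size=8)        # e.g. the constant bound to that slot

    binding = np.outer(role, filler)   # rank-1 tensor stores the pair

    # Unbinding: project the tensor back onto the role vector.
    recovered = role @ binding / (role @ role)
    print(np.allclose(recovered, filler))  # True (exact for a single binding)
    ```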

    Envelopes of conditional probabilities extending a strategy and a prior probability

    Any strategy, together with a prior probability, forms a coherent conditional probability that can be extended, generally not uniquely, to a full conditional probability. The corresponding class of extensions is studied, and a closed-form expression for its envelopes is provided. A topological characterization of the subclasses of extensions satisfying the further properties of full disintegrability and full strong conglomerability is then given, and their envelopes are studied.
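
    For orientation, the envelopes of a class \mathcal{P} of extensions are its pointwise lower and upper bounds over conditional events; the notation below is standard and assumed for illustration, not quoted from the paper:

    ```latex
    % Lower and upper envelopes of a class \mathcal{P} of extensions,
    % evaluated at a conditional event E|H:
    \underline{P}(E \mid H) = \inf_{P \in \mathcal{P}} P(E \mid H),
    \qquad
    \overline{P}(E \mid H) = \sup_{P \in \mathcal{P}} P(E \mid H).
    ```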

    Segmenting the heterogeneity of tourist preferences using a latent class model combined with the EM algorithm

    An important component of conjoint analysis is market segmentation, where the main objective is to address the heterogeneity of consumer preferences. Latent class methodology is one of several conjoint segmentation procedures that overcome the limitations of aggregate analysis and a priori segmentation. The main benefit of latent class models is that they simultaneously estimate market segment membership and parameter estimates for each derived market segment. In this paper we present two latent class models. The first is a latent class metric model using mixtures of multivariate conditional normal distributions to analyze rating data. The second is a latent class multinomial logit model used to analyze choice data. The EM algorithm is employed to maximize the likelihood in both models. The application focuses on tourists' preference and choice behaviour when assessing package tours. A number of demographic and product-related explanatory variables are used to generate segments that are accessible and actionable. A Monte Carlo study is also presented, examining how the number of hypothetical subjects, the number of specified segments, and the number of predictors affect the performance of the latent class metric conjoint model with respect to parameter recovery and segment membership recovery.
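
    As a rough illustration of the estimation machinery (not the paper's own specification; the binary-item latent class model below is simpler than either the metric or the multinomial logit variant), one EM loop with closed-form E- and M-steps looks like:

    ```python
    import numpy as np

    def lca_em(X, k, iters=200, seed=0):
        """EM for a latent class model with binary items X (n x d, in {0,1}).
        E-step: class responsibilities; M-step: class sizes and item
        probabilities, both in closed form. Illustrative sketch only."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        pi = np.full(k, 1.0 / k)                 # class membership probabilities
        theta = rng.uniform(0.3, 0.7, (k, d))    # P(item = 1 | class)
        for _ in range(iters):
            # E-step: log P(x_i, class = c), then normalize per respondent.
            log_lik = (X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
                       + np.log(pi))
            log_lik -= log_lik.max(axis=1, keepdims=True)
            resp = np.exp(log_lik)
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: closed-form updates from the responsibilities.
            nk = resp.sum(axis=0)
            pi = nk / n
            theta = (resp.T @ X) / nk[:, None]
            theta = theta.clip(1e-6, 1 - 1e-6)   # keep the logs finite
        return pi, theta, resp
    ```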

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications


    A fuzzified systematic adjustment of the robotic Darwinian PSO

    The Darwinian Particle Swarm Optimization (DPSO) is an evolutionary algorithm that extends Particle Swarm Optimization with natural selection to enhance the ability to escape from sub-optimal solutions. An extension of the DPSO to multi-robot applications has recently been proposed and denoted the Robotic Darwinian PSO (RDPSO); it benefits from dynamically partitioning the whole population of robots, thereby decreasing the amount of information exchange required among robots. This paper further extends the previously proposed algorithm by adapting the behavior of robots based on a set of context-based evaluation metrics. Those metrics are then used as inputs to a fuzzy system that systematically adjusts the RDPSO parameters (i.e., the outputs of the fuzzy system), thus improving the algorithm's convergence rate and reducing its susceptibility to obstacles and communication constraints. The adapted RDPSO is evaluated on groups of physical robots and further explored using larger populations of simulated mobile robots in a larger scenario.
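
    To make the parameter-adjustment loop concrete, the sketch below shows a canonical PSO velocity update whose inertia weight is supplied by an external rule; the crisp linear `adjusted_inertia` stand-in is only a placeholder for the paper's fuzzy system, and all names here are assumptions:

    ```python
    import numpy as np

    def pso_step(pos, vel, pbest, gbest, w, c1=1.5, c2=1.5, rng=None):
        """One canonical PSO update; w, c1, c2 are the parameters the
        RDPSO's fuzzy system would tune online (plain inputs here)."""
        if rng is None:
            rng = np.random.default_rng()
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        return pos + vel, vel

    def adjusted_inertia(stagnation, w_min=0.4, w_max=0.9):
        """Placeholder for the fuzzy inference step: map a normalized
        stagnation metric in [0, 1] to an inertia weight (a crisp linear
        rule standing in for the paper's fuzzy system)."""
        return w_min + (w_max - w_min) * stagnation
    ```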