    Weighted Contrastive Divergence

    Learning algorithms for energy-based Boltzmann architectures that rely on gradient descent are in general computationally prohibitive, typically because of the exponential number of terms involved in computing the partition function. One therefore has to resort to approximation schemes for evaluating the gradient. This is the case for the Restricted Boltzmann Machine (RBM) and its learning algorithm, Contrastive Divergence (CD). It is well known that CD has a number of shortcomings, and its approximation to the gradient has several drawbacks. Overcoming these defects has been the basis of much research, and new algorithms have been devised, such as persistent CD. In this manuscript we propose a new algorithm, which we call Weighted CD (WCD), built from small modifications of the negative phase in standard CD. However small these modifications may be, the experimental work reported in this paper suggests that WCD provides a significant improvement over standard CD and persistent CD at a small additional computational cost.
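
    For reference, here is a minimal sketch of the standard CD-1 update that WCD modifies; the abstract does not spell out the WCD reweighting of the negative phase, so only the vanilla baseline is shown, with illustrative shapes and learning rate.

```python
# Minimal sketch of one CD-1 step for a binary RBM, the baseline that
# Weighted CD modifies. Hyperparameters and shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b, c, v0, lr=0.01):
    """One CD-1 step on a batch of binary visible vectors v0 (batch x n_vis)."""
    # Positive phase: hidden activations driven by the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: a single Gibbs step back to a reconstruction.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Approximate gradient: data statistics minus reconstruction statistics.
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```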

    Applications of information theory in filtering and sensor management

    “A classical sensor tasking methodology is analyzed in the context of generating sensor schedules for monitoring resident space objects (RSOs). This approach, namely maximizing the expected Kullback-Leibler divergence in a measurement update, is evaluated from a probabilistic perspective to determine the accuracy of the conventional approach. In this investigation, a new divergence-based approach is proposed to circumvent the myopic nature of the measure, forecasting the potential information contribution to a time of interest and leveraging the system dynamics and measurement model to do so. The forecasted objective exploits properties of a batch measurement update and frequently exhibits faster optimization times than an accumulation of the conventional myopic objective. The forecasting approach additionally affords the ability to emphasize tracking performance at the point in time to which the information is mapped. The forecasted divergence is lifted into the multitarget domain and combined with a collision entropy objective. The addition of the collision consideration helps the tasking policy avoid scenarios in which determining the origin of a measurement is difficult, ameliorating issues when executing the sensor schedule. The properties of the divergence-based and collision entropy-based objectives are explored to determine appropriate optimization schemes that can enable their use in real-time applications. It is demonstrated through a single-target tasking simulation that the forecasted measure outperforms traditional approaches with regard to tracking performance at the forecasted time. This simulation is followed by a multitarget tasking scenario in which different optimization strategies are analyzed, illustrating the feasibility of the proposed tasking policy and evaluating the solution from both schedule quality and runtime perspectives.”--Abstract, page iii
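
    As a rough illustration of the conventional, myopic objective under analysis, the sketch below scores candidate updates by the covariance part of the Kullback-Leibler divergence of a linear-Gaussian measurement update (which, for a Kalman update, does not depend on the realized measurement); the measurement matrix H and noise covariance R are illustrative stand-ins, not the RSO models used in the thesis.

```python
# Hedged sketch: rank candidate RSO updates by covariance-based KL
# information gain. H and R below are illustrative placeholders.
import numpy as np

def posterior_cov(P, H, R):
    """Kalman-update covariance; independent of the realized measurement."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return (np.eye(P.shape[0]) - K @ H) @ P

def kl_gain(P_prior, P_post):
    """KL(N(m, P_post) || N(m, P_prior)) with coincident means."""
    k = P_prior.shape[0]
    Pi = np.linalg.inv(P_prior)
    return 0.5 * (np.trace(Pi @ P_post) - k
                  + np.log(np.linalg.det(P_prior) / np.linalg.det(P_post)))

def best_task(P_priors, H, R):
    """Pick the RSO whose measurement update yields the largest gain."""
    gains = [kl_gain(P, posterior_cov(P, H, R)) for P in P_priors]
    return int(np.argmax(gains)), gains
```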

    Decentralized Riemannian Particle Filtering with Applications to Multi-Agent Localization

    The primary focus of this research is to develop consistent nonlinear decentralized particle filtering approaches to the problem of multiple-agent localization. A key aspect of our development is the use of Riemannian geometry to exploit the inherently non-Euclidean characteristics that are typical of multiple-agent localization scenarios. A decentralized formulation is considered because of the practical advantages it provides over centralized fusion architectures. Inspiration is taken from the relatively new field of information geometry and the more established research field of computer vision. Differential-geometric tools such as manifolds, geodesics, tangent spaces, and exponential and logarithmic mappings are used extensively to describe probabilistic quantities. Numerous probabilistic parameterizations were considered, and the efficient square-root probability density function parameterization was selected. The square-root parameterization has the benefit of allowing filter calculations to be carried out on the well-studied Riemannian unit hypersphere. A key advantage of selecting the unit hypersphere is that it permits closed-form calculations, a characteristic not shared by current solution approaches. Through the use of the Riemannian geometry of the unit hypersphere, we demonstrate the ability to produce estimates that are not overly optimistic. Results are presented that clearly show the ability of the proposed approaches to outperform current state-of-the-art decentralized particle filtering methods. In particular, results are presented that emphasize the achievable improvement in estimation error, estimator consistency, and required computational burden.
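
    The closed-form calculations mentioned above come from standard unit-hypersphere geometry. Below is a minimal sketch of those primitives (exponential map, logarithmic map, geodesic distance), assuming points are unit-norm NumPy vectors; this is generic sphere geometry, not the authors' full decentralized filter.

```python
# Hedged sketch of unit-hypersphere primitives. Square-root densities
# sqrt(p(x)) live on this sphere, so distances reduce to an arccos of
# an inner product, which is what makes the calculations closed-form.
import numpy as np

def exp_map(p, v, eps=1e-12):
    """Move from sphere point p along tangent vector v."""
    nv = np.linalg.norm(v)
    if nv < eps:
        return p
    return np.cos(nv) * p + np.sin(nv) * v / nv

def log_map(p, q, eps=1e-12):
    """Tangent vector at p toward q, with length = geodesic distance."""
    cos_t = np.clip(p @ q, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < eps:
        return np.zeros_like(p)
    u = q - cos_t * p
    return theta * u / np.linalg.norm(u)

def geodesic_dist(p, q):
    """Closed-form great-circle distance between unit vectors."""
    return np.arccos(np.clip(p @ q, -1.0, 1.0))
```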

    Online Optimization Methods for the Quantification Problem

    The estimation of class prevalence, i.e., the fraction of a population that belongs to a certain class, is a very useful tool in data analytics and learning, and finds applications in many domains such as sentiment analysis and epidemiology. For example, in sentiment analysis the objective is often not to estimate whether a specific text conveys a positive or a negative sentiment, but rather to estimate the overall distribution of positive and negative sentiments during an event window. A popular way of performing this task, often dubbed quantification, is to use supervised learning to train a prevalence estimator from labeled data. Contemporary literature cites several performance measures used to assess the success of such prevalence estimators. In this paper we propose the first online stochastic algorithms for directly optimizing these quantification-specific performance measures. We also provide algorithms that optimize hybrid performance measures that seek to balance quantification and classification performance. Our algorithms present a significant advancement in the theory of multivariate optimization, and we show, by a rigorous theoretical analysis, that they exhibit optimal convergence. We also report extensive experiments on benchmark and real data sets which demonstrate that our methods significantly outperform existing optimization techniques used for these performance measures.
    Comment: 26 pages, 6 figures. A short version of this manuscript will appear in the proceedings of the 22nd ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2016.
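
    For context, here is a hedged sketch of a classical quantification baseline (adjusted classify-and-count), not the online algorithms proposed in the paper: the raw positive-prediction rate on the target set is corrected using the classifier's true- and false-positive rates estimated on held-out labeled data.

```python
# Hedged sketch of adjusted classify-and-count, a standard quantification
# baseline (not the paper's method). Inputs are illustrative.
import numpy as np

def adjusted_count(preds, tpr, fpr):
    """Estimate positive-class prevalence from binary predictions.

    preds    : 0/1 predictions on the unlabeled target set
    tpr, fpr : true/false positive rates from held-out validation data
    """
    cc = np.mean(preds)              # raw classify-and-count estimate
    if tpr - fpr <= 0:
        return float(cc)             # degenerate classifier; fall back
    # Correct the systematic bias of the raw count, then clip to [0, 1].
    return float(np.clip((cc - fpr) / (tpr - fpr), 0.0, 1.0))
```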