
    An efficient message passing algorithm for multi-target tracking

    We propose a new approach to multi-sensor multi-target tracking by constructing statistical models on graphs with continuous-valued nodes for target states and discrete-valued nodes for data association hypotheses. These graphical representations lead to message-passing algorithms for fusing data across time, sensor, and target that differ radically from those found in state-of-the-art multiple hypothesis tracking (MHT) algorithms. Important differences include: (a) our message-passing algorithms explicitly compute different probabilities and estimates than MHT algorithms do; (b) our algorithms propagate information from future data about past hypotheses via messages backward in time (rather than by extending track hypothesis trees forward in time); and (c) the combinatorial complexity of the problem manifests differently: approximate, particle-like messages are propagated forward and backward in time (rather than hypotheses being enumerated and truncated over time). A side benefit of this structure is that it automatically provides smoothed target trajectories that use future data. A major advantage is the potential for low-order polynomial (and, in some cases, linear) dependence on the length of the tracking interval N, in contrast with the exponential complexity in N of so-called N-scan algorithms. We provide experimental results that support this potential. As a result, we can afford to use longer tracking intervals, allowing us to incorporate out-of-sequence data seamlessly and to perform track-stitching when future data provide evidence that disambiguates tracks well into the past.
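
    The forward/backward particle-style messages described above can be illustrated on a toy problem. The sketch below is a heavily simplified assumption, not the authors' algorithm: a single 1D target with linear-Gaussian motion, and a per-scan discrete association node marginalized by summing the measurement likelihood over all returns. All noise settings, the clutter model, and the function names are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        q, r = 0.5, 0.3            # process / measurement noise variances
        T, P = 20, 500             # scans, particles per message

        # simulate one 1D track plus cluttered scans (3 clutter points per scan)
        x_true = np.cumsum(rng.normal(0, np.sqrt(q), T))
        scans = [np.append(rng.uniform(-10, 10, 3),
                           x + rng.normal(0, np.sqrt(r))) for x in x_true]

        def assoc_weight(parts, scan):
            # marginalize the discrete association node: sum the measurement
            # likelihood over every candidate return in the scan (uniform prior)
            lik = np.zeros(len(parts))
            for z in scan:
                lik += np.exp(-(z - parts) ** 2 / (2 * r))
            return lik

        # forward pass: motion update, association-marginalized weighting, resample
        fwd, parts = [], rng.normal(x_true[0], 1.0, P)
        for t in range(T):
            parts = parts + rng.normal(0, np.sqrt(q), P)
            w = assoc_weight(parts, scans[t])
            parts = rng.choice(parts, P, p=w / w.sum())
            fwd.append(parts.copy())

        # backward pass: reweight the forward particles at t by a kernel estimate
        # of the backward message coming from the smoothed particles at t + 1
        smoothed = [None] * T
        smoothed[-1] = fwd[-1]
        for t in range(T - 2, -1, -1):
            d = smoothed[t + 1][None, :] - fwd[t][:, None]
            bw = np.exp(-d ** 2 / (2 * q)).mean(axis=1)
            smoothed[t] = rng.choice(fwd[t], P, p=bw / bw.sum())

        est = np.array([s.mean() for s in smoothed])
        print("smoothed RMSE:", np.sqrt(np.mean((est - x_true) ** 2)))

    Note how smoothing falls out for free: the backward messages revise every past state estimate with future evidence, which is the mechanism the abstract credits for track-stitching.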

    Efficient Conditionally Invariant Representation Learning

    We introduce the Conditional Independence Regression CovariancE (CIRCE), a measure of conditional independence for multivariate continuous-valued variables. CIRCE serves as a regularizer in settings where we wish to learn neural features φ(X) of data X to estimate a target Y while remaining conditionally independent of a distractor Z given Y. Both Z and Y are assumed to be continuous-valued but relatively low-dimensional, whereas X and its features may be complex and high-dimensional. Relevant settings include domain-invariant learning, fairness, and causal learning. The procedure requires just a single ridge regression from Y to kernelized features of Z, which can be done in advance. It is then only necessary to enforce independence of φ(X) from the residuals of this regression, which is possible with attractive estimation properties and consistency guarantees. By contrast, earlier measures of conditional feature dependence require multiple regressions for each step of feature learning, resulting in more severe bias and variance, and greater computational cost. When sufficiently rich features are used, we establish that CIRCE is zero if and only if φ(X) ⊥⊥ Z | Y. In experiments, we show superior performance to previous methods on challenging benchmarks, including learning conditionally invariant image features.
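
    The two-stage recipe in the abstract - one ridge regression done in advance, then a penalty on the learned features - can be sketched as follows. This is a hedged simplification, not the paper's estimator (which involves further kernelization and bias corrections): the explicit RBF feature map, the Frobenius-norm cross-covariance penalty, and the names rbf_features, fit_residualizer, and circe_penalty are all our assumptions.

        import torch

        def rbf_features(z, centers, gamma=1.0):
            # explicit RBF feature map: k(z, c_j) for a fixed set of centers c_j
            return torch.exp(-gamma * torch.cdist(z, centers) ** 2)

        def fit_residualizer(y, z, centers, lam=1e-3):
            # done once, in advance: ridge regression from Y (plus bias) to
            # kernelized features of Z; returns the regression residuals
            Kz = rbf_features(z, centers)                       # (n, m)
            Y = torch.cat([y, torch.ones(len(y), 1)], dim=1)    # (n, d_y + 1)
            W = torch.linalg.solve(Y.T @ Y + lam * torch.eye(Y.shape[1]), Y.T @ Kz)
            return Kz - Y @ W

        def circe_penalty(phi_x, residuals):
            # squared Frobenius norm of the empirical cross-covariance between
            # the learned features phi(X) and the precomputed residuals
            phi_c = phi_x - phi_x.mean(0, keepdim=True)
            res_c = residuals - residuals.mean(0, keepdim=True)
            C = phi_c.T @ res_c / len(phi_x)
            return (C ** 2).sum()

        # toy usage; in training: loss = task_loss + beta * circe_penalty(phi_x, res)
        torch.manual_seed(0)
        z, y, phi_x = torch.randn(256, 2), torch.randn(256, 1), torch.randn(256, 16)
        res = fit_residualizer(y, z, centers=z[:64])
        print(circe_penalty(phi_x, res))

    Because the residuals depend only on Y and Z, they can be cached before feature learning starts; this is what keeps the per-step cost to a single penalty evaluation rather than fresh regressions.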

    Robust Concept Erasure via Kernelized Rate-Distortion Maximization

    Distributed representations provide a vector space that captures meaningful relationships between data instances. The distributed nature of these representations, however, entangles multiple attributes or concepts of data instances (e.g., the topic or sentiment of a text, or characteristics of its author such as age and gender). Recent work has proposed the task of concept erasure, in which, rather than making a concept predictable, the goal is to remove an attribute from distributed representations while retaining as much of the other information in the original representation space as possible. In this paper, we propose a new distance-metric-learning-based objective, the Kernelized Rate-Distortion Maximizer (KRaM), for performing concept erasure. KRaM fits a transformation of representations to match a specified distance measure (defined by a labeled concept to erase) using a modified rate-distortion function. Specifically, KRaM's objective function aims to make instances with similar concept labels dissimilar in the learned representation space while retaining other information. We find that optimizing KRaM effectively erases various types of concepts from data representations across diverse domains: categorical, continuous, and vector-valued variables. We also provide a theoretical analysis of several properties of KRaM's objective. To assess the quality of the learned representations, we propose an alignment score to evaluate their similarity with the original representation space. Additionally, we conduct experiments to showcase KRaM's efficacy in various settings, from erasing binary gender variables in word embeddings to vector-valued variables in GPT-3 representations.
    Comment: NeurIPS 202
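
    The abstract gives enough to sketch the flavour of a rate-distortion erasure objective, but not the paper's exact formulation; the log-det coding-rate term, the proximity penalty, the weighting, and every name below are illustrative assumptions only, not KRaM itself.

        import torch

        def coding_rate(Z, eps=0.5):
            # log-det coding rate of a batch of d-dimensional representations Z (n, d)
            n, d = Z.shape
            return 0.5 * torch.logdet(torch.eye(d) + (d / (n * eps ** 2)) * Z.T @ Z)

        def kram_like_loss(Z_new, Z_orig, labels, alpha=1.0):
            loss = 0.0
            # spread out instances sharing a concept label, so the label is no
            # longer recoverable from distances in the new space
            for c in labels.unique():
                loss = loss - coding_rate(Z_new[labels == c])
            # ... while staying close to the original space to retain other content
            return loss + alpha * ((Z_new - Z_orig) ** 2).mean()

        # toy usage: learn a linear eraser over frozen 32-d representations
        torch.manual_seed(0)
        Z = torch.randn(512, 32)
        labels = torch.randint(0, 4, (512,))
        eraser = torch.nn.Linear(32, 32)
        opt = torch.optim.Adam(eraser.parameters(), lr=1e-2)
        for _ in range(200):
            opt.zero_grad()
            kram_like_loss(eraser(Z), Z, labels).backward()
            opt.step()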

    Easy Uncertainty Quantification (EasyUQ): Generating Predictive Distributions from Single-valued Model Output

    How can we quantify uncertainty if our favorite computational tool - be it a numerical, a statistical, or a machine learning approach, or just any computer model - provides single-valued output only? In this article, we introduce the Easy Uncertainty Quantification (EasyUQ) technique, which transforms real-valued model output into calibrated statistical distributions based solely on training data of model output-outcome pairs, without any need to access model input. In its basic form, EasyUQ is a special case of the recently introduced Isotonic Distributional Regression (IDR) technique, which leverages the pool-adjacent-violators algorithm for nonparametric isotonic regression. EasyUQ yields discrete predictive distributions that are calibrated and optimal in finite samples, subject to stochastic monotonicity. The workflow is fully automated, without any need for tuning. The Smooth EasyUQ approach supplements IDR with kernel smoothing to yield continuous predictive distributions that preserve key properties of the basic form, including both stochastic monotonicity with respect to the original model output and asymptotic consistency. For the selection of kernel parameters, we introduce multiple one-fit grid search, a computationally much less demanding approximation to leave-one-out cross-validation. We use simulation examples and forecast data from weather prediction to illustrate the techniques. In a study of benchmark problems from machine learning, we show how EasyUQ and Smooth EasyUQ can be integrated into the workflow of neural network learning and hyperparameter tuning, and find EasyUQ to be competitive with conformal prediction as well as with more elaborate input-based approaches.
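
    One standard reduction makes the basic IDR fit behind EasyUQ easy to sketch: at each threshold y, the predictive CDF value is an antitonic (decreasing-in-x) least-squares fit to the indicators 1{y_i <= y}, computed by the pool-adjacent-violators algorithm. The sketch below leans on scikit-learn's IsotonicRegression (which implements PAV) and an invented toy dataset; the finite threshold grid and the cleanup step are simplifications, not the published method.

        import numpy as np
        from sklearn.isotonic import IsotonicRegression

        def easyuq_cdf(x_train, y_train, x_new, thresholds):
            # F[i, j] ~= P(Y <= thresholds[j] | model output = x_new[i])
            F = np.empty((len(x_new), len(thresholds)))
            for j, t in enumerate(thresholds):
                # antitonic fit: a larger model output should mean a smaller
                # probability of the outcome falling below a fixed threshold
                iso = IsotonicRegression(increasing=False, y_min=0.0, y_max=1.0,
                                         out_of_bounds="clip")
                iso.fit(x_train, (y_train <= t).astype(float))   # PAV under the hood
                F[:, j] = iso.predict(x_new)
            # cheap cleanup: enforce that each row is a nondecreasing CDF
            return np.maximum.accumulate(F, axis=1)

        # toy usage: single-valued forecasts x with heteroscedastic outcomes y
        rng = np.random.default_rng(1)
        x = rng.uniform(0, 10, 500)
        y = x + rng.normal(0, 1 + 0.2 * x, 500)
        thresholds = np.quantile(y, np.linspace(0.05, 0.95, 19))
        print(easyuq_cdf(x, y, np.array([2.0, 8.0]), thresholds).round(2))

    Nothing here needs tuning, which matches the abstract's point: the only inputs are model output-outcome pairs and the (automatic) monotonicity assumption.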

    Improved Modeling of the Correlation Between Continuous-Valued Sources in LDPC-Based DSC

    Accurate modeling of the correlation between the sources plays a crucial role in the efficiency of distributed source coding (DSC) systems. This correlation is commonly modeled in the binary domain by a single binary symmetric channel (BSC), for binary and continuous-valued sources alike. We show that "one" BSC cannot accurately capture the correlation between continuous-valued sources; a more accurate model requires "multiple" BSCs, as many as the number of bits used to represent each sample. We incorporate this new model into a DSC system that uses low-density parity-check (LDPC) codes for compression. The standard Slepian-Wolf LDPC decoder requires only a slight modification so that the parameters of all BSCs are integrated into the log-likelihood ratios (LLRs). Further, an interleaver shuffles the data belonging to different bit-planes to introduce randomness in the binary domain. The new system has the same complexity and delay as the standard one. Simulation results demonstrate the effectiveness of the proposed model and system.
    Comment: 5 pages, 4 figures; presented at the Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, November 201
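
    The multiple-BSC idea can be illustrated numerically: quantize two correlated continuous sources, measure one empirical crossover probability per bit-plane, and form per-plane LLRs for the decoder. The jointly Gaussian correlation model, the uniform quantizer, and all names below are toy assumptions rather than the paper's setup.

        import numpy as np

        rng = np.random.default_rng(2)
        n, Q = 100_000, 4                        # samples, bits per sample
        x = rng.normal(0.0, 1.0, n)              # source
        y = x + rng.normal(0.0, 0.3, n)          # correlated side information

        def quantize(v, lo=-4.0, hi=4.0):
            # uniform Q-bit quantizer -> (n, Q) bit array, MSB first
            idx = np.clip(((v - lo) / (hi - lo) * 2 ** Q).astype(int), 0, 2 ** Q - 1)
            return (idx[:, None] >> np.arange(Q - 1, -1, -1)) & 1

        bx, by = quantize(x), quantize(y)
        # one empirical crossover probability per bit-plane (a single-BSC model
        # would collapse these into one number for all planes)
        p = np.clip((bx != by).mean(axis=0), 1e-6, 0.5)
        print("per-plane crossover:", p.round(3))   # MSB planes flip far less often

        # decoder-side channel LLRs for a Slepian-Wolf LDPC decoder: the sign comes
        # from the side-information bit, the magnitude from that plane's BSC parameter
        llr = (1 - 2 * by) * np.log((1 - p) / p)

    Running this, the most significant planes show much smaller crossover probabilities than the least significant ones, which is precisely why a single BSC parameter misstates the per-bit reliability that the LLRs should encode.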

    What is traditional acupuncture - exploring goals and processes of treatment in the context of women with early breast cancer

    Background: Despite the increasing popularity of acupuncture, there remains uncertainty as to its effectiveness and how it brings about change. Particular questions concern whether acupuncture research has sufficient model validity and reflects acupuncture as practised. Exploring traditional acupuncture (TA) in practice should help to expose processes essential to the theory of TA. The aim of this study was to examine what TA practitioners aim to achieve, their rationale, and how they follow this through in their practice. Methods: A longitudinal study of TA for women with early breast cancer (EBC) was performed. Study participants comprised 14 women with EBC and two experienced TA practitioners, all taking part in in-depth interviews conducted before and after receipt of up to 10 treatment sessions and analysed using grounded theory methods. Additional data came from practitioner treatment logs and diaries. Results: Practitioners sought the long-term goals of increasing strength and enabling coping as well as the immediate relief of symptoms. They achieved this through a continuous process of treatment, following through on the recursive and individualized nature of TA and adjusting, via differential diagnosis, to the rapidly fluctuating circumstances of individual women. Establishing trust and good rapport with the women aided disclosure, which was seen as essential in order to clarify goals during chemotherapy. This process was carefully managed by the practitioners, and the resulting therapeutic relationship was highly valued by the women. Conclusion: This study provided insight into the interdependent components of TA, helping to demonstrate the multiple causal pathways to change through the continuous process of new information, insights, and treatment changes. A good therapeutic relationship was not simply something valued by patients but was explicitly used by practitioners to aid disclosure, which in turn affected the details of treatment. The therapeutic relationship was therefore a vital and integral part of the treatment process.