
    Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation

    Controlling bias in training datasets is vital for ensuring equal treatment, or parity, between different groups in downstream applications. A naive solution is to transform the data so that it is statistically independent of group membership, but this may throw away too much information when a reasonable compromise between fairness and accuracy is desired. Another common approach is to limit the ability of a particular adversary who seeks to maximize parity. Unfortunately, representations produced by adversarial approaches may still retain biases as their efficacy is tied to the complexity of the adversary used during training. To address this, we theoretically establish that by limiting the mutual information between representations and protected attributes, we can assuredly control the parity of any downstream classifier. We demonstrate an effective method for controlling parity through mutual information based on contrastive information estimators and show that they outperform approaches that rely on variational bounds based on complex generative models. We test our approach on UCI Adult and Heritage Health datasets and demonstrate that our approach provides more informative representations across a range of desired parity thresholds while providing strong theoretical guarantees on the parity of any downstream algorithm. Comment: This version fixes an error in Theorem 2 of the original manuscript that appeared at the Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI-21). Code is available at https://github.com/umgupta/fairness-via-contrastive-estimatio
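    As a rough illustration of the idea (a minimal sketch, not the authors' released implementation; the linked repository has that), a contrastive critic scores (representation, protected-attribute) pairs, an InfoNCE-style objective turns those scores into a mutual-information estimate, and the encoder is penalized whenever the estimate exceeds a chosen budget. The bilinear critic, the budget, and the penalty weight below are illustrative assumptions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class BilinearCritic(nn.Module):
            """Scores pairs of representations z and protected-attribute embeddings c."""
            def __init__(self, z_dim, c_dim):
                super().__init__()
                self.W = nn.Parameter(0.01 * torch.randn(z_dim, c_dim))

            def forward(self, z, c):
                # scores[i, j] = z_i^T W c_j; matched pairs sit on the diagonal
                return z @ self.W @ c.t()

        def infonce_mi_estimate(scores):
            """InfoNCE estimate of I(Z; C) from an n x n matrix of pair scores.

            The critic is trained to maximize this quantity (tightening the bound);
            the encoder treats it as a differentiable proxy to be kept small.
            """
            n = scores.size(0)
            labels = torch.arange(n, device=scores.device)
            return torch.log(torch.tensor(float(n))) - F.cross_entropy(scores, labels)

        def parity_penalty(mi_estimate, budget=0.1, weight=10.0):
            # Illustrative hinge penalty: cost only when the MI proxy exceeds the budget
            return weight * F.relu(mi_estimate - budget)

    In a training loop, the critic would be updated to maximize infonce_mi_estimate, while the encoder minimizes its task loss plus parity_penalty applied to the same estimate.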

    A Study of Synchronization Techniques for Optical Communication Systems

    The study of synchronization techniques and related topics in the design of high-data-rate, deep-space optical communication systems was reported. The data cover: (1) effects of timing errors in narrow-pulsed digital optical systems, (2) accuracy of microwave timing systems operating in low-powered optical systems, (3) development of improved tracking systems for the optical channel and determination of their tracking performance, (4) development of usable photodetector mathematical models for application to analysis and performance design of communication receivers, and (5) study of the application of multi-level block encoding to optical transmission of digital data.

    Statistical methods for scale-invariant and multifractal stochastic processes.

    This thesis focuses on stochastic modeling and statistical methods in finance and in climate science. Two financial markets, short-term interest rates and electricity prices, are analyzed. We find that the evidence of mean reversion in short-term interest rates is weak, while the “log-returns” of electricity prices have significant anti-correlations. More importantly, empirical analyses confirm the multifractal nature of these financial markets, and we propose multifractal models that incorporate the specific conditional mean reversion and level dependence. A second topic in the thesis is the analysis of regional (5° × 5° and 2° × 2° latitude-longitude) globally gridded surface temperature series for the time period 1900-2014, with respect to a linear trend and long-range dependence. We find statistically significant trends in most regions. However, we also demonstrate that the existence of a second scaling regime on decadal time scales has an impact on trend detection. The last main result is an approximate maximum likelihood (ML) method for the log-normal multifractal random walk. It is shown that the ML method has applications beyond parameter estimation and can, for instance, be used to compute various risk measures in financial markets.
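    As a concrete reference point for the log-normal multifractal random walk mentioned above, here is a minimal NumPy sketch of the standard discretization: Gaussian increments modulated by exp(omega), where omega has a logarithmic covariance truncated at a decorrelation scale T. The parameter values and the eigenvalue-clipping step are illustrative choices, not taken from the thesis.

        import numpy as np

        def simulate_mrw(n=2048, dt=1.0, T=512.0, lam2=0.05, sigma=1.0, seed=0):
            """Simulate a discretized log-normal multifractal random walk.

            Increments: dX_i = eps_i * exp(omega_i), with eps_i ~ N(0, sigma^2 * dt)
            and omega Gaussian with Cov(omega_i, omega_j) = lam2 * ln(T / (lag + dt))
            for lag < T (zero beyond), mean chosen so that E[exp(2 * omega)] = 1.
            """
            rng = np.random.default_rng(seed)
            lag = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) * dt
            cov = np.where(lag < T, lam2 * np.log(T / (lag + dt)), 0.0)
            mean = -lam2 * np.log(T / dt)      # normalization of the log-volatility
            # Sample omega via eigendecomposition, clipping tiny negative eigenvalues
            w, V = np.linalg.eigh(cov)
            root = V * np.sqrt(np.clip(w, 0.0, None))
            omega = mean + root @ rng.standard_normal(n)
            eps = rng.normal(0.0, sigma * np.sqrt(dt), size=n)
            return np.cumsum(eps * np.exp(omega))

    A quick check of multifractality is to estimate the structure functions E|X(t+s) - X(t)|^q over a range of lags s; for a multifractal process the fitted log-log slopes are a concave function of q rather than a straight line.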

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection---that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
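    To make the "linear combinations of a few dictionary elements" idea concrete, here is a minimal NumPy sketch of sparse coding with a fixed dictionary via ISTA (iterative soft-thresholding for the l1-penalized least-squares problem). The dictionary size, penalty lam, and iteration count are illustrative; the monograph itself covers dictionary learning and faster solvers.

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def sparse_code_ista(D, x, lam=0.1, n_iter=200):
            """Minimize 0.5 * ||x - D a||^2 + lam * ||a||_1 over the code a (ISTA)."""
            step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the gradient
            a = np.zeros(D.shape[1])
            for _ in range(n_iter):
                grad = D.T @ (D @ a - x)
                a = soft_threshold(a - step * grad, lam * step)
            return a

        # Toy usage: recover a 5-sparse code from a noisy measurement
        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
        a_true = np.zeros(256)
        a_true[rng.choice(256, size=5, replace=False)] = 1.0
        x = D @ a_true + 0.01 * rng.standard_normal(64)
        a_hat = sparse_code_ista(D, x, lam=0.05)
        print(np.count_nonzero(np.abs(a_hat) > 1e-3))  # number of active atoms, close to 5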