
    Complexity of Leading Digit Sequences

    Let S_{a,b} denote the sequence of leading digits of a^n in base b. It is well known that if a is not a rational power of b, then the sequence S_{a,b} satisfies Benford's Law; that is, digit d occurs in S_{a,b} with frequency log_b(1 + 1/d), for d = 1, 2, ..., b-1. In this paper, we investigate the complexity of such sequences. We focus mainly on the block complexity, p_{a,b}(n), defined as the number of distinct blocks of length n appearing in S_{a,b}. In our main result we determine p_{a,b}(n) for all squarefree bases b >= 5 and all rational numbers a > 0 that are not integral powers of b. In particular, we show that, for all such pairs (a, b), the complexity function p_{a,b}(n) is affine, i.e., satisfies p_{a,b}(n) = c_{a,b} n + d_{a,b} for all n >= 1, with coefficients c_{a,b} >= 1 and d_{a,b} >= 0 given explicitly in terms of a and b. We also show that the requirement that b be squarefree cannot be dropped: if b is not squarefree, then there exist integers a with 1 < a < b for which p_{a,b}(n) is not of the above form. We use this result to obtain sharp upper and lower bounds for p_{a,b}(n), and to determine the asymptotic behavior of this function as b → ∞ through squarefree values. We also consider the question of which linear functions p(n) = cn + d arise as the complexity function p_{a,b}(n) of some leading digit sequence S_{a,b}. We conclude with a discussion of other complexity measures for the sequences S_{a,b} and some open problems.
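    As a quick illustration (not taken from the paper), the Benford frequencies for the leading-digit sequence S_{2,10} can be checked empirically; 2 is not a rational power of 10, so the theorem's hypothesis holds:

    ```python
    import math
    from collections import Counter

    # Leading digits of 2^n in base 10 for n = 1 .. N.
    N = 10000
    leading = [int(str(2 ** n)[0]) for n in range(1, N + 1)]
    freq = Counter(leading)

    # Compare empirical frequencies with Benford's Law: log10(1 + 1/d).
    for d in range(1, 10):
        empirical = freq[d] / N
        benford = math.log10(1 + 1 / d)
        print(f"d={d}: empirical {empirical:.4f}, Benford {benford:.4f}")
    ```

    For N = 10000 the empirical frequency of leading digit 1 is already very close to log10(2) ≈ 0.3010.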

    Improving garment thermal insulation property by combining two non-contact measuring tools

    To investigate the effect of air gaps on the heat transfer performance of clothing, a method combining two non-contact measuring tools (an infrared thermal camera and a 3D body scanner) was developed to quantify the air gap thickness and clothing surface temperature of different body parts without touching the clothing surface directly. The results show that the air gaps over the middle and lower back of the upper body are the thickest of all body parts, while the front and back shoulders have the smallest air gap thickness. One-way analysis of variance shows no significant difference in air gap thickness among the shoulder segments. Furthermore, clothing surface temperatures of the shoulder and chest decrease gradually with increasing air gap thickness; those of the front abdomen, front waist, pelvis, and hip segments decrease initially but begin to increase once the air gap exceeds 1.5 cm; and those of the middle back and back waist increase continually with air gap thickness. Based on a comprehensive analysis of the distribution of air gap thickness and clothing surface temperature across body parts, a revised clothing pattern with lower regional temperature and higher thermal insulation is proposed.

    More Data Can Lead Us Astray: Active Data Acquisition in the Presence of Label Bias

    Increased awareness of the risks of algorithmic bias has driven a surge of efforts around bias mitigation strategies. The vast majority of proposed approaches fall under one of two categories: (1) imposing algorithmic fairness constraints on predictive models, and (2) collecting additional training samples. Most recently, and at the intersection of these two categories, methods for active learning under fairness constraints have been developed. However, proposed bias mitigation strategies typically overlook the bias present in the observed labels. In this work, we study fairness considerations of active data collection strategies in the presence of label bias. We first present an overview of different types of label bias in the context of supervised learning systems. We then empirically show that, when label bias is overlooked, collecting more data can aggravate bias, and that imposing fairness constraints that rely on the observed labels in the data collection process may not address the problem. Our results illustrate the unintended consequences of deploying a model that attempts to mitigate a single type of bias while neglecting others, emphasizing the importance of explicitly differentiating between the types of bias that fairness-aware algorithms aim to address, and highlighting the risks of neglecting label bias during data collection.

    Mitigating Label Bias via Decoupled Confident Learning

    Growing concerns regarding algorithmic fairness have led to a surge in methodologies to mitigate algorithmic bias. However, such methodologies largely assume that the observed labels in training data are correct. This is problematic because bias in labels is pervasive across important domains, including healthcare, hiring, and content moderation. In particular, human-generated labels are prone to encoding societal biases. While the presence of labeling bias has been discussed conceptually, there is a lack of methodologies to address this problem. We propose a pruning method -- Decoupled Confident Learning (DeCoLe) -- specifically designed to mitigate label bias. After illustrating its performance on a synthetic dataset, we apply DeCoLe in the context of hate speech detection, where label bias has been recognized as an important challenge, and show that it successfully identifies biased labels and outperforms competing approaches. (Comment: AI & HCI Workshop at the 40th International Conference on Machine Learning (ICML), Honolulu, Hawaii, USA, 2023.)
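    To make the pruning idea concrete, here is a minimal confident-learning-style sketch, not the authors' DeCoLe implementation: each class gets a threshold equal to the mean predicted probability the model assigns to that class over examples carrying that label, and examples whose observed label falls below its class threshold are flagged as potentially mislabeled. The function name and the toy data are illustrative assumptions:

    ```python
    def flag_suspect_labels(probs, labels):
        """Flag examples whose observed label receives unusually low
        predicted probability, relative to a per-class threshold
        (the mean self-confidence of that class)."""
        classes = sorted(set(labels))
        # Per-class threshold: average predicted probability of the
        # observed label, over examples carrying that label.
        thresholds = {}
        for c in classes:
            confs = [p[c] for p, y in zip(probs, labels) if y == c]
            thresholds[c] = sum(confs) / len(confs)
        # An example is suspect if the model's confidence in its
        # observed label falls below that label's class threshold.
        return [i for i, (p, y) in enumerate(zip(probs, labels))
                if p[y] < thresholds[y]]

    # Toy example: 4 items, 2 classes; items 1 and 3 look mislabeled.
    probs = [[0.9, 0.1], [0.3, 0.7], [0.2, 0.8], [0.85, 0.15]]
    labels = [0, 0, 1, 1]
    print(flag_suspect_labels(probs, labels))  # → [1, 3]
    ```

    The per-class ("decoupled") thresholds matter when classes differ in base rate or difficulty; a single global confidence cutoff would over-prune the harder class.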