306 research outputs found

    Face Quality Estimation and Its Correlation to Demographic and Non-Demographic Bias in Face Recognition

    Full text link
    Face quality assessment aims to estimate the utility of a face image for the purpose of recognition, and it is a key factor in achieving high face recognition performance. Currently, the high performance of face recognition systems comes at the cost of a strong bias against demographic and non-demographic sub-groups. Recent work has shown that face quality assessment algorithms should adapt to the deployed face recognition system in order to achieve highly accurate and robust quality estimates. However, this could transfer the recognition bias into the face quality assessment, leading to discriminatory effects, e.g. during enrolment. In this work, we present an in-depth analysis of the correlation between bias in face recognition and in face quality assessment. Experiments were conducted on two publicly available datasets, captured under controlled and uncontrolled circumstances, with two popular face embeddings. We evaluated four state-of-the-art face quality assessment solutions for biases with respect to pose, ethnicity, and age. The experiments showed that the face quality assessment solutions assign significantly lower quality values to the subgroups affected by the recognition bias, demonstrating that these approaches are biased as well. This raises ethical questions concerning fairness and discrimination that future work has to address. Comment: Accepted at IJCB202
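    The core measurement in this study, comparing quality-score distributions across subgroups, can be sketched in a few lines. The snippet below is a hypothetical illustration of that kind of per-subgroup comparison, not the authors' pipeline; the subgroup labels, scores, and column names are invented for the example.

```python
# Hypothetical sketch of a per-subgroup quality comparison (not the
# authors' actual pipeline): group face-quality scores by subgroup and
# compare their distributions. Systematically lower scores for a
# subgroup that also suffers higher recognition error would indicate
# the bias transfer described above.
import pandas as pd

scores = pd.DataFrame({
    "subgroup": ["young", "young", "senior", "senior", "senior"],
    "quality":  [0.91, 0.88, 0.62, 0.70, 0.66],  # toy scores in [0, 1]
})

print(scores.groupby("subgroup")["quality"].agg(["mean", "std", "count"]))
```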

    Implementing and Evaluating Security in O-RAN: Interfaces, Intelligence, and Platforms

    Full text link
    The Open Radio Access Network (RAN) is a networking paradigm that builds on top of cloud-based, multi-vendor, open, and intelligent architectures to shape the next generation of cellular networks for 5G and beyond. While this new paradigm comes with many advantages in terms of observability and reconfigurability of the network, it inevitably expands the threat surface of cellular systems and can potentially expose their components to several cyber attacks, making the security of O-RAN networks a necessity. In this paper, we explore the security aspects of O-RAN systems by focusing on the specifications and architectures proposed by the O-RAN Alliance. We address the problem of securing O-RAN systems from a holistic perspective, including considerations on the open interfaces used to interconnect the different O-RAN components, on the overall platform, and on the intelligence used to monitor and control the network. For each focus area, we identify threats, discuss relevant solutions to address these issues, and demonstrate experimentally how such solutions can effectively defend O-RAN systems against selected cyber attacks. This article is the first to approach O-RAN security holistically and with experimental evidence obtained on a state-of-the-art programmable O-RAN platform, thus providing unique guidelines for researchers in the field. Comment: 7 pages, 5 figures, 1 table, submitted to IEEE Network Magazin

    The INTERSPEECH 2020 Far-Field Speaker Verification Challenge

    Full text link
    The INTERSPEECH 2020 Far-Field Speaker Verification Challenge (FFSVC 2020) addresses three different research problems under well-defined conditions: far-field text-dependent speaker verification from a single microphone array, far-field text-independent speaker verification from a single microphone array, and far-field text-dependent speaker verification from distributed microphone arrays. All three tasks pose a cross-channel challenge to the participants. To simulate the real-life scenario, the enrollment utterances are recorded with a close-talk cellphone, while the test utterances are recorded with far-field microphone arrays. In this paper, we describe the database, the challenge, and the baseline system, which is a ResNet-based deep speaker network with cosine similarity scoring. For a given utterance, the speaker embeddings from different channels are averaged to form the final embedding. The baseline system achieves minDCFs of 0.62, 0.66, and 0.64 and EERs of 6.27%, 6.55%, and 7.18% for task 1, task 2, and task 3, respectively. Comment: Submitted to INTERSPEECH 202
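    The baseline's scoring step is simple enough to sketch. Assuming the embeddings have already been extracted by the ResNet speaker network (the names and dimensions below are illustrative), per-channel embeddings are averaged into a single vector and trials are scored by cosine similarity:

```python
# Illustrative sketch of the baseline scoring described above: per-channel
# speaker embeddings are averaged into one vector, and an enrollment/test
# pair is scored with cosine similarity. Embedding extraction itself (the
# ResNet network) is assumed to have happened upstream.
import numpy as np

def pool_channels(channel_embeddings: np.ndarray) -> np.ndarray:
    """Average embeddings of shape [n_channels, dim] into a single [dim] vector."""
    return channel_embeddings.mean(axis=0)

def cosine_score(enroll: np.ndarray, test: np.ndarray) -> float:
    return float(np.dot(enroll, test)
                 / (np.linalg.norm(enroll) * np.linalg.norm(test)))

rng = np.random.default_rng(0)
enroll = pool_channels(rng.standard_normal((1, 256)))  # close-talk cellphone
test = pool_channels(rng.standard_normal((4, 256)))    # 4-channel far-field array
print(cosine_score(enroll, test))  # higher score = more likely same speaker
```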

    The Evidentiary Implications of Interpreting Black-Box Algorithms

    Get PDF
    Biased black-box algorithms have drawn increasing scrutiny from the public. This is especially true for those black-box algorithms with the potential to negatively affect protected or vulnerable populations. One type of black-box algorithm, the neural network, is both opaque and capable of high accuracy. However, neural networks do not provide insight into the relative importance of, or the underlying relationships and structures among, the predictors or covariates and the modelled outcomes. There are methods to combat a neural network's lack of transparency: globally or locally interpretable post-hoc explanatory models. However, the threat of such measures usually does not bar an actor from deploying a black-box algorithm that generates unfair outcomes along racial, class, or gender lines. Fortunately, researchers have recognized this issue and developed interpretability frameworks to better understand such black-box algorithms. One of these remedies, the Shapley Additive Explanation ("SHAP") method, ranks the determinative factors that led to the algorithm's final decision and measures the partial effects of the independent variables used in the model. Another, the Local Interpretable Model-agnostic Explanations ("LIME") method, uses a similar approach to reverse-engineer the determinative factors harnessed by the algorithm. Both SHAP and LIME have the potential to shine light into even the most accurate, precise black-box algorithms. These black-box algorithms can harm people's physical being and property interests. However, algorithm developers currently hide behind the nominally impenetrable nature of the algorithm to shield themselves from liability. These developers claim that black-box algorithms are the industry standard, owing to the increased accuracy and precision that such algorithms typically possess. However, SHAP/LIME can ascertain which factors might cloud the judgment of the algorithm and therefore cause harm. As such, SHAP/LIME may lower the foreseeability threshold currently set by tort law and help consumer-rights advocates combat institutions that recklessly foist malevolent algorithms upon the public. Part II will provide an overview of the SHAP/LIME methods and apply them to a tort scenario involving a self-driving car accident. Part III will cover the potential tort claims that may arise out of the self-driving car accident, and how SHAP/LIME would advance each of those claims. SHAP/LIME's output has not yet been compared to the foreseeability threshold under negligence or product/service liability. There are numerous factors that sway SHAP/LIME both towards and against reaching that threshold. The implications of this are severe: if the foreseeability threshold is not reached, a finder of fact might not find fault with the algorithm generator. Part IV will cover the evidentiary objections that might arise when submitting SHAP/LIME-generated evidence for admission. Reverse-engineering an algorithm mirrors crime scene re-creation; thus, the evidentiary issues involved in recreating crime scenes appear when reverse-engineering algorithms. Important questions on relevance, authenticity, and accessibility to the algorithm directly affect the viability of submitting evidence derived using either the SHAP or LIME method. Part V will conclude by contextualizing the need for transparency within an increasingly algorithm-driven society.
    I conclude that tort law's foreseeability threshold is currently not fit for purpose when it comes to delivering justice to victims of biased black-box algorithms. As for complying with the Federal Rules of Evidence, SHAP/LIME's admissibility depends on the statistical confidence level of the method's results. I conclude that SHAP/LIME have generally been properly tested and accepted by the scientific community, so it is probable that statistically relevant SHAP/LIME-generated evidence can be admitted.
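    To ground the discussion, the basic SHAP workflow the note refers to looks roughly like the following. This is a generic sketch using the open-source shap package on toy data; the model, features, and outcome are invented for illustration and do not represent any particular litigation scenario.

```python
# Generic sketch of the SHAP workflow: train a model, then rank how much
# each input feature contributed to a single prediction, i.e. the
# "determinative factors" a litigant might point to. Toy data only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))              # toy feature matrix
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # outcome driven by features 0 and 2

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.Explainer(model.predict, X)   # model-agnostic explainer
shap_values = explainer(X[:1])                 # explain one decision
print(shap_values.values)                      # per-feature contributions
```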

    How Physicality Enables Trust: A New Era of Trust-Centered Cyberphysical Systems

    Full text link
    Multi-agent cyberphysical systems enable new capabilities in efficiency, resilience, and security. The unique characteristics of these systems prompt a reevaluation of their security concepts, including their vulnerabilities and the mechanisms to mitigate them. This survey paper examines how advances in wireless networking, coupled with the sensing and computing capabilities of cyberphysical systems, can foster novel security capabilities. The study delves into three main themes related to securing multi-agent cyberphysical systems. First, we discuss the threats that are particularly relevant to multi-agent cyberphysical systems given the potential lack of trust between agents. Second, we present prospects for sensing, contextual awareness, and authentication, enabling the inference and measurement of "inter-agent trust" for these systems. Third, we elaborate on the application of quantifiable trust notions to enable "resilient coordination," where "resilient" signifies sustained functionality amid attacks on multi-agent cyberphysical systems. We refer to the capability of cyberphysical systems to self-organize and coordinate to achieve a task as autonomy. This survey unveils the cyberphysical character of future interconnected systems as a pivotal catalyst for realizing robust, trust-centered autonomy in tomorrow's world.
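    One way to make the link between quantifiable inter-agent trust and resilient coordination concrete is a trust-weighted consensus update, in which low-trust neighbors are down-weighted. The sketch below is a generic illustration with assumed trust scores, not a scheme taken from the survey:

```python
# Generic sketch (not the survey's specific scheme): agents run consensus,
# but each neighbor's influence is weighted by an assumed inter-agent
# trust score in [0, 1], so a distrusted (possibly adversarial) agent has
# little effect on the collective outcome.
import numpy as np

def trust_weighted_step(states: np.ndarray, trust: np.ndarray) -> np.ndarray:
    """One iteration: states [n], trust [n, n] with trust[i, j] = i's trust in j."""
    weights = trust / trust.sum(axis=1, keepdims=True)  # row-normalize weights
    return weights @ states

states = np.array([0.0, 1.0, 0.5, 10.0])  # agent 3 reports an outlier value
trust = np.ones((4, 4))
trust[:, 3] = 0.05                        # all agents assign agent 3 low trust
for _ in range(20):
    states = trust_weighted_step(states, trust)
print(states)  # settles near the honest agents' average, largely ignoring agent 3
```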