
    The More Secure, The Less Equally Usable: Gender and Ethnicity (Un)fairness of Deep Face Recognition along Security Thresholds

    Face biometrics are playing a key role in making modern smart city applications more secure and usable. Commonly, the recognition threshold of a face recognition system is adjusted based on the degree of security required for the considered use case. For instance, the likelihood of a match can be decreased by setting a high threshold when verifying a payment transaction. Prior work in face recognition has unfortunately shown that error rates are usually higher for certain demographic groups. These disparities have therefore brought into question the fairness of systems empowered with face biometrics. In this paper, we investigate the extent to which disparities among demographic groups change under different security levels. Our analysis includes ten face recognition models, three security thresholds, and six demographic groups based on gender and ethnicity. Experiments show that the higher the security of the system, the larger the disparities in usability among demographic groups. Compelling unfairness issues therefore exist and call for countermeasures in real-world, high-stakes environments that require strict security levels.
    Comment: Accepted as a full paper at the 2nd International Workshop on Artificial Intelligence Methods for Smart Cities (AISC 2022)
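To make the threshold/usability trade-off concrete, the following minimal sketch (not taken from the paper; the score distributions, group names, and thresholds are illustrative assumptions) computes a per-group false rejection rate at increasingly strict verification thresholds, which is the kind of disparity the study measures.

```python
# Hedged sketch: how a stricter verification threshold affects usability
# (false rejections) per demographic group. All data here is synthetic.
import numpy as np

def false_rejection_rate(genuine_scores: np.ndarray, threshold: float) -> float:
    """Fraction of genuine (same-identity) comparisons rejected at a threshold."""
    return float(np.mean(genuine_scores < threshold))

# Hypothetical cosine-similarity scores for genuine pairs, split by group.
rng = np.random.default_rng(0)
groups = {
    "group_a": rng.normal(0.72, 0.08, 5000),  # assumed score distribution
    "group_b": rng.normal(0.66, 0.10, 5000),  # slightly lower mean similarity
}

# Higher thresholds mean stricter security; FRR gaps between groups tend to widen.
for threshold in (0.50, 0.60, 0.70):
    rates = {g: false_rejection_rate(s, threshold) for g, s in groups.items()}
    gap = abs(rates["group_a"] - rates["group_b"])
    rates_str = ", ".join(f"{g}={r:.3f}" for g, r in rates.items())
    print(f"threshold={threshold:.2f}  FRR: {rates_str}  gap={gap:.3f}")
```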

    Comparative study on the performance of face recognition algorithms

    Facial and object recognition are increasingly applied in everyday life, so this field has become important to both academics and practitioners. Face recognition systems are complex systems that use facial features to identify individuals. Current face recognition systems can be used to increase work efficiency in various settings, including smart homes, online banking, traffic, sports, robots, and others. With such varied applications, the number of facial recognition methods has been increasing in recent years. However, the performance of face recognition systems can be significantly affected by factors such as lighting conditions and different types of occlusion (sunglasses, scarves, hats, etc.). In this paper, a detailed comparison between face recognition techniques is presented, listing the structure of each model and its advantages and disadvantages, and performing experiments to demonstrate the robustness, accuracy, and complexity of each algorithm. Specifically, we compare the performance of three methods for face recognition in real-life settings: a support vector machine (SVM), a visual geometry group network with 16 layers (VGG-16), and a residual network with 50 layers (ResNet-50). The efficiency of the algorithms is evaluated in various environments such as normal light indoors, backlit indoors, low light indoors, natural light outdoors, and backlit outdoors. In addition, this paper also evaluates faces with hats and glasses to examine the accuracy of the methods. The experimental results indicate that ResNet-50 has the highest accuracy in identifying faces, with recognition time ranging from 1.1 s to 1.2 s in the normal indoor environment.
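As a rough illustration of the classical baseline in such comparisons, the sketch below (an assumption, not the paper's code or data) trains an eigenfaces-plus-SVM pipeline on scikit-learn's LFW subset; the VGG-16 and ResNet-50 variants would replace this pipeline with fine-tuned convolutional networks.

```python
# Hedged sketch: a minimal PCA ("eigenfaces") + SVM face-recognition baseline,
# of the kind the study compares against CNNs. Dataset and parameters are
# illustrative choices, not those used in the paper.
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_lfw_people(min_faces_per_person=50, resize=0.5)
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0
)

# Reduce dimensionality with PCA, then classify with an RBF-kernel SVM.
model = make_pipeline(
    PCA(n_components=100, whiten=True, random_state=0),
    SVC(kernel="rbf", C=10, gamma="scale"),
)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```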

    Advances of Robust Subspace Face Recognition

    Face recognition has been widely applied in fast video surveillance and security systems and smart home services in our daily lives. Over the past years, subspace projection methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), have been the best-known algorithms for face recognition. More recently, linear regression classification (LRC) has become one of the most popular subspace projection approaches. However, many problems remain unsolved under severe conditions across different environments and applications. In this chapter, practical problems including partial occlusion, illumination variation, expression differences, pose variation, and low resolution are addressed and solved by several improved subspace projection methods, including robust linear regression classification (RLRC), ridge regression (RR), improved principal component regression (IPCR), unitary regression classification (URC), linear discriminant regression classification (LDRC), generalized linear regression classification (GLRC), and trimmed linear regression (TLR). Experimental results show that these methods perform well and possess high robustness against partial occlusion, illumination variation, expression differences, pose variation, and low resolution.
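For reference, the sketch below shows plain linear regression classification (LRC), the baseline that the robust variants listed above extend; the class names and data are hypothetical and only illustrate the minimum-reconstruction-residual decision rule.

```python
# Hedged sketch: basic LRC. Each class is modeled by the column space of its
# own training images; a probe is assigned to the class whose subspace
# reconstructs it with the smallest residual.
import numpy as np

def lrc_predict(class_matrices: dict, y: np.ndarray) -> str:
    """Assign probe y to the class whose training subspace reconstructs it best."""
    best_label, best_residual = None, np.inf
    for label, X in class_matrices.items():           # X: (pixels, samples_per_class)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares coefficients
        residual = np.linalg.norm(y - X @ beta)       # reconstruction error
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label

# Toy example: two hypothetical identities, each with a few vectorized images.
rng = np.random.default_rng(1)
classes = {"alice": rng.normal(0.0, 1.0, (400, 5)),
           "bob":   rng.normal(0.5, 1.0, (400, 5))}
probe = classes["bob"] @ rng.random(5)                # lies in bob's subspace
print(lrc_predict(classes, probe))                    # expected: "bob"
```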

    Traffic Light Recognition for Real Scenes Based on Image Processing and Deep Learning

    Traffic light recognition in urban environments is crucial for vehicle control. Many studies have been devoted to recognizing traffic lights. However, existing recognition methods still face many challenges in terms of accuracy, runtime, and model size. This paper presents a novel, robust traffic light recognition approach that takes all three aspects into account, based on image processing and deep learning. The proposed approach adopts a two-stage architecture, first performing detection and then classification. In the detection stage, the perspective relationship and the fractal dimension are both considered to dramatically reduce the number of invalid candidate boxes, i.e., region proposals. In the classification stage, the candidate boxes are classified by SqueezeNet. Finally, the recognized traffic light boxes are reshaped by postprocessing. Compared with several reference models, this approach is significantly competitive in terms of accuracy and runtime. We show that our approach is lightweight, easy to implement, and applicable to smart terminals, mobile devices, or embedded devices in practice.
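A minimal sketch of the two-stage structure described above, under assumptions: the proposal step is only a stub standing in for the paper's perspective/fractal-dimension filtering, and the classifier is an untrained off-the-shelf torchvision SqueezeNet with a hypothetical four-class head.

```python
# Hedged sketch: detection stage (stubbed) followed by SqueezeNet classification
# of the candidate boxes. Not the paper's implementation.
import torch
import torchvision.transforms as T
from torchvision.models import squeezenet1_1

def propose_candidate_boxes(image):
    """Stub for the detection stage: return (x, y, w, h) candidate regions.
    The paper filters proposals using perspective constraints and fractal
    dimension; that logic is assumed and omitted here."""
    return [(100, 50, 32, 80)]  # hypothetical box

# Classification stage: SqueezeNet with a small head for {red, yellow, green, none}.
model = squeezenet1_1(weights=None, num_classes=4)
model.eval()
preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

def classify_boxes(image, boxes):
    """Classify each candidate crop; `image` is assumed to be an HWC uint8 array."""
    labels = ["red", "yellow", "green", "none"]
    results = []
    for (x, y, w, h) in boxes:
        crop = image[y:y + h, x:x + w]
        batch = preprocess(crop).unsqueeze(0)
        with torch.no_grad():
            pred = model(batch).argmax(dim=1).item()
        results.append(((x, y, w, h), labels[pred]))
    return results
```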

    Meetings and Meeting Modeling in Smart Environments

    In this paper we survey our research on smart meeting rooms and its relevance for augmented reality meeting support and for virtual reality generation of meetings, in real time or off-line. The research reported here forms part of the European 5th and 6th framework programme projects multi-modal meeting manager (M4) and augmented multi-party interaction (AMI). Both projects aim at building a smart meeting environment that is able to collect multimodal captures of the activities and discussions in a meeting room, with the aim of using this information as input to tools that allow real-time support, browsing, retrieval, and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow generation, e.g. in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also allows us to look at tools that provide support during a meeting and at tools that allow those who cannot be physically present to take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants, and (semi-)autonomous virtual participants disappear.

    Smart Exposition Rooms: The Ambient Intelligence View

    We introduce our research on smart environments, in particular smart meeting rooms, and investigate how the approaches developed there can be used in the context of smart museum environments. We distinguish the identification of domain knowledge, its use in sensory perception, and its use in the interpretation and modeling of events and acts in smart environments, and we offer some observations on off-line browsing and on-line remote participation in events in smart environments. It is argued that large-scale European research in the area of ambient intelligence will be an impetus to the research and development of smart galleries and museum spaces.