Facial metrics generated from manually and automatically placed image landmarks are highly correlated
Research on social judgments of faces often investigates relationships between measures of face shape taken from images (facial metrics) and either perceptual ratings of the faces on various traits (e.g., attractiveness) or characteristics of the photographed individual (e.g., their health). A barrier to carrying out this research using large numbers of face images is the time it takes to manually position the landmarks from which these facial metrics are derived. Although research in face recognition has led to the development of algorithms that can automatically position landmarks on face images, the utility of such methods for deriving facial metrics commonly used in research on social judgments of faces has not yet been established. Thus, across two studies, we investigated the correlations between four facial metrics commonly used in social perception research (sexual dimorphism, distinctiveness, bilateral asymmetry, and facial width-to-height ratio) when measured from manually and automatically placed landmarks. In the first study, in two independent sets of open-access face images, we found that facial metrics derived from manually and automatically placed landmarks were typically highly correlated, in both raw and Procrustes-fitted representations. In the second study, we investigated the potential for automatic landmark placement to differ between White and East Asian faces. We found that two metrics, facial width-to-height ratio and sexual dimorphism, were better approximated by automatic landmarks in East Asian faces. However, this difference was small and easily corrected with outlier detection. These data validate the use of automatically placed landmarks for calculating facial metrics in research on social judgments of faces, though we urge caution in their use. We also provide a tutorial for the automatic placement of landmarks on face images.
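Facial metrics of the kind described above are typically simple geometric functions of landmark coordinates. As a purely illustrative sketch of one such metric, the snippet below computes a facial width-to-height ratio from hypothetical landmark positions; the landmark names, coordinate values, and the specific operationalization (bizygomatic width divided by brow-to-upper-lip distance) are assumptions for illustration, not the paper's landmarking scheme.

```python
import math

# Hypothetical (x, y) landmark positions in pixels.
# Names and values are illustrative only.
landmarks = {
    "zygion_left":  (30.0, 120.0),   # left cheekbone edge
    "zygion_right": (170.0, 120.0),  # right cheekbone edge
    "brow_mid":     (100.0, 90.0),   # midpoint between the brows
    "lip_top":      (100.0, 180.0),  # top of the upper lip
}

def fwhr(lm):
    """Facial width-to-height ratio: bizygomatic width divided by
    the brow-to-upper-lip distance (one common operationalization)."""
    width = math.dist(lm["zygion_left"], lm["zygion_right"])
    height = math.dist(lm["brow_mid"], lm["lip_top"])
    return width / height

print(round(fwhr(landmarks), 3))  # 140 / 90 ≈ 1.556
```

Comparing manual and automatic placement then amounts to computing such a metric twice per face, once from each landmark set, and correlating the two series across faces.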
Sleep restriction alters children’s positive emotional responses but effects are moderated by anxiety
Background: An abundance of cross-sectional research links inadequate sleep with poor emotional health, but experimental studies in children are rare. Further, the impact of sleep loss is not uniform across individuals, and pre-existing anxiety might potentiate the effects of poor sleep on children’s emotional functioning. Methods: A sample of 53 children (7–11 years, M = 9.0; 56% female) completed multimodal assessments in the laboratory when rested and after two nights of sleep restriction (7 and 6 hr in bed, respectively). Sleep was monitored with polysomnography and actigraphy. Subjective reports of affect and arousal, psychophysiological reactivity and regulation, and objective emotional expression were examined during two emotional processing tasks, including one where children were asked to suppress their emotional responses. Results: After sleep restriction, deleterious alterations were observed in children’s affect, emotional arousal, facial expressions, and emotion regulation. These effects were primarily detected in response to positive emotional stimuli. The presence of anxiety symptoms moderated most alterations in emotional processing observed after sleep restriction. Conclusions: Results suggest inadequate sleep preferentially impacts positive compared to negative emotion in prepubertal children and that pre-existing anxiety symptoms amplify these effects. Implications for children’s everyday socioemotional lives and long-term affective risk are highlighted.
Burning Bridges: The Automated Facial Recognition Technology and Public Space Surveillance in the Modern State
Live automated facial recognition technology, rolled out in public spaces and cities across the world, is transforming the nature of modern policing. R (on the application of Bridges) v Chief Constable of South Wales Police, decided in August 2020, is the first successful legal challenge to automated facial recognition technology in the world. In Bridges, the United Kingdom’s Court of Appeal held that the South Wales Police force’s use of automated facial recognition technology was unlawful. This landmark ruling could influence future policy on facial recognition in many countries. The Bridges decision imposes some limits on the police’s previously unconstrained discretion to decide whom to target and where to deploy the technology. Yet, while the decision requires that the police adopt a clearer legal framework to limit this discretion, it does not, in principle, prevent the use of facial recognition technology for mass surveillance in public places, nor for monitoring political protests. On the contrary, the Court held that the use of automated facial recognition in public spaces – even to identify and track the movement of very large numbers of people – was an acceptable means for achieving law enforcement goals. Thus, the Court dismissed the wider impact and significant risks posed by using facial recognition technology in public spaces. It underplayed the heavy burden this technology can place on democratic participation and freedoms of expression and association, which require collective action in public spaces. The Court neither demanded transparency about the technologies used by the police force, which is often shielded behind the “trade secrets” of the corporations who produce them, nor did it act to prevent inconsistency between local police forces’ rules and regulations on automated facial recognition technology. Thus, while the Bridges decision is reassuring and demands change in the discretionary approaches of U.K. police in the short term, its long-term impact in burning the “bridges” between the expanding public space surveillance infrastructure and the modern state is unlikely. In fact, the decision legitimizes such an expansion.
Envisioning Identity: The Social Production of Computer Vision
Computer vision technologies have been increasingly scrutinized in recent years for their propensity to cause harm. Computer vision systems designed to interpret visual data about humans for various tasks are perceived as particularly high risk. Broadly, the harms of computer vision focus on demographic biases (favoring one group over another) and categorical injustices (through erasure, stereotyping, or problematic labels). Prior work has focused on both uncovering these harms and mitigating them, through, for example, better dataset collection practices and guidelines for more contextual data labeling. This research has largely focused on understanding discrete computer vision artifacts, such as datasets or model outputs, and their implications for specific identity groups or for privacy. There is opportunity to further understand how human identity is embedded into computer vision not only across these artifacts, but also across the network of human workers who shape computer vision systems.
This dissertation focuses on understanding how human identity is conceptualized across two different “layers” of computer vision: (1) at the artifact layer, where the classification ontology is deployed, in the form of datasets and model inputs and outputs; and (2) at the development layer, where social decisions are made about how to implement models and annotations by traditional tech workers. Specifically, I examine how identity is represented in artifacts and how those representations are derived from human workers. I demonstrate how human workers rely on their own subjective positionalities—the worldviews they hold as a result of their own identities and experiences.
I present six studies that identify the subjectivity of computer vision. Three studies focus on artifacts, both model outputs and datasets, to discuss how identity is currently implemented and how that implementation is embedded with specific disciplinary values that often clash with more sociocultural lenses on identity. The fourth and fifth studies focus on how human workers shape these artifacts. Through interviews with both traditional tech workers (like engineers and data scientists) and contingent data workers (who apply requirements given to them by traditional tech workers), I uncover how the positionality of human actors shapes identity in computer vision. Finally, in the sixth study, I examine how power operates between these two types of workers, traditional tech workers and data workers. Identity, as a concept, is treated as an infrastructure on which to build products. Workers attempt to uncover some underlying truth about identity and capture it in technical systems. However, in reality, workers reference the nebulous and intangible concept of identity to implement their own positional perspectives. I demonstrate that traditional tech workers have a positional power in the development of identity in computer vision; traditional worker positionalities are viewed as expert perspectives to be solidified into artifacts. Meanwhile, data worker positionalities are viewed as risks to the quality and trustworthiness of those artifacts. Thus, traditional tech workers attempt to control data worker positionalities, instilling in data workers their own positional perspectives.
By synthesizing insights from these six studies, this dissertation contributes a theory on identity in developing technical artifacts. I argue that identity concepts in the process of computer vision development move from open—filled with nuance, complexity, history, and opportunity—to closed—narrowly defined and embedded into artifacts that are deployed to reify a specific worldview of identity. I describe how workers pull from the intangible meta-concept of “Identity” to shape, through the process of development, specific Attributes to embed into technologies. I show how workers transform these Attributes through the development process into narrower and narrower definitions. These definitions of identity thus become Technical Attributes, highly specific implementations of identity which are no longer malleable to different perspectives.