
    Assentication: User Deauthentication and Lunchtime Attack Mitigation with Seated Posture Biometric

    Biometric techniques are often used as an extra security factor in authenticating human users. Numerous biometrics have been proposed and evaluated, each with its own set of benefits and pitfalls. Static biometrics (such as fingerprints) are geared for discrete operation, to identify users, which typically involves some user burden. Meanwhile, behavioral biometrics (such as keystroke dynamics) are well suited for continuous, and sometimes more unobtrusive, operation. One important application domain for biometrics is deauthentication, a means of quickly detecting absence of a previously authenticated user and immediately terminating that user's active secure sessions. Deauthentication is crucial for mitigating so-called Lunchtime Attacks, whereby an insider adversary takes over (before any inactivity timeout kicks in) the authenticated state of a careless user who walks away from her computer. Motivated primarily by the need for an unobtrusive and continuous biometric to support effective deauthentication, we introduce PoPa, a new hybrid biometric based on a human user's seated posture pattern. PoPa captures a unique combination of physiological and behavioral traits. We describe a low-cost, fully functioning prototype that involves an office chair instrumented with 16 tiny pressure sensors. We also explore (via user experiments) how PoPa can be used in a typical workplace to provide continuous authentication (and deauthentication) of users. We experimentally assess the viability of PoPa in terms of uniqueness by collecting and evaluating posture patterns of a cohort of users. Results show that PoPa exhibits very low false positive, and even lower false negative, rates. In particular, users can be identified with, on average, 91.0% accuracy. Finally, we compare pros and cons of PoPa with those of several prominent biometric-based deauthentication techniques.
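The abstract describes identifying users from a 16-sensor pressure reading and deauthenticating when no enrolled user matches. A minimal sketch of that idea, assuming a simple template-matching scheme: the enrollment procedure, the nearest-template rule, and the distance threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np

def enroll(samples_per_user):
    """Average each user's enrollment readings (16-dim pressure
    vectors) into one template per user. Hypothetical scheme."""
    return {user: np.mean(readings, axis=0)
            for user, readings in samples_per_user.items()}

def identify(templates, reading, threshold=5.0):
    """Return the closest enrolled user, or None (i.e. deauthenticate)
    if no template is within the distance threshold."""
    best_user, best_dist = None, float("inf")
    for user, template in templates.items():
        dist = np.linalg.norm(reading - template)
        if dist < best_dist:
            best_user, best_dist = user, dist
    return best_user if best_dist <= threshold else None
```

In a continuous-deauthentication setting, `identify` would be polled on every fresh chair reading; a `None` result would terminate the active session.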

    Eye movements in surgery: A literature review

    With recent advances in eye tracking technology, it is now possible to track surgeons’ eye movements while engaged in a surgical task or when surgical residents practice their surgical skills. Several studies have compared eye movements of surgical experts and novices, developed techniques to assess surgical skill on the basis of eye movements, and examined the role of eye movements in surgical training. Here we provide an overview of these studies with a focus on the methodological aspects. We conclude that the different studies of eye movements in surgery suggest that the recording of eye movements may be beneficial both for skill assessment and training purposes, although more research will be needed in this field.

    Personalization of Saliency Estimation

    Most existing saliency models use low-level features or task descriptions when generating attention predictions. However, the link between observer characteristics and gaze patterns is rarely investigated. We present a novel saliency prediction technique which takes viewers' identities and personal traits into consideration when modeling human attention. Instead of only computing image salience for average observers, we consider the interpersonal variation in the viewing behaviors of observers with different personal traits and backgrounds. We present an enriched derivative of the generative adversarial network (GAN), which is able to generate personalized saliency predictions when fed with image stimuli and specific information about the observer. Our model contains a generator which generates grayscale saliency heat maps based on the image and an observer label. The generator is paired with an adversarial discriminator which learns to distinguish generated salience from ground truth salience. The discriminator also has the observer label as an input, which contributes to the personalization ability of our approach. We evaluate the performance of our personalized salience model by comparison with a benchmark model along with other un-personalized predictions, and illustrate improvements in prediction accuracy for all tested observer groups.
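The conditioning described above (feeding an observer label to both generator and discriminator) is commonly implemented by stacking a one-hot label map onto the image channels. A minimal sketch of that mechanism, assuming an (H, W, C) image layout and a fixed number of observer groups; the paper's actual network input format is not specified here.

```python
import numpy as np

def condition_on_observer(image, observer_id, n_observers):
    """Append one constant plane per observer class to the image
    channels; the plane for `observer_id` is all ones, the rest
    all zeros. Returns an (H, W, C + n_observers) array."""
    h, w, _ = image.shape
    label_planes = np.zeros((h, w, n_observers), dtype=image.dtype)
    label_planes[:, :, observer_id] = 1.0
    return np.concatenate([image, label_planes], axis=-1)
```

The same conditioned tensor can be fed to the generator (to produce a personalized heat map) and, paired with a real or generated saliency map, to the discriminator.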

    Looks of Love and Loathing: Cultural Models of Vision and Emotion in Ancient Greek Culture

    This paper considers the intersection of cultural models of emotion, specifically love and envy, with folk and scientific models of vision in Greek antiquity. Though the role of the eyes in the expression of these emotions can intersect with widespread beliefs in vision as a «haptic», material process, analogous to touch and involving physical contact between perceiver and perceived, none the less the emotional concepts resist absorption into a single over-arching theory of the physical effects of seeing and being seen. The specific cultural models of vision («active», «passive», and «interactive») are enlisted in support of cultural models of emotion where they fit, modified where they fit less well, and ignored when they do not fit at all.

    Proximity and gaze influences facial temperature: a thermal infrared imaging study

    Direct gaze and interpersonal proximity are known to lead to changes in psycho-physiology, behaviour and brain function. We know little, however, about subtler facial reactions such as rise and fall in temperature, which may be sensitive to contextual effects and functional in social interactions. Using thermal infrared imaging cameras 18 female adult participants were filmed at two interpersonal distances (intimate and social) and two gaze conditions (averted and direct). The order of variation in distance was counterbalanced: half the participants experienced a female experimenter’s gaze at the social distance first before the intimate distance (a socially ‘normal’ order) and half experienced the intimate distance first and then the social distance (an odd social order). At both distances averted gaze always preceded direct gaze. We found strong correlations in thermal changes between six areas of the face (forehead, chin, cheeks, nose, maxillary and periorbital regions) for all experimental conditions and developed a composite measure of thermal shifts for all analyses. Interpersonal proximity led to a thermal rise, but only in the ‘normal’ social order. Direct gaze, compared to averted gaze, led to a thermal increase at both distances with a stronger effect at intimate distance, in both orders of distance variation. Participants reported direct gaze as more intrusive than averted gaze, especially at the intimate distance. These results demonstrate the powerful effects of another person’s gaze on psycho-physiological responses, even at a distance and independent of context.
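Since the six facial regions showed strongly correlated thermal changes, a single composite score can summarize them. A minimal sketch, assuming each region is reduced to a mean temperature change from baseline and the composite is their average; the paper's exact aggregation may differ.

```python
import numpy as np

# The six facial regions named in the abstract.
REGIONS = ["forehead", "chin", "cheeks", "nose", "maxillary", "periorbital"]

def composite_shift(condition_temps, baseline_temps):
    """Average the per-region temperature change (condition minus
    baseline, in degrees C) across all six regions into one score."""
    deltas = [condition_temps[r] - baseline_temps[r] for r in REGIONS]
    return float(np.mean(deltas))
```

A positive score indicates an overall facial warming relative to baseline (e.g. under direct gaze at intimate distance), a negative score an overall cooling.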

    Using social robots to study abnormal social development

    Social robots recognize and respond to human social cues with appropriate behaviors. Social robots, and the technology used in their construction, can be unique tools in the study of abnormal social development. Autism is a pervasive developmental disorder that is characterized by social and communicative impairments. Based on three years of integration and immersion with a clinical research group which performs more than 130 diagnostic evaluations of children for autism per year, this paper discusses how social robots will make an impact on the ways in which we diagnose, treat, and understand autism.

    The N170 event-related potential differentiates congruent and incongruent gaze responses in gaze leading

    To facilitate social interactions, humans need to process the responses that other people make to their actions, including eye movements that could establish joint attention. Here, we investigated the neurophysiological correlates of the processing of observed gaze responses following the participants’ own eye movement. These observed gaze responses could either establish, or fail to establish, joint attention. We implemented a gaze leading paradigm in which participants made a saccade from an on-screen face to an object, followed by the on-screen face either making a congruent or incongruent gaze shift. An N170 event-related potential was elicited by the peripherally located gaze shift stimulus. Critically, the N170 was greater for joint attention than non-joint gaze both when task-irrelevant (Experiment 1) and task-relevant (Experiment 2). These data suggest for the first time that the neurocognitive system responsible for structural encoding of face stimuli is affected by the establishment of participant-initiated joint attention.

    Domain general learning: Infants use social and non-social cues when learning object statistics.

    Previous research has shown that infants can learn from social cues. But is a social cue more effective at directing learning than a non-social cue? This study investigated whether 9-month-old infants (N = 55) could learn a visual statistical regularity in the presence of a distracting visual sequence when attention was directed by either a social cue (a person) or a non-social cue (a rectangle). The results show that both social and non-social cues can guide infants' attention to a visual shape sequence (and away from a distracting sequence). The social cue more effectively directed attention than the non-social cue during the familiarization phase, but the social cue did not result in significantly stronger learning than the non-social cue. The findings suggest that domain general attention mechanisms allow for the comparable learning seen in both conditions.
    • 

    corecore