    First impressions: A survey on vision-based apparent personality trait analysis

    Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. In the past few years it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most frequently considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches can accurately analyze human faces, body postures, and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and the potential impact that such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches to apparent personality trait recognition. We describe seminal and cutting-edge work on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review the subjectivity inherent in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field forward.

    Science and Mathematics Student Research Day 1997


    The epidemiology of injuries across the weight-training sports

    Background: Weightlifting, powerlifting, bodybuilding, strongman, Highland Games, and CrossFit are weight-training sports with separate divisions for males and females across a variety of ages, competitive standards, and bodyweight classes. These sports may be considered dangerous because of the heavy loads commonly used in training and competition. Objectives: Our objective was to systematically review the injury epidemiology of these weight-training sports and, where possible, gain insight into whether it is affected by age, sex, competitive standard, and bodyweight class. Methods: We performed an electronic search of PubMed, SPORTDiscus, CINAHL, and Embase for injury epidemiology studies involving competitive athletes in these weight-training sports. Eligible studies included peer-reviewed journal articles only, with no limit placed on date or language of publication. We assessed the risk of bias in all studies using an adaptation of the musculoskeletal injury review method. Results: Only five of the 20 eligible studies had a risk-of-bias score ≥75 %, meaning the risk of bias in these five studies was considered low. While 14 of the studies had sample sizes >100 participants, only four studies used a prospective design. Bodybuilding had the lowest injury rates (0.12–0.7 injuries per lifter per year; 0.24–1 injury per 1000 h), with strongman (4.5–6.1 injuries per 1000 h) and Highland Games (7.5 injuries per 1000 h) reporting the highest rates. The shoulder, lower back, knee, elbow, and wrist/hand were generally the most commonly injured anatomical locations; strains, tendinitis, and sprains were the most common injury types. Very few significant differences in any of the injury outcomes were observed as a function of age, sex, competitive standard, or bodyweight class. Conclusion: Although the majority of the research we reviewed used retrospective designs, the weight-training sports appear to have relatively low injury rates compared with common team sports. Future weight-training sport injury epidemiology research needs to be improved, particularly in terms of the use of prospective designs, the diagnosis of injury, and changes in risk exposure.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) further weight is lent to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Proceedings of the Salford Postgraduate Annual Research Conference (SPARC) 2011

    These proceedings bring together a selection of papers from the 2011 Salford Postgraduate Annual Research Conference (SPARC). They include papers from PhD students in the arts and social sciences, business, computing, science and engineering, education, environment, built environment, and health sciences. Contributions from Salford researchers are published here alongside papers from students at the universities of Anglia Ruskin, Birmingham City, Chester, De Montfort, Exeter, Leeds, Liverpool, Liverpool John Moores, and Manchester.

    Event Structure In Vision And Language

    Our visual experience is surprisingly rich: we see not only low-level properties such as colors and contours; we also see events, or what is happening. Within linguistics, the examination of how we talk about events suggests that relatively abstract elements exist in the mind which pertain to the relational structure of events, including general thematic roles (e.g., Agent), Causation, Motion, and Transfer. For example, “Alex gave Jesse flowers” and “Jesse gave Alex flowers” both refer to an event of transfer, with the directionality of the transfer having different social consequences. The goal of the present research is to examine the extent to which abstract event information of this sort (event structure) is generated in visual perceptual processing. Do we perceive this information, just as we do more ‘traditional’ visual properties like color and shape? In the first study (Chapter 2), I used a novel behavioral paradigm to show that event roles – who is acting on whom – are rapidly and automatically extracted from visual scenes, even when participants are engaged in an orthogonal task, such as color or gender identification. In the second study (Chapter 3), I provided functional magnetic resonance imaging (fMRI) evidence for commonality in content between neural representations elicited by static snapshots of actions and by full, dynamic action sequences. These two studies suggest that relatively abstract representations of events are spontaneously extracted from sparse visual information. In the final study (Chapter 4), I return to language, the initial inspiration for my investigations of events in vision. Here I test the hypothesis that the human brain represents verbs in part via their associated event structures. Using a model of verbs based on event-structure semantic features (e.g., Cause, Motion, Transfer), it was possible to successfully predict fMRI responses in language-selective brain regions as people engaged in real-time comprehension of naturalistic speech. Taken together, my research reveals that in both perception and language, the mind rapidly constructs a representation of the world that includes events with relational structure.
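    As an illustration of the modeling approach this abstract describes, below is a minimal sketch of a feature-based encoding model on synthetic data: each verb is coded with binary event-structure features (e.g., Cause, Motion, Transfer) and a regularized linear map predicts voxel responses. The abstract does not specify the actual model; ridge regression scored with cross-validated R² is merely one common choice for this kind of analysis, and all sizes and names here are hypothetical.

```python
# Hypothetical encoding-model sketch: predict fMRI responses from
# binary event-structure features of verbs (synthetic data throughout).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_verbs, n_features, n_voxels = 120, 3, 50                        # hypothetical sizes
X = rng.integers(0, 2, size=(n_verbs, n_features)).astype(float)  # Cause/Motion/Transfer codes
true_w = rng.normal(size=(n_features, n_voxels))                  # unknown feature-to-voxel weights
Y = X @ true_w + 0.5 * rng.normal(size=(n_verbs, n_voxels))       # noisy voxel responses

# Fit a regularized linear map from event-structure features to voxel
# responses and score it out of sample, as encoding models typically are.
scores = cross_val_score(Ridge(alpha=1.0), X, Y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.3f}")
```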

    Target-distractor synchrony affects performance in a novel motor task for studying action selection

    The study of action selection in humans can present challenges of task design, since our actions are usually defined by many degrees of freedom and therefore occupy a large action space. While saccadic eye movement offers a more constrained paradigm for investigating action selection, the study of reach-and-grasp in the upper limbs has often been defined by more complex scenarios that are not easily interpretable in terms of such selection. Here we present a novel motor behaviour task which addresses this by limiting the action space to a single degree of freedom: subjects have to track (using a stylus) a vertical coloured target line displayed on a tablet computer, whilst ignoring a similarly oriented distractor line in a different colour. We ran this task with 55 subjects and showed that, in agreement with previous studies, the presence of the distractor generally increases movement latency and directional error rate. Further, we used two distractor conditions, according to whether the distractor's location changed asynchronously or synchronously with the location of the target. We found that the asynchronous distractor yielded poorer performance than its synchronous counterpart, with significantly higher movement latencies and higher error rates. We interpret these results in an action selection framework with two actions (move left or right) and competing 'action requests' offered by the target and distractor. As such, the results provide insights into action selection performance in humans and supply data for directly constraining future computational models.

    Automated and Real Time Subtle Facial Feature Tracker for Automatic Emotion Elicitation

    This thesis proposes a system for the real-time detection of subtle facial expressions exhibited in spontaneous, real-world settings. The underlying framework of our system is an open-source implementation of the Active Appearance Model (AAM). Our algorithm operates by grouping the points provided by the AAM into higher-level regions, constructing and updating a background statistical model of movement in each region, and testing whether the current movement in a given region substantially exceeds the expected movement in that region (computed from the statistical model). Movements that exceed the expected value by some threshold and do not appear to be false alarms due to artifacts (e.g., lighting changes) are considered valid changes in facial expression. These changes are expected to be rough indicators of facial activity that can be complemented by context-driven predictors of emotion derived from spontaneous settings.
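    For concreteness, here is a minimal sketch of the region-based movement detector described in this abstract, under stated assumptions: the landmark grouping, burn-in length, and threshold constant K are hypothetical placeholders, and the thesis's artifact (false-alarm) filtering is not reproduced.

```python
# Sketch of per-region background movement modeling over AAM landmarks.
# REGIONS, K, and the burn-in length are hypothetical, not the thesis's values.
import numpy as np

REGIONS = {"brows": range(0, 10), "eyes": range(10, 22), "mouth": range(22, 40)}
K = 3.0  # flag movement exceeding mean + K standard deviations

class RegionMovementModel:
    """Running mean/variance of a region's frame-to-frame movement (Welford)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def exceeds_background(self, x):
        if self.n < 30:  # burn-in: learn the background model first
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return x > self.mean + K * std

models = {name: RegionMovementModel() for name in REGIONS}

def process_frame(prev_pts, cur_pts):
    """Return regions whose current movement substantially exceeds expectation."""
    active = []
    for name, idx in REGIONS.items():
        movement = float(np.linalg.norm(cur_pts[idx] - prev_pts[idx], axis=1).mean())
        if models[name].exceeds_background(movement):
            active.append(name)
        models[name].update(movement)  # fold the new observation into the model
    return active

# Example with two synthetic frames of 40 (x, y) landmark points:
prev = np.random.rand(40, 2)
cur = prev + 0.01 * np.random.randn(40, 2)
print(process_frame(prev, cur))
```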