
    Emotional experiences in youth tennis

    Abstract. Objectives: To explore adolescents' emotional experiences in competitive sport. Specifically, this study sought to identify: 1) the emotions adolescents experience at tennis tournaments, 2) the precursors of these emotions, and 3) how adolescents attempt to cope with them. Design: Case study. Method: Four adolescent tennis players competed in four or five tennis matches under the observation of a researcher. Immediately following each match, participants completed a post-match review sheet and a semi-structured interview. A further semi-structured interview was completed at the end of the tournament. Review sheets, notes from match observations, and video recordings of matches were used to stimulate discussion during the final interviews. All data were analyzed following the procedures outlined by Miles and Huberman (1994). Results: Participants cited numerous positively and negatively valenced emotions during matches and tournaments. Participants' emotions appeared to be broadly influenced by their perceptions of performance and outcomes, as well as by their opponents' behavior and their perceptions of their own behavior. Participants described various strategies for coping with these emotions, such as controlling breathing rate, focusing on positive thoughts, and individualized routines. Further, negative emotions could be beneficial for performance if participants perceived them to be facilitative. Conclusion: This study provides original insights into the complexity of adolescent athletes' emotional experiences at competitions and highlights the need for further in-depth examinations of youth sport to fully comprehend the experiences of young people. Most notably, the findings highlight the necessity of considering both intra- and interpersonal influences on adolescents' emotional experiences, while also accounting for temporal changes.

    Semantic Analysis of Facial Gestures from Video Using a Bayesian Framework

    The continuous growth of video technology has resulted in increased research into the semantic analysis of video. The multimodal nature of video has made this task very complex. The objective of this thesis was to research, implement, and examine the underlying methods and concepts of semantic analysis of videos and to improve upon the state of the art in automated emotion recognition by using semantic knowledge in the form of Bayesian inference. The main domain of analysis is facial emotion recognition from video, including both visual and vocal aspects of facial gestures. The goal is to determine whether an expression on a person's face in a sequence of video frames is happy, sad, angry, fearful, or disgusted. A Bayesian network classification algorithm was designed and used to identify and understand facial expressions in video. The Bayesian network is an attractive choice because it provides a probabilistic environment and gives information about uncertainty from knowledge about the domain. This research contributes to current knowledge in two ways: by providing a novel algorithm that uses edge differences to extract keyframes in video and facial features from the keyframe, and by testing the hypothesis that combining two modalities (vision with speech) yields a better classification result (low false positive rate and high true positive rate) than either modality used alone.
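The edge-difference keyframe idea mentioned above can be illustrated with a minimal sketch. The gradient-based edge detector, the 0.2 edge threshold, and the 0.1 change cutoff below are assumptions chosen for illustration, not parameters from the thesis.

```python
import numpy as np

def edge_map(frame, thresh=0.2):
    # Crude edge detector: gradient magnitude thresholded to a binary mask.
    # (A stand-in for whatever edge detector the thesis actually uses.)
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    if mag.max() == 0:
        return np.zeros(mag.shape, dtype=bool)
    return mag > thresh * mag.max()

def select_keyframes(frames, cutoff=0.1):
    # Keep a frame as a keyframe when the fraction of edge pixels that
    # changed since the previous frame exceeds `cutoff`.
    keyframes = [0]  # always keep the first frame
    prev = edge_map(frames[0])
    for i in range(1, len(frames)):
        cur = edge_map(frames[i])
        if np.mean(prev ^ cur) > cutoff:
            keyframes.append(i)
        prev = cur
    return keyframes
```

Frames whose edge maps shift abruptly relative to their predecessor are kept; extracting facial features from each selected keyframe would be a separate step not shown here.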

    Motion and emotion: Semantic knowledge for Hollywood film indexing

    Ph.D. (Doctor of Philosophy)

    Textual and Visual Representation of Hijab in Internet Memes and GIFs

    This study provides a preliminary report of veil/hijab representation in two modern social media tools of communication: internet memes and GIFs. It bridges a gap in visual communication research by conducting an integrative (textual and visual) framing analysis of 400 memes and GIFs that used the hashtag #Hijab, to unravel the frames and stereotypes of veiled women in such online visuals. Hijabi Muslim women have been visually represented in media in overgeneralized, stereotyped ways, being shown either as oppressed and subservient to others with no individual opinions, or as liberated progressives who resist western hegemony (Khan & Zahra, 2015). The research timeframe comes right after the two terrorist attacks on Muslim mosques in Christchurch, New Zealand, on 15 March 2019, in which an extremist Australian gunman killed 50 people and injured another 50 in the first shooting ever livestreamed on Facebook (BBC, 2019). Utilizing a dual-modality visual analysis technique covering both textual and visual elements, and through a quantitative content analysis of the most popular, viral, and retweeted hijab memes and GIFs in March 2019, the study contributes to the growing literature on memes and GIFs and their representation of Muslim women and their body covering, the hijab. It therefore allows a deeper understanding of internet memes' and GIFs' usage, the frames they used in portraying hijab, and their stereotypical effects on the image of the contemporary veil and veiled women on digital media, specifically social media platforms. The study codes a sample of 200 internet memes and 200 GIFs on nine coding variables to analyze both textual and visual elements. Findings highlight how the veil/hijab is represented in modern digital communication tools and suggest that, contrary to negative stereotypes of Muslim women in traditional media, memes and GIFs support hijab and depict veiled Muslim women as happy and respected. The study also shows that internet memes and GIFs are not the same thing and should be examined accordingly.

    Affect-based indexing and retrieval of multimedia data

    Digital multimedia systems are creating many new opportunities for rapid access to content archives. In order to explore these collections using search, the content must be annotated with significant features. An important and often overlooked aspect of human interpretation of multimedia data is the affective dimension. The hypothesis of this thesis is that affective labels of content can be extracted automatically from within multimedia data streams, and that these can then be used for content-based retrieval and browsing. A novel system is presented for extracting affective features from video content and mapping them onto a set of keywords with predetermined emotional interpretations. These labels are then used to demonstrate affect-based retrieval on a range of feature films. Because of the subjective nature of the words people use to describe emotions, an approach towards an open vocabulary query system utilizing the electronic lexical database WordNet is also presented. This gives flexibility for search queries to be extended to include keywords without predetermined emotional interpretations using a word-similarity measure. The thesis presents the framework and design for the affect-based indexing and retrieval system along with experiments, analysis, and conclusions.
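The open-vocabulary query idea can be sketched as follows. The `SIMILARITY` table is a toy stand-in for the WordNet word-similarity measure the thesis describes, and every name, label, and score here is an illustrative assumption rather than a detail from the system.

```python
# Fixed vocabulary of keywords with predetermined emotional
# interpretations (hypothetical labels for illustration).
AFFECT_LABELS = {"happy", "sad", "angry", "fearful"}

# Toy stand-in for a WordNet-based word-similarity measure:
# (query word, affect label) -> similarity score in [0, 1].
SIMILARITY = {
    ("joyful", "happy"): 0.9,
    ("joyful", "sad"): 0.1,
    ("gloomy", "sad"): 0.8,
    ("gloomy", "happy"): 0.05,
}

def expand_query(word, threshold=0.5):
    # A query word already in the vocabulary needs no expansion; otherwise
    # map it onto every affect label it is sufficiently similar to.
    if word in AFFECT_LABELS:
        return [word]
    return sorted(label for label in AFFECT_LABELS
                  if SIMILARITY.get((word, label), 0.0) > threshold)
```

In the thesis this role is played by a WordNet-derived similarity measure, which is what lets a query such as "joyful" reach content indexed under a predetermined label like "happy".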

    Video browsing interfaces and applications: a review

    We present a comprehensive review of the state of the art in video browsing and retrieval systems, with special emphasis on interfaces and applications. There has been a significant increase in activity (e.g., storage, retrieval, and sharing) employing video data in the past decade, both for personal and professional use. The ever-growing amount of video content available for human consumption and the inherent characteristics of video data—which, if presented in its raw format, is rather unwieldy and costly—have become driving forces for the development of more effective solutions to present video contents and allow rich user interaction. As a result, there are many contemporary research efforts toward developing better video browsing solutions, which we summarize. We review more than 40 different video browsing and retrieval interfaces and classify them into three groups: applications that use video-player-like interaction, video retrieval applications, and browsing solutions based on video surrogates. For each category, we present a summary of existing work, highlight the technical aspects of each solution, and compare them against each other

    A COMPUTATION METHOD/FRAMEWORK FOR HIGH LEVEL VIDEO CONTENT ANALYSIS AND SEGMENTATION USING AFFECTIVE LEVEL INFORMATION

    Video segmentation facilitates efficient video indexing and navigation in large digital video archives. It is an important process in a content-based video indexing and retrieval (CBVIR) system. Many automated solutions performed segmentation by utilizing information about the "facts" of the video. These "facts" come in the form of labels that describe the objects captured by the camera. This type of solution was able to achieve good and consistent results for some video genres, such as news programs and informational presentations. The content format of such videos is generally quite standard, and automated solutions were designed to follow these format rules. For example, in [1], the presence of news anchorpersons was used as a cue to determine the start and end of a meaningful news segment. The same cannot be said for video genres such as movies and feature films, because the makers of these videos use different filming techniques to elicit particular affective responses from their target audience. Humans usually perform manual video segmentation by trying to relate changes in time and locale to discontinuities in meaning [2]. As a result, viewers often disagree about the boundary locations of a meaningful video segment because of their different affective responses. This thesis presents an entirely new view of the problem of high-level video segmentation. We developed a novel probabilistic method for affective-level video content analysis and segmentation. Our method had two stages. In the first stage, affective content labels were assigned to video shots by means of a dynamic Bayesian network (DBN). A novel hierarchical-coupled dynamic Bayesian network (HCDBN) topology was proposed for this stage. The topology was based on the pleasure-arousal-dominance (P-A-D) model of affect representation [3]. In principle, this model can represent a large number of emotions.
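As a rough illustration of how a single P-A-D point can stand for one of many named emotions, the sketch below labels a pleasure-arousal-dominance coordinate with its nearest prototype. The prototype coordinates are invented for illustration; they are not taken from [3] or from the thesis.

```python
import numpy as np

# Hypothetical P-A-D prototypes (pleasure, arousal, dominance in [-1, 1]);
# illustrative values only.
PAD_PROTOTYPES = {
    "joy":     ( 0.8,  0.5,  0.4),
    "anger":   (-0.5,  0.6,  0.3),
    "fear":    (-0.6,  0.6, -0.4),
    "sadness": (-0.6, -0.4, -0.3),
}

def nearest_emotion(p, a, d):
    # Label a P-A-D point with the closest prototype emotion (Euclidean distance).
    q = np.array([p, a, d])
    return min(PAD_PROTOTYPES,
               key=lambda e: np.linalg.norm(q - np.array(PAD_PROTOTYPES[e])))
```

Because the P-A-D space is continuous, any number of emotion categories can be placed in it, which is the property the HCDBN topology exploits.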
    In the second stage, the visual, audio, and affective information of the video was used to compute a statistical feature vector representing the content of each shot. Affective-level video segmentation was achieved by applying spectral clustering to these feature vectors. We evaluated the first stage of our proposal by comparing its emotion detection ability with that of existing work in the field of affective video content analysis. To evaluate the second stage, we used the time-adaptive clustering (TAC) algorithm as our performance benchmark. The TAC algorithm was the best high-level video segmentation method [2]; however, it is very computationally intensive. To accelerate its computation, we developed a modified TAC (modTAC) algorithm designed to map easily onto a field-programmable gate array (FPGA) device. Both the TAC and modTAC algorithms were used as performance benchmarks for our proposed method. Since affective video content is a perceptual concept, segmentation performance and human agreement rates were used as our evaluation criteria. To obtain ground-truth data and viewer agreement rates, a pilot panel study based on the work of Gross et al. [4] was conducted. Experimental results show the feasibility of our proposed method: for the first stage, an average improvement of as high as 38% was achieved over previous works; for the second stage, an improvement of as high as 37% was achieved over the TAC algorithm.
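The second-stage idea, spectral clustering over per-shot feature vectors, can be sketched in its simplest two-segment form. The Gaussian affinity, the sigma value, and the Fiedler-vector split below are standard textbook choices assumed for illustration, not details from the thesis.

```python
import numpy as np

def spectral_bipartition(shot_features, sigma=1.0):
    # Split shots into two segments using the Fiedler vector (eigenvector of
    # the second-smallest eigenvalue) of the graph Laplacian built from
    # pairwise Gaussian affinities between shot feature vectors.
    X = np.asarray(shot_features, dtype=float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))       # affinity matrix
    L = np.diag(W.sum(axis=1)) - W             # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)                # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)           # sign gives the two-way cut
```

A full segmentation into k segments would use the first k eigenvectors followed by a k-means step; the two-way split above shows the principle on shots whose feature vectors form two well-separated groups.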