10 research outputs found

    Implicit emotional tagging of multimedia using EEG signals and brain computer interface

    In multimedia content sharing social networks, tags assigned to content play an important role in search and retrieval. In other words, by annotating multimedia content, users can associate a word or a phrase (tag) with that resource so that it can be searched for efficiently. Implicit tagging refers to assigning tags by observing subjects' behavior during consumption of multimedia content, as an alternative to traditional explicit tagging, which requires an explicit action by subjects. In this paper we propose a brain-computer interface (BCI) system, based on the P300 evoked potential, for implicit emotional tagging of multimedia content. We show that our system can successfully perform implicit emotional tagging and that naïve subjects who have not participated in training of the system can also use it efficiently. Moreover, we introduce a subjective metric called “emotional taggability” to analyze the recognition performance of the system, given the degree of ambiguity that exists in the emotional values associated with a piece of multimedia content.
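
    A minimal sketch of the kind of P300-style target/non-target classification such a system relies on, assuming EEG epochs have already been cut around stimulus onsets. The data, dimensions, and classifier below are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical P300 detection sketch: synthetic epochs, LDA classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 200, 8, 128    # assumed dimensions
labels = rng.integers(0, 2, n_epochs)            # 1 = attended (to-be-tagged) stimulus

# Synthetic epochs: attended stimuli get a small positive deflection near 300 ms
epochs = rng.normal(0.0, 1.0, (n_epochs, n_channels, n_samples))
p300 = np.exp(-0.5 * ((np.arange(n_samples) - 77) / 8.0) ** 2)   # bump around sample 77
epochs[labels == 1] += 0.8 * p300

# Feature vector: temporally downsampled, channel-concatenated epoch
features = epochs[:, :, ::8].reshape(n_epochs, -1)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, features, labels, cv=5)
print("P300 detection accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```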

    An Ambient Multimedia User Experience Feedback Framework Based on User Tagging and EEG Biosignals

    Multimedia is increasingly accessed online and within social networks; however, users are typically limited to visual/auditory stimuli through media presented on screen with accompanying audio over speakers. Whilst recent research studying additional ambient sensory multimedia effects recorded numerical scores of perceptual quality, the users' time-varying emotional response to the ambient sensory feedback was not considered. This paper therefore introduces a framework to evaluate the user's ambient quality of multimedia experience and to discover users' time-varying emotional responses through explicit user tagging and implicit EEG biosignal analysis. In the proposed framework, users interact with the media via discrete tagging activities whilst their EEG biosignal emotional feedback is continuously monitored in between user tagging events, with emotional states correlated with media content and tags.
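
    The core idea of monitoring emotion "in between" tagging events could look like the sketch below: average a continuously estimated emotional signal over the interval between consecutive user tags and associate it with that tag. The signal, tag names, and timestamps are invented for illustration; the framework's actual EEG-to-emotion mapping is not shown here.

```python
# Illustrative correlation of discrete tags with continuous EEG-derived valence.
import numpy as np

fs = 4.0                                         # assumed emotion estimates per second
t = np.arange(0, 120, 1.0 / fs)                  # two minutes of viewing
valence = np.sin(2 * np.pi * t / 60) + 0.1 * np.random.default_rng(1).normal(size=t.size)

tag_events = [(15.0, "exciting"), (52.0, "calm"), (98.0, "boring")]  # (time in s, tag)

segments, prev_time = [], 0.0
for tag_time, tag in tag_events:
    mask = (t >= prev_time) & (t < tag_time)     # interval since the previous tag
    segments.append((tag, float(valence[mask].mean())))
    prev_time = tag_time

for tag, mean_valence in segments:
    print(f"tag '{tag}': mean valence between events = {mean_valence:+.2f}")
```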

    User-centered EEG-based multimedia quality assessment

    Multimedia users are becoming increasingly quality-aware as technological advances make the creation and delivery of high-definition multimedia content ubiquitous. While much research has been conducted on multimedia quality assessment, most existing solutions come with their own limitations, with particular solutions being more suitable for assessing particular aspects of the user's Quality of Experience (QoE). In this context, there is an increasing need for innovative solutions to assess the user's QoE with multimedia services. This paper proposes the QoE-EEG-Analyser, a solution that automatically assesses and quantifies the impact of various factors contributing to the user's QoE with multimedia services. The proposed approach makes use of the participant's frustration level measured with a consumer-grade EEG system, the Emotiv EPOC. The main advantage of the QoE-EEG-Analyser is that it enables continuous assessment of various QoE factors over the entire testing duration, in a non-invasive way, without requiring the user to provide input about their perceived visual quality. Preliminary subjective results have shown that frustration can indicate the user's perceived QoE.
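
    A rough sketch of continuous, frustration-based QoE scoring in the spirit described above. The Emotiv EPOC's actual frustration metric is proprietary, so a synthetic per-second frustration trace, the windowing, and the inverse mapping to a QoE score are all assumptions.

```python
# Hypothetical sliding-window QoE estimate from a frustration signal.
import numpy as np

rng = np.random.default_rng(2)
frustration = rng.normal(0.3, 0.1, 300)          # 5 minutes at 1 Hz (assumed rate)
frustration[120:180] += 0.4                      # hypothetical quality degradation period
frustration = np.clip(frustration, 0, 1)

window = 30                                      # seconds per test condition (assumed)
for start in range(0, frustration.size, window):
    chunk = frustration[start:start + window]
    qoe = 1.0 - chunk.mean()                     # simple inverse mapping (assumption)
    print(f"{start:3d}-{start + window:3d}s  mean frustration={chunk.mean():.2f}  QoE~{qoe:.2f}")
```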

    The Evolving Interplay between Social Media and International Health Security: A Point of View

    Human communication and interaction have been rapidly evolving with the advent and continuing influence of social media (SM), thereby accelerating information exchange and increasing global connectivity. Despite clear advantages, this new technology can present unintended consequences, including medical misinformation and “fake news.” Although International Health Security (IHS) stands to benefit tremendously from various SM platforms, high-level decision-makers and other stakeholders must also be aware of the dangers related to their intentional and unintentional misuse (and abuse). An overview of SM's utility in fighting disease, disseminating life-saving information, and organizing people and teams in a constructive fashion is discussed herein. The potential negatives associated with SM misuse, including intentional and unintentional misinformation, as well as the ability to organize people in a disruptive fashion, will also be presented. Our treatise will additionally outline how deliberate misinformation may lead to harmful behaviors, public health panics, and orchestrated patterns of distrust. In terms of both its affirmative and destructive considerations, SM can be viewed as an asymmetric influencing force, with observed effects (whether beneficial or harmful) being disproportionately greater than the cost of the intervention.

    Influencing human affective responses to dynamic virtual environments

    Detecting and measuring emotional responses while interacting with virtual reality (VR), and assessing and interpreting their impact on human engagement and “immersion,” are both academically and technologically challenging. While many researchers have, in the past, focused on the affective evaluation of passive environments, such as listening to music or viewing videos and imagery, virtual realities and related interactive environments have been used in only a small number of research studies as a means of presenting emotional stimuli. This article reports the first stage (focusing on participants' subjective responses) of a range of experimental investigations supporting the evaluation of emotional responses within a virtual environment, according to a three-dimensional (Valence, Arousal, and Dominance) model of affect developed in the 1970s and 1980s. To populate this three-dimensional model with participants' emotional responses, an “affective VR,” capable of manipulating users' emotions, was designed and subjectively evaluated. The VR takes the form of a dynamic “speedboat” simulation, elements (controllable VR parameters) of which were assessed and selected on the basis of a 35-respondent online survey, coupled with the implementation of an affective power approximation algorithm. A further 68 participants took part in a series of trials, interacting with a number of VR variations while subjectively rating their emotional responses. The experimental results provide an early level of confidence that this particular affective VR is capable of manipulating individuals' emotional experiences through the control of its internal parameters. Moreover, the approximation technique proved to be fairly reliable in predicting users' potential emotional responses in various affective VR settings prior to the actual experience. Finally, the analysis suggested that the emotional responses of users with different genders and gaming experience could vary when presented with the same affective VR situation.
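
    The article does not spell out its affective power approximation algorithm; as a heavily hedged illustration, one could score a candidate combination of controllable VR parameters by how far its averaged Valence/Arousal/Dominance ratings sit from a neutral point. The ratings, parameter names, and scoring rule below are invented assumptions, not the paper's method.

```python
# Hypothetical "affective power" score from survey-derived VAD ratings.
import numpy as np

# Assumed mean (Valence, Arousal, Dominance) ratings per controllable VR parameter value,
# on a 1-9 SAM-style scale where (5, 5, 5) is neutral.
ratings = {
    "speed_high":    (6.5, 7.8, 4.2),
    "weather_storm": (3.1, 6.9, 3.5),
    "music_tense":   (4.0, 6.2, 4.8),
}

def affective_power(parameter_names, neutral=(5.0, 5.0, 5.0)):
    """Distance of the averaged VAD point from neutral, as a crude power estimate."""
    vad = np.mean([ratings[p] for p in parameter_names], axis=0)
    return float(np.linalg.norm(vad - np.array(neutral))), vad

power, vad = affective_power(["speed_high", "weather_storm"])
print(f"predicted VAD = {np.round(vad, 2)}, affective power ~ {power:.2f}")
```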

    Saliency Map for Visual Perception

    Humans and other primates move their eyes to select visual information from the scene; psycho-visual experiments (Constantinidis, 2005) suggest that attention is directed to visually salient locations in the image. This allows human beings to bring the fovea onto the relevant parts of the image and to interpret complex scenes in real time. In visual perception, an important result was the discovery of a limited set of visual properties (called pre-attentive), detected in the first 200-300 milliseconds of observation of a scene by the low-level visual system. In recent decades, much progress has been made in research on visual perception by analyzing both bottom-up (stimulus-driven) and top-down (task-dependent) processes involved in human attention. Visual saliency deals with identifying the fixation points that a human viewer would focus on during the first seconds of observation of a scene.
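
    A very reduced bottom-up saliency sketch: centre-surround contrast on the intensity channel, approximated as the difference between a fine and a coarse Gaussian blur. Full models (e.g. Itti-Koch style architectures) also use colour and orientation channels with across-scale normalisation; the image, scales, and normalisation below are illustrative assumptions.

```python
# Toy centre-surround saliency map on a synthetic grey-level image.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
image = rng.random((240, 320))                   # placeholder grey-level image
image[100:140, 150:200] += 1.0                   # a bright, "salient" patch

center = gaussian_filter(image, sigma=2)         # fine scale (centre)
surround = gaussian_filter(image, sigma=16)      # coarse scale (surround)
saliency = np.abs(center - surround)
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-9)

row, col = np.unravel_index(saliency.argmax(), saliency.shape)
print(f"most salient location (row, col) = ({row}, {col})")
```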

    Implicit image annotation by using gaze analysis

    PhD thesis. Thanks to advances in technology, people are storing a massive amount of visual information in online databases. Today it is normal for a person to take a photo of an event with their smartphone and effortlessly upload it to a host domain. For later quick access, this enormous amount of data needs to be indexed by providing metadata for its content. The challenge is to provide suitable captions for the semantics of the visual content. This thesis investigates the possibility of extracting and using the valuable information stored inside a human's eye movements when interacting with digital visual content, in order to provide information for image annotation implicitly. A non-intrusive framework is developed which is capable of using inferred gaze movements to classify the images visited by a user into two classes when the user is searching for a Target Concept (TC) in the images. The first class is formed of the images that contain the TC, called the TC+ class, and the second class is formed of the images that do not contain the TC, called the TC- class. By analysing the eye movements only, the developed framework was able to identify over 65% of the images that the subject users were searching for, with an accuracy of over 75%. This thesis shows that the information present in gaze patterns can be employed to improve the machine's judgement of image content through assessment of human attention to the objects inside virtual environments. European Commission funded Network of Excellence PetaMedia.
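
    A minimal sketch of the TC+/TC- classification idea, assuming per-image gaze features (e.g. fixation count, dwell time) have already been extracted from the eye tracker. The features, their distributions, and the SVM classifier are assumptions for illustration, not the thesis's actual feature set.

```python
# Hypothetical gaze-feature classification of images into TC+ / TC-.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
n_images = 300
tc_plus = rng.integers(0, 2, n_images)           # 1 = image contains the Target Concept

# Assumed features: [fixation count, total dwell time (s), mean fixation duration (s)]
features = rng.normal([5.0, 1.5, 0.25], [2.0, 0.6, 0.05], (n_images, 3))
features[tc_plus == 1] += [3.0, 0.8, 0.05]       # TC+ images attract more / longer fixations

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, features, tc_plus, cv=5)
print("TC+/TC- accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```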

    Interactive video retrieval using implicit user feedback.

    PhD thesis. In recent years, the rapid development of digital technologies and the low cost of recording media have led to a great increase in the availability of multimedia content worldwide. This availability creates demand for the development of advanced search engines. Traditionally, manual annotation of video was one of the usual practices used to support retrieval. However, the vast amounts of multimedia content make such practices very expensive in terms of human effort. At the same time, the availability of low-cost wearable sensors delivers a plethora of user-machine interaction data. Therefore, there is an important challenge in exploiting implicit user feedback (such as user navigation patterns and eye movements) during interactive multimedia retrieval sessions with a view to improving video search engines. In this thesis, we focus on automatically annotating video content by exploiting the aggregated implicit feedback of past users, expressed as click-through data and gaze movements. Towards this goal, we have conducted interactive video retrieval experiments in order to collect click-through and eye movement data in environments that are not strictly controlled. First, we generate semantic relations between the multimedia items by proposing a graph representation of aggregated past interaction data and exploit them to generate recommendations, as well as to improve content-based search. Then, we investigate the role of user gaze movements in interactive video retrieval and propose a methodology for inferring user interest by employing support vector machines and gaze-movement-based features. Finally, we propose an automatic video annotation framework, which combines query clustering into topics, by constructing gaze-movement-driven random forests and temporally enhanced dominant sets, with video shot classification for predicting the relevance of viewed items with respect to a topic. The results show that exploiting heterogeneous implicit feedback from past users is of added value for future users of interactive video retrieval systems.
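
    A sketch of the aggregated click-through idea: link video shots that were clicked within the same search session and, for a given shot, recommend the shots most strongly co-clicked with it. The session logs, shot names, and simple co-click weighting are invented; the thesis's graph construction and weighting are richer than this.

```python
# Hypothetical co-click graph over video shots, with a naive recommender.
from collections import defaultdict
from itertools import combinations

sessions = [                                     # invented click-through logs
    ["shot_a", "shot_b", "shot_c"],
    ["shot_b", "shot_c", "shot_d"],
    ["shot_a", "shot_c"],
]

edge_weight = defaultdict(int)
for clicks in sessions:
    for u, v in combinations(sorted(set(clicks)), 2):
        edge_weight[(u, v)] += 1                 # co-click count as edge weight

def recommend(shot, top_k=3):
    """Rank other shots by accumulated co-click weight with the query shot."""
    scores = defaultdict(int)
    for (u, v), w in edge_weight.items():
        if shot == u:
            scores[v] += w
        elif shot == v:
            scores[u] += w
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

print("recommendations for shot_a:", recommend("shot_a"))
```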

    Affective virtual environments: a psychophysiological HCI system concept

    The recent “resurrection” of interest in virtual reality has stimulated interest in the quest for true “immersion” in computer-generated worlds. True immersion may only ever be achieved through advanced BCI systems but, until that day arrives, it is important to understand how it may be possible to measure human engagement and emotion within virtual worlds using psychophysiological techniques. This study aims to design an affective computing system, capable of responding to human emotions, within virtual environments. Based on the development of a Valence/Arousal model, a controllable affective VR, capable of evoking multiple emotions, has been constructed. Multiple variations of the VR were evaluated subjectively by over 68 participants. More objective, physiologically based experiments were then executed, in which the EEG, GSR and heart rates of 45 participants were recorded during exposure to the most powerful affective environments identified in the earlier study. Multiple affective recognition systems were trained and cross-validated against 30 participants and evaluated using the other 15 individuals. The results suggested that the trained classifiers perform highly accurately on the training database but achieve random classification accuracies on the new dataset. It was highlighted that this extreme performance attenuation is due to the high individual differences in participants' physiological responses to emotional experiences.
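
    An illustrative sketch of the train/test protocol described above: fit an affect classifier on physiological features from 30 participants and evaluate it on 15 unseen ones. The data are synthetic, with a per-subject offset added to mimic the kind of individual differences that hinder cross-subject generalisation; features, classifier, and magnitudes are all assumptions.

```python
# Hypothetical cross-subject evaluation of a physiological affect classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n_subjects, trials_per_subject = 45, 20
X, y, subject_id = [], [], []
for s in range(n_subjects):
    offset = rng.normal(0, 2.0, 4)               # subject-specific baseline (EEG/GSR/HR features)
    labels = rng.integers(0, 2, trials_per_subject)
    feats = rng.normal(0, 1.0, (trials_per_subject, 4)) + offset + 0.5 * labels[:, None]
    X.append(feats); y.append(labels); subject_id.append(np.full(trials_per_subject, s))
X, y, subject_id = np.vstack(X), np.concatenate(y), np.concatenate(subject_id)

train_mask = subject_id < 30                     # first 30 participants for training
clf = make_pipeline(StandardScaler(), SVC())
clf.fit(X[train_mask], y[train_mask])
print("within-training accuracy :", round(clf.score(X[train_mask], y[train_mask]), 2))
print("unseen-subject accuracy  :", round(clf.score(X[~train_mask], y[~train_mask]), 2))
```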