New frontiers in neuromarketing research: Benefits and potential applications of GRAIL
Recent years have seen an explosion in the application of neuroscience techniques to market research, known as neuromarketing. The aim of this paper is to contribute to both theoretical and practical aspects of neuromarketing research by presenting a new and innovative neuroscience tool for studying marketing-relevant behavior, namely GRAIL. GRAIL combines different devices (e.g., EEG, ET, facial EMG) into one single real-time device. It can help researchers and practitioners to measure physiological responses (external reflexes) and brain activity (internal reflexes) simultaneously. We argue that this new tool can improve neuromarketing research in several ways, namely by reducing the costs of neuromarketing research, improving the efficiency and accuracy of neuromarketing experiments, and recreating real-life purchase experiences using virtual reality and personalized scenarios.
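The practical core of fusing devices such as EEG, eye tracking, and facial EMG into one real-time stream is aligning signals sampled at different rates onto a shared clock. A minimal sketch of that alignment step (the device rates, the `resample_to` helper, and the toy pupil trace are illustrative assumptions, not details of GRAIL itself):

```python
import numpy as np

def resample_to(timeline, ts, values):
    """Linearly interpolate one device's samples onto a shared timeline."""
    return np.interp(timeline, ts, values)

fs_eeg, fs_eye = 256, 60                    # hypothetical device rates (Hz)
timeline = np.arange(0, 2, 1 / fs_eeg)      # shared clock at the highest rate
eye_ts = np.arange(0, 2, 1 / fs_eye)
pupil = 3.0 + 0.2 * np.sin(2 * np.pi * 0.25 * eye_ts)  # toy pupil diameter (mm)

# After alignment, every device's samples sit on the EEG clock and the
# external and internal signals can be analyzed jointly, sample by sample.
pupil_on_eeg_clock = resample_to(timeline, eye_ts, pupil)
```

Real systems also have to correct for clock drift and device latency; linear interpolation onto a master clock is only the simplest building block.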
Leveraging analytics to produce compelling and profitable film content
Producing compelling film content profitably is a top priority for the long-term prosperity of the film industry. Advances in digital technologies, the increasing availability of granular big data, the rapid diffusion of analytic techniques, and intensified competition from user-generated content and original content produced by Subscription Video on Demand (SVOD) platforms have created unparalleled needs and opportunities for film producers to leverage analytics in content production. Built upon the theories of value creation and film production, this article proposes a conceptual framework of key analytic techniques that film producers may engage throughout the production process, such as script analytics, talent analytics, and audience analytics. The article further synthesizes the state-of-the-art research on and applications of these analytics, discusses the prospects of leveraging analytics in film production, and suggests fruitful avenues for future research with important managerial implications.
Decoding the consumer's brain: Neural representations of consumer experience
Understanding consumer experience (what consumers think about brands, how they feel about services, whether they like certain products) is crucial to marketing practitioners. 'Neuromarketing', as the application of neuroscience in marketing research is called, has generated excitement with the promise of understanding consumers' minds by probing their brains directly. Recent advances in neuroimaging analysis leverage machine learning and pattern classification techniques to uncover patterns from neuroimaging data that can be associated with thoughts and feelings. In this dissertation, I measure the brain responses of consumers with functional magnetic resonance imaging (fMRI) in order to 'decode' their minds. In three different studies, I demonstrate how different aspects of consumer experience can be studied with fMRI recordings. First, I study how consumers think about brand image by comparing their brain responses during passive viewing of visual templates (photos depicting various social scenarios) to those during active visualizing of a brand's image. Second, I use brain responses during viewing of affective pictures to decode emotional responses during watching of movie trailers. Lastly, I examine whether marketing videos that evoke s
Neurocinematics as a passive-BCI-based application: An EEG study of human neural responses while watching movies
Department of Human Factors Engineering
Traditionally, brain-computer interface research has mostly focused on the rehabilitation of paralyzed patients, with the objective of controlling external devices through bio-signals. Recently, however, the field has widened to understanding the user's cognitive and emotional information processing for non-medical purposes, a direction classified as passive brain-computer interface. Among these, neurocinematics is an applied field of passive brain-computer interface research that tries to understand the changes in the cognitive or emotional state of a viewer while watching a movie. There are two main reasons why this field of study is receiving particular attention recently. First, movies are not only audio-visual stimuli; they are composed of different factors such as culture and environment, and can therefore help in studying human social cognitive processes. Second, the traditional survey or post-interview method of collecting audience reviews of movies has a limitation: the audience must be aware of their own condition, so credibility is low. Neurocinematics studies, on the other hand, observe the audience through bio-signals, and a more objective verification is believed to be possible. However, existing studies mostly validated their findings by comparing the bio-signal results of the neurocinematics analysis with the original survey results. Also, many studies asked whether most subjects reacted the same way while watching the same movie, but obtained the bio-signals through individual viewing.
This research verified the objectivity of engagement-index extraction by introducing psychophysical methods to overcome the limitations of existing studies. While the subjects watched a movie in one room, their brainwaves were measured, and changes in the level of brainwave synchronization between subjects were checked. Moreover, we examined how the level of brainwave synchronization changes under two conditions: when people watch individually and when they watch in a group.
In the first experiment, we used a psychophysical method called Secondary Task Reaction Time (STRT), which is known to reflect concentration, to evaluate a Neural Engagement Index (NEI). STRT measures the reaction speed to a tactile stimulus given additionally while the subject performs a main task; the reaction is known to slow down when the subject is more engaged in the main task. In this experiment, we measured STRT and NEI while the subjects watched 8 movie trailers that had not yet been released in cinemas. After watching each trailer, the subjects completed a survey. As a result, there was a significant correlation between STRT and NEI, but no meaningful correlation with the survey.
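The abstract does not give the formula behind NEI, but a commonly used EEG engagement index is beta power divided by the sum of alpha and theta power; the sketch below assumes that formulation, and all the data in it are synthetic:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power of `signal` in the [lo, hi) Hz band from an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def neural_engagement_index(signal, fs):
    """Engagement as beta / (alpha + theta) -- one common formulation."""
    theta = band_power(signal, fs, 4, 8)
    alpha = band_power(signal, fs, 8, 13)
    beta = band_power(signal, fs, 13, 23)
    return beta / (alpha + theta)

# Hypothetical use: one NEI value and one mean STRT (ms) per trailer,
# then a Pearson correlation across the 8 trailers.
fs = 256
rng = np.random.default_rng(0)
nei = [neural_engagement_index(rng.standard_normal(10 * fs), fs) for _ in range(8)]
strt = rng.uniform(300, 600, size=8)   # fabricated reaction times, for shape only
r = np.corrcoef(nei, strt)[0, 1]
```

With real data, `nei` and `strt` would each hold one value per trailer, and `r` would be the correlation the experiment tests for significance.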
In the second experiment, while four subjects simultaneously watched The Chaser (2008, Silk Road), their brainwaves were measured and inter-subject correlation (ISC) was analyzed in five frequency bands: Delta (2~4 Hz), Theta (4~8 Hz), Alpha (8~13 Hz), low Beta (13~18 Hz), and high Beta (18~23 Hz). Moreover, using a sliding-window method, we analyzed how the correlation changed over time. To verify the significance of the derived correlations, we computed the correlations of time-shifted data in each window and their 95% range through a non-parametric permutation analysis, and observed the time slots with specific significance in each band's correlation. As a result, over the movie's whole running time, we observed parts where the correlation between the subjects' band power significantly increased. Meaningful correlations were found especially when the movie reached its emotional climax and in scenes important to the development of the plot; these matched the scenes picked by the majority of the audience.
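The sliding-window ISC and the time-shift permutation test described above can be sketched as follows (the window sizes, the two-subject simplification, and the white-noise band-power series are illustrative assumptions):

```python
import numpy as np

def sliding_isc(band_a, band_b, win, step):
    """Pearson correlation between two subjects' band-power time series
    inside each sliding window."""
    r = []
    for start in range(0, len(band_a) - win + 1, step):
        a = band_a[start:start + win]
        b = band_b[start:start + win]
        r.append(np.corrcoef(a, b)[0, 1])
    return np.array(r)

def permutation_threshold(band_a, band_b, win, step, n_perm=200, q=95, seed=0):
    """Null distribution from circular time shifts of one series; the q-th
    percentile of the null ISC values serves as a significance threshold."""
    rng = np.random.default_rng(seed)
    null = []
    for _ in range(n_perm):
        shift = int(rng.integers(1, len(band_b)))
        null.append(sliding_isc(band_a, np.roll(band_b, shift), win, step))
    return np.percentile(np.concatenate(null), q)
```

Windows whose observed ISC exceeds the permutation threshold are the "time slots with specific significance"; with more than two subjects, the pairwise correlations would be averaged.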
In the third experiment, based on the results of the second experiment, we checked whether the audience's reaction to the same movie content changes depending on the viewing condition, by applying the brainwave-based response model. The viewing conditions were divided into a group who watched the movie together and a group who watched the movie in separate rooms. We recruited 8 subjects per group and built data for two groups watching together and one group watching individually. We analyzed the collective responses using the inter-subject correlation coefficient of each group's brainwave frequency bands: Delta (2~4 Hz), Theta (4~8 Hz), Alpha (8~13 Hz), low Beta (13~18 Hz), and high Beta (18~23 Hz). Analyzing the rate of significant ISC increase over the whole viewing period, the group that watched the movie together showed a higher rate of ISC increase than the group that watched the movie individually, in all frequency bands.
Neuromarketing for a better understanding of consumer needs and emotions
In this paper we discuss how marketing and publicity specialists have been aware of the limitations of traditional market research methods for decades, but only in recent years has science allowed the development of a more effective mechanism by which consumers' thoughts can be deciphered: neuromarketing. This term refers to the use of techniques developed by cognitive neuroscience and psychology specialists to analyze and understand people's reactions to products and promotions, which allows marketing efforts to be refined to make them more effective. The article describes the tools used for this purpose, which include magnetic resonance imaging (MRI), brain scanners that identify the brain regions reacting to different stimuli, and electroencephalography (EEG), devices that measure electrical activity in the brain. By tracking brain reactions to different stimuli, researchers can discover the marketing mechanisms that are most likely to lead to the desired outcome: selling the product. For this, an eye-tracking device is used in parallel with the EEG measurements, allowing exact identification of the stimulus that produces a reaction at each moment. Some neuromarketing companies also use GSR (galvanic skin response) sensors to measure the electrical conductivity of the skin, another element that provides information about the consumer's response to various commercial messages. The purpose of our article is to show the role played by neuromarketing in the correct understanding of consumer needs, wants and emotions.
A novel Big Data analytics and intelligent technique to predict driver's intent
The modern age offers great potential for automatically predicting a driver's intent through the increasing miniaturization of computing technologies, rapid advancements in communication technologies, and the continuous connectivity of heterogeneous smart objects. Inside the cabin and engine of modern cars, dedicated computer systems need to possess the ability to exploit the wealth of information generated by heterogeneous data sources with different contextual and conceptual representations. Processing and utilizing this diverse and voluminous data involves many challenges concerning the design of the computational technique used to perform the task. In this paper, we investigate the various data sources available in the car and the surrounding environment that can be utilized as inputs to predict driver intent and behavior. As part of investigating these potential data sources, we conducted experiments on e-calendars for a large number of employees and reviewed a number of available geo-referencing systems. Through the results of a statistical analysis and by computing location-recognition accuracy, we explored in detail the potential utilization of calendar location data to detect drivers' intentions. To exploit the numerous diverse data inputs available in modern vehicles, we investigate the suitability of different Computational Intelligence (CI) techniques and propose a novel fuzzy computational modelling methodology. Finally, we outline the impact of applying advanced CI and Big Data analytics techniques in modern vehicles on the driver and society in general, and discuss the ethical and legal issues arising from the deployment of intelligent self-learning cars.
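As an illustration of the kind of fuzzy reasoning such a methodology could apply to calendar data (the rule, membership breakpoints, and inputs below are invented for illustration, not the paper's actual model), consider inferring intent to drive to an appointment from its proximity in time and space:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def intent_score(minutes_to_meeting, km_to_location):
    """Toy fuzzy rule: meeting is SOON and location is NEAR -> high intent.
    Breakpoints (90 min, 30 km) are illustrative assumptions."""
    soon = tri(minutes_to_meeting, -1, 0, 90)   # peaks when the meeting is now
    near = tri(km_to_location, -1, 0, 30)       # peaks at the location itself
    return min(soon, near)                      # fuzzy AND as the minimum t-norm
```

A full fuzzy system would aggregate many such rules over many data sources and defuzzify the result; this shows only the membership-and-rule core.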
Paving the Way for Mindreading: Re-Interpreting Coercion in Article 17 of the Third Geneva Convention
Mind-reading is no longer a concept confined to the world of science-fiction: Brain reading technologies are rapidly being developed in a number of neuroscience fields. One obvious application is to the field of criminal justice: Mind-reading technology can potentially aid investigators in assessing critical legal questions such as guilt, legal insanity, and the risk of recidivism. Two current techniques have received the most scholarly attention for their potential in aiding interrogators in determining guilt: brain-based lie detection and brain-based memory detection. The growing ability to peer inside someone's mind raises significant legal issues. A number of American scholars, especially in the past fifteen years, have debated the constitutionality of forensically employing mind-reading technologies on United States citizens. Almost no scholarly attention, however, has focused on the legality of mind-reading technologies under international humanitarian law.
This Note seeks to fill this gap in the literature and explores whether the administration of mind-reading technologies on a prisoner of war (POW) in an armed conflict violates international humanitarian law, arguing that an interpretation of coercion more faithful to the text and purpose of Article 17 would likely permit the application of mind-reading technology during interrogations. Part I briefly lays out the two prevailing interpretations of coercion, noting their implications for the legality of mind-reading technologies, and this Note's interpretation of coercion, which markedly differs from the prevailing interpretations. Part II briefly expands upon the technology discussed in Part I, noting the potential for more accurate mind-reading technology in the future and the applicability of this technology to interrogations. Part III examines current interpretations of the term coercion, formulates a new definition by looking at the text and drafting history of Article 17, and contends that the coercion ban is meant to protect POWs from physical and mental suffering. Part IV then applies this new definition of coercion to various mind-reading technologies, concluding that the painless use of mind-reading technology does not violate Article 17 of Geneva III.
Data Descriptor: A resource for assessing information processing in the developing brain using EEG and eye tracking
We present a dataset combining electrophysiology and eye tracking intended as a resource for the investigation of information processing in the developing brain. The dataset includes high-density task-based and task-free EEG, eye tracking, and cognitive and behavioral data collected from 126 individuals (ages 6–44). The task battery spans both the simple/complex and passive/active dimensions to cover a range of approaches prevalent in modern cognitive neuroscience. The active task paradigms facilitate principled deconstruction of core components of task performance in the developing brain, whereas the passive paradigms permit the examination of intrinsic functional network activity during varying amounts of external stimulation. Alongside these neurophysiological data, we include an abbreviated cognitive test battery and questionnaire-based measures of psychiatric functioning. We hope that this dataset will lead to the development of novel assays of neural processes fundamental to information processing, which can be used to index healthy brain development as well as detect pathologic processes.
Large-scale Affective Computing for Visual Multimedia
In recent years, Affective Computing has arisen as a prolific interdisciplinary field for engineering systems that integrate human affections. While human-computer relationships have long revolved around cognitive interactions, it is becoming increasingly important to account for human affect (feelings or emotions) to avert user-experience frustration, provide disability services, predict the virality of social media content, etc. In this thesis, we specifically focus on Affective Computing as it applies to large-scale visual multimedia, and in particular still images, animated image sequences and video streams, above and beyond the traditional approaches of face expression and gesture recognition. By taking a principled, psychology-grounded approach, we seek to paint a more holistic and colorful view of computational affect in the context of visual multimedia. For example, should emotions like 'surprise' and 'fear' be assumed to be orthogonal output dimensions? Or does a 'positive' image in one culture's view elicit the same feelings of positivity in another culture? We study affect frameworks and ontologies to define, organize and develop machine learning models with such questions in mind to automatically detect affective visual concepts.
In the push for what we call "Big Affective Computing," we focus on two dimensions of scale for affect -- scaling up and scaling out -- which we propose are both imperative if we are to scale the Affective Computing problem successfully. Intuitively, simply increasing the number of data points corresponds to "scaling up." Less intuitive is when problems like Affective Computing "scale out," or diversify. We show that this latter dimension of introducing data variety, alongside the former of introducing data volume, can yield particular insights, since human affections naturally depart from traditional Machine Learning and Computer Vision problems where there is an objectively truthful target. While no one might debate that a picture of a 'dog' should be tagged as a 'dog,' not all may agree that it looks 'ugly.' We present extensive discussions on why scaling out is critical and how it can be accomplished in the context of large-volume visual data.
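The 'dog' versus 'ugly' contrast can be made concrete by measuring annotator disagreement: objective tags yield near-zero label entropy, while subjective affect labels do not. A small sketch with invented vote counts:

```python
from collections import Counter
import math

def label_entropy(votes):
    """Shannon entropy (bits) of annotator votes -- near 0 when annotators
    agree on an objective tag, higher when a subjective label splits them."""
    counts = Counter(votes)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

objective = ["dog"] * 10                       # everyone agrees it is a dog
subjective = ["ugly"] * 4 + ["not ugly"] * 6   # affect judgments split 4/6
```

High-entropy labels are exactly the ones for which assuming a single ground truth, as conventional supervised learning does, becomes questionable.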
At a high-level, the main contributions of this thesis include:
Multiplicity of Affect Oracles:
Prior to the work in this thesis, little consideration had been paid to the affective label-generating mechanism when learning functional mappings between inputs and labels. Throughout this thesis, but first in Chapter 2, starting in Section 2.1.2, we make a case for a conceptual partitioning of the affect oracle governing the label generation process in Affective Computing problems, resulting in a multiplicity of oracles, whereas prior works assumed there was a single universal oracle. In Chapter 3, the differences between intended versus expressed versus induced versus perceived emotion are discussed, where we argue that perceived emotion is particularly well-suited for scaling up because it reduces label variance due to its more objective nature compared to other affect states. In Chapters 4 and 5, a division of the affect oracle along cultural lines, with manifestations in both language and geography, is explored. We accomplish all this without sacrificing the 'scale up' dimension, and tackle significantly larger-volume problems than prior comparable visual affective computing research.
Content-driven Visual Affect Detection:
Traditionally, in most Affective Computing work, prediction tasks use psycho-physiological signals from subjects viewing the stimuli of interest, e.g., a video advertisement, as the system inputs. In essence, this means that the machine learns to label a proxy signal rather than the stimuli itself. In this thesis, with the rise of strong Computer Vision and Multimedia techniques, we focus on learning to label the stimuli directly, without a human-subject-provided biometric proxy signal (except in the unique circumstances of Chapter 7). This shift toward learning from the stimuli directly is important because it allows us to scale up with much greater ease, given that biometric measurement acquisition is both low-throughput and somewhat invasive while stimuli are often readily available. In addition, moving toward learning directly from the stimuli will allow researchers to precisely determine which low-level features in the stimuli are actually coupled with affect states, e.g., which set of frames caused viewer discomfort rather than a broad sense that a video was discomforting. In Part I of this thesis, we illustrate an emotion prediction task with a psychology-grounded affect representation. In particular, in Chapter 3, we develop a prediction task over semantic emotional classes, e.g., 'sad,' 'happy' and 'angry,' using animated image sequences given annotations from over 2.5 million users. Subsequently, in Part II, we develop visual sentiment and adjective-based semantics models from million-scale digital imagery mined from a social multimedia platform.
Mid-level Representations for Visual Affect:
While discrete semantic emotions and sentiment are classical representations of affect with decades of psychology grounding, the interdisciplinary nature of Affective Computing, now only about two decades old, allows for new avenues of representation. Mid-level representations have been proposed in numerous Computer Vision and Multimedia problems as an intermediary, and often more computable, step toward bridging the semantic gap between low-level system inputs and high-level semantic label abstractions. In Part II, inspired by this work, we adapt it for vision-based Affective Computing and adopt a semantic construct called adjective-noun pairs. Specifically, in Chapter 4, we explore the use of such adjective-noun pairs in the context of a social multimedia platform and develop a multilingual visual sentiment ontology with over 15,000 affective mid-level visual concepts across 12 languages, associated with over 7.3 million images and representations from over 235 countries, resulting in the largest affective digital image corpus in both depth and breadth to date. In Chapter 5, we develop computational methods to predict such adjective-noun pairs and also explore their usefulness in traditional sentiment analysis, but with a previously unexplored cross-lingual perspective. And in Chapter 6, we propose a new learning setting called 'cross-residual learning,' building off recent successes in deep neural networks, and specifically in residual learning; we show that cross-residual learning can be used effectively to jointly learn across multiple related tasks in object detection (nouns), more traditional affect modeling (adjectives), and affective mid-level representations (adjective-noun pairs), giving us a framework for better grounding the adjective-noun pair bridge in both vision and affect simultaneously.
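As a toy illustration of the cross-residual idea (this NumPy sketch, with random weights and a scalar cross weight, is an invented simplification, not the thesis's actual architecture), each task branch adds its own residual plus scaled residuals borrowed from the sibling tasks:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # feature dimension of the shared representation (arbitrary)
# Hypothetical task heads: noun, adjective, and adjective-noun pair.
W = {t: rng.standard_normal((d, d)) * 0.1 for t in ("noun", "adj", "anp")}
C = 0.5  # cross-connection weight, an illustrative scalar

def head(x, t):
    """One task branch's residual function F_t(x)."""
    return np.tanh(W[t] @ x)

def cross_residual(x):
    """Each task output = shared input + own residual + scaled residuals
    from the other tasks (the 'cross' part of cross-residual learning)."""
    res = {t: head(x, t) for t in W}
    return {t: x + res[t] + C * sum(res[u] for u in W if u != t) for t in W}
```

In a real network the heads would be learned layers and `C` a trained coupling, but the structure of the skip connections is the point of the sketch.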