Cheetah Experimental Platform Web 1.0: Cleaning Pupillary Data
Recently, researchers started using cognitive load in various settings, e.g.,
educational psychology, cognitive load theory, or human-computer interaction.
Cognitive load characterizes a task's demand on the limited information
processing capacity of the brain. The widespread adoption of eye-tracking
devices led to increased attention for objectively measuring cognitive load via
pupil dilation. However, this approach requires a standardized data processing
routine to reliably measure cognitive load. This technical report presents
CEP-Web, an open-source platform providing state-of-the-art data processing
routines for cleaning pupillary data combined with a graphical user interface,
enabling the management of studies and subjects. Future developments will
include the support for analyzing the cleaned data as well as support for
Task-Evoked Pupillary Response (TEPR) studies.
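A typical first step in such pupillary-cleaning routines is blink handling: blinks appear as invalid (often zero) pupil-size samples, which are bridged by interpolating from the surrounding valid samples. The sketch below illustrates this generic step; it is not the CEP-Web implementation, and the function name and the zero-as-invalid convention are assumptions for illustration.

```python
import numpy as np

def clean_pupil_trace(samples, invalid=0.0):
    """Replace blink samples (recorded as `invalid`) by linear
    interpolation from neighboring valid samples.

    Generic illustration of a common pupillary-data cleaning step;
    not the CEP-Web routine.
    """
    x = np.asarray(samples, dtype=float)
    bad = x == invalid
    if bad.all():
        raise ValueError("no valid samples to interpolate from")
    idx = np.arange(len(x))
    # Fill the blink gap using the valid samples on either side.
    x[bad] = np.interp(idx[bad], idx[~bad], x[~bad])
    return x

# A trace with a simulated blink (two zero samples) in the middle:
trace = [3.1, 3.2, 0.0, 0.0, 3.4, 3.5]
cleaned = clean_pupil_trace(trace)
```

Real cleaning pipelines additionally pad around each blink (the pupil signal is distorted shortly before and after eyelid closure) and smooth the result, but the interpolation idea is the same.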
Opinion mining and sentiment analysis in marketing communications: a science mapping analysis in Web of Science (1998–2018)
Opinion mining and sentiment analysis have become ubiquitous in our society, with
applications in online searching, computer vision, image understanding, artificial intelligence and
marketing communications (MarCom). Within this context, opinion mining and sentiment analysis
in marketing communications (OMSAMC) has a strong role in the development of the field by
allowing us to understand whether people are satisfied or dissatisfied with our service or product
in order to subsequently analyze the strengths and weaknesses of those consumer experiences. To
the best of our knowledge, there is no science mapping analysis covering the research about opinion
mining and sentiment analysis in the MarCom ecosystem. In this study, we perform a science
mapping analysis on the OMSAMC research, in order to provide an overview of the scientific work
during the last two decades in this interdisciplinary area and to show trends that could be the basis
for future developments in the field. This study was carried out using VOSviewer, CitNetExplorer
and InCites based on results from Web of Science (WoS). The results of this analysis show the
evolution of the field, by highlighting the most notable authors, institutions, keywords,
publications, countries, categories and journals.

The research was funded by Programa Operativo FEDER Andalucía 2014‐2020, grant number “La
reputación de las organizaciones en una sociedad digital. Elaboración de una Plataforma Inteligente para la
Localización, Identificación y Clasificación de Influenciadores en los Medios Sociales Digitales (UMA18‐
FEDERJA‐148)”, and the APC was funded by the same research grant.
Learning Visual Importance for Graphic Designs and Data Visualizations
Knowing where people look and click on visual designs can provide clues about
how the designs are perceived, and where the most important or relevant content
lies. The most important content of a visual design can be used for effective
summarization or to facilitate retrieval from a database. We present automated
models that predict the relative importance of different elements in data
visualizations and graphic designs. Our models are neural networks trained on
human clicks and importance annotations on hundreds of designs. We collected a
new dataset of crowdsourced importance, and analyzed the predictions of our
models with respect to ground truth importance and human eye movements. We
demonstrate how such predictions of importance can be used for automatic design
retargeting and thumbnailing. User studies with hundreds of MTurk participants
validate that, with limited post-processing, our importance-driven applications
are on par with, or outperform, current state-of-the-art methods, including
natural image saliency. We also provide a demonstration of how our importance
predictions can be built into interactive design tools to offer immediate
feedback during the design process.
Computer-based tracking, analysis, and visualization of linguistically significant nonmanual events in American Sign Language (ASL)
Our linguistically annotated American Sign Language (ASL) corpora have formed a basis for research to automate detection by
computer of essential linguistic information conveyed through facial expressions and head movements. We have tracked head position
and facial deformations, and used computational learning to discern specific grammatical markings. Our ability to detect, identify, and
temporally localize the occurrence of such markings in ASL videos has recently been improved by incorporation of (1) new techniques
for deformable model-based 3D tracking of head position and facial expressions, which provide significantly better tracking accuracy
and recover quickly from temporary loss of track due to occlusion; and (2) a computational learning approach incorporating 2-level
Conditional Random Fields (CRFs), suited to the multi-scale spatio-temporal characteristics of the data, which analyses not only
low-level appearance characteristics, but also the patterns that enable identification of significant gestural components, such as
periodic head movements and raised or lowered eyebrows. Here we summarize our linguistically motivated computational approach
and the results for detection and recognition of nonmanual grammatical markings; demonstrate our data visualizations, and discuss the
relevance for linguistic research; and describe work underway to enable such visualizations to be produced over large corpora and
shared publicly on the Web.
Investigating the effectiveness of an efficient label placement method using eye movement data
This paper focuses on improving the efficiency and effectiveness of dynamic and interactive maps in relation to the user. A label placement method with improved algorithmic efficiency is presented. Since this algorithm influences the actual placement of name labels on the map, we tested whether the more efficient algorithm also creates more effective maps: how well the information is processed by the user. We tested 30 participants while they worked on a dynamic and interactive map display. Their task was to locate geographical names on each of the presented maps. Their eye movements were registered together with the time at which a given label was found. The gathered data reveal no difference in the users' response times, nor in the number and duration of fixations, between the two map designs. The results of this study show that the efficiency of label placement algorithms can be improved without disturbing the user's cognitive map. Consequently, we created a more efficient map without affecting its effectiveness for the user.
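Point-label placement of the kind evaluated above is commonly approached greedily: each point gets a small set of candidate label positions (e.g., the four corners around the point), and the first candidate that does not overlap an already-placed label is kept. The sketch below illustrates this generic scheme; it is a simplified illustration, not the authors' algorithm, and the four-offset candidate set is an assumption.

```python
def place_labels(points, label_w, label_h):
    """Greedy point-label placement: for each point, try four candidate
    anchor positions and keep the first that overlaps no label placed
    so far. Generic sketch, not the method evaluated in the study.

    Returns one anchor (lower-left corner) per point, or None when no
    conflict-free candidate exists.
    """
    # Candidate anchors: label to the upper-right, upper-left,
    # lower-right, and lower-left of the point.
    offsets = [(0, 0), (-label_w, 0), (0, -label_h), (-label_w, -label_h)]

    def overlaps(a, b):
        ax, ay = a
        bx, by = b
        return abs(ax - bx) < label_w and abs(ay - by) < label_h

    placed = []
    for px, py in points:
        for dx, dy in offsets:
            cand = (px + dx, py + dy)
            if not any(p is not None and overlaps(cand, p) for p in placed):
                placed.append(cand)
                break
        else:
            placed.append(None)  # no conflict-free position found
    return placed

# Two nearby points: the second label is shifted below to avoid overlap.
anchors = place_labels([(0, 0), (1, 0)], label_w=2, label_h=1)
```

Smarter variants order the points by how constrained they are, or use local search to repair conflicts; the efficiency question studied above is precisely how much such algorithmic effort is worthwhile without changing what the user perceives.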