From pixels to affect: a study on games and player experience
Is it possible to predict the affect of a user just by observing her behavioral interaction through a video? How can we, for instance, predict a user's arousal in games by merely looking at the screen during play? In this paper we address these questions by employing three dissimilar deep convolutional neural network architectures in our attempt to learn the underlying mapping between video streams of gameplay and the player's arousal. We test the algorithms on an annotated dataset of 50 gameplay videos of a survival shooter game and evaluate the deep learned models' capacity to classify high vs. low arousal levels. Our key findings with the demanding leave-one-video-out validation method reveal accuracies of over 78% on average and 98% at best. While this study focuses on games and player experience as a test domain, the findings and methodology are directly relevant to any affective computing area, introducing a general and user-agnostic approach for modeling affect. This paper is funded, in part, by the H2020 project Com N Play Science (project no: 787476).
How the globalization of video games is changing the way militaries operate
Today, more than ever, video games are reaching a global audience. In the past the military facilitated the growth of video games, but that dynamic may be changing: military use of video games to build and operate armies is increasing. The first part of this report looks at how video games have grown into a $100 billion industry and where the industry is heading. The second part focuses on how the military has utilized video game technology as a recruiting and training tool, then moves into how different components of video game technology are being used in modern warfare. In the end, this report shows how the globalization of video games is changing the way the military operates.
Automation in Moderation
This Article assesses recent efforts to encourage online platforms to use automated means to prevent the dissemination of unlawful online content before it is ever seen or distributed. As lawmakers in Europe and around the world closely scrutinize platforms' "content moderation" practices, automation and artificial intelligence appear increasingly attractive options for ridding the Internet of many kinds of harmful online content, including defamation, copyright infringement, and terrorist speech. Proponents of these initiatives suggest that requiring platforms to screen user content using automation will promote healthier online discourse and will aid efforts to limit Big Tech's power. In fact, however, the regulations that incentivize platforms to use automation in content moderation come with unappreciated costs for civil liberties and unexpected benefits for platforms. The new automation techniques exacerbate existing risks to free speech and user privacy and create ripe new sources of information for surveillance, aggravating threats to free expression, associational rights, religious freedoms, and equality. Automation also worsens transparency and accountability deficits. Far from curtailing private power, the new regulations endorse and expand platform authority to police online speech, with little in the way of oversight and few countervailing checks. New regulations of online intermediaries should therefore incorporate checks on the use of automation to avoid exacerbating these dynamics. Carefully drawn transparency obligations, algorithmic accountability mechanisms, and procedural safeguards can help to ameliorate the effects of these regulations on users and competition.