Unmute This: Circulation, Sociality, and Sound in Viral Media
Cats at keyboards. Dancing hamsters. Giggling babies and dancing flashmobs. A bi-colored dress. Psy's "Gangnam Style" music video. Over the final decade of the twentieth century and the first decades of the twenty-first, these and countless other examples of digital audiovisual phenomena have been collectively described through a biological metaphor that suggests the speed and ubiquity of their circulation: "viral." This circulation has been facilitated by the internet, and has often been understood as a product of the web's celebrated capacities for democratic amateur creation and its facilitation of unmediated connection and sharing practices. In this dissertation, I suggest that participation in such phenomena (the production, watching, listening to, circulation, or "sharing" of such objects) has constituted a significant site of twenty-first-century musical practice. Borrowing and adapting Christopher Small's influential 1998 coinage, I theorize these strands of practice as viral musicking. While scholarship on viral media has tended to center on visual parameters, rendering such phenomena silent, the term "viral musicking" seeks to draw media theory metaphors of voice and listening into dialogue with musicology, precisely at the intersection of audiovisual objects which are played, heard, listened to.
The project's methodology comprises a sonically attuned media archeology, grounded in close readings of internet artifacts and practices; this sonic attunement is afforded through musicological methods, including analyses of genre, aesthetics, and style, discourse analysis, and twenty-first-century reception (micro)histories across a dynamic media assemblage. By analyzing particular ecosystems of platforms, behavior, and devices across the first decades of the twenty-first century, I chart a trajectory in which unpredictable virtual landscapes were tamed into entrenched channels and pathways, enabling a capacious "virality" comprising disparate phenomena from simple looping animations to the surprise release of Beyoncé's 2013 album. Alongside this narrative, I challenge utopian claims of Web 2.0's digital democratization by explicating the iterative processes through which material, work, and labor were co-opted from amateur content creators and leveraged for the profit of established media and corporate entities.
"Unmute This" articulates two main arguments. First, that virality became reified as a concept and as a set of dynamic-but-predictable processes over the course of the first decades of the twenty-first century; this dissertation charts a movement from chaos to control, a heterogeneous digital landscape funneled into predictable channels and pathways etched ever more firmly and deeply across the 2010s. Second, that analyzing the musicality of viral objects, attending to the musical and sonic parameters of virally circulating phenomena, and thinking of viral participation as an extension of musical behavior provide a productive framework for understanding the affective, generic, and social aspects of twenty-first-century virality.
The five chapters of the dissertation present analyses of a series of viral objects, arranged roughly chronologically from the turn of the twenty-first century to the middle of the 2010s. The first chapter examines the loops of animated phenomena from The Dancing Baby to Hampster Dance and the Badgers animation; the second moves from loops to musicalization, considering remixing approaches to the so-called "Bus Uncle" and "Bed Intruder" videos. The third chapter also deals with viral remixing, centering on Rebecca Black's "Friday" video, while the fourth chapter analyzes "unmute this" video posts in the context of the mid-2010s social media platform assemblage. The final chapter presents the 2013 surprise release of Beyoncé's self-titled visual album as an apotheosis of the viral narratives that precede it, a claim that is briefly interrogated in the dissertation's epilogue.
Development and evaluation of computer-based coaching for relational evangelism
Practical, appropriate, empirically-validated guidelines for designing educational games
There has recently been a great deal of interest in the potential of computer games to function as innovative educational tools. However, there is very little evidence of games fulfilling that potential. Indeed, the process of merging the disparate goals of education and games design appears problematic, and there are currently no practical guidelines for how to do so in a coherent manner. In this paper, we describe the successful, empirically validated teaching methods developed by behavioural psychologists and point out how they are uniquely suited to take advantage of the benefits that games offer to education. We conclude by proposing some practical steps for designing educational games, based on the techniques of Applied Behaviour Analysis. It is intended that this paper can both focus educational games designers on the features of games that are genuinely useful for education, and also introduce a successful form of teaching that this audience may not yet be familiar with.
Medulla: A 2D sidescrolling platformer game that teaches basic brain structure and function
This article explores the design and instructional effectiveness of Medulla, an educational game meant to teach brain structure and function to undergraduate psychology students. Developed in the retro-style platformer genre, Medulla uses two-dimensional gameplay with pixel-based graphics to engage students in learning content related to the brain, information which is often prerequisite to more rigorous psychological study. A pretest-posttest design was used in an experiment assessing Medulla's ability to teach psychology content. Results indicated content knowledge was significantly higher on the posttest than the pretest, with a large effect size. Medulla appears to be an effective learning tool. These results have important implications for the design of educational psychology games and for educational game designers and artists exploring the possibility of using a two-dimensional retro-style structure.
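As a sketch of how a pretest-posttest result like Medulla's is typically analysed, the snippet below computes a paired-samples t statistic and Cohen's d for paired data. The scores are hypothetical, invented purely for illustration; the original study's data and exact statistics are not reported here.

```python
# Minimal sketch of a pretest-posttest analysis: paired t statistic
# and Cohen's d_z effect size, using only the standard library.
import math
import statistics


def paired_t_and_cohens_d(pre, post):
    """Return (t statistic, Cohen's d_z) for paired pre/post scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample SD of the differences
    t = mean_d / (sd_d / math.sqrt(n))      # paired-samples t statistic
    d = mean_d / sd_d                       # d_z; > 0.8 counts as "large"
    return t, d


# Hypothetical quiz scores (out of 20) for eight students
pre = [8, 10, 7, 11, 9, 6, 10, 8]
post = [14, 16, 12, 17, 15, 11, 16, 13]
t, d = paired_t_and_cohens_d(pre, post)
```

In practice a library routine such as `scipy.stats.ttest_rel` would also supply the p-value; the hand computation above just makes the "large effect size" claim concrete: when posttest scores are uniformly higher, d_z far exceeds the conventional 0.8 threshold.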
Building Embodied Conversational Agents: Observations on human nonverbal behaviour as a resource for the development of artificial characters
"Wow this is so cool!" This is what I most probably yelled, back in the 90s, when my first computer program on our MSX computer turned out to do exactly what I wanted it to do. The program contained the following instruction: COLOR 10. After hitting enter, it would change the screen color from light blue to dark yellow. A few years after that experience, Microsoft Windows was introduced. Windows came with an intuitive graphical user interface that was designed to allow all people, including those who would not consider themselves experienced computer users, to interact with the computer. This was a major step forward in human-computer interaction, as from that point onward no complex programming skills were required to perform such actions as adapting the screen color. Changing the background was just a matter of pointing the mouse at the desired color on a color palette. "Wow this is so cool!" This is what I shouted, again, 20 years later. This time my new smartphone successfully skipped to the next song on Spotify because I literally told my smartphone, with my voice, to do so. Being able to operate your smartphone with natural language through voice control can be extremely handy, for instance when listening to music while showering. Again, the option to handle a computer with voice instructions turned out to be a significant advance in human-computer interaction. From that point onward, computers could be instructed without the use of a screen, mouse or keyboard, simply by telling the machine what to do. In other words, I have personally witnessed how, within only a few decades, the way people interact with computers has changed drastically, starting as a rather technical and abstract enterprise and becoming something natural and intuitive that requires no advanced computer background.
Accordingly, while computers used to be machines that could only be operated by technically oriented individuals, they have gradually changed into devices that are part of many people's households, just as much as a television, a vacuum cleaner or a microwave oven. The introduction of voice control is a significant feature of the newer generation of interfaces in the sense that these have become more "anthropomorphic" and try to mimic the way people interact in daily life, where the voice is indeed a universally used instrument that humans exploit in their exchanges with others. The question then arises whether it would be possible to go one step further, where people, as in science-fiction movies, interact with avatars or humanoid robots, and users can have a proper conversation with a computer-simulated human that is indistinguishable from a real human. An interaction with a human-like representation of a computer that behaves, talks and reacts like a real person would imply that the computer is able not only to produce and understand messages transmitted auditorily through the voice, but also to rely on the perception and generation of different forms of body language, such as facial expressions, gestures or body posture. At the time of writing, developments of this next step in human-computer interaction are in full swing, but such interactions are still rather constrained when compared to the way humans have their exchanges with other humans. It is interesting to reflect on what such future human-machine interactions may look like. When we consider other products that have been created in history, it is sometimes striking to see that some of them have been inspired by things that can be observed in our environment, yet at the same time do not have to be exact copies of those phenomena. For instance, an airplane has wings just as birds do, yet the wings of an airplane do not make the typical movements a bird would produce to fly.
Moreover, an airplane has wheels, whereas a bird has legs. At the same time, an airplane has made it possible for humans to cover long distances in a fast and smooth manner in a way that was unthinkable before it was invented. The example of the airplane shows how new technologies can have "unnatural" properties, but can nonetheless be very beneficial and impactful for human beings. This dissertation centers on the practical question of how virtual humans can be programmed to act more human-like. The four studies presented in this dissertation all share the underlying question of how parts of human behavior can be captured such that computers can use them to become more human-like. Each study differs in method, perspective and specific questions, but all aim to gain insights and directions that would help further the development of human-like behavior in computers and investigate (the simulation of) human conversational behavior. The rest of this introductory chapter gives a general overview of virtual humans (also known as embodied conversational agents), their potential uses and the engineering challenges, followed by an overview of the four studies.
- …