182 research outputs found

    Videogames: the new GIS?

    Videogames and GIS have more in common than might be expected. Indeed, it is suggested that videogame technology may not only be considered as a kind of GIS, but that in several important respects its world-modelling capabilities outperform those of most GIS. This chapter examines some of the key differences between videogames and GIS, explores a number of perhaps surprising similarities between their technologies, and considers which ideas might profitably be borrowed from videogames to improve GIS functionality and usability.

    Body into Narrative: Behavioral and Neurophysiological Signatures of Action Text Processing After Ecological Motor Training

    Available online 8 November 2022. Embodied cognition research indicates that sensorimotor training can influence action concept processing. Yet, most studies employ isolated (pseudo)randomized stimuli and require repetitive single-effector responses, thus lacking ecological validity. Moreover, the neural signatures of these effects remain poorly understood. Here, we examined whether immersive bodily training can modulate behavioral and functional connectivity correlates of action-verb processing in naturalistic narratives. The study involved three phases. First, in the Pre-training phase, 32 healthy persons listened to an action text (rich in movement descriptions) and a non-action text (focused on its characters’ perceptual and mental processes), completed comprehension questionnaires, and underwent resting-state electroencephalogram (EEG) recordings. Second, in the four-day Training phase, half the participants completed an exergaming intervention (eliciting full-body movements for 60 min a day) while the remaining half played static videogames (requiring no bodily engagement other than button presses). Finally, in the Post-training phase, all participants repeated the Pre-training protocol with different action and non-action texts and a new EEG session. We found that exergaming selectively reduced action-verb outcomes and frontoposterior functional connectivity in the motor-sensitive 10–20 Hz range, both patterns being positively correlated. Conversely, static videogame playing yielded no specific effect on any linguistic category and did not modulate functional connectivity. Together, these findings suggest that action-verb processing and key neural correlates can be focally influenced by full-body motor training in a highly ecological setting. Our study illuminates the role of situated experience and sensorimotor circuits in action-concept processing, addressing calls for naturalistic insights on language embodiment.

    Sabrina Cervetto acknowledges the support of Centro Interdisciplinario en Cognición para la Enseñanza y el Aprendizaje and Centro de Investigación Básica en Psicología. Lucía Amoruso is supported with funding from the European Commission (H2020-MSCA-IF-GF-2020; Grant 101025814), Ikerbasque Foundation, and the Spanish Ministry of Economy and Competitiveness through the Plan Nacional RTI2018-096216-A-I00. Adolfo García is an Atlantic Fellow at the Global Brain Health Institute (GBHI) and is supported with funding from GBHI, Alzheimer’s Association, and Alzheimer’s Society (Alzheimer’s Association GBHI ALZ UK-22-865742); ANID, FONDECYT Regular (1210176); and Programa Interdisciplinario de Investigación Experimental en Comunicación y Cognición (PIIECC), Facultad de Humanidades, USACH.

    Immunitary Gaming: Mapping the First-Person Shooter

    Videogames have been theorised as an action-based medium. The original contribution to knowledge this thesis makes is to reconfigure this claim by considering popular multiplayer FPS games as reaction-based – particularly, immune reactions. I take up Roberto Esposito’s claim that the individual in contemporary biopolitics is defined negatively against the other, controlled and ultimately negated via their reactions to power’s capacity to incessantly generate threats. By inciting insecurity and self-protective gestures, FPS games like Activision’s Call of Duty franchise and EA’s Battlefield series vividly dramatise Esposito’s thought, producing an immunitary gaming. Immunitary Gaming locates the FPS within key moments of change as well as evolution in Western image systems, including the emergence of linear perspective, cartography and the early years of the cinema. The FPS appropriates these image systems, but also alters their politics. Giorgio Agamben has argued that the apparatuses of late modernity no longer subjectify like their forebears, but desubjectify the individual, producing an impotent neoliberal body politic. I trace a similar development here. My work also seeks to capture the player’s movements via autoethnographic writing that communicates the viscerality and intensity of the experience. The FPS is framed as capable of giving insight into both the present and the future of our technological and political milieu and ‘sensorium,’ in Walter Benjamin’s terms. In its valorisation of the individual and production of insecurity to incite action, this project argues that the FPS is a symbolic form of immunitary neoliberal governmentality.

    What you see is what you feel : on the simulation of touch in graphical user interfaces

    This study introduces a novel method of simulating touch with merely visual means. Interactive animations are used to create an optical illusion that evokes haptic percepts like stickiness, stiffness and mass, within a standard graphical user interface. The technique, called optically simulated haptic feedback, exploits the domination of the visual over the haptic modality and the general human tendency to integrate information from the various senses. The study began with an aspiration to increase the sensorial qualities of the graphical user interface. With the introduction of the graphical user interface – and in particular the desktop metaphor – computers have become accessible to almost anyone; all over the world, people from various cultures use the same icons, folders, buttons and trashcans. From a sensorial point of view, however, this computing paradigm is still extremely limited. Touch can play a powerful role in communication. It can offer an immediacy and intimacy unparalleled by words or images. Although few doubt this intrinsic value of touch perception in everyday life, examples in modern technology where human-machine communication utilizes the tactile and kinesthetic senses as additional channels of information flow are scarce. Hence, it has often been suggested that improvements in the sensorial qualities of computers could lead to more natural interfaces. Various researchers have been creating scenarios and technologies intended to enrich the sensorial qualities of our digital environment. Some have developed mechanical force feedback devices that enable people to experience haptics while interacting with a digital display. Others have suggested that the computer should ‘disappear’ into the environment and proposed tangible objects as a means to connect the digital and the physical environment. While the scenarios of force feedback, tangible interaction and the disappearing computer are maturing, millions of people are still working with a desktop computer interface every day. In spite of its obvious drawbacks, the desktop computing model has penetrated deeply into our society and cannot be expected to disappear overnight. Radically different computing paradigms will require the development of radically different hardware. This takes time, and it is as yet unclear whether, and when, other computing paradigms will replace the current desktop computing setup. It is for that reason that we pursued another approach towards physical computing. Inspired by Renaissance painters, who centuries ago invented illusionistic techniques like perspective and trompe l’oeil to increase the presence of their paintings, we aim to improve the physicality of the graphical user interface without resorting to special hardware. Optically simulated haptic feedback, described in this thesis, has a lot in common with mechanical force-feedback systems, except that in mechanical force-feedback systems the location of the cursor is manipulated as a result of the force sent to the haptic device (force-feedback mouse, trackball, etc.), whereas in our system the cursor location is manipulated directly, resulting in a purely visual force feedback. By applying tiny displacements to the cursor’s movement, tactile sensations like stickiness, stiffness, or mass can be simulated. In chapter 2 we suggest that the active cursor technique can be applied to create richer interactions without the need for special hardware. The cursor channel is transformed from an input-only channel to an input/output channel. 
The active cursor displacements can be used to create various (dynamic) slopes as well as textures and material properties, which can provide the user with feedback while navigating the on-screen environment. In chapter 3 the perceptual illusion of touch, resulting from the domination of the visual over the haptic modality, is described in the larger context of prior research and tested experimentally. Using both the active cursor technique and a mechanical force feedback device, we generated bump and hole structures. In a controlled experiment the perception of the slopes was measured, comparing the optical with the mechanical simulation. The results show that people can recognize optically simulated bump and hole structures, and that active cursor displacements influence the haptic perception of bumps and holes. Depending on the simulated strength of the force, optically simulated haptic feedback can take precedence over mechanically simulated haptic feedback, but also the other way around. When optically simulated and mechanically simulated haptic feedback counteract each other, however, the weight attributed to each source of haptic information differs between users. It is concluded that active cursor displacements can be used to optically simulate the operation of mechanical force feedback devices. An obvious application of optically simulated haptic feedback in graphical user interfaces is to assist the user in pointing at icons and objects on the screen. Given the pervasiveness of pointing in graphical interfaces, every small improvement in a target-acquisition task represents a substantial improvement in usability. Can active cursor displacements be applied to help users reach their goal? In chapter 4 we test the usability of optically simulated haptic feedback in a pointing task, again in comparison with the force feedback generated by a mechanical device. In a controlled Fitts’-law type experiment, subjects were asked to point and click at targets of different sizes and distances. The results show that rendering hole-type structures underneath the targets improves the effectiveness, efficiency and satisfaction of the target-acquisition task. Optically simulated haptic feedback results in lower error rates, more satisfaction, and a higher index of performance, which can be attributed to the shorter movement times realized for the smaller targets. For larger targets, optically simulated haptic feedback resulted in movement times comparable to those of mechanically simulated haptic feedback. Since current graphical interfaces are not designed with tactility in mind, the development of novel interaction styles should also be an important research path. Before optically simulated haptic feedback can be fully brought into play in more complex interaction styles, designers and researchers need to experiment further with the technique. In chapter 5 we describe a software prototyping toolkit, called PowerCursor, which enables designers to create interaction styles using optically simulated haptic feedback without having to do elaborate programming. The software engine consists of a set of ready-made force field objects – holes, hills, ramps, rough and slick objects, walls, whirls, and more – that can be added to any Flash project, as well as force behaviours that can be added to custom-made shapes and objects. These basic building blocks can be combined to create more complex and dynamic force objects. 
This setup should allow the users of the toolkit to creatively design their own interaction styles with optically simulated haptic feedback. The toolkit is implemented in Adobe Flash and can be downloaded at www.powercursor.com. Furthermore, in chapter 5 we present a preliminary framework of the expected applicability of optically simulated haptic feedback. Illustrated with examples created so far with the beta version of the PowerCursor toolkit, we discuss some ideas for novel interaction styles. Besides being useful in assisting the user while navigating, optically simulated haptic feedback might be applied to create so-called mixed-initiative interfaces – one can, for instance, think of an installation wizard that guides the cursor towards the recommended next step. Furthermore, since optically simulated haptic feedback can be used to communicate material properties of textures or 3D objects, it can be applied to create aesthetically pleasing interactions, which are becoming more relevant as computers migrate into domains other than the office environment. Finally, we discuss the opportunities for applications outside the desktop computer model. We discuss how, in principle, optically simulated haptic feedback can play a role in any graphical interface where the input and output channels are decoupled. In chapter 6 we draw conclusions and discuss future directions. We conclude that optically simulated haptic feedback can increase the physicality and quality of our current graphical user interfaces without resorting to specialized hardware. Users are able to recognize haptic structures simulated by applying active cursor displacements to the user’s mouse movements. Our technique of simulating haptic feedback optically opens up an additional communication channel with the user that can enhance the usability of the graphical interface. However, the active cursor technique cannot be expected to replace mechanical haptic feedback altogether, since it can be applied only in combination with a visual display and thus will not work for visually impaired people. Rather, we expect that the ability to employ tactile interaction styles in a standard graphical user interface could catalyze the development of novel physical interaction styles and, in the long term, might encourage the acceptance of haptic devices. With this research we hope to have contributed to a more sensorial and richer graphical user interface. Moreover, we have aimed to increase our awareness and understanding of media technology and simulations in general. Therefore, our scientific research results are deliberately presented within a socio-cultural context that reflects upon the dominance of the visual modality in our society and the ever-increasing role of media and simulations in people’s everyday lives.
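
    As an illustration of the active cursor idea summarized above, the following minimal sketch shows one way tiny displacements could be derived from a simulated ‘hole’ beneath a target; the Gaussian height field, the gain value and the function names are illustrative assumptions made for this summary, not the thesis’s or PowerCursor’s actual implementation.

        import math

        def hole_height(x, y, cx, cy, depth=1.0, radius=60.0):
            # Gaussian "hole" centred on the target at (cx, cy): negative inside, ~0 far away.
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            return -depth * math.exp(-d2 / (2.0 * radius ** 2))

        def height_gradient(x, y, cx, cy, eps=1.0):
            # Finite-difference slope of the simulated height field at the cursor position.
            gx = (hole_height(x + eps, y, cx, cy) - hole_height(x - eps, y, cx, cy)) / (2.0 * eps)
            gy = (hole_height(x, y + eps, cx, cy) - hole_height(x, y - eps, cx, cy)) / (2.0 * eps)
            return gx, gy

        def displaced_cursor(mouse_x, mouse_y, cx, cy, gain=400.0):
            # Nudge the drawn cursor "downhill" so it appears to be pulled into the hole;
            # the physical mouse input itself is left untouched.
            gx, gy = height_gradient(mouse_x, mouse_y, cx, cy)
            return mouse_x - gain * gx, mouse_y - gain * gy

    In such a sketch, the application would read the raw mouse position every frame, apply the displacement, and draw the cursor at the displaced location, so that the pull towards the target is conveyed purely visually.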

    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
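
    As a rough illustration of what combining motions on different body parts and concatenating clips can look like, here is a small sketch; the dictionary-based pose representation, the joint-mask idea and the linear interpolation are simplifying assumptions (a production system would typically blend joint rotations with quaternion slerp), not the authors’ actual mechanisms.

        def lerp(a, b, t):
            # Linear interpolation between two joint angles.
            return a + t * (b - a)

        def combine_on_body_parts(pose_walk, pose_reach, upper_body_joints):
            # Per-body-part combination: joints listed in upper_body_joints are taken
            # from the reaching motion, all other joints come from the walk cycle.
            return {j: (pose_reach[j] if j in upper_body_joints else pose_walk[j])
                    for j in pose_walk}

        def concatenate(clip_a, clip_b, blend_frames=10):
            # Concatenate two clips, cross-fading the last frames of clip_a into the
            # first frames of clip_b so the transition does not produce a visible jump.
            frames = list(clip_a[:-blend_frames])
            for i in range(blend_frames):
                t = (i + 1) / blend_frames
                frame_a, frame_b = clip_a[-blend_frames + i], clip_b[i]
                frames.append({j: lerp(frame_a[j], frame_b[j], t) for j in frame_a})
            frames.extend(clip_b[blend_frames:])
            return frames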

    Skyler and Bliss

    Hong Kong remains the backdrop to the science fiction movies of my youth. The city reminds me of my former training in the financial sector. It is a city in which I could have succeeded in finance, but as far as art goes it is a young city, and I am a young artist. A frustration emerges; much like the mould, the artist also had to develop new skills by killing off his former desires and manipulating technology. My new series entitled HONG KONG surface project shows a new direction in my artistic research, in which my technique becomes ever simpler, reducing the traces of pixelation until objects appear almost as they were found and photographed. Skyler and Bliss presents tectonic plates based on satellite images of the Arctic. Working in a hot and humid Hong Kong where mushrooms grow ferociously, a city artificially refrigerated by climate control, this series provides a conceptual image of an imaginary typographic map for survival. (Laurent Segretier)

    Playing at a Distance

    An essential exploration of video game aesthetics that decenters the human player and challenges what it means to play. Do we play video games or do video games play us? Is nonhuman play a mere paradox or the future of gaming? And what do video games have to do with quantum theory? In Playing at a Distance, Sonia Fizek engages with these and many more daunting questions, forging new ways to think and talk about games and play that decenter the human player and explore a variety of play formats and practices that require surprisingly little human action. Idling in clicker games, wandering in walking simulators, automating gameplay with bots, or simply watching games rather than playing them – Fizek shows how these seemingly marginal cases are central to understanding how we play in the digital age. Introducing the concept of distance, Fizek reorients our view of computer-mediated play. To “play at a distance,” she says, is to delegate the immediate action to the machine and to become a participant in an algorithmic spectacle. Distance as a media aesthetic framework enables the reader to come to terms with the ambiguity and aesthetic diversity of play. Drawing on concepts from philosophy, media theory, and posthumanism, as well as cultural and film studies, Playing at a Distance invites a wider understanding of what digital games and gaming are in all their diverse experiences and forms. In challenging the common perception of video games as inherently interactive, the book contributes to our understanding of the computer’s influence on practices of play – and prods us to think more broadly about what it means to play.

    Beyond Narrative: Exploring Narrative Liminality and Its Cultural Work

    This book calls for an investigation of the ‘borderlands of narrativity’ – the complex and culturally productive area where the symbolic form of narrative meets other symbolic logics, such as data(base), play, spectacle, or ritual. It opens up a conversation about the ‘beyond’ of narrative, about the myriad constellations in which narrativity interlaces with, rubs against, or morphs into the principles of other forms. To conceptualize these borderlands, the book introduces the notion of ‘narrative liminality,’ which the 16 articles utilize to engage literature, popular culture, digital technology, historical artifacts, and other kinds of texts from a time span of close to 200 years.

    Novel haptic interface for viewing 3D images

    In recent years there has been an explosion of devices and systems capable of displaying stereoscopic 3D images. While these systems provide an improved experience over traditional two-dimensional displays, they often fall short on user immersion, usually improving depth perception only by relying on the stereopsis phenomenon. We propose a system that improves user experience and immersion through a position-dependent rendering of the scene and the ability to touch the scene. The system uses depth maps to represent the geometry of the scene. Depth maps can be easily obtained during the rendering process or can be derived from binocular stereo images by calculating their horizontal disparity. This geometry is then used as the input for rendering on a 3D display, for the haptic rendering calculations, and for the position-dependent rendering of the scene. The author presents two main contributions. First, since haptic devices have a finite workspace and limited resolution, we use what we call detail mapping algorithms. These algorithms compress the geometry information contained in a depth map, by reducing the contrast among pixels, in such a way that it can be rendered on a display medium of limited resolution without losing any detail. Second, the unique combination of a depth camera as a motion-capture system, a 3D display and a haptic device to enhance the user experience. While developing this system we paid special attention to the cost and availability of the hardware. We decided to use only off-the-shelf, mass-consumer-oriented hardware so our experiments can be easily implemented and replicated. As an additional benefit, the total cost of the hardware did not exceed the one-thousand-dollar mark, making it affordable for many individuals and institutions.
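
    To make the two contributions more concrete, the following minimal sketch shows (a) the standard stereo relation used to recover depth from horizontal disparity and (b) a toy ‘detail mapping’ step that compresses the depth range so the relief fits a haptic device’s limited workspace; the logarithmic compression and the parameter names are illustrative assumptions, not the thesis’s actual algorithms.

        import numpy as np

        def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
            # Standard pinhole stereo relation: depth = focal_length * baseline / disparity.
            return focal_length_px * baseline_m / np.maximum(disparity_px, 1e-6)

        def compress_detail(depth_map, device_range_mm=25.0):
            # Toy detail mapping: reduce the contrast among pixels so the whole scene
            # fits the device's displacement range while local relief stays perceivable.
            relief = depth_map - depth_map.min()
            relief = np.log1p(relief)                        # squash large depth differences
            return relief / relief.max() * device_range_mm   # rescale to the device's range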