367,180 research outputs found

    Creating new stories for praxis: navigations, narrations, neonarratives

    This paper considers differing understandings about the role and praxis of studio-based research in the visual arts. This is my attempt to unpack this nexus and place it in a context of credibility for our field. Jill Kinnear (2000) makes the point that visual research deals with and intensifies elements of research and language that have always been part of the practice of an artist. I present a way to conceptualise and explain what we can do as researchers in the visual arts. I am recontextualising notions of research, looking at the resemblances, the self-resemblances and the differences between traditional and visual research methods as a logic of necessity. I am investigating how we can decode and recode what we do in the language of appropriation and bricolage. In mapping the processes and territories, I am interested in the use of autobiography as a way to incorporate a deep sense of the intricate relationships of the meaning and actions of artistic practice and its embeddedness in cultural influences, personal experience and aspirations (Hawke 1996:35). This is a study that explores possible parameters for visual research, questioning in what sense it is the best way to understand our relationship with traditional research fields.

    An international study of young people's drawings of what is inside themselves

    What do young people know of what is inside them, and how does this knowledge depend on their culture? In this study, a cross-sectional approach was used involving a total of 586 pupils from 11 different countries. Young people, aged either seven years or 15 years, were given a blank piece of A4-sized paper and asked to draw what they thought was inside themselves. The resultant drawings were analysed using a seven-point scale where the criterion was anatomical accuracy. However, we also tentatively suggest other ways in which such drawings may be analysed, drawing on approaches used in the disciplines of visual design and visual culture.
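
    The scoring step lends itself to a small worked example. The sketch below uses invented ratings and generic country labels (nothing here is taken from the study's data); it only shows how drawings rated on a seven-point anatomical-accuracy scale might be summarised by age group:

```python
# Illustrative only: invented ratings on a seven-point anatomical-accuracy
# scale (1 = least accurate, 7 = most accurate), summarised by country and age.
from statistics import mean

ratings = {
    ("Country A", 7): [2, 3, 3],
    ("Country A", 15): [5, 6, 5],
    ("Country B", 7): [2, 2, 4],
    ("Country B", 15): [4, 5, 6],
}

for (country, age), scores in sorted(ratings.items()):
    print(f"{country} (age {age}): mean accuracy {mean(scores):.1f} / 7")
```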

    What-if analysis: A visual analytics approach to Information Retrieval evaluation

    This paper focuses on the innovative visual analytics approach realized by the Visual Analytics Tool for Experimental Evaluation (VATE2) system, which eases the experimental evaluation process and makes it more effective by introducing what-if analysis. What-if analysis aims to estimate the possible effects of a modification to an Information Retrieval (IR) system, in order to select the most promising fixes before implementing them, thus saving a considerable amount of effort. VATE2 builds on an analytical framework which models the behavior of the systems in order to make estimations, and integrates this framework into a visual component which, via appropriate interaction and animations, receives input from and provides feedback to the user. We conducted an experimental evaluation to assess the numerical performance of the analytical model and a validation of the visual analytics prototype with domain experts. Both the numerical evaluation and the user validation have shown that VATE2 is effective, innovative, and useful.
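
    The core idea of what-if analysis, estimating the effect of a fix before implementing it, can be sketched very simply. The following is not VATE2's model or API; it only illustrates the general pattern of simulating a candidate change to a ranked run and comparing an evaluation measure (here, Average Precision) before and after:

```python
# Illustrative sketch (not VATE2's actual framework): estimate how a
# hypothetical fix to a ranked list would change Average Precision (AP)
# before implementing the fix in the IR system itself.

def average_precision(ranking, relevant):
    """AP of a ranked list of document ids against a set of relevant ids."""
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def what_if_move(ranking, doc, new_rank):
    """Simulate promoting `doc` to position `new_rank` (1-based)."""
    candidate = [d for d in ranking if d != doc]
    candidate.insert(new_rank - 1, doc)
    return candidate

ranking = ["d3", "d7", "d1", "d9", "d2"]
relevant = {"d1", "d2"}
baseline = average_precision(ranking, relevant)
estimated = average_precision(what_if_move(ranking, "d1", 1), relevant)
print(f"AP before: {baseline:.3f}, estimated AP after fix: {estimated:.3f}")
```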

    How visual cues to speech rate influence speech perception

    Spoken words are highly variable and therefore listeners interpret speech sounds relative to the surrounding acoustic context, such as the speech rate of a preceding sentence. For instance, a vowel midway between short /ɑ/ and long /a:/ in Dutch is perceived as short /ɑ/ in the context of preceding slow speech, but as long /a:/ if preceded by a fast context. Despite the well-established influence of visual articulatory cues on speech comprehension, it remains unclear whether visual cues to speech rate also influence subsequent spoken word recognition. In two ‘Go Fish’-like experiments, participants were presented with audio-only (auditory speech + fixation cross), visual-only (mute videos of a talking head), and audiovisual (speech + videos) context sentences, followed by ambiguous target words containing vowels midway between short /ɑ/ and long /a:/. In Experiment 1, target words were always presented auditorily, without visual articulatory cues. Although the audio-only and audiovisual contexts induced a rate effect (i.e., more long /a:/ responses after fast contexts), the visual-only condition did not. When, in Experiment 2, target words were presented audiovisually, rate effects were observed in all three conditions, including visual-only. This suggests that visual cues to speech rate in a context sentence influence the perception of following visual target cues (e.g., duration of lip aperture), which at an audiovisual integration stage bias participants’ target categorization responses. These findings contribute to a better understanding of how what we see influences what we hear.
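
    The dependent measure behind the reported rate effect is the proportion of long /a:/ categorization responses per context condition. A minimal analysis sketch, using invented trial data rather than anything from the study, looks like this:

```python
# Illustrative analysis sketch (invented data): summarize the proportion of
# long /a:/ responses per context modality and context speech rate.
from collections import defaultdict

# Each trial: (context modality, context rate, response); True = long /a:/.
trials = [
    ("audio-only", "fast", True), ("audio-only", "slow", False),
    ("audiovisual", "fast", True), ("audiovisual", "slow", False),
    ("visual-only", "fast", True), ("visual-only", "slow", True),
]

counts = defaultdict(lambda: [0, 0])  # (modality, rate) -> [long responses, total]
for modality, rate, long_response in trials:
    counts[(modality, rate)][0] += int(long_response)
    counts[(modality, rate)][1] += 1

# A rate effect shows up as more long /a:/ responses after fast than slow contexts.
for (modality, rate), (long_n, total) in sorted(counts.items()):
    print(f"{modality:12s} {rate:4s}: {long_n / total:.2f} proportion long /a:/")
```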

    Building a Multimodal, Trust-Based E-Voting System

    This paper addresses the issues of voter identification and authentication, voter participation, and trust in the electoral system. A multimodal/hybrid identification and authentication scheme is proposed which captures what a voter knows (a PIN), what the voter has (a smartcard), and what the voter is (biometrics). Broad participation of voters both in and outside the country of origin was enhanced through an integrated channel (kiosk and internet voting). A multi-trust voting system is built based on a service-oriented architecture. Microsoft Visual C#.Net, ASP.Net, and Microsoft SQL Server 2005 Express Edition components of Microsoft Visual Studio 2008 were used to realize the Windows and Web-based solutions for the electronic voting system.
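
    The three-factor idea (knowledge, possession, inherence) can be made concrete with a short sketch. This is not the paper's C#/ASP.NET implementation; the helper names, hashing choice, and biometric threshold below are all hypothetical and shown only to illustrate the scheme:

```python
# Minimal three-factor authentication sketch (illustrative only; the paper's
# system is built in C#/ASP.NET with SQL Server). All details are hypothetical.
import hashlib

def verify_pin(entered_pin: str, stored_hash: str) -> bool:
    """What the voter knows: compare a hashed PIN (salting omitted for brevity)."""
    return hashlib.sha256(entered_pin.encode()).hexdigest() == stored_hash

def verify_smartcard(card_id: str, registered_cards: set) -> bool:
    """What the voter has: the smartcard identifier must be registered."""
    return card_id in registered_cards

def verify_biometric(similarity_score: float, threshold: float = 0.85) -> bool:
    """What the voter is: a matcher score (e.g., fingerprint) above a threshold."""
    return similarity_score >= threshold

def authenticate_voter(pin, pin_hash, card_id, cards, bio_score) -> bool:
    # All three factors must pass before a ballot is issued.
    return (verify_pin(pin, pin_hash)
            and verify_smartcard(card_id, cards)
            and verify_biometric(bio_score))
```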

    Mashing up Visual Languages and Web Mash-ups

    Research on web mashups and visual languages shares an interest in human-centered computing. Both research communities are concerned with supporting programming by everyday, technically inexpert users. Visual programming environments have been a focus for both communities, and we believe that there is much to be gained by further discussion between these research communities. In this paper we explore some connections between web mashups and visual languages, and try to identify what each might be able to learn from the other. Our goal is to establish a framework for a dialog between the communities, and to promote the exchange of ideas and our respective understandings of human-centered computing.

    Comparing the E-Z Reader Model to Other Models of Eye Movement Control in Reading

    The E-Z Reader model provides a theoretical framework for understanding how word identification, visual processing, attention, and oculomotor control jointly determine when and where the eyes move during reading. Thus, in contrast to other reading models reviewed in this article, E-Z Reader can simultaneously account for many of the known effects of linguistic, visual, and oculomotor factors on eye movement control during reading. Furthermore, the core principles of the model have been generalized to other task domains (e.g., equation solving, visual search), and are broadly consistent with what is known about the architecture of the neural systems that support reading.
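
    As a rough intuition for this class of model (not E-Z Reader's actual equations or parameters), the sketch below assumes that a word's identification time shrinks with its log frequency, the kind of relationship such models use to predict shorter fixations on easier words:

```python
# Toy illustration only: NOT E-Z Reader's published equations or parameters.
# It assumes identification time falls with log word frequency, so easier
# (more frequent) words would receive shorter fixations during reading.
import math

def familiarity_check_ms(freq_per_million: float, base=250.0, slope=20.0) -> float:
    """Hypothetical: estimated word-identification time in milliseconds."""
    return max(100.0, base - slope * math.log(freq_per_million + 1.0))

sentence = [("the", 60000.0), ("reader", 40.0), ("perused", 0.5), ("it", 25000.0)]
for word, freq in sentence:
    print(f"{word:8s} estimated identification time: {familiarity_check_ms(freq):5.0f} ms")
```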

    Perceiving pictures

    I aim to give a new account of picture perception: of the way our visual system functions when we see something in a picture. My argument relies on the functional distinction between the ventral and dorsal visual subsystems. I propose that it is constitutive of picture perception that our ventral subsystem attributes properties to the depicted scene, whereas our dorsal subsystem attributes properties to the picture surface. This duality elucidates Richard Wollheim’s concept of the “twofoldness” of our experience of pictures: the “visual awareness not only of what is represented but also of the surface qualities of the representation.” I argue for the following four claims: (a) the depicted scene is represented by ventral perception, (b) the depicted scene is not represented by dorsal perception, (c) the picture surface is represented by dorsal perception, and (d) the picture surface is not necessarily represented by ventral perception.