
    Touching is Good: An Eidetic Phenomenology of Interface, Interobjectivity, and Interaction in Nintendo's Animal Crossing: Wild World

    Situating video games and the meaningful practice of playing video games for future study by the discipline of communication, this eidetic phenomenology centers the focus of such inquiry at the site of the body. As video game studies have heretofore largely ignored or presupposed a bifurcation between player and video game, a phenomenology is likewise crucial to investigating the lived experience of video gaming as an embodied activity by theoretically eschewing such subject/object distinctions and methodologically generating genuinely new, heuristic spaces for thinking about this phenomenon. In particular, the existential phenomenology of Maurice Merleau-Ponty, which emphasizes the body as necessarily enworlded, offers an insightful conceptualization of the video game player’s intentional and meaningful endeavor. Merleau-Ponty’s later work specifically details the intricacies of a body’s sense of touch, outlining three specific modalities: “a touching of the sleek and the rough,” a “touching of the things,” and “a veritable touching of the touch.” The notion of touch is also key in portraying the already-imbricated nature of player and video game. Using these modalities as frames for organizing experience, I enact performative playings of the video game Animal Crossing: Wild World by Nintendo. This study proceeds methodologically by way of the three-step phenomenological method outlined by Merleau-Ponty – one that necessarily entails a description, a reduction, and an interpretation. Performative playings generate descriptive data later thematized as capta in order to synthetically produce acta, or an interpretive orientation toward the data/capta relationship. Each of the three phenomenological reflections respectively examines one of these modalities. The first reflection (upon “a touching of the sleek and the rough”) explores the ways in which the sensual touch of the player both intersects with a new material technology that facilitates game play (the Nintendo DS video game console) by way of a touch-sensitive interface, and “crisscrosses” with a player’s embodied sense of sight. Framed by the human-technology-world relations outlined by technoscience philosopher Don Ihde, descriptions of these intersections and crisscrosses yield interpretations of a corporeal schema with specific embodied preferences for action in various gamic spaces: a being-in-the-(game)world. The second reflection (upon “a touching of the things”) interrogates my interobjective relations with other enworlded body-objects. While I have a body that interacts with this technology, I also am a body – a material object grounded in the self-same flesh of the world. By way of Vivian Sobchack’s philosophy of interobjectivity, I recognize that I am a passionate video game player, and literally re-cognize my primordial, immanent and embodied abilities as both subjective object and objective subject to interpret my experiences being “touched” by the objects of the game world (whose inhabitance I detailed in the first reflection). The third reflection (upon “a veritable touching of the touch”) uses the first two as an experiential ground to explore the ways in which I and other players “keep in touch” by playing video games. My descriptions of these video gaming experiences indicate the presence of Roman Jakobson’s six elements and correlative functions integral to an understanding of human communication, specifically situating video games for study by the discipline of communication.
Playing video games is an interactive practice that synthesizes the analog (both/and) logic of human player-subjects and the digital (either/or) logic of game-objects as they emerge from an undifferentiated, chiasmic interrelationship. Operating from a digital-analog logic allows players to convert contexts of choice into choices of context.

    Machine learning for automatic analysis of affective behaviour

    The automated analysis of affect has been gaining rapidly increasing attention from researchers over the past two decades, as it constitutes a fundamental step towards achieving next-generation computing technologies and integrating them into everyday life (e.g. via affect-aware, user-adaptive interfaces, medical imaging, health assessment, ambient intelligence etc.). The work presented in this thesis focuses on several fundamental problems that arise on the way towards reliable, accurate and robust affect sensing systems. In more detail, the motivation behind this work lies in recent developments in the field, namely (i) the creation of large, audiovisual databases for affect analysis in the so-called "Big Data" era, along with (ii) the need to deploy systems under demanding, real-world conditions. These developments created the need to analyse emotion expressions continuously in time, rather than merely processing static images, exposing researchers to the wide range of temporal dynamics inherent in human behaviour. The latter entails another deviation from the traditional line of research in the field: instead of focusing on predicting posed, discrete basic emotions (happiness, surprise etc.), it became necessary to focus on spontaneous, naturalistic expressions captured under settings more proximal to real-world conditions, utilising more expressive emotion descriptions than a set of discrete labels. To this end, the main motivation of this thesis is to deal with challenges arising from the adoption of continuous dimensional emotion descriptions under naturalistic scenarios, considered to capture a much wider spectrum of expressive variability than basic emotions, and, most importantly, to model emotional states which are commonly expressed by humans in their everyday life. In the first part of this thesis, we attempt to demystify the largely unexplored problem of predicting continuous emotional dimensions. This work is amongst the first to explore the problem of predicting emotion dimensions via multi-modal fusion, utilising facial expressions, auditory cues and shoulder gestures. A major contribution of the work presented in this thesis lies in proposing the utilisation of various relationships exhibited by emotion dimensions in order to improve the prediction accuracy of machine learning methods - an idea that has since been taken up by other researchers in the field. To evaluate this experimentally, we extend methods such as Long Short-Term Memory networks (LSTM), the Relevance Vector Machine (RVM) and Canonical Correlation Analysis (CCA) to exploit output relationships in learning. As shown, this increases the accuracy of machine learning models applied to this task. The annotation of continuous dimensional emotions is a tedious task, highly prone to the influence of various types of noise. Performed in real time by several annotators (usually experts), the annotation process can be heavily biased by factors such as subjective interpretations of the emotional states observed, the inherent ambiguity of labels related to human behaviour, the varying reaction lags exhibited by each annotator as well as other factors such as input device noise and annotation errors. In effect, the annotations manifest a strong spatio-temporal annotator-specific bias. Failing to properly deal with annotation bias and noise leads to an inaccurate ground truth, and therefore to ill-generalisable machine learning models.
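
    As an illustrative aside, the minimal sketch below (PyTorch, with synthetic placeholder data and hypothetical feature dimensions, not the thesis's actual architecture or datasets) shows the two ideas just described: feature-level fusion of audio and visual descriptors, and joint sequence prediction of valence and arousal so that a shared recurrent representation can exploit correlations between the output dimensions.

        # Minimal sketch: early fusion of audio/visual features and joint prediction
        # of valence and arousal with an LSTM. Shapes and data are placeholders.
        import torch
        import torch.nn as nn

        class FusedAffectRegressor(nn.Module):
            def __init__(self, audio_dim=40, video_dim=60, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(audio_dim + video_dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 2)       # joint [valence, arousal] output

            def forward(self, audio, video):
                x = torch.cat([audio, video], dim=-1)  # early (feature-level) fusion
                h, _ = self.lstm(x)
                return self.head(h)                    # per-frame continuous predictions

        audio = torch.randn(8, 100, 40)                # 8 sequences, 100 frames, 40-D audio
        video = torch.randn(8, 100, 60)                # 60-D visual descriptors
        labels = torch.randn(8, 100, 2)                # continuous valence/arousal traces

        model = FusedAffectRegressor()
        optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
        for step in range(5):                          # tiny training loop for illustration
            loss = nn.functional.mse_loss(model(audio, video), labels)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()

    Because both dimensions are predicted from the same hidden state and trained with a single loss, correlations between valence and arousal are shared through the recurrent representation; the thesis's extensions of LSTM, RVM and CCA pursue this output-relationship idea in a more principled way.
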
Properly fusing multiple annotations, and inferring a clean, corrected version of the "ground truth", is therefore one of the most significant challenges in the area. A highly important contribution of this thesis lies in the introduction of Dynamic Probabilistic Canonical Correlation Analysis (DPCCA), a method aimed at fusing noisy continuous annotations. By adopting a private-shared space model, we isolate the individual characteristics that are annotator-specific and not shared, while, most importantly, we model the common, underlying annotation which is shared by annotators (i.e., the derived ground truth). By further learning temporal dynamics and incorporating a time-warping process, we are able to derive a clean version of the ground truth given multiple annotations, eliminating temporal discrepancies and other nuisances. The integration of the temporal alignment process within the proposed private-shared space model makes DPCCA suitable for the problem of temporally aligning human behaviour; that is, given temporally unsynchronised sequences (e.g., videos of two persons smiling), the goal is to generate temporally synchronised sequences (e.g., the smile apex should co-occur in the videos). Temporal alignment is an important problem for many applications where multiple datasets need to be aligned in time. Furthermore, it is particularly suitable for the analysis of facial expressions, where the activation of facial muscles (Action Units) typically follows a set of predefined temporal phases. A highly challenging scenario arises when the observations are perturbed by gross, non-Gaussian noise (e.g., occlusions), as is often the case when analysing data acquired under real-world conditions. To account for non-Gaussian noise, a robust variant of Canonical Correlation Analysis (RCCA) for robust fusion and temporal alignment is proposed. The model captures the shared, low-rank subspace of the observations, isolating the gross noise in a sparse noise term. RCCA is amongst the first robust variants of CCA proposed in the literature and, as we show in related experiments, it outperforms other state-of-the-art methods on related tasks such as the fusion of multiple modalities under gross noise. Beyond private-shared space models, Component Analysis (CA) is an integral component of most computer vision systems, particularly for reducing the usually high-dimensional input spaces in a manner meaningful for the task at hand (e.g., prediction, clustering). A final, significant contribution of this thesis lies in proposing the first unifying framework for probabilistic component analysis. The proposed framework covers most well-known CA methods, such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Locality Preserving Projections (LPP) and Slow Feature Analysis (SFA), providing further theoretical insights into the workings of CA. Moreover, the proposed framework is highly flexible, enabling novel CA methods to be generated by simply manipulating the connectivity of latent variables (i.e. the latent neighbourhood). As shown experimentally, methods derived via the proposed framework outperform other equivalents in several problems related to affect sensing and facial expression analysis, while providing advantages such as reduced complexity and explicit variance modelling.
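
    For illustration only, the sketch below shows a far simpler baseline than DPCCA (plain NumPy, hypothetical function names, synthetic traces): each annotator's continuous trace is aligned to the panel mean by estimating a constant reaction lag through correlation, and the aligned traces are then averaged into a crude shared ground truth. DPCCA replaces this with a probabilistic private-shared model, learned temporal dynamics and time warping, but the baseline conveys the fusion-and-alignment problem being solved.

        # Simplified baseline (not DPCCA): estimate a constant reaction lag per
        # annotator against the panel mean, compensate it, then average the
        # aligned traces. All names and data are illustrative.
        import numpy as np

        def estimate_lag(trace, reference, max_lag=100):
            """Shift (in frames) that best aligns `trace` to `reference`."""
            def corr_at(lag):
                if lag >= 0:
                    a, b = trace[lag:], reference[:len(reference) - lag]
                else:
                    a, b = trace[:lag], reference[-lag:]
                return np.corrcoef(a, b)[0, 1]
            return max(range(-max_lag, max_lag + 1), key=corr_at)

        def fuse_annotations(annotations, max_lag=100):
            """annotations: list of equal-length 1-D arrays, one per annotator."""
            reference = np.mean(annotations, axis=0)
            aligned = [np.roll(a, -estimate_lag(a, reference, max_lag)) for a in annotations]
            return np.mean(aligned, axis=0)

        # Toy example: three noisy, differently delayed views of one latent signal.
        t = np.linspace(0, 10, 1000)
        latent = np.sin(t)
        annots = [np.roll(latent, d) + 0.1 * np.random.randn(len(t)) for d in (5, 20, 35)]
        fused = fuse_annotations(annots)   # close to `latent` up to edge effects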

    Text world theory and the emotional experience of literary discourse.

    This thesis investigates the emotional experience of literary discourse from a cognitive-poetic perspective. In doing so, it combines detailed Text World Theory analysis with an examination of naturalistic reader response data in the form of book group discussions and internet postings. Three novels by contemporary author Kazuo Ishiguro form the analytical focus of this investigation: The Remains of the Day (1989), The Unconsoled (1995) and Never Let Me Go (2005), chosen due to their thematic engagement with emotion and their ability to evoke emotion in readers. The central aims of this thesis are to develop cognitive-poetic understanding of the emotional experience of literature, and to advance cognitive-poetic and literary-critical understanding of the works of Ishiguro. As a result of the analytical investigations of the three novels, this thesis proposes several enhancements to the discourse-world level of the Text World Theory framework. In particular, this thesis argues for a more detailed and nuanced account of deictic projection and identification, proposes a means of including readers' hopes and preferences in text-world analyses, and reconceptualises processes of knowledge activation as inherently emotional. Detailed cognitive-poetic analyses of Ishiguro's novels elucidate literary-critical observations regarding Ishiguro's shifting style, and present new insights into the cognitive and emotional aspects of the interaction between the texts and their readers. This thesis aims primarily to be a contribution to the fields of stylistics and cognitive poetics. It approaches this theoretically through the application and enhancement of cognitive-poetic frameworks, analytically through the investigation of Ishiguro, and methodologically through the utilisation of reader response data in order to direct and support the investigations. However, incidental contributions are also made to cognitive and social emotion theories, and the discussion raises several suggestions for continued interdisciplinary research in the future.

    The role of time in video understanding


    Deep Learning Techniques for Electroencephalography Analysis

    In this thesis we design deep learning techniques for training deep neural networks on electroencephalography (EEG) data, and in particular for two problems, namely EEG-based motor imagery decoding and EEG-based affect recognition, addressing challenges associated with them. Regarding the problem of motor imagery (MI) decoding, we first consider the various kinds of domain shifts in the EEG signals, caused by inter-individual differences (e.g. brain anatomy, personality and cognitive profile). These domain shifts render multi-subject training a challenging task and impede robust cross-subject generalization. We build a two-stage model ensemble architecture and propose two objectives to train it, combining the strengths of curriculum learning and collaborative training. Our subject-independent experiments on the large Physionet and OpenBMI datasets verify the effectiveness of our approach. Next, we explore the utilization of the spatial covariance of EEG signals through alignment techniques, with the goal of learning domain-invariant representations. We introduce a Riemannian framework that concurrently performs covariance-based signal alignment and data augmentation, while training a convolutional neural network (CNN) on EEG time-series. Experiments on the BCI IV-2a dataset show that our method outperforms traditional alignment by inducing regularization on the weights of the CNN. We also study the problem of EEG-based affect recognition, inspired by works suggesting that emotions can be expressed in relative terms, i.e. through ordinal comparisons between different affective state levels. We propose treating data samples in a pairwise manner to infer the ordinal relation between their corresponding affective state labels, as an auxiliary training objective. We incorporate our objective in a deep network architecture which we jointly train on the tasks of sample-wise classification and pairwise ordinal ranking. We evaluate our method on the affective datasets of DEAP and SEED and obtain performance improvements over deep networks trained without the additional ranking objective.
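
    As a hedged illustration of the covariance-alignment idea (not the Riemannian framework with augmentation proposed in the thesis), the sketch below applies a simple Euclidean-style alignment: each subject's trials are whitened by the inverse square root of that subject's mean trial covariance, which maps the average covariance to the identity and thereby reduces subject-specific shift before a network is trained. Data shapes and the toy mixing matrix are assumptions made for the example.

        # Simple Euclidean covariance alignment for one subject's EEG trials.
        # An illustrative stand-in for covariance-based alignment, not the
        # thesis's Riemannian method.
        import numpy as np
        from scipy.linalg import inv, sqrtm

        def align_subject(trials):
            """trials: (n_trials, n_channels, n_samples) array for one subject."""
            covs = np.array([x @ x.T / x.shape[1] for x in trials])  # per-trial covariance
            ref = covs.mean(axis=0)                                  # subject reference matrix R
            ref_inv_sqrt = inv(sqrtm(ref)).real                      # R^{-1/2}
            return np.array([ref_inv_sqrt @ x for x in trials])

        rng = np.random.default_rng(0)
        mixing = rng.normal(size=(8, 8))                             # subject-specific distortion
        trials = np.array([mixing @ rng.normal(size=(8, 256)) for _ in range(20)])
        aligned = align_subject(trials)

        # After alignment the subject's mean covariance is (numerically) the identity,
        # so trials from different subjects land in a comparable space.
        mean_cov = np.mean([x @ x.T / x.shape[1] for x in aligned], axis=0)
        print(np.allclose(mean_cov, np.eye(8), atol=1e-6))           # True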

    On Gillian Rose’s critical project, and how it can be read constructively in conjunction with Michel Foucault’s method of genealogical problematisation

    In this thesis I will argue that the critical project of Gillian Rose can be read constructively in conjunction with Michel Foucault’s method of genealogical problematisation. Commentators have tended to present Rose’s critical project as entailing a general challenge to the critical projects of “postmodernity”. This way of presenting Rose’s critical project, while not strictly unfounded, has raised, and continues to raise, a number of unfortunate and unnecessary borders between Rose's thought and that of many of her contemporaries. In contrast to the way commentators have tended to present Rose’s critical project, I will present it as entailing, not a general challenge to the critical projects of “postmodernity”, but a specific challenge to Foucault’s method of genealogy. By approaching Rose’s critical project in this specific way, I aim to afford an alternative reading of it – that is, a reading in which Rose’s critical project can be, in part, clarified and supported by Foucault’s method of genealogical problematisation. My hope is that by affording this alternative reading I will open Rose’s critical project up to influence, and be influenced by, a number of contemporary debates surrounding the practice of criticism. Specifically, the debates surrounding the relationship between criticism and normativity, debates in which Foucault’s method of genealogy continues to play a vital part.

    Rethinking Schooling

    Taking a collection of seminal articles from the Journal of Curriculum Studies, this book offers readers a vantage point for thinking about the worlds of schools and curricula, focusing in particular on seeing schools, curricula and teaching in new ways. Each of the chapters sheds fresh light on these ways of thinking. Themes include: classrooms and teaching; pedagogy; science and history education; school and curriculum development; and students’ lives in schools. Written by an international group of distinguished scholars from Britain, North America, Sweden and Germany, the chapters draw on the perspectives offered by curriculum and pedagogical theory, history, ethnography, sociology, psychology and organisational studies, and on experiences in curriculum-making. Together they invite many questions about why teaching and curricula must be as they are. Rethinking Schooling provides new futures for education and alternative ways of seeing them.

    Character Engagement and Digital Communal Practice: A Multidisciplinary Study of Breaking Bad

    How does the viewer engage with narrative characters? Literary theory, folkloristics, narratology, psychology, and media studies more widely have generally approached this question through treatments of identification, empathy, and immersion (into a fictional world). For if fictional people, whether depicted in writing or audiovisually, are nothing but the fruits of their authors’ imagination, then the viewer or reader similarly attempts to recognize herself in, immerse herself into the world of, and identify herself with the observed character.
When character engagement is analyzed from the perspective of everyday communication, however, an exclusively representational and inward approach to the human mind proves an inadequate preference. The respective ideas of the human as a “social being” and the “social character of meaning” (Vygotsky) risk being backgrounded when “character engagement” is reduced to a kind of inner dialogue: the viewer makes sense of the other through a model she has adapted from herself. Although this can be, at times, a necessary strategy for comprehension and sense-making, one cannot disregard that engagement is, even toward “fictional” people, always directed. You are absorbed by someone; you become “immersed” in a world that is inhabited by someone autonomously-existing-with-their-world. In synthesizing diverse literature, Chapter One of the present dissertation develops an appropriate theoretical framework that approaches engagement with the characters of television serials from the stance of common-sense social experience. Of focal significance here is the notion of realitization: in the context of communal discussion, characters simply are approached as if they were real people. Accordingly, a narrative character becomes a person. Internet discussions operate in the third person (me/her) and historio-narratively (the “fictive” plot as a narrative person’s life story). Individual commentary texts are hereby considered narrcepts; that is, the sense-making products of the narrative perception of the other (narrative + percept). Through narrcepts, the distributed and polyphonic dimension of the construals directed toward narrative persons becomes acknowledged. Chapter Two of the dissertation introduces a second original notion: the beacon. The beacon enacts a structuring role in the analysis of online communication. It “throws light” onto the different aspects of Internet discourse, either decomposing narrceptive storyworlds (the receptive, “popular” dimension) or composing the latter into “open-ended” discourse worlds emerging in real time (the analytical-methodological, “academic” dimension). Consequently, critical adaptations of current terminology in possible worlds theory, narrative theory, and discourse analysis are presented; that is, storyworld, intend-world, text makers’ world and discourse world. Chapter Three of the dissertation concentrates on an illustrative close analysis of online commentaries on the American television serial Breaking Bad. The previously established theoretical apparatus serves a central role in the analysis.

    Perceptually-based language to simplify sketch recognition user interface development

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 473-495). Diagrammatic sketching is a natural modality of human-computer interaction that can be used for a variety of tasks, for example, conceptual design. Sketch recognition systems are currently being developed for many domains. However, they require signal-processing expertise if they are to handle the intricacies of each domain, and they are time-consuming to build. Our goal is to enable user interface designers and domain experts who may not have expertise in sketch recognition to be able to build these sketch systems. We created and implemented a new framework (FLUID - facilitating user interface development) in which developers can specify a domain description indicating how domain shapes are to be recognized, displayed, and edited. This description is then automatically transformed into a sketch recognition user interface for that domain. LADDER, a language using a perceptual vocabulary based on Gestalt principles, was developed to describe how to recognize, display, and edit domain shapes. A translator and a customizable recognition system (GUILD - a generator of user interfaces using ladder descriptions) are combined with a domain description to automatically create a domain-specific recognition system. With this new technology, by writing a domain description, developers are able to create a new sketch interface for a domain, greatly reducing the time and expertise for the task. Continuing in pursuit of our goal to facilitate UI development, we noted that 1) human-generated descriptions contained syntactic and conceptual errors, and that 2) it is more natural for a user to specify a shape by drawing it than by editing text. However, computer-generated descriptions from a single drawn example are also flawed, as one cannot express all allowable variations in a single example. In response, we created a modification of the traditional model of active learning in which the system selectively generates its own near-miss examples and uses the human teacher as a source of labels. System-generated near-misses offer a number of advantages. Human-generated examples are tedious to create and may not expose problems in the current concept. It seems most effective for the near-miss examples to be generated by whichever learning participant (teacher or student) knows better where the deficiencies lie; this will allow the concepts to be more quickly and effectively refined. When working in a closed domain such as this one, the computer learner knows exactly which conceptual uncertainties remain, and which hypotheses need to be tested and confirmed. The system uses these labeled examples to automatically build a LADDER shape description, using a modification of the version spaces algorithm that handles interrelated constraints, and which also has the ability to learn negative and disjunctive constraints. By Tracy Anne Hammond. Ph.D.
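
    The toy sketch below (hypothetical class and property names; neither LADDER syntax nor the thesis's exact modified version-spaces algorithm) illustrates the near-miss idea for a single numeric constraint, such as the angle between two strokes: positive examples grow the most specific interval consistent with what has been drawn, near-miss negatives shrink the most general interval, and the gap between the two boundaries is precisely where a system-generated near-miss query is most informative.

        # Version-space-style bounds for one numeric shape property (illustrative only).
        class IntervalVersionSpace:
            def __init__(self, name):
                self.name = name
                self.s_low = self.s_high = None                        # S: tightest interval over positives
                self.g_low, self.g_high = float("-inf"), float("inf")  # G: widest interval excluding negatives

            def add_positive(self, value):
                if self.s_low is None:
                    self.s_low = self.s_high = value
                else:
                    self.s_low, self.s_high = min(self.s_low, value), max(self.s_high, value)

            def add_near_miss(self, value):
                # A rejected example below the positives raises the general lower bound;
                # one above them lowers the general upper bound.
                if self.s_low is not None and value < self.s_low:
                    self.g_low = max(self.g_low, value)
                elif self.s_high is not None and value > self.s_high:
                    self.g_high = min(self.g_high, value)

            def uncertain_regions(self):
                """Ranges the learner is still unsure about - where to generate near-misses."""
                return (self.g_low, self.s_low), (self.s_high, self.g_high)

        angle = IntervalVersionSpace("arrowhead_angle")      # angle between an arrow's head strokes
        for positive in (30.0, 45.0, 60.0):                  # examples the teacher accepts
            angle.add_positive(positive)
        for near_miss in (10.0, 110.0):                      # system-generated near-misses the teacher rejects
            angle.add_near_miss(near_miss)

        print(angle.s_low, angle.s_high)                     # 30.0 60.0
        print(angle.uncertain_regions())                     # ((10.0, 30.0), (60.0, 110.0))

    A real shape description involves many such interrelated constraints, which is why the thesis modifies the version spaces algorithm; the sketch only conveys why letting the learner choose its own near-miss examples targets exactly the remaining conceptual uncertainty.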