
    UNBODY: A Poetry Escape Room in Augmented Reality

    The integration of augmented reality (AR) technology into personal computing is happening fast, and augmented workplaces for professionals in areas such as Industry 4.0 or digital health can reasonably be expected to form liminal zones that push the boundary of what is currently possible. The application potential in the creative industries, however, is vast and can target broad audiences, so with UNBODY, we set out to push boundaries of a different kind and depart from the graphic-centric worlds of AR to explore textual and aural dimensions of an extended reality in which words haunt and re-create our physical selves. UNBODY is an AR installation for smart glasses that embeds poetry in the user’s surroundings. The augmented experience turns reality into a medium where holographic texts and film clips spill from dayglow billboards and totems. In this paper, we develop a blueprint for an AR escape room dedicated to the spoken and written word, with its open source code facilitating uptake by others into existing or new AR escape rooms. We outline the user-centered process of designing, building, and evaluating UNBODY. More specifically, we deployed the System Usability Scale (SUS) and a spatial interaction evaluation (SPINE) in order to validate its wider applicability. We also describe the composition and concept of the experience, identifying several components (trigger posters, posters with video overlay, word dropper totem, floating object gallery, and a user trail visualization) as part of our first version before evaluation. UNBODY provides a sense of situational awareness and immersivity from inside an escape room. The mean SUS score was 59.7, slightly below the recommended average of 68 but still above ‘OK’, in the low marginal acceptability range. The SPINE findings were moderately positive, with the highest scores for output modalities and navigation support, indicating that the proposed components and the escape room concept work. Based on these results, we improved the experience, adding, among other components, an interactive word composer. We conclude that a poetry escape room is possible, outline our co-creation process, and deliver an open source technical framework as a blueprint for adding enhanced support for the spoken and written word to existing or future AR escape room experiences. In an outlook, we discuss additional insights on timing, alignment, and the right level of personalization.
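    As a point of reference only (this is the standard SUS scoring procedure, not code from the paper), the reported mean of 59.7 is an average over per-participant SUS scores, each computed roughly as in this minimal Python sketch:

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions are multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical single participant; averaging such scores across all
# participants yields a study mean like the 59.7 reported for UNBODY.
print(sus_score([4, 2, 4, 3, 3, 2, 4, 2, 3, 3]))  # 65.0
```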

    Toward Building A Social Robot With An Emotion-based Internal Control

    In this thesis, we aim to model some aspects of the functional role of emotions in an autonomous embodied agent. We begin by describing our robotic prototype, Cherry--a robot with the task of being a tour guide and an office assistant for the Computer Science Department at the University of Central Florida. Cherry did not have a formal emotion representation of internal states, but did have the ability to express emotions through her multimodal interface. The thesis presents the results of a survey we performed via our social informatics approach, where we found that: (1) the idea of having emotions in a robot was warmly accepted by Cherry's users, and (2) the intended users were pleased with our initial interface design and functionalities. Guided by these results, we transferred our previous code to a human-height and more robust robot--Petra, the PeopleBot--where we began to build a formal emotion mechanism and representation for internal states to correspond to the external expressions of Cherry's interface. We describe our overall three-layered architecture and propose the design of the sensory-motor level (the first layer of the three-layered architecture), inspired by the Multilevel Process Theory of Emotion on the one hand and by hybrid robotic architectures on the other. The sensory-motor level receives and processes incoming stimuli with fuzzy logic and produces emotion-like states without any further willful planning or learning. We will discuss how Petra has been equipped with sonar and vision for obstacle avoidance, as well as vision for face recognition, which are used when she roams the hallway to engage in social interactions with humans. We hope that the sensory-motor level in Petra can serve as a foundation for further work in modeling the three-layered architecture of the Emotion State Generator.
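    The abstract does not give the fuzzy rule base used at the sensory-motor level; the following is only an illustrative sketch, assuming made-up stimuli (obstacle distance, face detection), membership functions, and emotion labels, of how fuzzy logic can map raw sensor readings to emotion-like activations without any planning or learning:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function rising over [a, b] and falling over [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def emotion_like_state(obstacle_distance_m, face_detected):
    """Map raw sensor readings to fuzzy activations of emotion-like states.

    Hypothetical rules (not from the thesis): a nearby obstacle raises a
    'fear'-like activation, a detected face raises an 'interest'-like one.
    """
    fear = triangular(obstacle_distance_m, 0.0, 0.3, 1.5)   # close obstacle -> high
    interest = 0.8 if face_detected else 0.1                 # crisp shortcut for a detection flag
    neutral = max(0.0, 1.0 - max(fear, interest))
    return {"fear": fear, "interest": interest, "neutral": neutral}

print(emotion_like_state(obstacle_distance_m=0.4, face_detected=True))
```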

    Comparison of interaction modalities for mobile indoor robot guidance : direct physical interaction, person following, and pointing control

    Three advanced natural interaction modalities for mobile robot guidance in an indoor environment were developed and compared using two tasks and quantitative metrics to measure performance and workload. The first interaction modality is based on direct physical interaction, requiring the human user to push the robot in order to displace it. The second and third interaction modalities exploit 3-D vision-based human-skeleton tracking, allowing the user to guide the robot either by walking in front of it or by pointing toward a desired location. In the first task, the participants were asked to guide the robot between different rooms in a simulated physical apartment, requiring rough movement of the robot through designated areas. The second task evaluated robot guidance in the same environment through a set of waypoints, which required accurate movements. The three interaction modalities were implemented on a generic differential drive mobile platform equipped with a pan-tilt system and a Kinect camera. Task completion time and accuracy were used as metrics to assess the users’ performance, while the NASA-TLX questionnaire was used to evaluate the users’ workload. A study with 24 participants indicated that the choice of interaction modality had a significant effect on completion time (F(2,61)=84.874, p<0.001), accuracy (F(2,29)=4.937, p=0.016), and workload (F(2,68)=11.948, p<0.001). Direct physical interaction required less time, provided greater accuracy, and imposed a lower workload than the two contactless interaction modalities. Between the two contactless modalities, the person-following interaction modality was systematically better than the pointing-control one: the participants completed the tasks faster and with a lower workload.
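    The reported effects are one-way analyses of variance over the three modalities. Purely as an illustration of that kind of comparison (the numbers below are invented, not the study's data), such a test could be run as follows:

```python
from scipy import stats

# Hypothetical completion times (seconds) per interaction modality;
# the actual study data are not reproduced here.
direct_physical = [92, 88, 95, 90, 87, 93]
person_following = [110, 118, 115, 112, 120, 116]
pointing_control = [140, 135, 150, 138, 145, 142]

# One-way ANOVA: does modality have a significant effect on completion time?
f_stat, p_value = stats.f_oneway(direct_physical, person_following, pointing_control)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```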

    The Praxis and Politics of Building Urban Dashboards. Programmable City Working Paper 11

    This paper critically reflects on the building of the Dublin Dashboard -- a website that provides citizens, planners, policy makers and companies with an extensive set of data and interactive visualizations about Dublin City, including real-time information -- from the perspective of critical data studies. The analysis draws upon participant observation, ethnography, and an archive of correspondence to unpack the building of the Dashboard and the emergent politics of data and design. Our findings reveal four main observations. First, a dashboard is a complex socio-technical assemblage of actors and actants that work materially and discursively within a set of social and economic constraints, existing technologies and systems, and power geometries to assemble, produce and maintain the website. Second, the production and maintenance of a dashboard unfolds contextually, contingently and relationally through transduction. Third, the praxis and politics of creating a dashboard have wider recursive effects: just as building the dashboard was shaped by the wider institutional landscape, producing the system inflected that landscape. Fourth, the data, configuration, tools, and modes of presentation of a dashboard produce a particularised set of spatial knowledges about the city. We conclude that, rather than framing dashboard development in purely technical terms, it is important to openly recognize its contested and negotiated politics and praxis.

    Recommended practices for computerized clinical decision support and knowledge management in community settings: a qualitative study

    Background: The purpose of this study was to identify recommended practices for computerized clinical decision support (CDS) development and implementation and for knowledge management (KM) processes in ambulatory clinics and community hospitals using commercial or locally developed systems in the U.S. Methods: Guided by the Multiple Perspectives Framework, the authors conducted ethnographic field studies at two community hospitals and five ambulatory clinic organizations across the U.S. Using a Rapid Assessment Process, a multidisciplinary research team gathered preliminary assessment data; conducted on-site interviews, observations, and field surveys; analyzed data using both template and grounded methods; and developed universal themes. A panel of experts produced recommended practices. Results: The team identified ten themes related to CDS and KM: 1) workflow; 2) knowledge management; 3) data as a foundation for CDS; 4) user computer interaction; 5) measurement and metrics; 6) governance; 7) translation for collaboration; 8) the meaning of CDS; 9) roles of special, essential people; and 10) communication, training, and support. Experts developed recommendations about each theme. The original Multiple Perspectives framework was modified to make explicit a new theoretical construct, that of Translational Interaction. Conclusions: These ten themes represent areas that need attention if a clinic or community hospital plans to implement and successfully utilize CDS. In addition, they have implications for workforce education, research, and national-level policy development. The Translational Interaction construct could guide future applied informatics research endeavors.

    Volume CXVI, Number 7, November 5, 1998


    Redbird Impact, Volume 4, Number 1


    Microworld Writing: Making Spaces for Collaboration, Construction, Creativity, and Community in the Composition Classroom

    In order to create a 21st century pedagogy of learning experiences that inspire the engaged, constructive, dynamic, and empowering modes of work we see in online creative communities, we need to focus on the platforms, the environments, the microworlds that host, hold, and constitute the work. A good platform can build connections between users, allowing for the creation of a community and giving creative work an engaged and active audience. These platforms will work together to build networks of rhetorical/creative possibilities, wherein students can learn to cultivate their voices, skills, and knowledge bases as they engage across platforms and genres. I call on others to make, mod, or hack new platforms. In applying this argument to my subject, teaching writing in a college composition class, I describe Microworld Writing as a genre that combines literary language practice with creativity, performativity, play, game mechanics, and coding. The MOO can be an example of one of these platforms and of microworld writing, in that it allows for creativity, user agency, and programmability, if it can be updated to have the needed features (virtual world, community, accessibility, narrativity, compatibility, and exportability). I offer the concept of this MOO-IF as inspiration for a collaborative, community-oriented Interactive Fiction platform, and encourage people to extend, find, and build their own platforms. Until then and in addition, students can be brought into Microworld Writing in the composition classroom through interactive-fiction platforms, as part of an ecology of genre experimentation and platform exploration.
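    As a purely illustrative sketch (the rooms, commands, and loop below are assumptions, not features of any particular MOO or IF platform), the programmable core of such a text microworld can be as small as a room graph plus a command loop:

```python
# Minimal text microworld: rooms linked by exits, navigated by typed commands.
rooms = {
    "library": {"description": "Shelves of half-written poems.", "exits": {"north": "studio"}},
    "studio": {"description": "A workbench for building new objects.", "exits": {"south": "library"}},
}

def play(start="library"):
    """Read-eval loop: print the current room, move through exits, 'quit' to stop."""
    here = start
    while True:
        room = rooms[here]
        print(room["description"], "| exits:", ", ".join(room["exits"]))
        command = input("> ").strip().lower()
        if command == "quit":
            break
        if command in room["exits"]:
            here = room["exits"][command]
        else:
            print("You can't go that way.")

if __name__ == "__main__":
    play()
```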