545 research outputs found

    Tangible user interfaces: past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    The Impostor: Exploring narrative game design for learning Korean as a foreign language

    In recent years, digital language learning games and applications have proliferated. However, most existing apps employ methods and theoretical approaches that are not designed to teach learners practical language competence. Additionally, commercial apps tend to focus on languages with large markets, leaving smaller languages like Korean unsupported. The objective of this thesis is to explore language learning and second language acquisition (SLA) theories and their practical applications to find teaching methods that are best suited for improving practical competence in Korean. Having identified such methods grounded in socio-cultural and ecological SLA theory, the thesis further integrates the teaching methods into a conceptual design of a digital language learning game for learning Korean as a foreign language. This thesis demonstrates that grounding the fundamentally messy digital language learning game design process in SLA theory is not only viable but a good starting point. Key findings indicate that designers need to identify the targeted learning objectives, learning experiences, and game experiences as clear design goals early on, to efficiently guide the inherently messy design process. Furthermore, the thesis highlights that digital language learning game designers need to develop and nurture knowledge both in the target language instructional domain and in game design.

    Supporting Scholarly Research Ideation through Web Semantics

    We develop new methods and technologies for supporting scholarly research ideation, the tasks in which researchers develop new ideas for their work, through web semantics: computational representations of information found on the web that capture meaning involving people’s experiences of things of interest. To do so, we first conducted a qualitative study with established researchers on their practices, using sensitizing concepts from information science, creative cognition, and art as a basis for framing and deriving findings. We found that participants engage in and combine a wide range of activities, including citation chaining, exploratory browsing, and curation, to achieve their goals of creative ideation. We derived a new, interdisciplinary model to depict their practices. Our study and findings address a gap in existing research: the creative nature of what researchers do has been insufficiently investigated. The model is expected to guide future investigations. We then use in-context presentations of dynamically extracted semantic information to (1) address the issues of digression and disorientation, which arise in citation chaining and exploratory browsing, and (2) provide contextual information in researchers’ prior work curation. The implemented interface, Metadata In-Context Explorer (MICE), maintains context while allowing new information to be brought into and integrated with the current context, reducing the need for switching between documents and webpages. Our study shows that MICE supports participants in their citation chaining processes, thus supporting scholarly research ideation. MICE is implemented with BigSemantics, a metadata type system and runtime integrating data models, extraction rules, and presentation hints into types. BigSemantics operationalizes type-specific, dynamic extraction and rich presentation of semantic information (a.k.a. metadata) found on the web.
The metadata type system, runtime, and MICE are expected to help build interfaces supporting dynamic exploratory search, browsing, and other creative tasks involving complex and interlinked semantics.
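To make the idea of a metadata type system concrete, here is a minimal sketch of how a "type" might bundle a data model, extraction rules, and presentation hints, as the abstract describes. All names, fields, and XPath expressions here are illustrative assumptions, not the actual BigSemantics API.

```python
# Hypothetical sketch of a metadata type: it combines a data model
# (field names), extraction rules (XPath locating values on a page),
# and presentation hints (how each value should be rendered).
from dataclasses import dataclass, field

@dataclass
class FieldSpec:
    name: str            # data-model field, e.g. "title"
    xpath: str           # extraction rule locating the value on the page
    hint: str = "text"   # presentation hint, e.g. "text", "image", "link"

@dataclass
class MetadataType:
    name: str
    url_pattern: str                       # which pages this type applies to
    fields: list = field(default_factory=list)

    def describe(self):
        # Summarize each field as "name <- rule (hint)".
        return [f"{f.name} <- {f.xpath} ({f.hint})" for f in self.fields]

# A hypothetical type for scholarly article pages:
article = MetadataType(
    name="ScholarlyArticle",
    url_pattern=r"https://example\.org/paper/.*",
    fields=[
        FieldSpec("title", "//h1", "text"),
        FieldSpec("authors", "//span[@class='author']", "text"),
        FieldSpec("references", "//ol[@class='refs']/li/a/@href", "link"),
    ],
)
```

A runtime would then match a page's URL against `url_pattern`, apply each field's extraction rule, and use the hints to drive presentation, which is the division of labor the abstract attributes to BigSemantics.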

    Phrasing Bimanual Interaction for Visual Design

    Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery with sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities to support designers in putting together, that is phrasing, sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that in many commercial design tools require using menus and tool palettes—techniques originally designed for the mouse, not pen and touch. We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe interesting forms of interaction that emerge, and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content. We conduct a controlled experiment to evaluate its efficacy. We explore the use of wearables to identify which user is touching, and which hand they are using, to support phrasing together direct-touch interactions on large displays. From design and development of the environment and both field and controlled studies, we derive a set of methods, based upon human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.
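The core phrasing idea above can be sketched in a few lines: the non-dominant hand's touch holds a mode while the dominant hand's pen performs the operation, so no menu or palette is needed. This is an assumed, simplified model of the interaction logic, not the authors' implementation.

```python
# Illustrative sketch of bimanual phrasing: a mode is active only while
# the non-dominant hand's touch persists; releasing it ends the phrase.
class BimanualCanvas:
    def __init__(self):
        self.mode = "ink"        # default: the pen draws
        self.log = []            # record of (mode, target) operations

    def nondominant_touch_down(self, mode):
        # The non-dominant touch phrases a modal operation.
        self.mode = mode

    def nondominant_touch_up(self):
        # Releasing the touch returns to the default mode.
        self.mode = "ink"

    def pen_stroke(self, target):
        # The dominant hand's pen applies the currently held mode.
        self.log.append((self.mode, target))

canvas = BimanualCanvas()
canvas.pen_stroke("diagram")            # inks on the diagram
canvas.nondominant_touch_down("move")
canvas.pen_stroke("diagram")            # same gesture now moves content
canvas.nondominant_touch_up()
canvas.pen_stroke("annotation")         # back to inking
```

The key property is that the mode change is transient and kinesthetically held, which is what lets sequences of operations be phrased fluidly rather than punctuated by menu trips.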

    Co-located Collaborative Information-based Ideation through Embodied Cross-Surface Curation

    We develop an embodied cross-surface curation environment to support co-located, collaborative information-based ideation. Information-based ideation (IBI) refers to tasks and activities in which people generate and develop significant new ideas while working with information. Curation is the process of gathering and assembling objects in order to express ideas. The linear media and separated screens of prior curation environments constrain expression. This research utilizes information composition of rich bookmarks as the medium of curation. The visual representation of elements and the ability to combine them in a freeform, spatial manner mimic how objects appear and can be manipulated in the physical world. The metadata of rich bookmarks leverages the capabilities of the web. We equip participants with personal IBI environments, each on a mobile device, as a base for contributing to curation on a larger, collaborative surface. We hypothesize that physical representations for the elements and assemblage of curation, layered with physical techniques of interaction, will facilitate co-located IBI. We hypothesize that consistent physical and spatial representations of information, and means for manipulating rich bookmarks on and across personal and collaborative surfaces, will support IBI. We hypothesize that the small size and weight of personal devices will facilitate participants shifting their attention from their own work to each other and to collaboration. We evaluated the curation environment by inviting couples to participate in a home makeover design task in a living-room lab. We demonstrated that our embodied cross-surface curation environment supports creative thinking, facilitates communication, and stimulates engagement and creativity in collaborative IBI.
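The data model behind this kind of curation can be sketched simply: a rich bookmark carries web metadata plus a freeform position, and bookmarks move between personal and collaborative surfaces. This is an assumed illustration of the concepts in the abstract, not the authors' system.

```python
# Illustrative sketch: rich bookmarks arranged spatially in compositions,
# transferable across personal and collaborative surfaces.
from dataclasses import dataclass

@dataclass
class RichBookmark:
    url: str
    title: str
    thumbnail: str        # visual representation drawn from web metadata
    x: float = 0.0        # freeform spatial placement on the surface
    y: float = 0.0

class Composition:
    """A curation surface holding spatially arranged rich bookmarks."""
    def __init__(self):
        self.items = []

    def place(self, bm, x, y):
        # Freeform placement mimics arranging objects in physical space.
        bm.x, bm.y = x, y
        self.items.append(bm)

    def transfer(self, other, bm):
        # Move a bookmark across surfaces (e.g. personal -> collaborative),
        # keeping its representation consistent on both.
        self.items.remove(bm)
        other.items.append(bm)

personal = Composition()   # one participant's mobile device
shared = Composition()     # the large collaborative surface
sofa = RichBookmark("https://example.com/sofa", "Sofa", "sofa.jpg")
personal.place(sofa, 10, 20)
personal.transfer(shared, sofa)
```

Keeping a single `RichBookmark` object as it crosses surfaces is one way to realize the "consistent physical and spatial representations" the abstract hypothesizes about.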

    Grounded Visual Analytics: A New Approach to Discovering Phenomena in Data at Scale

    We introduce Grounded Visual Analytics, a new method that integrates qualitative and quantitative approaches in order to help investigators discover patterns about human activity. Investigators who develop or study systems often use log data, which keeps track of interactions their participants perform. Discovering and characterizing patterns in this data is important because it can help guide interactive computing system design. This new approach integrates Visual Analytics, a field that investigates Information Visualization and interactive machine learning, and Grounded Theory, a rigorous qualitative research method for discovering nuanced understanding of qualitative data. This dissertation defines and motivates this new approach, reviews relevant existing tools, and builds the Log Timelines system. We present and analyze six case studies that use Log Timelines, a probe that we created in order to explore Grounded Visual Analytics. In each case study, we collaborate with a participant-investigator on their own project and data. Their use of Grounded Visual Analytics generates ideas about how future research can bridge the gap between qualitative and quantitative methods.

    Parametric BIM-based Design Review

    This research addressed the need for a new design review technology and method to express the tangible and intangible qualities of architectural experience of parametric BIM-based design projects. The research produced an innovative presentation tool by which parametric design is presented systematically. Focus groups provided assessments of the tool to reveal the usefulness of a parametric BIM-based design review method. The way in which we visualize architecture affects the way we design and perceive architectural form and performance. Contemporary architectural forms and systems are very complex, yet most architects who use Building Information Modeling (BIM) and generative design methods still embrace the two-dimensional 15th-century Albertian representational methods to express and review design projects. However, architecture cannot be fully perceived through a set of drawings that mediate our perception and evaluation of the built environment. The systematic and conventional approach of traditional architectural representation, in paper-based and slide-based design reviews, is able to visualize neither phenomenal experience nor the inherent variation and versioning of parametric models. Pre-recorded walk-throughs with high-quality rendering and imaging have been in use for decades, but high-verisimilitude interactive walk-throughs are not commonly used in architectural presentations. The new generations of parametric and BIM systems allow for the quick production of variations in design by varying design parameters and their relationships. However, there is a lack of tools capable of conducting design reviews that engage the advantages of parametric and BIM design projects. Given the multitude of possibilities of in-game interface design, game engines provide an opportunity for the creation of an interactive, parametric, and performance-oriented experience of architectural projects with multi-design options.
This research has produced a concept for a dynamic presentation and review tool and method intended to meet the needs of parametric design, performance-based evaluation, and optimization of multi-objective design options. The concept is illustrated and tested using a prototype (Parametric Design Review, or PDR) based upon an interactive gaming environment equipped with a novel user interface that simultaneously engages the parametric framework, object parameters, multi-objective optimized design options and their performances with diagrammatic, perspectival, and orthographic representations. The prototype was presented to representative users in multiple focus group sessions. Focus group discussion data reveal that the proposed PDR interface was perceived to be useful when used for design reviews in both academic and professional practice settings.

    Enhancing interaction in mixed reality

    With continuous technological innovation, we observe mixed reality emerging from research labs into the mainstream. The arrival of capable mixed reality devices transforms how we are entertained, consume information, and interact with computing systems, with the most recent being able to present synthesized stimuli to any of the human senses and substantially blur the boundaries between the real and virtual worlds. In order to build expressive and practical mixed reality experiences, designers, developers, and stakeholders need to understand and meet its upcoming challenges. This research contributes a novel taxonomy for categorizing mixed reality experiences and guidelines for designing mixed reality experiences. We present the results of seven studies examining the challenges and opportunities of mixed reality experiences, the impact of modalities and interaction techniques on the user experience, and how to enhance the experiences. We begin with a study determining user attitudes towards mixed reality in domestic and educational environments, followed by six research probes that each investigate an aspect of reality or virtuality. In the first, a levitating steerable projector enables us to investigate how the real world can be enhanced without instrumenting the user. We show that the presentation of in-situ instructions for navigational tasks leads to a significantly higher ability to observe and recall real-world landmarks. With the second probe, we enhance the perception of reality by superimposing information usually not visible to the human eye. In amplifying the human vision, we enable users to perceive thermal radiation visually. Further, we examine the effect of substituting physical components with non-functional tangible proxies or entirely virtual representations. With the third research probe, we explore how to enhance virtuality to enable a user to input text on a physical keyboard while being immersed in the virtual world. 
Our prototype tracked the user’s hands and keyboard to enable generic text input. Our analysis of text entry performance showed the importance and effect of different hand representations. We then investigate how to touch virtuality by simulating generic haptic feedback for virtual reality and show how tactile feedback through quadcopters can significantly increase the sense of presence. Our final research probe investigates the usability and input space of smartphones within mixed reality environments, pairing the user’s smartphone as an input device with a secondary physical screen. Based on our learnings from these individual research probes, we developed a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. The taxonomy is based on the human sensory system and human capabilities of articulation. We showcased its versatility and set our research probes into perspective by organizing them inside the taxonomic space. The design guidelines are divided into user-centered and technology-centered guidelines. It is our hope that these will contribute to the bright future of mixed reality systems while emphasizing the new underlying interaction paradigm.
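A taxonomy organized along the human sensory system and articulation capabilities, as described above, can be pictured as a small two-axis space. The axis values and probe assignments below are illustrative assumptions, not the dissertation's exact categories.

```python
# Hypothetical sketch of a two-axis taxonomic space for mixed reality
# experiences: which human sense is addressed, and which articulation
# (input) capability is used.
from enum import Enum

class Sense(Enum):
    VISION = "vision"
    AUDITION = "audition"
    TOUCH = "touch"

class Articulation(Enum):
    HANDS = "hands"
    SPEECH = "speech"
    GAZE = "gaze"

def categorize(probe_name, sense, articulation):
    """Place a research probe at a point in the taxonomic space."""
    return {"probe": probe_name, "sense": sense.value, "input": articulation.value}

# Placing two of the probes described above (assignments assumed):
probes = [
    categorize("thermal-vision overlay", Sense.VISION, Articulation.GAZE),
    categorize("quadcopter haptics", Sense.TOUCH, Articulation.HANDS),
]
```

Organizing probes as points in such a space makes gaps visible: sense/articulation combinations with no probe suggest unexplored mixed reality designs.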