Tangible user interfaces: past, present and future directions
In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.
The Impostor: Exploring narrative game design for learning Korean as a foreign language
In recent years, digital language learning games and applications have proliferated. However, most existing apps employ methods and theoretical approaches that are not designed to teach learners practical language competence. Additionally, commercial apps tend to focus on languages with large markets, leaving smaller languages like Korean unsupported.
The objective of this thesis is to explore language learning and second language acquisition (SLA) theories and their practical applications to find teaching methods best suited for improving practical competence in Korean. Having identified such methods, grounded in socio-cultural and ecological SLA theory, the thesis further integrates them into a conceptual design of a digital language learning game for learning Korean as a foreign language.
This thesis demonstrates that grounding the fundamentally messy digital language learning game design process in SLA theory is not only viable but a good starting point. Key findings indicate that designers need to identify the targeted learning objectives, learning experiences, and game experiences as clear design goals early on in order to guide the inherently messy design process efficiently. Furthermore, the thesis highlights that digital language learning game designers need to develop and nurture knowledge both in the target-language instructional domain and in game design.
Supporting Scholarly Research Ideation through Web Semantics
We develop new methods and technologies for supporting scholarly research ideation, the tasks in which researchers develop new ideas for their work, through web semantics, computational representations of information found on the web, capturing meaning involving people’s experiences of things of interest. To do so, we first conducted a qualitative study with established researchers on their practices, using sensitizing concepts from information science, creative cognition, and art as a basis for framing and deriving findings. We found that participants engage in and combine a wide range of activities, including citation chaining, exploratory browsing, and curation, to achieve their goals of creative ideation. We derived a new, interdisciplinary model to depict their practices. Our study and findings address a gap in existing research: the creative nature of what researchers do has been insufficiently investigated. The model is expected to guide future investigations.
We then use in-context presentations of dynamically extracted semantic information to (1) address the issues of digression and disorientation, which arise in citation chaining and exploratory browsing, and (2) provide contextual information for researchers' curation of prior work. The implemented interface, Metadata In-Context Explorer (MICE), maintains context while allowing new information to be brought into and integrated with the current context, reducing the need to switch between documents and webpages. A study shows that MICE supports participants in their citation chaining processes, and thus supports scholarly research ideation. MICE is implemented with BigSemantics, a metadata type system and runtime that integrates data models, extraction rules, and presentation hints into types. BigSemantics operationalizes type-specific, dynamic extraction and rich presentation of semantic information (a.k.a. metadata) found on the web. The metadata type system, runtime, and MICE are expected to help build interfaces supporting dynamic exploratory search, browsing, and other creative tasks involving complex and interlinked semantics.
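The idea of a metadata type that bundles a data model, extraction rules, and presentation hints can be pictured as follows. This is an illustrative sketch only, not BigSemantics' actual API: the class, field names, and the selector-based extraction against a pre-parsed page dictionary are all hypothetical simplifications.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a metadata "type": a data model (fields),
# extraction rules (selectors mapping page content to fields),
# and presentation hints (how each field should be shown in context).
@dataclass
class MetadataType:
    name: str
    fields: list                 # field names in the data model
    extraction_rules: dict       # field -> selector used to scrape the page
    presentation_hints: dict = field(default_factory=dict)  # field -> hint

    def extract(self, page: dict) -> dict:
        # A real runtime would apply selectors to a live DOM; here we
        # simulate extraction against a pre-parsed page dictionary.
        return {f: page.get(sel) for f, sel in self.extraction_rules.items()}

# Example: a type for scholarly articles (names invented for illustration).
article = MetadataType(
    name="ScholarlyArticle",
    fields=["title", "authors"],
    extraction_rules={"title": "h1.title", "authors": "span.author"},
    presentation_hints={"title": "heading", "authors": "inline-list"},
)

page = {"h1.title": "Phrasing Bimanual Interaction", "span.author": "K. Webb"}
metadata = article.extract(page)
```

The point of the sketch is the separation of concerns: the same runtime can extract and present any web page for which a type exists, which is what makes type-specific, dynamic extraction possible.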
Phrasing Bimanual Interaction for Visual Design
Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery with sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities to support designers in putting together, that is phrasing, sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that in many commercial design tools require using menus and tool palettes—techniques originally designed for the mouse, not pen and touch.
We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe interesting forms of interaction that emerge, and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content, and conduct a controlled experiment to evaluate its efficacy. We explore the use of wearables to identify which user, and distinguish which hand, is touching, to support phrasing together direct-touch interactions on large displays. From design and development of the environment and both field and controlled studies, we derive a set of methods, based upon human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.
Co-located Collaborative Information-based Ideation through Embodied Cross-Surface Curation
We develop an embodied cross-surface curation environment to support co-located, collaborative information-based ideation. Information-based ideation (IBI) refers to tasks and activities in which people generate and develop significant new ideas while working with information. Curation is the process of gathering and assembling objects in order to express ideas. The linear media and separated screens of prior curation environments constrain expression.
This research utilizes information composition of rich bookmarks as the medium of curation. Visual representation of elements and ability to combine them in a freeform, spatial manner mimics how objects appear and can be manipulated in the physical world. Metadata of rich bookmarks leverages capabilities of the WWW.
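A rich bookmark, as described above, couples a visual clipping with web metadata and lives at a freeform position in a composition. The sketch below is a hypothetical illustration of such an element and its direct, object-like manipulation; it is not the environment's actual implementation, and all names and fields are invented.

```python
from dataclasses import dataclass

# Hypothetical sketch: a rich bookmark pairs a visual clipping with
# web metadata, positioned freely in a 2D composition space.
@dataclass
class RichBookmark:
    url: str         # source page on the WWW
    image: str       # visual clipping shown in the composition
    metadata: dict   # e.g. title, price, description derived from the page
    x: float = 0.0   # freeform spatial position in the composition
    y: float = 0.0

@dataclass
class Composition:
    elements: list

    def move(self, bookmark: RichBookmark, x: float, y: float) -> None:
        # Direct, physical-style manipulation: reposition like an object.
        bookmark.x, bookmark.y = x, y

sofa = RichBookmark(
    url="https://example.com/sofa",
    image="sofa.png",
    metadata={"title": "Mid-century sofa", "price": "$499"},
)
board = Composition(elements=[sofa])
board.move(sofa, 120.0, 80.0)
```

The design choice the sketch highlights is that spatial position is part of the bookmark itself, so arranging elements in the composition is itself an act of expression, mimicking how objects are arranged in the physical world.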
We equip participants with personal IBI environments, each on a mobile device, as a base for contributing to curation on a larger, collaborative surface. We hypothesize that physical representations for the elements and assemblage of curation, layered with physical techniques of interaction, will facilitate co-located IBI. We hypothesize that consistent physical and spatial representations of information and means for manipulating rich bookmarks on and across personal and collaborative surfaces will support IBI. We hypothesize that the small size and weight of personal devices will facilitate participants shifting their attention from their own work to each other and collaboration.
We evaluated the curation environment by inviting couples to participate in a home makeover design task in a living-room lab. We demonstrated that our embodied cross-surface curation environment supports creative thinking, facilitates communication, and stimulates engagement and creativity in collaborative IBI.
Grounded Visual Analytics: A New Approach to Discovering Phenomena in Data at Scale
We introduce Grounded Visual Analytics, a new method that integrates qualitative and quantitative approaches in order to help investigators discover patterns of human activity. Investigators who develop or study systems often use log data, which records the interactions their participants perform. Discovering and characterizing patterns in this data is important because it can help guide interactive computing system design. The new approach integrates Visual Analytics, a field that investigates Information Visualization and interactive machine learning, with Grounded Theory, a rigorous qualitative research method for developing a nuanced understanding of qualitative data. This dissertation defines and motivates the new approach, reviews relevant existing tools, and builds the Log Timelines system, a probe that we created in order to explore Grounded Visual Analytics. In a series of six case studies, we collaborate with a participant-investigator on their own project and data. Their use of Grounded Visual Analytics generates ideas about how future research can bridge the gap between qualitative and quantitative methods.
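One way to picture the integration of grounded-theory-style coding with quantitative log analysis is sketched below: an investigator attaches interpretive codes to raw logged events, then counts codes to surface patterns. This is an illustrative sketch with invented data and labels, not the Log Timelines implementation.

```python
from collections import Counter

# Raw interaction log, as a system might record it (invented data).
log = [
    {"t": 1, "action": "open_doc"},
    {"t": 2, "action": "copy"},
    {"t": 3, "action": "paste"},
    {"t": 4, "action": "open_doc"},
]

# Open coding: the investigator's qualitative, interpretive labels
# for the raw actions (labels invented for illustration).
codes = {"open_doc": "exploring", "copy": "curating", "paste": "curating"}

# Annotate each event with its code, then quantify the coded events.
coded = [dict(event, code=codes[event["action"]]) for event in log]
pattern = Counter(event["code"] for event in coded)
```

The qualitative step (choosing the codes) and the quantitative step (counting and visualizing them) feed each other: anomalies in the counts can prompt the investigator to revise the codes, which is the iterative loop the approach aims to support.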
Parametric BIM-based Design Review
This research addressed the need for a new design review technology and method to express the tangible and intangible qualities of architectural experience of parametric BIM-based design projects. The research produced an innovative presentation tool by which parametric design is presented systematically. Focus groups provided assessments of the tool to reveal the usefulness of a parametric BIM-based design review method.
The way in which we visualize architecture affects the way we design and perceive architectural form and performance. Contemporary architectural forms and systems are very complex, yet most architects who use Building Information Modeling (BIM) and generative design methods still embrace the two-dimensional 15th-century Albertian representational methods to express and review design projects. However, architecture cannot be fully perceived through a set of drawings that mediate our perception and evaluation of the built environment.
The systematic and conventional approach of traditional architectural representation, in paper-based and slide-based design reviews, is not able to visualize phenomenal experience or the inherent variation and versioning of parametric models. Pre-recorded walk-throughs with high-quality rendering and imaging have been in use for decades, but high-verisimilitude interactive walk-throughs are not commonly used in architectural presentations. The new generations of parametric and BIM systems allow for the quick production of design variations by varying design parameters and their relationships. However, there is a lack of tools capable of conducting design reviews that leverage the advantages of parametric and BIM design projects. Given the multitude of possibilities of in-game interface design, game engines provide an opportunity for the creation of an interactive, parametric, and performance-oriented experience of architectural projects with multiple design options.
This research has produced a concept for a dynamic presentation and review tool and method intended to meet the needs of parametric design, performance-based evaluation, and optimization of multi-objective design options. The concept is illustrated and tested using a prototype (Parametric Design Review, or PDR) based upon an interactive gaming environment equipped with a novel user interface that simultaneously engages the parametric framework, object parameters, multi-objective optimized design options and their performances with diagrammatic, perspectival, and orthographic representations. The prototype was presented to representative users in multiple focus group sessions. Focus group discussion data reveal that the proposed PDR interface was perceived to be useful if used for design reviews in both academic and professional practice settings.
Human-Centered Technologies for Inclusive Collection and Analysis of Public-Generated Data
Public engagement platforms such as social media, customer review websites, and public input solicitation efforts have seen a meteoric rise in popularity, striving to establish an inclusive environment for the public to share their thoughts, ideas, opinions, and experiences. Many decisions made at a personal, local, or national scale are fueled by data generated by the public. As such, inclusive collection, analysis, sensemaking, and utilization of public-generated data are crucial to successful decision-making processes. However, people often struggle to engage, participate, and share their opinions due to inaccessibility, the rigidity of traditional public engagement methods, and the lack of options for providing opinions while avoiding potential confrontations. Concurrently, data analysts and decision-makers grapple with the challenges of analyzing, making sense of, and acting on public-generated data, which include high dimensionality, the ambiguity of human language, and a lack of tools and techniques catered to their needs. Novel technological interventions are therefore necessary to enable the public to share their input without barriers and to allow decision-makers to capture, forage, peruse, and distill public-generated data into concrete, actionable insights.
The goal of this dissertation is to demonstrate how human-centered approaches that involve stakeholders in the design, development, and evaluation of tools and techniques can lead to inclusive, effective, and efficient approaches to public-generated data collection and analysis in support of informed decision-making. To that end, I first addressed the challenges of empowering the public to share their opinions by exploring two major opinion-sharing avenues: social media and public consultation. To learn more about people's social media experiences and challenges, I built two technology probes and conducted a qualitative exploratory study with 16 participants. I followed up this study by exploring the challenges of inclusive participation during public consultations such as town halls. Based on a formative study with 66 participants and 20 organizers, I designed and developed CommunityClick to enable reticent attendees to share their opinions silently and anonymously during town halls. Equipped with the knowledge and experience from these works, I designed, developed, and evaluated technologies and methods to facilitate and accelerate informed, data-driven decision-making based on increased public-generated data. Based on interviews with 14 analysts and decision-makers in the civic domain, I built a visual analytics system, CommunityPulse, that facilitates public input analysis by surfacing hidden insights, people's reflections, and priorities. Leveraging the lessons learned during this work, I created a visual text analytics system that supports serendipitous discovery and balanced analysis of textual data to help make informed decisions.
In this work, I contribute an understanding of how people collect and analyze public-generated data to fuel their decisions when they have increased exposure to alternative avenues for opinion-sharing. Through a series of human-centered studies, I highlight the challenges that inhibit inclusivity in opinion sharing and the shortcomings of existing methods that prevent decision-makers from accounting for comprehensive public input, including marginalized or unpopular opinions. To address these challenges, I designed, developed, and evaluated a collection of interactive systems including CommunityClick, CommunityPulse, and Serendyze. Through a rigorous set of evaluation strategies, including creativity sessions, controlled lab studies, in-the-wild deployment, and field experiments, I involved stakeholders to assess the effectiveness and utility of the built systems. Through the empirical evidence from these studies, I demonstrate how alternative designs for social media could enhance people's social media experiences and enable them to make new connections with others to share opinions. In addition, I show how CommunityClick can be used to enable reticent attendees at public consultations to share their opinions while avoiding unwanted confrontation, and to allow organizers to capture and account for silent feedback. I highlight how CommunityPulse allowed analysts and decision-makers to examine public input from multiple angles for accelerated analysis and more informed decision-making. Furthermore, I demonstrate how supporting serendipitous discovery and balanced analysis using Serendyze can lead to more informed, data-driven decision-making.
I conclude the dissertation with a discussion of future avenues for expanding this research, including the facilitation of multi-user collaborative analysis, the integration of multi-modal signals in the analysis of public-generated data, and potential adoption strategies for decision-support systems designed for inclusive collection and analysis of public-generated data.
Enhancing interaction in mixed reality
With continuous technological innovation, we observe mixed reality emerging from research labs into the mainstream. The arrival of capable mixed reality devices transforms how we are entertained, consume information, and interact with computing systems, with the most recent being able to present synthesized stimuli to any of the human senses and substantially blur the boundaries between the real and virtual worlds. In order to build expressive and practical mixed reality experiences, designers, developers, and stakeholders need to understand and meet its upcoming challenges. This research contributes a novel taxonomy for categorizing mixed reality experiences and guidelines for designing mixed reality experiences.
We present the results of seven studies examining the challenges and opportunities of mixed reality experiences, the impact of modalities and interaction techniques on the user experience, and how to enhance the experiences. We begin with a study determining user attitudes towards mixed reality in domestic and educational environments, followed by six research probes that each investigate an aspect of reality or virtuality. In the first, a levitating steerable projector enables us to investigate how the real world can be enhanced without instrumenting the user. We show that the presentation of in-situ instructions for navigational tasks leads to a significantly higher ability to observe and recall real-world landmarks. With the second probe, we enhance the perception of reality by superimposing information usually not visible to the human eye. In amplifying the human vision, we enable users to perceive thermal radiation visually. Further, we examine the effect of substituting physical components with non-functional tangible proxies or entirely virtual representations. With the third research probe, we explore how to enhance virtuality to enable a user to input text on a physical keyboard while being immersed in the virtual world. Our prototype tracked the user’s hands and keyboard to enable generic text input. Our analysis of text entry performance showed the importance and effect of different hand representations. We then investigate how to touch virtuality by simulating generic haptic feedback for virtual reality and show how tactile feedback through quadcopters can significantly increase the sense of presence. Our final research probe investigates the usability and input space of smartphones within mixed reality environments, pairing the user’s smartphone as an input device with a secondary physical screen.
Based on our learnings from these individual research probes, we developed a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. The taxonomy is based on the human sensory system and human capabilities of articulation. We showcase its versatility, and set our research probes into perspective, by organizing them inside the taxonomic space. The design guidelines are divided into user-centered and technology-centered guidelines. It is our hope that these will contribute to the bright future of mixed reality systems while emphasizing the new underlying interaction paradigm.