Design Strategies for Adaptive Social Composition: Collaborative Sound Environments
In order to develop successful collaborative music systems a variety
of subtle interactions need to be identified and integrated. Gesture
capture, motion tracking, real-time synthesis, environmental
parameters and ubiquitous technologies can each be effectively used
for developing innovative approaches to instrument design, sound
installations, interactive music and generative systems. Current
solutions tend to prioritise one or more of these approaches, refining
a particular interface technology, software design or compositional
approach developed for a specific composition, performer or
installation environment. Within this diverse field a group of novel
controllers, described as "Tangible Interfaces", have been developed.
These are intended for use by novices and in many cases follow a
simple model of interaction, controlling synthesis parameters through
basic user actions. Other approaches offer sophisticated
compositional frameworks, but many of these are idiosyncratic and
highly personalised. As such they are difficult to engage with and
ineffective for groups of novices. The objective of this research is to
develop effective design strategies for implementing collaborative
sound environments using key terms and vocabulary drawn from the
available literature. This is articulated by combining an empathic
design process with controlled sound perception and interaction
experiments. The identified design strategies have been applied to
the development of a new collaborative digital instrument. A range
of technical and compositional approaches was considered to define
this process, which can be described as Adaptive Social Composition.
Dan Livingston
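The "simple model of interaction" described above, in which synthesis parameters are driven directly by user actions, can be sketched as a plain mapping function. All names and parameter ranges below are hypothetical illustrations, not taken from the thesis:

```python
from dataclasses import dataclass

@dataclass
class Gesture:
    """A captured user action: normalised position and speed in [0, 1]."""
    x: float
    y: float
    speed: float

def map_gesture_to_synthesis(g: Gesture) -> dict:
    """Map a simple user action to synthesis parameters.

    Illustrative ranges only: pitch and cutoff in Hz, amplitude in 0..1.
    """
    return {
        "pitch_hz": 110.0 + g.x * (880.0 - 110.0),  # horizontal position -> pitch
        "cutoff_hz": 200.0 + g.y * 7800.0,          # vertical position -> filter cutoff
        "amplitude": min(1.0, g.speed),             # faster motion -> louder
    }

params = map_gesture_to_synthesis(Gesture(x=0.5, y=0.5, speed=0.3))
```

In a collaborative setting, one such mapping per participant would feed a shared synthesis engine; the design question the thesis addresses is how those individual mappings combine socially.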
The student-produced electronic portfolio in craft education
The authors studied primary school students' experiences of using an electronic portfolio in their craft education over four years. Stimulated recall interviews were used to collect user experiences, and qualitative content analysis was used to analyse the collected data. The results indicate that the electronic portfolio was experienced as a multipurpose tool to support learning: it makes the learning process visible and in that way helps to focus on and improve the quality of learning. © ISLS. Peer reviewed.
Investigating Natural User Interfaces (NUIs): technologies and interaction in the context of accessibility
Advisor: Maria Cecília Calani Baranauskas. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação.
Abstract: Natural User Interfaces (NUIs) represent a new interaction paradigm, with the promise of being more intuitive and easier to use than its predecessor, which relies on mouse and keyboard. In a context where technology is becoming ever more invisible and pervasive, not only the number but also the diversity of people participating in this context is increasing. It must therefore be studied how this new interaction paradigm can, in fact, be accessible to all the people who may use it in their daily routines. Furthermore, it is also necessary to characterize the paradigm itself, to understand what makes it, in fact, natural. In this thesis we therefore present the path we took in search of these two answers: how to characterize NUIs in the current technological context, and how to make NUIs accessible to all. To do so, we first present a systematic literature review of the state of the art. We then show a set of heuristics for the design and evaluation of NUIs, which were applied in practical case studies. Afterwards, we structure the ideas of this research within the artifacts of Organizational Semiotics, obtaining insights into how to design NUIs for accessibility, whether through Universal Design or by proposing Assistive Technologies. We then present three case studies with NUI systems that we designed. From these case studies, we expanded our theoretical framework and were finally able to identify three elements that sum up our characterization of NUI: differences, affordances, and enaction. Doutorado, Ciência da Computação; 160911/2015-0; CAPES; CNP
Using Tangible Interaction and Virtual Reality to Support Spatial Perspective Taking Ability
According to several large-scale and longitudinal studies, spatial ability, one of the primary mental abilities, has been shown to be a significant predictor of STEM learning (Science, Technology, Engineering, and Mathematics) and career success. Frameworks in HCI (Human-Computer Interaction) and TEI (Tangible and Embodied Interaction) also indicate that the spatial aspects of interaction are a common design theme for interfaces using emerging technologies. However, currently very few interactive systems (using TEI) are designed around a target spatial ability, and TEI's direct effects on spatial ability are not well investigated. Meanwhile, a growing body of research from the cognitive sciences, such as embodied cognition and Common Coding Theory, shows that physical movements can enhance cognition in aspects that involve spatial thinking. Also, virtual reality (VR) affords better 3D perception of digital environments, and provides design opportunities to engage users with spatial tasks that may not otherwise be imagined or achieved in the real world.
This research describes how we designed and built the system TASC (Tangibles for Augmenting Spatial Cognition), which combines body movement tracking and tangible objects with VR. We recap our design process and design rationales, along with how the finalized system was designed to enhance embodiment as a means to activate, support, engage, and hopefully augment spatial perspective taking ability. We conducted a user study with qualitative and quantitative evaluation methods. Respectively, the qualitative evaluation aimed to understand how the participants used the system; the quantitative evaluation was a multi-condition experiment with pre-tests and post-tests used to investigate if and how the system could improve spatial perspective taking ability. We built the digital pre/post-tests based on PTSOT (Perspective Taking/Spatial Orientation Test) (Hegarty, Kozhevnikov, & Waller, 2008).
In total, the study involved 52 participants: 6 (3M/3F) in the pilot study and 46 in the main study (3 conditions, around 15 per condition, each overall gender-balanced). The qualitative analysis focused on the VR-TEI condition (the "main system"). Using thematic analysis of the video clips and written notes (both taken during the participants' interaction) and audio clips (recorded during the post-interaction interview), we synthesized the qualitative results into 4 themes: (1) Spatial strategies: akin but unique; (2) The use of gestures and verbalization; (3) Positive experience with the system; (4) The potentials of the system. The quantitative statistical analysis, using ANOVA and t-tests for the 3-condition experiment, showed that every condition yielded perspective-taking improvement from taking the test twice; however, only the VR-TEI condition led to a statistically significant improvement. We conclude the research with a discussion of future possibilities around these themes: (1) The role of embodiment; (2) Further explorations of intermediate conditions; (3) A deeper look at sample size and validity; (4) Designing and evaluating TEIs for other spatial abilities; (5) Integration with STEM curricula. The main contribution of this dissertation is that it reports how a VR-TEI system can be designed, built, and evaluated for a target spatial ability. We hope this research also contributes to bridging knowledge gaps between interaction design, cognitive science, and STEM learning.
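The pre/post comparison described above can be sketched as a paired t-test on each participant's two test scores. The sketch below uses only the standard library; the scores are invented for illustration and are not data from the study:

```python
import math
from statistics import mean, stdev

def paired_t(pre: list[float], post: list[float]) -> tuple[float, int]:
    """Paired t statistic for one group's pre/post test scores.

    Returns (t, degrees of freedom); compare t against the t-distribution
    with n - 1 degrees of freedom for significance.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical PTSOT-style angular-error scores (lower is better):
pre = [38.0, 41.5, 29.0, 35.0, 44.0, 31.5]
post = [30.5, 33.0, 27.5, 28.0, 36.0, 26.0]
t, df = paired_t(pre, post)  # strongly negative t: errors decreased
```

A multi-condition design like the one reported would additionally compare the improvement scores across conditions (e.g. with ANOVA) rather than testing each condition in isolation.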
A Tangible User Interface for Interactive Data Visualisation
Information visualisation (infovis) tools are integral for the analysis of large abstract data, where interactive processes are adopted to explore data, investigate hypotheses and detect patterns. New technologies exist beyond post-windows, icons, menus and pointing (WIMP), such as tangible user interfaces (TUIs). TUIs expand on the affordance of physical objects and surfaces to better exploit motor and perceptual abilities and allow for the direct manipulation of data.
TUIs have rarely been studied in the field of infovis. The overall aim of this thesis is to design, develop and evaluate a TUI for infovis, using expression quantitative trait loci (eQTL) as a case study. The research began with eliciting eQTL analysis requirements, which identified high-level tasks and themes for quantitative genetics and eQTL that were explored in a graphical prototype.
The main contributions of this thesis are as follows. First, a rich set of interface design options for touch and an interactive surface with exclusively tangible objects were explored for the infovis case study. This work includes characterising touch and tangible interactions to understand how best to use them at various levels of metaphoric representation and embodiment. These designs were then compared to identify a set of options for a TUI that exploits the advantages of touch and tangible interaction.
Existing research shows computer vision commonly utilised as the TUI technology of choice. This thesis contributes a rigorous technical evaluation of another promising technology, micro-controllers and sensors, as well as of computer vision. However, the findings showed that some sensors used with micro-controllers lack the required capability, so computer vision was adopted for the development of the TUI.
The majority of TUIs for infovis are presented as technical developments or design case studies, but lack formal evaluation. The last contribution of this thesis is a quantitative and qualitative comparison of the TUI and a touch UI for the infovis case study. Participants adopted more effective strategies to explore patterns and performed fewer unnecessary analyses with the TUI, which led to significantly faster performance. Contrary to common belief, bimanual interactions were infrequently used with both interfaces, while epistemic actions were strongly promoted by the TUI and contributed to participants' efficient exploration strategies.
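A common pattern behind such TUIs, though not necessarily the mapping used in this thesis, is to translate a tracked physical object's pose on the surface into a query over the visualised data. A minimal sketch, with all names and ranges hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TangiblePose:
    """Tracked state of one physical object on the interactive surface."""
    x: float      # normalised surface position, 0..1
    angle: float  # rotation in degrees, 0..360

def pose_to_filter(pose: TangiblePose, lo: float, hi: float) -> tuple[float, float]:
    """Map a tangible's pose to a numeric range filter over a data attribute.

    Position selects the centre of the range; rotation selects its width,
    so sliding and twisting the object directly manipulates the query.
    """
    span = hi - lo
    centre = lo + pose.x * span
    width = (pose.angle / 360.0) * span
    return max(lo, centre - width / 2), min(hi, centre + width / 2)

low, high = pose_to_filter(TangiblePose(x=0.5, angle=90.0), lo=0.0, hi=100.0)
```

Epistemic actions of the kind the thesis observed (moving objects to think, not just to act) come for free in such a design, since every manipulation immediately re-queries the data.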
Modular Systems For Fabrication: Toward A Collaborative Partnership Between Humans and Machines
In recent decades, considerable advances have allowed more people to use digital fabrication techniques such as 3D printing to create personal artifacts. Rather than collaborating with humans to create a design, however, current fabrication machines mostly follow humans' commands as a single input step and output a physical object as a batch process. This way of working presents three big challenges: end-users without specialist knowledge cannot fully appreciate the advances of digital fabrication; machines cannot understand people's design activities during a creative process that involves improvisation; and fabrication machines are not designed to be collaborative and support individuals' creative processes with in-situ designs.
In this dissertation, I introduce research to answer the overarching question: “How can humans and machines form a collaborative partnership in a creative process?” I investigate three elements and their influences at the intersections of HCI, digital fabrication, and collaborative systems to address these three main challenges. I present interactive design tools for end-users to design complex moveable objects (Fabrication-HCI), empirical studies to understand individuals’ design abilities and the remaining challenges in developing collaborative fabrication machines (HCI-Collaborative Systems), and a collaborative 3D printer I built to enable close interactions between users and machines through multiple communication channels and various workflows (Fabrication-Collaborative Systems).
I conclude my dissertation with a vision of an intelligent fabrication agent towards a future of people and machines augmenting each other. I propose new research programs for developing an intelligent machine that detects and predicts human behaviors in creative processes, in order to provide various types of assistance depending on the context, such as guidance, recommendation, and teaching new skills.