Noise Pollution Exhibit at the EcoTarium
Our project team designed, developed, and tested an educational interactive display about noise pollution as part of the new City Science exhibit at the EcoTarium Museum of Science and Nature. Collaborating with the museum staff, we created a Java application to run on a touch screen computer. After we tested the application and developed it into a working prototype, we made recommendations for further ways to enhance the exhibit. Our results show that the exhibit successfully demonstrated how people's responses to noise are subjective, and visitors who used the exhibit often engaged each other in dialogue concerning noise pollution.
Evaluating Visual Conversational Agents via Cooperative Human-AI Games
As AI continues to advance, human-AI teams are inevitable. However, progress
in AI is routinely measured in isolation, without a human in the loop. It is
crucial to benchmark progress in AI, not just in isolation, but also in terms
of how it translates to helping humans perform certain tasks, i.e., the
performance of human-AI teams.
In this work, we design a cooperative game - GuessWhich - to measure human-AI
team performance in the specific context of the AI being a visual
conversational agent. GuessWhich involves live interaction between the human
and the AI. The AI, which we call ALICE, is provided an image which is unseen
by the human. Following a brief description of the image, the human questions
ALICE about this secret image to identify it from a fixed pool of images.
We measure performance of the human-ALICE team by the number of guesses it
takes the human to correctly identify the secret image after a fixed number of
dialog rounds with ALICE. We compare performance of the human-ALICE teams for
two versions of ALICE. Our human studies suggest a counterintuitive trend -
that while AI literature shows that one version outperforms the other when
paired with an AI questioner bot, we find that this improvement in AI-AI
performance does not translate to improved human-AI performance. This suggests
a mismatch between benchmarking of AI in isolation and in the context of
human-AI teams.
Comment: HCOMP 201
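The team metric described in the abstract — how many guesses the human needs to identify the secret image after a fixed number of dialog rounds — can be sketched as a simple scoring function. This is an illustrative sketch only; the pool contents and function names are hypothetical, not the paper's implementation.

```python
# Illustrative sketch of the GuessWhich-style team metric: after the dialog
# rounds, the human ranks the candidate pool and guesses in order; the score
# is the number of guesses needed to reach the secret image (lower is better).
def guesses_to_identify(ranked_pool, secret_image):
    """Return the 1-based number of guesses needed to find the secret image,
    given the human's ranking of the candidate pool (best guess first)."""
    for n_guesses, candidate in enumerate(ranked_pool, start=1):
        if candidate == secret_image:
            return n_guesses
    raise ValueError("secret image not in candidate pool")

# Hypothetical example: a pool of 5 image IDs ranked after dialog with the agent.
ranking = ["img_3", "img_1", "img_4", "img_0", "img_2"]
print(guesses_to_identify(ranking, "img_4"))  # -> 3
```

Averaging this count over many human-AI pairs gives a single team-performance number that can be compared across agent versions.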
Personalized Profiling and Self-Organization as strategies for the formation and support
Mobile and wireless technologies are globally available, so institutions must also think globally.
This means not simply making learning objects available to international students, but inventing
ways to engage students from any geographical location with these objects in such a way that the
outcome is knowledge.
This paper explores the applicability of personalized profiling as a means to link students
studying similar disciplines to each other, and proposes a self-organizing 'living systems' model
that aims to overcome present impediments to the creation of sustainable, 'open', m-learning
communities.
'Open' m-learning communities are characterized by their ability to self-organize and adapt to
changing circumstances. Their conceptual framework is systems theoretical, which draws on
understandings about the natural world from the biological and physical sciences. Concepts
such as 'open structure', 'self-organization' and 'living systems' have currency in the
discourses of information and computing sciences (i.e., the research fields of artificial life and
artificial intelligence). In the biological scientific view, the sole purpose of a living organism is
to renew itself by opening itself up to its environment, or to another structure. In natural
scientific terms, an organism that is in equilibrium is a dead organism. Living organisms
continually maintain themselves in a state far from equilibrium, which is the state of life.
The transfer of understandings about the operations of living systems is evident in the
approaches of computer game designers and programmers, where âswarmingâ and other
empathetic behaviours of organisms such as bees, fireflies and even stem cells, provide the basis
for the design of software to support massively multi-user on-line gaming. This new knowledge
may have applicability in new approaches to m-learning, for example, through learner
self-profiling and the automated matching of learner profiles to other learners and learning
opportunities. The first step in this process is that of understanding how the specificities of
emerging mobile and wireless technologies might facilitate open m-learning and the formation
of m-learning communities.
Griffith University, Queensland, Australia
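The automated matching of learner profiles described above could, in principle, be as simple as a similarity score over weighted profile features. The sketch below is a minimal illustration; the feature names and the choice of cosine similarity are assumptions, not part of the paper.

```python
import math

# Minimal sketch: represent each learner profile as a {feature: weight}
# mapping of interests and match learners by cosine similarity.
# The profile fields below are hypothetical.
def cosine_similarity(p, q):
    """Cosine similarity between two {feature: weight} profiles."""
    common = set(p) & set(q)
    dot = sum(p[f] * q[f] for f in common)
    norm = (math.sqrt(sum(w * w for w in p.values()))
            * math.sqrt(sum(w * w for w in q.values())))
    return dot / norm if norm else 0.0

def best_match(profile, others):
    """Return the name of the learner whose profile is most similar."""
    return max(others, key=lambda name: cosine_similarity(profile, others[name]))

learners = {
    "amy": {"databases": 1.0, "mobile": 0.5},
    "ben": {"networks": 1.0, "security": 0.8},
}
print(best_match({"mobile": 1.0, "databases": 0.3}, learners))  # -> amy
```

In a self-organizing community, such pairwise scores could drive the ongoing, automated linking of learners to peers and learning opportunities without central coordination.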
Shared Input Multimodal Mobile Interfaces: Interaction Modality Effects on Menu Selection in Single-task and Dual-task Environments
ABSTRACT Audio and visual modalities are two common output channels in the user interfaces embedded in today's mobile devices. However, these user interfaces typically center on the visual modality as the primary output channel, with audio output serving a secondary role. This paper argues for an increased need for shared input multimodal user interfaces for mobile devices. A shared input multimodal interface can be operated independently using a specific output modality, leaving users to choose the preferred method of interaction in different scenarios. We evaluate the value of a shared input multimodal menu system in both a single-task desktop setting and in a dynamic dual-task setting, in which the user was required to interact with the shared input multimodal menu system while driving a simulated vehicle. Results indicate that users were faster at locating a target item in the menu when visual feedback was provided in the single-task desktop setting, but in the dual-task driving setting, visual output presented a significant source of visual distraction that interfered with driving performance. In contrast, auditory output mitigated some of the risk associated with menu selection while driving. A shared input multimodal interface allows users to take advantage of multiple feedback modalities properly, providing a better overall experience
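The core idea of a shared-input interface — one selection mechanism whose output channel is pluggable — can be sketched as follows. The class and method names are hypothetical illustrations, not the paper's system.

```python
# Minimal sketch of a shared-input menu: the same input-driven selection
# logic feeds whichever output renderer (visual or auditory) is plugged in.
class SharedInputMenu:
    def __init__(self, items, render):
        self.items = items
        self.index = 0
        self.render = render  # output-modality callback (visual or audio)

    def move(self, step):
        """Advance the selection and announce it via the current modality."""
        self.index = (self.index + step) % len(self.items)
        self.render(self.items[self.index])

    def select(self):
        return self.items[self.index]

spoken = []  # stand-in for a text-to-speech output channel
menu = SharedInputMenu(["Call", "Messages", "Maps"], render=spoken.append)
menu.move(1)
menu.move(1)
print(menu.select())  # -> Maps
```

Because the input handling is identical across modalities, a user can switch from visual to auditory feedback (e.g., when driving) without relearning the interaction.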
MacNews : an interactive news retrieval service for the Macintosh
Thesis (B.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1989. Includes bibliographical references (leaf 37). By David Andrew Segal, B.S.
- …