    Exploring The Impact Of Configuration And Mode Of Input On Group Dynamics In Computing

    Objectives: Large displays and new technologies for interacting with computers offer a rich area for the development of new tools to facilitate collaborative concept mapping activities. In this thesis, WiiConcept is described: a tool designed to allow the use of multiple WiiRemotes for the collaborative creation of concept maps, with and without gestures. A subsequent investigation of participants' use of the system considers the effect of single versus multiple input streams, with and without gestures, on group concept-mapping process outcomes and interactions at a large display.

    Methods: Data are presented from an exploratory study of twenty-two students who used the tool. Half of the pairs used two WiiRemotes, while the remainder used one. All pairs created one map without gestures and one map with gestures. Data about their maps, interactions and responses to the tool were collected.

    Results: Analysis of coded transcripts indicates that one controller afforded higher levels of interaction, with the use of gestures also increasing the number of interactions seen. There were significantly more interactions in the 'shows solidarity', 'gives orientation', and 'gives opinion' categories (defined by Bales' Interaction Process Analysis) when using one controller as opposed to two, and more interactions in the 'shows solidarity', 'tension release', 'gives orientation' and 'shows tension' categories when using gestures as opposed to not using them. There were no significant differences in the perceived dominance of individuals, as measured on the social dominance scales, for the amount of interaction displayed. However, there was a significant main effect of group conversational control score on the 'gives orientation' construct, with a higher number of interactions for low, mixed and high scores of this type when dyads had one controller as opposed to two, and a significant interaction effect of group conversational control score on the 'shows solidarity' construct, with a higher number of interactions for all scores of this type when dyads had one controller as opposed to two. The results also indicate that the number of controllers made no difference to the detail of the maps, and that all users found the tool useful for the collaborative creation of concept maps. At the same time, engaging in disagreement was related to the number of nodes created, with disagreement leading to more nodes.

    Conclusions: Use of one controller afforded higher levels of interaction, with gestures also increasing the number of interactions seen. If a particular type of interaction is associated with more nodes, there may be an argument for using only one controller with gestures enabled, to promote cognitive conflict within groups. All participants responded that the tool was relatively easy to use and engaging, which suggests that it could be integrated into collaborative concept-mapping activities, allowing for greater collaborative knowledge building and sharing due to the increased levels of interaction with one controller. As research has shown that concept mapping can be useful for promoting the understanding of complex ideas, adopting the WiiConcept tool as part of a small-group learning activity may lead to deeper levels of understanding. Additionally, the gesture results suggest that this mode of input does not affect the number of words, nodes, and edges created in a concept map. Further research over a longer period may see improvement with this form of interaction, with increased mastery of gestural movement leading to greater detail in conceptual mapping.
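
    To make the map-detail measures concrete, the sketch below shows a minimal concept-map model that attributes each node to the participant whose controller created it, so that word, node, and edge counts of the kind reported above can be computed per input stream. It is illustrative only, with hypothetical names, and is not the WiiConcept implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    author: str  # which participant's controller created the node

@dataclass
class ConceptMap:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)  # (from_id, to_id, linking phrase)

    def add_node(self, label, author):
        self.nodes.append(Node(label, author))
        return len(self.nodes) - 1  # node id

    def link(self, a, b, phrase=""):
        self.edges.append((a, b, phrase))

    def counts(self):
        # Map detail: node count, edge count, and total words on labels and links.
        words = sum(len(n.label.split()) for n in self.nodes)
        words += sum(len(p.split()) for _, _, p in self.edges)
        return {"nodes": len(self.nodes), "edges": len(self.edges), "words": words}

cmap = ConceptMap()
a = cmap.add_node("cognitive conflict", author="participant 1")
b = cmap.add_node("node creation", author="participant 2")
cmap.link(a, b, "leads to more")
print(cmap.counts())  # {'nodes': 2, 'edges': 1, 'words': 7}
```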

    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition software application, a set of gestures to be processed by a scripting application, and a navigation and selection application that is controlled by the gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
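
    The three-stage architecture described above (a vision recognizer feeding gesture events to a scripting layer, which drives a navigation and selection application) can be pictured as a small dispatch loop. The gesture names and actions below are assumptions for illustration, not the Ambient Gestures API.

```python
# Illustrative sketch of a gesture-to-action pipeline: a recognizer emits
# gesture names, a scripting layer maps them to actions, and audio feedback
# replaces a graphical user interface. All names here are hypothetical.

def navigate(direction):
    print(f"navigate: {direction}")

def select_item():
    print("select current item")

def audio_feedback(gesture):
    # Stand-in for the non-visual (audio) confirmation of a recognized gesture.
    print(f"(earcon for '{gesture}')")

# Scripting layer: gesture name -> application action.
BINDINGS = {
    "swipe_left":  lambda: navigate("previous"),
    "swipe_right": lambda: navigate("next"),
    "push":        select_item,
}

def on_gesture(gesture):
    """Called by the vision recognizer for each detected gesture."""
    action = BINDINGS.get(gesture)
    if action is None:
        return  # unknown gestures are ignored rather than raising a GUI
    audio_feedback(gesture)
    action()

on_gesture("swipe_right")  # (earcon for 'swipe_right') then: navigate: next
```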

    Pervasive Displays Research: What's Next?

    Reports on the 7th ACM International Symposium on Pervasive Displays, which took place from June 6 to 8, 2018, in Munich, Germany.

    GazeDrone: Mobile Eye-Based Interaction in Public Space Without Augmenting the User

    Gaze interaction holds a lot of promise for seamless human-computer interaction. However, current wearable mobile eye trackers require user augmentation that negatively impacts natural user behavior, while remote trackers require users to position themselves within a confined tracking range. We present GazeDrone, the first system that combines a camera-equipped aerial drone with a computational method to detect sidelong glances for spontaneous (calibration-free) gaze-based interaction with surrounding pervasive systems (e.g., public displays). GazeDrone does not require augmenting each user with on-body sensors and allows interaction from arbitrary positions, even while moving. We demonstrate that drone-supported gaze interaction is feasible and accurate for certain movement types. It is well perceived by users, in particular while interacting from a fixed position as well as while moving orthogonally or diagonally to a display. We present design implications and discuss opportunities and challenges for drone-supported gaze interaction in public.
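
    As a rough illustration of calibration-free sidelong-glance detection of the kind GazeDrone performs, the heuristic below flags a glance when the pupil sits near either eye corner in a camera image. The landmark inputs and threshold are assumptions for illustration, not the paper's method.

```python
# Hypothetical heuristic: a pupil close to either eye corner suggests the
# eyes, rather than the head, are turned toward a target such as a display.

def glance_ratio(pupil_x, inner_x, outer_x):
    """Normalized pupil position within the eye: 0 = inner corner, 1 = outer."""
    width = outer_x - inner_x
    if width == 0:
        raise ValueError("degenerate eye landmarks")
    return (pupil_x - inner_x) / width

def is_sidelong_glance(ratio, threshold=0.25):
    """Glance detected when the pupil is within `threshold` of either corner."""
    return ratio < threshold or ratio > 1.0 - threshold

r = glance_ratio(pupil_x=12.0, inner_x=10.0, outer_x=30.0)
print(r, is_sidelong_glance(r))  # 0.1 True
```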

    Assessing the effectiveness of direct gesture interaction for a safety critical maritime application

    Multi-touch interaction, in particular multi-touch gesture interaction, is widely believed to give a more natural interaction style. We investigated the utility of multi-touch interaction in the safety-critical domain of maritime dynamic positioning (DP) vessels. We conducted initial paper prototyping with domain experts to gain an insight into natural gestures; we then conducted observational studies aboard a DP vessel during operational duties and two rounds of formal evaluation of prototypes, the second on a motion-platform ship simulator. Despite following a careful user-centred design process, the final results show that traditional touch-screen button and menu interaction was quicker and less error-prone than gestures. Furthermore, the moving environment accentuated this difference, and we observed initial use problems and handedness asymmetries with some multi-touch gestures. On the positive side, our results showed that users were able to suspend gestural interaction more naturally, thus improving situational awareness.

    Interaction With Tilting Gestures In Ubiquitous Environments

    In this paper, we introduce a tilting interface that controls direction-based applications in ubiquitous environments. A tilt interface is useful for situations that require remote and quick interactions or that are executed in public spaces. We explored the proposed tilting interface with different application types and classified the tilting interaction techniques. Augmenting objects with sensors can potentially address the lack of intuitive and natural input devices in ubiquitous environments. We conducted an experiment to test the usability of the proposed tilting interface and to compare it with conventional input devices and hand gestures. The results showed that tilt gestures outperformed hand gestures in terms of speed, accuracy, and user satisfaction.
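
    A common way to implement such tilt input, sketched below under assumed axis conventions and thresholds (not taken from the paper), is to convert accelerometer readings into roll and pitch angles and map the dominant tilt beyond a dead zone to a directional command.

```python
# Hypothetical threshold-based tilt-to-direction mapping, assuming a 3-axis
# accelerometer reading in g. Axis conventions and thresholds are illustrative.
import math
from typing import Optional

DEAD_ZONE_DEG = 10.0  # ignore small tilts so the neutral pose is stable

def tilt_to_command(ax, ay, az) -> Optional[str]:
    """Return 'left'/'right'/'forward'/'back' for the dominant tilt axis."""
    roll = math.degrees(math.atan2(ay, az))                     # side-to-side tilt
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))   # fore-aft tilt
    if max(abs(roll), abs(pitch)) < DEAD_ZONE_DEG:
        return None  # inside the dead zone: no command
    if abs(roll) >= abs(pitch):
        return "right" if roll > 0 else "left"
    return "forward" if pitch > 0 else "back"

print(tilt_to_command(0.0, 0.3, 0.95))  # right (rolled ~17.5 degrees)
print(tilt_to_command(0.05, 0.0, 1.0))  # None (within the dead zone)
```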

    Which One is Me?: Identifying Oneself on Public Displays

    While user representations are extensively used on public displays, it remains unclear how well users can recognize their own representation among those of surrounding users. We study the most widely used representations: abstract objects, skeletons, silhouettes and mirrors. In a prestudy (N=12), we identify five strategies that users follow to recognize themselves on public displays. In a second study (N=19), we quantify the users' recognition time and accuracy with respect to each representation type. Our findings suggest that there is a significant effect of (1) the representation type, (2) the strategies performed by users, and (3) the combination of both on recognition time and accuracy. We discuss the suitability of each representation for different settings and provide specific recommendations as to how user representations should be applied in multi-user scenarios. These recommendations guide practitioners and researchers in selecting the representation that best fits the deployment's requirements and the user strategies that are feasible in that environment.

    EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays

    While gaze holds a lot of promise for hands-free interaction with public displays, remote eye trackers with their confined tracking box restrict users to a single stationary position in front of the display. We present EyeScout, an active eye tracking system that combines an eye tracker mounted on a rail system with a computational method to automatically detect and align the tracker with the user's lateral movement. EyeScout addresses key limitations of current gaze-enabled large public displays by offering two novel gaze-interaction modes for a single user: in "Walk then Interact" the user can walk up to an arbitrary position in front of the display and interact, while in "Walk and Interact" the user can interact even while on the move. We report on a user study that shows that EyeScout is well perceived by users, extends a public display's sweet spot into a sweet line, and reduces gaze-interaction kick-off time to 3.5 seconds, a 62% improvement over state-of-the-art solutions. We discuss sample applications that demonstrate how EyeScout can enable position- and movement-independent gaze interaction with large public displays.
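
    The rail alignment EyeScout performs can be pictured as a simple control loop that keeps the tracker carriage under the user's lateral position. The proportional controller below is a hypothetical sketch, with gains and limits chosen for illustration rather than taken from the system.

```python
# Hypothetical proportional controller for a rail-mounted eye tracker that
# follows a user's lateral (x) position. Gains, limits, and names are assumed.

K_P = 2.0          # proportional gain (1/s)
MAX_SPEED = 0.8    # carriage speed limit in m/s
DT = 0.05          # control period in seconds (20 Hz)

def step(carriage_x, user_x):
    """One control tick: move the carriage toward the user's x position."""
    error = user_x - carriage_x
    velocity = max(-MAX_SPEED, min(MAX_SPEED, K_P * error))  # clamp to limit
    return carriage_x + velocity * DT

x = 0.0
for _ in range(40):          # user standing 1.5 m to the right of the carriage
    x = step(x, user_x=1.5)
print(round(x, 2))           # ~1.39 after 2 s, still converging on 1.5
```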