
    Jointly structuring triadic spaces of meaning and action: book sharing from 3 months on

    This study explores the emergence of triadic interactions through the example of book sharing. As part of a naturalistic study, 10 infants were visited in their homes from 3 to 12 months. We report that (1) book sharing as a form of infant-caregiver-object interaction occurred from as early as 3 months. Using qualitative video analysis at a micro-level, adapting methodologies from conversation and interaction analysis, we demonstrate that caregivers and infants practiced book sharing in a highly co-ordinated way, with caregivers carving out interaction units and shaping actions into action arcs, and infants actively participating and co-ordinating their attention between mother and object from the beginning. We also (2) sketch a developmental trajectory of book sharing over the first year and show that the quality and dynamics of book sharing interactions underwent considerable change as the ecological situation was transformed in parallel with the infants' development of attention and motor skills. Social book sharing interactions reached an early peak at 6 months, with the infants becoming more active in the coordination of attention between caregiver and book. From 7 to 9 months, the infants shifted their interest largely to solitary object exploration, in parallel with newly emerging postural and object manipulation skills, disrupting the social coordination and the cultural frame of book sharing. In the period from 9 to 12 months, social book interactions resurfaced, as infants began to effectively integrate object actions within the socially shared activity. In conclusion, to fully understand the development and qualities of triadic cultural activities such as book sharing, we need to look especially at the hitherto overlooked early period from 4 to 6 months, and investigate how shared spaces of meaning and action are structured together in and through interaction, creating the substrate for continuing cooperation and cultural learning.

    MOG 2007: Workshop on Multimodal Output Generation: CTIT Proceedings

    This volume brings together a wide variety of work offering different perspectives on multimodal generation. Two different strands of work can be distinguished: half of the gathered papers present current work on embodied conversational agents (ECAs), while the other half presents current work on multimedia applications. Two general research questions are shared by all: what output modalities are most suitable in which situation, and how should different output modalities be combined?

    Framing Movements for Gesture Interface Design

    Gesture interfaces are an attractive avenue for human-computer interaction, given the range of expression that people are able to engage when gesturing. Consequently, there is a long running stream of research into gesture as a means of interaction in the field of human-computer interaction. However, most of this research has focussed on the technical challenges of detecting and responding to people’s movements, or on exploring the interaction possibilities opened up by technical developments. There has been relatively little research on how to actually design gesture interfaces, or on the kinds of understandings of gesture that might be most useful to gesture interface designers. Running parallel to research in gesture interfaces, there is a body of research into human gesture, which would seem a useful source from which to draw knowledge that could inform gesture interface design. However, there is a gap between the ways that ‘gesture’ is conceived of in gesture interface research and in gesture research. In this dissertation, I explore this gap and reflect on the appropriateness of existing research into human gesturing for the needs of gesture interface design. Through a participatory design process, I designed, prototyped and evaluated a gesture interface for the work of the dental examination. Against this grounding experience, I undertook an analysis of the work of the dental examination, with particular focus on the roles that gestures play in the work, in order to compare and discuss existing gesture research. I take the work of the gesture researcher McNeill as a point of focus, because he is widely cited within the gesture interface research literature. I show that although McNeill’s research into human gesture can be applied to some important aspects of the gestures of dentistry, there remains a range of gestures that McNeill’s work does not deal with directly, yet which play an important role in the work and could usefully be responded to with gesture interface technologies. I discuss some other strands of gesture research, which are less widely cited within gesture interface research, but offer a broader conception of gesture that would be useful for gesture interface design. Ultimately, I argue that the gap in conceptions of gesture between gesture interface research and gesture research is an outcome of the different interests that each community brings to bear on the research. What gesture interface research requires is attention to the problems of designing gesture interfaces for authentic contexts of use, and assessment of existing theory in light of this.

    Directional adposition use in English, Swedish and Finnish

    Directional adpositions such as to the left of describe where a Figure is in relation to a Ground. English and Swedish directional adpositions refer to the location of a Figure in relation to a Ground, whether both are static or in motion. In contrast, the Finnish directional adpositions edellä (in front of) and jäljessä (behind) solely describe the location of a moving Figure in relation to a moving Ground (Nikanne, 2003). When using directional adpositions, a frame of reference must be assumed for interpreting their meaning. For example, the meaning of to the left of in English can be based on a relative (speaker- or listener-based) reference frame or an intrinsic (object-based) reference frame (Levinson, 1996). When a Figure and a Ground are both in motion, it is possible for a Figure to be described as being behind or in front of the Ground even if neither has intrinsic features. As shown by Walker (in preparation), there are good reasons to assume that in the latter case a motion-based reference frame is involved. This means that if Finnish speakers use edellä (in front of) and jäljessä (behind) more frequently in situations where both the Figure and Ground are in motion, a difference in reference frame use between Finnish on the one hand and English and Swedish on the other could be expected. We asked native English, Swedish and Finnish speakers to select adpositions from a language-specific list to describe the location of a Figure relative to a Ground when both were shown to be moving on a computer screen. We were interested in any differences between Finnish, English and Swedish speakers. All languages showed a predominant use of directional spatial adpositions referring to the lexical concepts TO THE LEFT OF, TO THE RIGHT OF, ABOVE and BELOW. There were no differences between the languages in directional adposition use or reference frame use, including reference frame use based on motion. We conclude that despite differences in the grammars of the languages involved, and potential differences in reference frame system use, the three languages investigated encode Figure location in relation to Ground location in a similar way when both are in motion. Levinson, S. C. (1996). Frames of reference and Molyneux’s question: Crosslinguistic evidence. In P. Bloom, M. A. Peterson, L. Nadel & M. F. Garrett (Eds.), Language and Space (pp. 109-170). Cambridge, MA: MIT Press. Nikanne, U. (2003). How Finnish postpositions see the axis system. In E. van der Zee & J. Slack (Eds.), Representing direction in language and space. Oxford, UK: Oxford University Press. Walker, C. (in preparation). Motion encoding in language: the use of spatial locatives in a motion context. Unpublished doctoral dissertation, University of Lincoln, Lincoln, United Kingdom.

    The cockpit for the 21st century

    Interactive surfaces are a growing trend in many domains. As one possible manifestation of Mark Weiser’s vision of ubiquitous and disappearing computers in everyday objects, we see touch-sensitive screens in many kinds of devices, such as smartphones, tablet computers and interactive tabletops. More advanced concepts of these have been an active research topic for many years. This has also influenced automotive cockpit development: concept cars and recent market releases show integrated touchscreens, growing in size. To meet the increasing information and interaction needs, interactive surfaces offer context-dependent functionality in combination with a direct input paradigm. However, interfaces in the car need to be operable while driving. Distraction, especially visual distraction from the driving task, can lead to critical situations if the sum of attentional demand emerging from both primary and secondary tasks overextends the available resources. So far, a touchscreen requires a lot of visual attention since its flat surface does not provide any haptic feedback. There have been approaches to make direct touch interaction accessible while driving for simple tasks. Outside the automotive domain, for example in office environments, concepts for sophisticated handling of large displays have already been introduced. Moreover, technological advances lead to new characteristics for interactive surfaces by enabling arbitrary surface shapes. In cars, two main characteristics for upcoming interactive surfaces are largeness and shape. On the one hand, spatial extension is not only increasing through larger displays, but also by taking objects in the surroundings into account for interaction. On the other hand, the flatness inherent in current screens can be overcome by upcoming technologies, and interactive surfaces can therefore provide haptically distinguishable surfaces. This thesis describes the systematic exploration of large and shaped interactive surfaces and analyzes their potential for interaction while driving. To this end, different prototypes for each characteristic have been developed and evaluated in test settings suitable for their maturity level. Those prototypes were used to obtain subjective user feedback and objective data, and to investigate effects on driving and glance behavior as well as usability and user experience. As a contribution, this thesis provides an analysis of the development of interactive surfaces in the car. Two characteristics, largeness and shape, are identified that can improve the interaction compared to conventional touchscreens. The presented studies show that large interactive surfaces can provide new and improved ways of interaction both in driver-only and driver-passenger situations. Furthermore, studies indicate a positive effect on visual distraction when additional static haptic feedback is provided by shaped interactive surfaces. Overall, various non-exclusively applicable interaction concepts prove the potential of interactive surfaces for use in automotive cockpits, which is expected to be beneficial in other environments where visual attention needs to be focused on additional tasks as well.

    Interactive surfaces are spreading into more and more areas of everyday life. They are thus one possible manifestation of Mark Weiser’s vision of ubiquitous computers that disappear from our direct perception. Touch-sensitive surfaces are already in use in a wide range of everyday devices such as smartphones, tablets and interactive tabletops. For many years, researchers have been working to advance this technology so that its benefits can also be exploited in other areas, such as the interaction between humans and automobiles. With success: interactive user interfaces are now standard equipment in many vehicles. The installation of ever larger touchscreens integrated into the cockpits of concept cars shows that this development is still in full swing. Interactive surfaces allow context-sensitive content to be displayed flexibly and enable direct interaction with on-screen content. In this way, they are particularly well suited to meeting changing information and interaction needs. When user interfaces are deployed in the car, safe operability while driving is of particular importance. Visual distraction from the driving task in particular can lead to critical situations if primary and secondary tasks together demand more than the driver’s total available attention. Conventional touchscreens so far offer the driver only a flat surface that provides no haptic feedback, which is why operating them requires a great deal of visual attention. Various approaches allow the driver to use direct touch interaction for simple tasks while driving. Outside the automotive industry, for example for office workplaces, various concepts for more complex operation of large screens have already been presented. In addition, technological progress is leading to new possible forms of interactive surfaces and allows them to be shaped arbitrarily. For the next generation of interactive surfaces in the car, work is focused above all on modifying the characteristics of size and shape. The user interface is being extended not only through larger screens, but also by incorporating objects such as decorative trim into the interaction. At the same time, current technological developments remove the restriction to flat surfaces, so that future touchscreens can offer tactilely distinguishable structures. This dissertation describes the systematic investigation of large and non-flat interactive surfaces and analyzes their potential for interaction while driving. For each characteristic, several prototypes were developed and evaluated in test settings appropriate to their maturity level. In this way, subjective user feedback and objective data were collected, and the effects on driving and glance behavior as well as usability were investigated. This dissertation contributes an analysis of the development of interactive surfaces in the automotive domain. Furthermore, the characteristics of size and shape are investigated as means of improving interaction compared to conventional touchscreens. The studies conducted show that large surfaces can offer new and improved means of operation. In addition, a positive effect on visual distraction is observed when additional static haptic feedback is provided by non-flat surfaces. In summary, various interaction concepts that can be combined with one another demonstrate the potential of interactive surfaces for automotive use. The results can also be applied in other domains where visual attention is required for other tasks.

    Movement and force

    How – and why – do things move? How do we describe how they move? This chapter looks at ideas and activities concerning movement and force. It deals with two major issues: firstly, the ideas children have about motion and strategies for teaching about motion in the primary school program, including some discussion of the different contexts in which movement and force can be studied; secondly, the wider context of studying movement and force, linking it with technology and science as a human endeavour.

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI)-based automation technologies are planned to enhance crew safety through a reduced need for EVA, increase crew productivity through the reduction of routine operations, increase Space Station autonomy, and augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Online Instructors’ Gestures for Euclidean Transformations

    The purpose of this case study was to explore the nature of instructors’ gestures as they teach Euclidean transformations in a synchronous online setting, and to investigate how, if at all, the synchronous online setting impacted the instructors’ intentionality and usage of gestures. The participants in this case study were two collegiate instructors teaching Euclidean transformations to pre-service elementary teachers. The synchronous online instructors’ gestures were captured in detail via two video recordings: one through the screen-capture software built into the online conference platform used to conduct the class, and another through a separate auxiliary camera that captured the gestures the instructors made outside the view of the screen-capture software. The perceived intentionality of the instructors’ gestures was documented via an hour-long video-recorded interview after teaching the Euclidean transformation unit. The findings indicated that synchronous online instructors make representational gestures and pointing gestures while teaching Euclidean transformations. Specifically, representational gestures served as a second form of communication for the students, while pointing gestures grounded synchronous online instructors’ responses to student contributions within classroom materials. The findings further indicated that the combination of the synchronous online instructors’ gestures and language provided a more cohesive picture of the Euclidean transformation than the gestures alone. Additionally, the findings specified that synchronous online instructors believe the purpose of their gestures was for the benefit of their students as well as for themselves. Finally, the findings highlighted a connection between instructors’ prior reflection on the potential impact of gestures in the mathematics classroom and their intentional production of gestures. Specifically, thinking critically about gestures within the mathematics classroom before teaching appeared to correspond with more intentional gestures while teaching. Based on these findings, there were three recommendations. The first recommendation was continued education on gesture as an avenue for communicating mathematical ideas; a professional development workshop may help collegiate instructors produce more intentional and mathematically precise gestures. The last two recommendations were for synchronous online instructors to utilize technology that affords students the opportunity to view all of their gestures, and to explicitly instruct their students to pay attention to their gestures. Knowing that the students can view all of their movements and are specifically looking for gestures might prompt the instructors to gesture with more intentionality and precision.

    Ubiquitous haptic feedback in human-computer interaction through electrical muscle stimulation

    [no abstract]