Understanding Effects of Feedback on Group Collaboration
http://www.aaai.org/Press/Reports/Symposia/Spring/ss-09-04.php

Small group collaboration is vital for any organization to function successfully, and feedback on group dynamics has been shown to improve collaborative performance. We use sociometric sensors to detect group dynamics and use the data to give people real-time feedback. We are especially interested in the effect of feedback on distributed collaboration; the goal is to bridge the gap in distributed groups by detecting and communicating social signals. We conducted an initial experiment to test the effects of feedback on brainstorming and problem-solving tasks. The results show that real-time feedback changes the speaking time and interactivity level of groups. Moreover, in groups with one or more dominant people, the feedback effectively reduced both the dynamical difference between co-located and distributed collaboration and the behavioral difference between dominant and non-dominant people. Interestingly, the effect of feedback depended on the type of meeting and on personality type. We intend to continue this direction of research by personalizing the visualization through automatic detection of meeting type and personality. Moreover, we propose to demonstrate the correlation of group dynamics with higher-level characteristics such as performance, interest, and creativity.
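The two feedback signals named in the abstract, speaking time and interactivity level, can be derived from diarized speech segments. The sketch below is an illustrative assumption, not the authors' system: it takes hypothetical (speaker, start, end) tuples and computes per-person speaking time plus a simple turn-change count as an interactivity proxy.

```python
def speaking_time(segments):
    """Total speaking time per speaker, from (speaker, start, end) tuples."""
    totals = {}
    for speaker, start, end in segments:
        totals[speaker] = totals.get(speaker, 0.0) + (end - start)
    return totals

def interactivity(segments):
    """Number of speaker turn changes, a simple proxy for interactivity."""
    ordered = sorted(segments, key=lambda s: s[1])  # order by start time
    return sum(1 for a, b in zip(ordered, ordered[1:]) if a[0] != b[0])

# Toy diarization of a short meeting with three participants.
segments = [("A", 0.0, 4.0), ("B", 4.5, 6.0), ("A", 6.2, 7.0), ("C", 7.5, 9.0)]
print({k: round(v, 2) for k, v in speaking_time(segments).items()})
print(interactivity(segments))  # 3 turn changes: A->B, B->A, A->C
```

In a real-time display, these totals would be recomputed over a sliding window so that a dominant speaker's bar visibly grows, which is the kind of signal the feedback visualization could surface.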
Longitudinal Video Investigation of Dyadic Bodily Dynamics Around the Time of Word Acquisition
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 105-110).

Human movement encodes information about internal states and goals. When these goals involve dyadic interactions, as in language acquisition, the nature of the movement and proximity becomes representative, allowing parts of our internal states to manifest. We propose an approach called Visually Grounded Virtual Accelerometers (VGVA) to aid ecologically valid video-analysis investigations of humans during dyadic interactions. Using the Human Speechome Project (HSP) [1] video corpus, we examine a dyadic interaction paradigm taken from the caregiver-child ecology during language acquisition. We characterize human interaction in a video cross-modally, visually detecting and assessing the child's bodily dynamics in a video, grounded on the caregiver's bodily dynamics in the same video and on the related HSP speech transcriptions [2]. Potential applications include analyzing a child's language acquisition, establishing longitudinal diagnostic means for child developmental disorders, and, more generally, establishing a metric of effective human communication in dyadic interactions under a video surveillance system. In this thesis, we examine word-learning transcribed video episodes before and after the age of acquisition (AOA) of a word. As the auditory stimulus is uttered by the caregiver, points along the VGVA-tracked sequences corresponding to the onset and post-onset of the child's and caregiver's bodily responses are used to longitudinally mark and characterize episodes of word learning.
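The core idea of a visually grounded "virtual accelerometer" is that bodily dynamics can be recovered from video alone, and synchrony between two people can then be scored by correlating their motion signals. The following sketch is an assumption for illustration, not the thesis implementation: it approximates each person's motion signal as frame-difference energy over a region of interest (here, tiny synthetic pixel lists) and scores caregiver-child synchrony with a zero-lag normalized cross-correlation.

```python
import math

def motion_energy(frames):
    """Sum of absolute pixel differences between consecutive frames."""
    return [sum(abs(a - b) for a, b in zip(prev, cur))
            for prev, cur in zip(frames, frames[1:])]

def synchrony(x, y):
    """Zero-lag normalized cross-correlation between two motion signals."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

# Toy 4-pixel "frames" for the caregiver and child regions of interest.
caregiver = [[0, 0, 0, 0], [5, 5, 0, 0], [5, 5, 5, 5], [0, 0, 0, 0]]
child     = [[0, 0, 0, 0], [4, 4, 0, 0], [4, 4, 4, 4], [0, 0, 0, 0]]
print(round(synchrony(motion_energy(caregiver), motion_energy(child)), 6))
```

A longitudinal analysis in this spirit would compute such a synchrony score per word-learning episode and track how it shifts before and after the AOA, optionally sweeping a lag parameter to capture delayed responses.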
We report a systematic shift in caregiver-child synchrony of motion and turning behavior, tied to exposures of the target word around the time the child begins to understand, and thus respond to, instances of the spoken word. This systematic shift diminishes gradually after the age of acquisition (AOA).

by Kleovoulos (Leo) Tsourides.

S.M.