8 research outputs found

    “Like Popcorn”: crossmodal correspondences between scents, 3D shapes and emotions in children

    Get PDF
    There is increasing interest in multisensory experiences in HCI. However, little research considers how sensory modalities interact with each other and how this may impact interactive experiences. We investigate how children associate emotions with scents and 3D shapes. 14 participants (10-17 yrs) completed crossmodal association tasks to attribute emotional characteristics to variants of the “Bouba/Kiki” stimuli, presented as 3D tangible models, in conjunction with lemon and vanilla scents. Our findings support pre-existing mappings between shapes and scents, and confirm the associations between the combination of angular shapes (“Kiki”) and lemon scent with arousing emotion, and of round shapes (“Bouba”) and vanilla scent with calming emotion. This extends prior work on crossmodal correspondences in terms of stimuli (3D as opposed to 2D shapes), sample (children), and conveyed content (emotions). We outline how these findings can contribute to designing more inclusive interactive multisensory technologies.

    Interruption science as a research field: Towards a taxonomy of interruptions as a foundation for the field

    Get PDF
    Interruptions have become ubiquitous in both our personal and professional lives. Accordingly, research on interruptions has also increased steadily over time, and research published in various scientific disciplines has produced different perspectives, fundamental ideas, and conceptualizations of interruptions. However, the current state of research hampers a comprehensive overview of the concept of interruption, predominantly due to the fragmented nature of the existing literature. Reflecting on its genesis in the 1920s and the longstanding research on interruptions, along with recent technological, behavioral, and organizational developments, this paper provides a comprehensive interdisciplinary overview of the various attributes of an interruption, which facilitates the establishment of interruption science as an interdisciplinary research field in the scientific landscape. To obtain an overview of the different interruption attributes, we conducted a systematic literature review with the goal of classifying interruptions. The outcome of our research process is a taxonomy of interruptions, constituting an important foundation for the field. Based on the taxonomy, we also present possible avenues for future research.

    Principles and Guidelines for Advancement of Touchscreen-Based Non-visual Access to 2D Spatial Information

    Get PDF
    Graphical materials such as graphs and maps are often inaccessible to millions of blind and visually-impaired (BVI) people, which negatively impacts their educational prospects, ability to travel, and vocational opportunities. To address this longstanding issue, a three-phase research program was conducted that builds on and extends previous work establishing touchscreen-based haptic cuing as a viable alternative for conveying digital graphics to BVI users. Although promising, this approach poses unique challenges that can only be addressed by schematizing the underlying graphical information based on perceptual and spatio-cognitive characteristics pertinent to touchscreen-based haptic access. Towards this end, this dissertation empirically identified a set of design parameters and guidelines through a logical progression of seven experiments. Phase I investigated perceptual characteristics related to touchscreen-based graphical access using vibrotactile stimuli, with results establishing three core perceptual guidelines: (1) a minimum line width of 1 mm should be maintained for accurate line detection (Exp-1), (2) a minimum interline gap of 4 mm should be used for accurate discrimination of parallel vibrotactile lines (Exp-2), and (3) a minimum angular separation of 4 mm should be used for accurate discrimination of oriented vibrotactile lines (Exp-3). Building on these parameters, Phase II studied the core spatio-cognitive characteristics pertinent to touchscreen-based non-visual learning of graphical information, with results leading to the specification of three design guidelines: (1) a minimum width of 4 mm should be used for supporting tasks that require tracing of vibrotactile lines and judging their orientation (Exp-4), (2) a minimum width of 4 mm should be maintained for accurate line tracing and learning of complex spatial path patterns (Exp-5), and (3) vibrotactile feedback should be used as a guiding cue to support the most accurate line tracing performance (Exp-6). Finally, Phase III demonstrated that schematizing line-based maps based on these design guidelines leads to the development of an accurate cognitive map. Results from Experiment-7 provide theoretical evidence in support of learning from vision and touch as leading to the development of functionally equivalent amodal spatial representations in memory. Findings from all seven experiments contribute to new theories of haptic information processing that can guide the development of new touchscreen-based non-visual graphical access solutions.
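
    Purely as an illustration, these thresholds could be collected into a small configuration that a vibrotactile rendering layer validates against; the constant and function names below are hypothetical and not taken from the dissertation.

```python
# Hypothetical sketch: the perceptual and tracing guidelines above expressed as
# constants that a vibrotactile rendering layer could validate against.
# Names and structure are illustrative, not taken from the dissertation.

MIN_DETECTION_WIDTH_MM = 1.0   # Exp-1: minimum width for reliable line detection
MIN_INTERLINE_GAP_MM = 4.0     # Exp-2: minimum gap between parallel vibrotactile lines
MIN_TRACING_WIDTH_MM = 4.0     # Exp-4/5: minimum width for tracing and path learning

def check_line(width_mm: float, gap_to_neighbour_mm: float | None = None) -> list[str]:
    """Return warnings for a vibrotactile line that violates the guidelines."""
    issues = []
    if width_mm < MIN_DETECTION_WIDTH_MM:
        issues.append("line may not be reliably detected (< 1 mm wide)")
    if width_mm < MIN_TRACING_WIDTH_MM:
        issues.append("line may be hard to trace (< 4 mm wide)")
    if gap_to_neighbour_mm is not None and gap_to_neighbour_mm < MIN_INTERLINE_GAP_MM:
        issues.append("parallel lines may be confused (< 4 mm apart)")
    return issues

print(check_line(width_mm=2.0, gap_to_neighbour_mm=3.0))
```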

    Self-adaptive unobtrusive interactions of mobile computing systems

    Full text link
    In Pervasive Computing environments, people are surrounded by many embedded services. Since pervasive devices, such as mobile devices, have become a key part of our everyday life, they enable users to always be connected to the environment, making demands on one of the most valuable resources of users: human attention. A challenge for mobile computing systems is regulating these requests for users' attention. In other words, service interactions should behave in a considerate manner by taking into account the degree to which each service intrudes on the user's mind (i.e., the degree of obtrusiveness). The main goal of this paper is to introduce self-adaptive capabilities in mobile computing systems in order to provide non-disturbing interactions. We achieve this by means of a software infrastructure that automatically adapts the service interaction obtrusiveness according to the user's context. This infrastructure works from a set of high-level models that define the unobtrusive adaptation behavior and its implications for the interaction resources in a technology-independent way. Our infrastructure has been validated through several experiments assessing its correctness and performance, and the achieved user experience has been evaluated through a user study. This work was developed with the support of MINECO under the project SMART-ADAPT TIN2013-42981-P, and co-financed by the Generalitat Valenciana under the postdoctoral fellowship APOSTD/2016/042.
    Gil Pascual, M.; Pelechano Ferragud, V. (2017). Self-adaptive unobtrusive interactions of mobile computing systems. Journal of Ambient Intelligence and Smart Environments, 9(6), 659-688. https://doi.org/10.3233/AIS-170463
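
    As a rough sketch of the kind of adaptation behavior the paper describes (not the authors' actual models or infrastructure; all names and rules here are hypothetical), a service could map the user's context to an obtrusiveness level and from there to an interaction resource:

```python
# Illustrative sketch only: map the user's context to an obtrusiveness level and
# from there to an interaction resource. The rules and names are hypothetical;
# the paper derives this behaviour from technology-independent, high-level models.

from dataclasses import dataclass

@dataclass
class Context:
    in_meeting: bool
    driving: bool
    phone_in_hand: bool

def obtrusiveness_level(ctx: Context) -> str:
    """Decide how much of the user's attention a service interaction may demand."""
    if ctx.in_meeting or ctx.driving:
        return "background"   # do not interrupt; defer or log the interaction
    if ctx.phone_in_hand:
        return "foreground"   # a full visual dialog is acceptable
    return "peripheral"       # subtle cue only

RESOURCES = {
    "background": "silent in-app entry",
    "peripheral": "short vibration + status icon",
    "foreground": "interactive on-screen dialog",
}

ctx = Context(in_meeting=True, driving=False, phone_in_hand=False)
level = obtrusiveness_level(ctx)
print(level, "->", RESOURCES[level])
```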

    Conceptualizing and supporting awareness of collaborative argumentation

    Get PDF
    In this thesis, we introduce “Argue(a)ware”, a concept for an instructional group awareness tool that aims at supporting social interactions in co-located computer-supported collaborative argumentation settings. Argue(a)ware is designed to support social interactions in the content (i.e., task-related) and in the relational (i.e., social and interpersonal) space of co-located collaborative argumentation (Barron, 2003). Support for social interactions in the content space of collaboration is facilitated with the use of collaborative scripts for argumentation (i.e., instructions and scaffolds for argument construction) as well as with the use of an argument mapping tool (i.e., visualization of argumentation outcomes in the form of diagrams) (Stegmann, Weinberger, & Fischer, 2007; van Gelder, 2013). Support for social interactions in the relational space of collaboration is facilitated with the use of different awareness mechanisms from the CSCL and CSCW research fields (i.e., monitoring, mirroring, and awareness notification tools). In this thesis, we examined how different awareness mechanisms facilitate the regulation of collaborative processes in the relational space of collaborative argumentation. Moreover, we studied how they affect perceived team effectiveness (i.e., process outcome) and group performance (i.e., learning outcome) in the content space of collaboration. We also studied the effects of the design of the awareness mechanisms on their application and on the user experience with them. In line with the design-based research paradigm, we attempted to simultaneously improve and study the effect of Argue(a)ware on collaborative argumentation (Herrington, McKenney, Reeves & Oliver, 2007). Through a series of design-based research studies we tested and refined the prototypes of the instructional group awareness tool. Moreover, we studied the ecological validity of dominant awareness and instructional theories in the context of co-located computer-supported collaborative argumentation. The underlying premise of the Argue(a)ware tool is that a combination of awareness and instructional support will result in increased awareness of collaboration, which will, in turn, mediate the regulation of collaborative processes. Moreover, we assume that successful regulation of collaboration will in turn result in high perceived team effectiveness and group performance. In the first phase of development of the Argue(a)ware tool, we built support for the content space of collaborative argumentation with argument scaffold elements in a pedagogical face-to-face macro-script and an argument mapping tool. Furthermore, we extended the use of the script to support the relational space of collaboration by embedding awareness prompts for reflecting on collaboration during regular breaks in the script. We then designed two variations of the same pedagogical face-to-face macro-script, which differed in the type of group awareness prompts used to support the relational space of collaboration, i.e., behavioral vs. social. Upon designing the two script variations, we conducted a longitudinal, multiple-case study with ten groups of Media Informatics master students (n = 28, in groups of two or three, 4 sessions × 70 min, Behavioural Awareness Script (BAS) groups = 5, Social Awareness Script (SAS) groups = 5), where each group was conceptualized as a case.
In each session, students argued to solve a different ill-structured problem and transferred their arguments into the argument mapping tool Rationale. We intended to investigate the effects of the different awareness prompts on (a) collaborative metacognitive processes, i.e., regulation, reflection, and evaluation, (b) the relation between collaborative metacognitive processes and the quality of collaborative argumentation, (c) the impact of the two script variations on perceived team effectiveness, and (d) the experience with the different parts of the script variations in the two groups and how this fits into the design framework by Buder (2011). The quantitative analysis of argument outcomes from the groups yielded no significant difference between the groups that worked with the BAS and the SAS variations. No significant difference between the script variations with respect to the results from the team effectiveness questionnaires was found either. Prompts for regulating collaboration processes were found to be the most successfully and consistently applied ones, especially in the most successful cases from both script variations, and influenced the argumentation outcomes. The awareness prompts afforded an explicit feedback display format (e.g., assessment of the participation levels of self and others) through discussion (Buder, 2011). The prompted explicit feedback display format (i.e., ratings of one’s self and of others) was criticized for running only on subjective awareness information about participation, contribution efforts, and performance in the role. This resulted in evaluation apprehension phenomena (Cottrell, 1972) and evaluation bias (i.e., users may not have assessed themselves or others frankly) (Ghadirian et al., 2016). The awareness prompts for reflection and evaluation did reveal frictions in the plan-making process (i.e., dropping out of the plan for collaboration) and problems with group dynamics (i.e., free-loading and the presence of dominance) in the least successful groups, but were not powerful enough to trigger the desired changes in the students’ behaviors. The prompts for evaluating the collaboration in both script variations had no apparent connection to argumentation outcomes. The results indicated that dominant presence phenomena inhibited substantive argumentation in the least successful groups. They also indicated that the role assignment influenced group dynamics by helping students make the division of labor in the group clear. In the second phase of development of the Argue(a)ware tool, the focus was on structuring and regulating social interactions in the relational space of collaborative argumentation by means of scripted roles and role-based awareness scaffolds. We designed support for mirroring participation in the role (i.e., a role-based awareness visualization) and support for monitoring participation, coordination, and collaboration efforts in the role (i.e., a self-assessment questionnaire). Moreover, we designed additional support for guiding participation in the role, i.e., role-based reminders delivered as notifications on smartwatches. In a between-subjects study, ten groups of three university students each (n = 30, Mage = 22 yrs, mixed educational backgrounds, 1 × 90 min) worked with two variants of Argue(a)ware for arguing to solve one ill-structured problem and transferring their arguments into the argument mapping tool Rationale.
In addition, students were to monitor their progress in their role with the role-based awareness visualization and the self-assessment questionnaire, either with basic awareness support (role-based awareness visualization with the intermediate self-assessment) or with enhanced awareness support (additional role-based awareness reminders). Half of the groups worked only with the role-based awareness visualization and the self-assessment questionnaire (Basic Awareness Condition, BAC), while the other half additionally received text-based awareness notifications via smartwatches that were sent to students privately (Enhanced Awareness Condition, EAC). We thereby tested different degrees of awareness support in the two conditions with respect to their impact on (a) self-perceived awareness of performance in the role and of collaboration and coordination efforts (measured with the same questionnaire at two time points), (b) perceived team effectiveness, and (c) group performance. We hypothesized that students in the EAC would perform better thanks to the additional awareness reminders that increased the directivity and influenced their awareness in the role. The mixed-methods analysis revealed that the awareness reminders, when perceived on time, succeeded in guiding collaboration (i.e., resulted in more role-specific behaviors). Students in the EAC improved their awareness over time (between the two measurements). These results indicated that enhanced awareness support in the form of additional guidance through awareness reminders can boost students’ awareness of their performance in the role as well as of their coordination and collaboration efforts over time by directing them back to the mirroring and monitoring tools. Moreover, students in the EAC exhibited higher perceived team effectiveness than the students in the BAC. However, no significant differences between the groups were found in the building of shared mental models or in mutual performance monitoring, and students in the BAC and EAC did not differ significantly with respect to the formal correctness or evidence sufficiency of their group argumentation outcomes. Moreover, technical difficulties with the smartwatches used as delivery devices for the awareness reminders (i.e., a weak vibration mode) hindered the timely perception of the reminders and thus their effect on participation. Finally, the questionnaire on the experience with the different parts of the Argue(a)ware system indicated the need to explore further media for delivering the awareness reminders, in order to avoid the overwhelming effects of the system’s multiple displays and to achieve higher perceptibility of the reminders with low interruption costs for other group members. The rather high satisfaction with the use of the role-based awareness visualization and the positive comments on the motivating aspects of monitoring how personal success contributes to group performance indicate that the group mirror succeeded in making group norms visible to group members in a non-obtrusive way. The high interpersonal comparability of performances without directly moderating the group’s interaction in the basic awareness condition proved to be the favored design approach compared to the combination of group mirror and awareness reminders in the enhanced awareness condition.
In the third phase of development of Argue(a)ware, we focused on designing and testing different notification modes on different ubiquitous mobile devices to inform the next prototype of a notification system for role-based awareness reminders. The aim of the system was again to guide students’ active participation in collaborative argumentation. More specifically, we focused on raising students’ attention to the reminders and triggering a prompter reaction to their contents whilst avoiding a high interruption cost for the primary task in the group (i.e., arguing to solve the problem at hand). These goals were translated into design challenges for the role-based awareness notification system: the system should afford low interruption, high reaction, and high comprehension of notifications. Notification systems with this particular configuration of IRC (interruption, reaction, comprehension) values are known as “secondary display” systems (McCrickard et al., 2003). We then designed three low-fidelity prototypes of a role-based notification system for delivering awareness reminders: the first ran on a smartwatch and afforded text-based information with vibration and light notification modalities; the second ran on a smartphone and afforded text-based information with vibrotactile and light-based notification modalities; and the third ran on a smart ring and afforded graphical (i.e., abstract light) information with light and vibration notification modalities. To test the suitability of these prototypes for acting as “secondary display” systems, we conducted a within-subjects user study in which three university students (n = 3, Mage = 28, mixed educational backgrounds) argued to solve three different problem cases and produced an argument map in each of three consecutive meetings (max. 90 min) in the Argue(a)ware instructional system. Students were assigned the roles of writer, corrector, and devil’s advocate and were instructed to maintain the same role across the three meetings. In each meeting, students worked with a different role-based awareness notification prototype, receiving a notification indicating that their balloon was not growing after five minutes of not exhibiting any role-specific behaviors. The role-based awareness notification prototypes aimed at introducing timely interventions that would prompt students to check their own progress in the role and the group progress as visualized by the role-based awareness visualization on the large display. Ultimately, this should prompt them to reflect on the awareness information from the visualization and adapt their behaviors to the desired behavior standards over time. Results showed that students perceived the notifications from all media mostly based on vibration cues. The vibration cues on the wrist (smartwatch) were considered the least disruptive to the main task compared to the vibration cues on the finger (smart ring) and on the desk (smartphone). Students also reported that vibration cues on the wrist prompted the fastest reaction, i.e., attending to the notification by interacting with the smartwatch. These results indicate that vibration cues on the wrist can be a suitable notification mechanism for increasing the perceived urgency of a message and prompting a reaction to it without causing great distraction to the main task, as previous studies have shown (Pielot, Church, & de Oliveira, 2013; Hernández-Leo, Balestrini, Nieves & Blat, 2012).
Based on the very limited qualitative data on light as a notification modality and awareness representation type, no inferences could be made about its influence on the interruption, reaction, and comprehension parameters. The qualitative and quantitative data on the experience with the different media as awareness notification systems indicate that smartwatches may be the most suitable medium for acting as an awareness notification medium with a “secondary display” IRC configuration (low-high-high). However, this inference needs to be tested in a follow-up study. In that study, the major limitations of the present one (limited data due to low statistical power and poorly structured measurement instruments) need to be addressed, and the focus should be on comparing the notification modalities of one medium (e.g., the smartphone) with a larger set of participants and with objective measurements of the IRC parameter values (Chewar, McCrickard & Sutcliffe, 2004). Finally, we draw conclusions based on the findings from the three studies with respect to the role of awareness mechanisms in facilitating collaborative processes and outcomes, and provide replicable and generalizable design principles. These principles are formulated as heuristic statements and are subject to refinement by further research (Bell, Hoadley, & Linn, 2004; Van den Akker, 1999). We conclude with the limitations of the studies and ideas for future work with Argue(a)ware.
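
    To make the third-phase reminder logic concrete, the following minimal sketch (hypothetical names; not the Argue(a)ware implementation) fires a role-based reminder after five minutes without role-specific behaviour and labels it with the “secondary display” IRC profile discussed above:

```python
# Minimal, hypothetical sketch of the role-based reminder logic described above.
# Not the Argue(a)ware implementation; names are illustrative, except the
# five-minute inactivity window mentioned in the abstract.

import time

# "Secondary display" IRC profile: low interruption, high reaction, high comprehension
IRC_SECONDARY_DISPLAY = {"interruption": "low", "reaction": "high", "comprehension": "high"}
INACTIVITY_WINDOW_S = 5 * 60  # remind after five minutes without role-specific behaviour

class RoleMonitor:
    def __init__(self, role: str):
        self.role = role
        self.last_role_action = time.monotonic()

    def record_role_action(self) -> None:
        """Call whenever the student shows a behaviour specific to their role."""
        self.last_role_action = time.monotonic()

    def reminder_due(self) -> bool:
        return time.monotonic() - self.last_role_action >= INACTIVITY_WINDOW_S

monitor = RoleMonitor("devil's advocate")
if monitor.reminder_due():
    # e.g., vibrate the smartwatch and point the student to the shared visualization
    print(f"Reminder ({IRC_SECONDARY_DISPLAY['interruption']} interruption): "
          f"check your progress as {monitor.role} on the group display")
```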

    Effects of modality, urgency and situation on responses to multimodal warnings for drivers

    Get PDF
    Signifying road-related events with warnings can be highly beneficial, especially when immediate attention is needed. This thesis describes how modality, urgency and situation can influence driver responses to multimodal displays used as warnings. These displays utilise all combinations of audio, visual and tactile modalities, reflecting different urgency levels. In this way, a new rich set of cues is designed, conveying information multimodally to enhance reactions during driving, which is a highly visual task. The importance of the signified events to driving is reflected in the warnings, and safety-critical or non-critical situations are communicated through the cues. Novel warning designs are considered, using both abstract displays, with no semantic association to the signified event, and language-based ones, using speech. These two cue designs are compared to discover their strengths and weaknesses as car alerts. The situations in which the new cues are delivered are varied by simulating both critical and non-critical events and both manual and autonomous car scenarios. A novel set of guidelines for using multimodal driver displays is finally provided, considering the modalities utilised, the urgency signified, and the situation simulated.
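
    The design space sketched above (every combination of audio, visual and tactile channels crossed with urgency levels) can be enumerated directly; the sketch below is only illustrative, with hypothetical labels rather than the thesis’s actual cue set:

```python
# Illustrative only: enumerate candidate warning cues as every non-empty
# combination of output channels crossed with an urgency level. The labels
# are hypothetical, not the actual cue set designed in the thesis.

from itertools import combinations

MODALITIES = ["audio", "visual", "tactile"]
URGENCY_LEVELS = ["non-critical", "critical"]

cues = [
    (set(combo), urgency)
    for r in range(1, len(MODALITIES) + 1)
    for combo in combinations(MODALITIES, r)
    for urgency in URGENCY_LEVELS
]

print(len(cues), "candidate warnings, e.g.", cues[0])  # 7 modality sets x 2 urgency levels = 14
```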

    The role of modality in notification performance

    No full text
    The primary users of home care technology often have significant sensory impairments. Multimodal interaction can make home care technology more accessible and appropriate, yet most research in the field of multimodal notifications is not aimed at the home but at office or high-pressure environments. This paper presents an experiment that compared the disruptiveness and effectiveness of visual, auditory, tactile and olfactory notifications. The results showed that disruption of the primary task was the same regardless of the notification modality. It was also found that differences in notification effectiveness were due to the inherent traits of a modality, e.g., olfactory notifications were the slowest to deliver. The results of this experiment allow researchers and developers to capitalize on the different properties of multimodal techniques, with significant implications for home care technology and technology targeted at users with sensory impairments.