45 research outputs found

    Automatic Posture Correction Utilizing Electrical Muscle Stimulation

    Habitually poor posture can lead to repetitive strain injuries that lower an individual's quality of life and productivity. Slouching over computer screens and smartphones, asymmetric weight distribution due to uneven leg loading, and improper loading posture are common examples that lead to postural problems and health ramifications. To help cultivate good postural habits, researchers have proposed slouching, balance, and improper-loading detection systems that alert users through traditional visual, auditory, or vibro-tactile feedback when posture requires attention. However, such notifications are disruptive and can be easily ignored. We address these issues with a new physiological feedback system that uses sensors to detect these poor postures and electrical muscle stimulation to automatically correct them. We compare our automatic approach against alternative feedback systems across several distinct contexts. We find that our approach outperformed the traditional alternatives by being faster and more accurate while delivering an equally comfortable user experience.
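    A minimal sketch of the sense-then-stimulate loop this abstract describes, assuming a hypothetical posture sensor and EMS driver; the class names, threshold, and pulse parameters below are illustrative placeholders rather than the authors' implementation:

        import time

        SLOUCH_THRESHOLD_DEG = 20.0   # assumed forward-tilt angle treated as slouching
        CHECK_INTERVAL_S = 0.5        # assumed sensor polling interval

        class PostureSensor:
            """Placeholder for a trunk-mounted IMU or flex sensor."""
            def read_forward_tilt(self) -> float:
                raise NotImplementedError

        class EmsDriver:
            """Placeholder for an electrical muscle stimulation channel."""
            def pulse(self, channel: int, intensity: float, duration_ms: int) -> None:
                raise NotImplementedError

        def posture_correction_loop(sensor: PostureSensor, ems: EmsDriver) -> None:
            """Detect slouching and respond with a corrective stimulation pulse
            instead of a visual, auditory, or vibro-tactile notification."""
            while True:
                if sensor.read_forward_tilt() > SLOUCH_THRESHOLD_DEG:
                    # Actuate the back extensor muscles so the posture is corrected
                    # automatically rather than merely signalled to the user.
                    ems.pulse(channel=0, intensity=0.4, duration_ms=300)
                time.sleep(CHECK_INTERVAL_S)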

    INTERACTION BETWEEN SIGNAL COMPLEXITY AND PHYSICAL ACTIVITY IN VIBRO-TACTILE COMMUNICATION

    Master's thesis (Master of Arts)

    First validation of the Haptic Sandwich: a shape changing handheld haptic navigation aid

    This paper presents the Haptic Sandwich, a handheld robotic device designed to provide pedestrian navigation instructions through a novel shape-changing modality. The device resembles a cube with an articulated upper half that is able to rotate and translate (extend) relative to the bottom half, which is grounded in the user’s hand when the device is held. The poses assumed by the device simultaneously correspond to heading and proximity to a navigational target. The Haptic Sandwich provides an alternative to screen- and/or audio-based pedestrian navigation technologies for both visually impaired and sighted users. Unlike other robotic or haptic navigational solutions, the Haptic Sandwich is discreet in terms of form and sensory stimulus. Due to the novel and unexplored nature of shape-changing interfaces, two user studies were undertaken to validate the concept and device. In the first experiment, stationary participants attempted to identify poses assumed by the device, which was hidden from view. In the second experiment, participants attempted to locate a sequence of invisible navigational targets while walking with the device. Of 1080 pose presentations to 10 individuals in experiment one, 80% were correctly identified and 17.5% had the minimal possible error. Multi-DOF errors accounted for only 1.1% of all answers. The role of simultaneous versus independent actuator motion on final shape perception was tested, with no significant performance difference found. The rotation and extension DOFs had significantly different perception accuracy. In the second experiment, participants demonstrated good navigational ability with the device after minimal training and were able to locate all presented targets. Mean motion efficiency of the participants was between 32% and 56%. Participants made use of both DOFs.
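    To illustrate how a single pose could encode both heading and proximity, the following sketch maps the bearing to a target onto the rotation DOF and the remaining distance onto the extension DOF; the ranges, constants, and function name are assumptions for illustration, not values taken from the paper:

        import math

        MAX_EXTENSION_MM = 20.0   # assumed travel of the articulated upper half
        MAX_RANGE_M = 50.0        # assumed distance at which extension saturates

        def pose_for_target(user_x, user_y, user_heading_rad, target_x, target_y):
            """Return (rotation_rad, extension_mm) for the device's upper half."""
            dx, dy = target_x - user_x, target_y - user_y
            # Rotation DOF: bearing to the target relative to the user's heading.
            rotation = math.atan2(dy, dx) - user_heading_rad
            rotation = math.atan2(math.sin(rotation), math.cos(rotation))  # wrap to [-pi, pi]
            # Extension DOF: farther targets -> longer extension (proximity cue).
            distance = math.hypot(dx, dy)
            extension = MAX_EXTENSION_MM * min(distance, MAX_RANGE_M) / MAX_RANGE_M
            return rotation, extension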

    An enactive approach to perceptual augmentation in mobility

    Event predictions are an important constituent of situation awareness, which is a key objective for many applications in human-machine interaction, in particular in driver assistance. This work focuses on facilitating event predictions in dynamic environments. Its primary contributions are 1) the theoretical development of an approach for enabling people to expand their sampling and understanding of spatiotemporal information, 2) the introduction of exemplary systems that are guided by this approach, 3) the empirical investigation of the effects that functional prototypes of these systems have on human behavior and safety in a range of simulated road traffic scenarios, and 4) a connection of the investigated approach to work on cooperative human-machine systems. More specific contents of this work are summarized as follows: The first part introduces several challenges for the formation of situation awareness as a requirement for safe traffic participation. It reviews existing work on these challenges in the domain of driver assistance, resulting in an identification of the need to better inform drivers about dynamically changing aspects of a scene, including event probabilities, spatial and temporal distances, as well as a suggestion to expand the scope of assistance systems to start informing drivers about relevant scene elements at an early stage. Novel forms of assistance can be guided by different fundamental approaches that target either replacement, distribution, or augmentation of driver competencies. A subsequent differentiation of these approaches concludes that an augmentation-guided paradigm, characterized by an integration of machine capabilities into human feedback loops, can be advantageous for tasks that rely on active user engagement, the preservation of awareness and competence, and the minimization of complexity in human-machine interaction. Consequently, findings and theories about human sensorimotor processes are connected to develop an enactive approach that is consistent with an augmentation perspective on human-machine interaction. The approach is characterized by enabling drivers to exercise new sensorimotor processes through which safety-relevant spatiotemporal information may be sampled. In the second part of this work, a concept and functional prototype for augmenting the perception of traffic dynamics is introduced as a first example for applying principles of this enactive approach. As a loose expression of functional biomimicry, the prototype utilizes a tactile interface that communicates temporal distances to potential hazards continuously through stimulus intensity. In a driving simulator study, participants quickly gained an intuitive understanding of the assistance without instructions and demonstrated higher driving safety in safety-critical highway scenarios. But this study also raised new questions, such as whether the benefits are due to the continuous time-intensity encoding and whether utility generalizes to intersection scenarios or highway driving with low-criticality events. Effects of an expanded assistance prototype with lane-independent risk assessment and an option for binary signaling were thus investigated in a separate driving simulator study. Subjective responses confirmed quick signal understanding and a perception of spatial and temporal stimulus characteristics. Surprisingly, even for a binary assistance variant with a constant intensity level, participants reported perceiving a danger-dependent variation in stimulus intensity. 
They further felt supported by the system in the driving task, especially in difficult situations. But in contrast to the first study, this support was not expressed in changes in driving safety, suggesting that the perceptual demands of the low-criticality scenarios could be satisfied by existing driver capabilities. But what happens if such basic capabilities are impaired, e.g., due to poor visibility conditions or other situations that introduce perceptual uncertainty? In a third driving simulator study, the driver assistance was employed specifically in such ambiguous situations and produced substantial safety advantages over unassisted driving. Additionally, an assistance variant that adds an encoding of spatial uncertainty was investigated in these scenarios. Participants had no difficulty understanding and utilizing this added signal dimension to improve safety. Despite it being inherently less informative than spatially precise signals, users rated uncertainty-encoding signals as equally useful and satisfying. This appreciation for the transparency of variable assistance reliability is a promising indicator for the feasibility of adaptive trust calibration in human-machine interaction and marks one step towards a closer integration of driver and vehicle capabilities. A complementary step on the driver side would be to increase transparency about the driver’s mental states and thus allow for mutual adaptation. The final part of this work discusses how such prerequisites of cooperation may be achieved by monitoring mental state correlates observable in human behavior, especially in eye movements. Furthermore, the outlook for an addition of cooperative features also raises new questions about the bounds of identity as well as practical consequences of human-machine systems in which co-adapting agents may exercise sensorimotor processes through one another.
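    One possible reading of the continuous time-intensity encoding described above, written as a small sketch; the time-to-collision bounds and the linear ramp are assumptions rather than the calibration used in the thesis:

        def vibration_intensity(time_to_collision_s: float,
                                ttc_min_s: float = 1.0,
                                ttc_max_s: float = 5.0) -> float:
            """Map temporal distance to a hazard onto stimulus intensity in [0, 1].

            Shorter time-to-collision -> stronger vibration; beyond ttc_max_s the
            actuator stays off. The linear ramp is an illustrative assumption.
            """
            if time_to_collision_s >= ttc_max_s:
                return 0.0
            if time_to_collision_s <= ttc_min_s:
                return 1.0
            return (ttc_max_s - time_to_collision_s) / (ttc_max_s - ttc_min_s)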

    Conceptualizing and supporting awareness of collaborative argumentation

    In this thesis, we introduce “Argue(a)ware”, a concept for an instructional group awareness tool that aims at supporting social interactions in co-located computer-supported collaborative argumentation settings. Argue(a)ware is designed to support social interactions in the content (i.e., task-related) and the relational (i.e., social and interpersonal) space of co-located collaborative argumentation (Barron, 2003). Support for social interactions in the content space of collaboration is facilitated through collaborative scripts for argumentation (i.e., instructions and scaffolds for argument construction) as well as through an argument mapping tool (i.e., visualization of argumentation outcomes in the form of diagrams) (Stegmann, Weinberger, & Fischer, 2007; van Gelder, 2013). Support for social interactions in the relational space of collaboration is facilitated through different awareness mechanisms from the CSCL and CSCW research fields (i.e., monitoring, mirroring, and awareness notification tools). We examined how different awareness mechanisms facilitate the regulation of collaborative processes in the relational space of collaborative argumentation. Moreover, we studied how they affect perceived team effectiveness (i.e., a process outcome) and group performance (i.e., a learning outcome) in the content space of collaboration. We also studied the effects of the design of the awareness mechanisms on their application and on the user experience with them. In line with the design-based research paradigm, we attempted to simultaneously improve and study the effect of Argue(a)ware on collaborative argumentation (Herrington, McKenney, Reeves & Oliver, 2007). Through a series of design-based research studies, we tested and refined prototypes of the instructional group awareness tool. Moreover, we studied the ecological validity of dominant awareness and instructional theories in the context of co-located computer-supported collaborative argumentation. The underlying premise of the Argue(a)ware tool is that a combination of awareness and instructional support will result in increased awareness of collaboration, which will, in turn, mediate the regulation of collaborative processes. Moreover, we assume that successful regulation of collaboration will, in turn, result in high perceived team effectiveness and group performance. In the first phase of development of the Argue(a)ware tool, we built support for the content space of collaborative argumentation with argument scaffold elements in a pedagogical face-to-face macro-script and an argument mapping tool. Furthermore, we extended the use of the script to support the relational space of collaboration by embedding awareness prompts for reflecting on collaboration during regular breaks in the script. We then designed two variations of the same pedagogical face-to-face macro-script which differ with respect to the type of group awareness prompts used to support the relational space of collaboration, i.e., behavioral vs. social. After designing the two script variations, we conducted a longitudinal, multiple-case study with ten groups of Media Informatics master students (n = 28, in groups of two or three, 4 sessions × 70 min; Behavioural Awareness Script groups = 5, Social Awareness Script groups = 5), where each group was conceptualized as a case. 
In each session, students collaborated to argue for a solution to a different ill-structured problem and to transfer their arguments into the argument mapping tool Rationale. We intended to investigate the effects of the different awareness prompts on (a) collaborative metacognitive processes, i.e., regulation, reflection, and evaluation, (b) the relation between collaborative metacognitive processes and the quality of collaborative argumentation, (c) the impact of the two script variations on perceived team effectiveness, and (d) the user experience with the different parts of the script variations in the two groups and how this fits into the design framework by Buder (2011). The quantitative analysis of argument outcomes from the groups yielded no significant difference between the groups that worked with the BAS and SAS variations. No significant difference between the script variations was found with respect to the results from the team effectiveness questionnaires either. Prompts for regulating collaboration processes were found to be the most successfully and consistently applied ones, especially in the most successful cases from both script variations, and influenced the argumentation outcomes. The awareness prompts afforded an explicit feedback display format (e.g., assessment of the participation levels of self and others) through discussion (Buder, 2011). This explicit feedback display format (i.e., ratings of oneself and of others) was criticized for relying only on subjective awareness information about participation, contribution efforts, and performance in the role. This resulted in evaluation apprehension phenomena (Cottrell, 1972) and evaluation bias (i.e., users may not have assessed themselves or others frankly) (Ghadirian et al., 2016). The awareness prompts for reflection and evaluation did reveal frictions in the plan-making process (i.e., dropping out of the plan for collaboration) and problems with group dynamics (i.e., free-loading and dominance) in the least successful groups, but they were not powerful enough to trigger the desired changes in the students' behaviors. The prompts for evaluating the collaboration in both script variations had no apparent connection to argumentation outcomes. The results indicated that dominant-presence phenomena inhibited substantive argumentation in the least successful groups. They also indicated that the role assignment influenced group dynamics by helping students clarify the division of labor in the group. In the second phase of development of the Argue(a)ware tool, the focus was on structuring and regulating social interactions in the relational space of collaborative argumentation by means of scripted roles and role-based awareness scaffolds. We designed support for mirroring participation in the role (i.e., a role-based awareness visualization) and support for monitoring participation, coordination, and collaboration efforts in the role (i.e., a self-assessment questionnaire). Moreover, we designed additional support for guiding participation in the role, i.e., role-based reminders delivered as notifications on smartwatches. In a between-subjects study, ten groups of three university students each (n = 30, Mage = 22 years, mixed educational backgrounds, 1 × 90 min) worked with two variants of Argue(a)ware to argue for a solution to one ill-structured problem and transfer their arguments into the argument mapping tool Rationale. 
In addition, students were asked to monitor their progress in their role, either with the basic awareness support (the role-based awareness visualization with an intermediate self-assessment questionnaire) or with the enhanced awareness support (the same tools plus additional role-based awareness reminders). Half of the groups worked only with the role-based awareness visualization and the self-assessment questionnaire (Basic Awareness Condition, BAC), while the other half additionally received text-based awareness notifications sent privately to students via smartwatches (Enhanced Awareness Condition, EAC). We thus tested different degrees of awareness support in the two conditions with respect to their impact on (a) self-perceived awareness of performance in the role and of collaboration and coordination efforts (measured with the same questionnaire at two time points), (b) perceived team effectiveness, and (c) group performance. We hypothesized that students in the EAC would perform better thanks to the additional awareness reminders, which increased the directivity of the support and influenced their awareness in the role. The mixed-methods analysis revealed that the awareness reminders, when perceived on time, succeeded in guiding collaboration (i.e., resulted in more role-specific behaviors). Students in the EAC improved their awareness over time (between the two measurements). These results indicated that enhanced awareness support in the form of additional guidance through awareness reminders can boost students' awareness of their performance in the role as well as of their coordination and collaboration efforts over time by directing them back to the mirroring and monitoring tools. Moreover, students in the EAC exhibited higher perceived team effectiveness than students in the BAC. However, no significant differences between the conditions were found in the building of shared mental models or in mutual performance monitoring, and students in the BAC and the EAC did not differ significantly with respect to the formal correctness or evidence sufficiency of their group argumentation outcomes. Moreover, technical difficulties with the smartwatches used as delivery devices for the awareness reminders (i.e., a weak vibration mode) hindered the timely perception of the reminders and thus their effect on participation. Finally, the questionnaire on the experience with the different parts of the Argue(a)ware system indicated the need to explore further media for delivering the awareness reminders, in order to avoid the overwhelming effects of the system's multiple displays and to make the reminders more perceptible at a low interruption cost for other group members. The rather high satisfaction with the role-based awareness visualization and the positive comments on the motivating aspects of monitoring how personal success contributes to group performance indicate that the group mirror succeeded in making group norms visible to group members in a non-obtrusive way. The high interpersonal comparability of performances, without directly moderating the group's interaction, in the basic awareness condition proved to be the favored design approach compared to the combination of group mirror and awareness reminders in the enhanced awareness condition. 
In the third phase of development of Argue(a)ware, we focused on designing and testing different notification modes on different ubiquitous mobile devices to inform the next prototype of a notification system for role-based awareness reminders. The aim of the system was again to guide students’ active participation in collaborative argumentation. More specifically, we focused on raising students’ attention to the reminders and triggering a prompt reaction to their contents whilst avoiding a high interruption cost for the primary task (i.e., arguing to solve the problem at hand) in the group. These goals were translated into design challenges for the role-based awareness notification system: it should afford low interruption, high reaction, and high comprehension of notifications. Notification systems with this particular configuration of IRC values are known as "secondary display" systems (McCrickard et al., 2003). Next, we designed three low-fidelity prototypes for a role-based notification system for delivering awareness reminders: the first ran on a smartwatch and afforded text-based information with vibration and light notification modalities; the second ran on a smartphone and afforded text-based information with vibrotactile and light-based notification modalities; and the third ran on a smart ring and afforded graphical (i.e., abstract light) information with light and vibration notification modalities. To test the suitability of these prototypes for acting as “secondary display” systems, we conducted a within-subjects user study in which three university students (n = 3, Mage = 28, mixed educational backgrounds) argued to solve a different problem case and produce an argument map in each of three consecutive meetings (max. 90 min) in the Argue(a)ware instructional system. Students were assigned the roles of writer, corrector, and devil's advocate and were instructed to maintain the same role across the three meetings. In each meeting, students worked with a different role-based awareness notification prototype and received a notification indicating that their balloon was not growing after five minutes without exhibiting any role-specific behaviors. The role-based awareness notification prototypes aimed at introducing timely interventions which would prompt students to check on their own progress in the role and on the group progress as visualized by the role-based awareness visualization on the large display. Ultimately, this should prompt them to reflect on the awareness information from the visualization and adapt their behaviors to the desired behavior standards over time. Results showed that students perceived the notifications from all media mostly through vibration cues. The vibration cues on the wrist (smartwatch) were considered the least disruptive to the main task compared to the vibration cues on the finger (smart ring) and on the desk (smartphone). Students also reported that vibration cues on the wrist prompted the fastest reaction, i.e., attending to the notification by interacting with the smartwatch. These results indicate that vibration cues on the wrist can be a suitable notification mechanism for increasing the perceived urgency of a message and prompting a reaction to it without causing great distraction to the main task, as previous studies have shown (Pielot, Church, & de Oliveira, 2013; Hernández-Leo, Balestrini, Nieves & Blat, 2012). 
Based on the very limited qualitative data on light as a notification modality and awareness representation type, no inferences could be made about its influence on the interruption, reaction, and comprehension parameters. The qualitative and quantitative data on the experience with the different media as awareness notification systems indicate that smartwatches may be the most suitable medium for an awareness notification system with a “secondary display” IRC configuration (low-high-high). However, this inference needs to be tested in a follow-up study, in which the major limitations of the present study (limited data due to low statistical power and poorly structured measurement instruments) need to be addressed. Such a study should focus on comparing the notification modalities of a single medium (e.g., the smartphone) with a larger set of participants and with objective measurements of the IRC parameter values (Chewar, McCrickard & Sutcliffe, 2004). Finally, we draw conclusions based on the findings from the three studies with respect to the role of awareness mechanisms in facilitating collaborative processes and outcomes, and we provide replicable and generalizable design principles. These principles are formulated as heuristic statements and are subject to refinement by further research (Bell, Hoadley, & Linn, 2004; Van den Akker, 1999). We conclude with the limitations of the studies and ideas for future work with Argue(a)ware.
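    The reminder mechanism of the third phase (a notification after five minutes without role-specific behavior) can be pictured as a simple inactivity timer. The sketch below is hypothetical and only illustrates the trigger logic, not the Argue(a)ware implementation; the class, method names, and message text are assumptions:

        import time

        INACTIVITY_LIMIT_S = 5 * 60   # five minutes without role-specific behavior

        class RoleActivityMonitor:
            """Tracks role-specific actions and decides when to send an awareness reminder."""

            def __init__(self, send_notification):
                self.send_notification = send_notification   # e.g., a smartwatch push callback
                self.last_role_action = time.monotonic()

            def record_role_action(self) -> None:
                # Called whenever the student shows a role-specific behavior.
                self.last_role_action = time.monotonic()

            def tick(self) -> None:
                # Called periodically; fires a reminder if the role has been inactive too long.
                if time.monotonic() - self.last_role_action > INACTIVITY_LIMIT_S:
                    self.send_notification("Your balloon has stopped growing - check your role progress.")
                    self.last_role_action = time.monotonic()   # avoid repeated reminders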

    Enriching mobile interaction with garment-based wearable computing devices

    Wearable computing is on the brink of moving from research to mainstream. The first simple products, such as fitness wristbands and smartwatches, hit the mass market and achieved considerable market penetration. However, the number and versatility of research prototypes in the field of wearable computing is far beyond the available devices on the market. In particular, smart garments, as a specific type of wearable computer, have high potential to change the way we interact with computing systems. Due to their proximity to the user's body, smart garments allow implicit and explicit user input to be sensed unobtrusively. Smart garments are capable of sensing physiological information, detecting touch input, and recognizing the movement of the user. In this thesis, we explore how smart garments can enrich mobile interaction. Employing a user-centered design process, we demonstrate how different input and output modalities can enrich the interaction capabilities of mobile devices such as mobile phones or smartwatches. To understand the context of use, we chart the design space for mobile interaction through wearable devices. We focus on the device placement on the body as well as the interaction modality. We use a probe-based research approach to systematically investigate the possible inputs and outputs for garment-based wearable computing devices. We develop six different research probes showing how mobile interaction benefits from wearable computing devices and what requirements these devices pose for mobile operating systems. On the input side, we look at explicit input using touch and mid-air gestures as well as implicit input using physiological signals. Although touch input is well known from mobile devices, the limited screen real estate as well as the occlusion of the display by the input finger are challenges that can be overcome with touch-enabled garments. Additionally, mid-air gestures provide a more sophisticated and abstract form of input. We present a gesture elicitation study that addresses the special requirements of mobile interaction and present the resulting gesture set. As garments are worn, they allow different physiological signals to be sensed. We explore how we can leverage these physiological signals for implicit input. We conduct a study assessing physiological information by focusing on the workload of drivers in an automotive setting. We show that we can infer the driver's workload using these physiological signals. Besides the input capabilities of garments, we explore how garments can be used as output. We present research probes covering the most important output modalities, namely visual, auditory, and haptic. We explore how low-resolution displays can serve as a context display and how and where content should be placed on such a display. For auditory output, we investigate a novel authentication mechanism utilizing the closeness of wearable devices to the body. We show that by probing audio cues through the head of the user and re-recording them, user authentication is feasible. Last, we investigate electrical muscle stimulation (EMS) as a haptic feedback method. We show that by actuating the user's body, an embodied form of haptic feedback can be achieved. From the aforementioned research probes, we distilled a set of design recommendations. These recommendations are grouped into interaction-based and technology-based recommendations and serve as a basis for designing novel ways of mobile interaction. We implement a system based on these recommendations. 
The system supports developers in integrating wearable sensors and actuators by providing an easy-to-use API for accessing these devices. In conclusion, this thesis broadens the understanding of how garment-based wearable computing devices can enrich mobile interaction. It outlines challenges and opportunities on both the interaction and the technological level. The unique characteristics of smart garments make them a promising technology for taking the next step in mobile interaction.
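    The abstract mentions an API through which developers access garment-based sensors and actuators; a hypothetical sketch of what such an interface could look like is given below. The device names, methods, and capabilities are assumptions for illustration, not the thesis' actual API:

        from typing import Callable, List

        class WearableDevice:
            """Hypothetical handle to one garment-based sensor or actuator."""

            def __init__(self, device_id: str, capabilities: List[str]):
                self.device_id = device_id
                self.capabilities = capabilities    # e.g., ["touch", "heart_rate", "ems"]

            def on_touch(self, callback: Callable[[float, float], None]) -> None:
                """Register a callback for touch coordinates sensed on the garment."""
                ...

            def read_heart_rate(self) -> float:
                """Return the latest physiological sample (beats per minute)."""
                ...

            def ems_pulse(self, intensity: float, duration_ms: int) -> None:
                """Trigger an embodied haptic feedback pulse via electrical muscle stimulation."""
                ...

        def discover_devices() -> List[WearableDevice]:
            """Enumerate garments currently paired with the mobile device."""
            ...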

    Ubiquitous haptic feedback in human-computer interaction through electrical muscle stimulation

    [no abstract]

    Ways of walking: understanding walking's implications for the design of handheld technology via a humanistic ethnographic approach

    It seems logical to argue that mobile computing technologies are intended for use “on-the-go.” However, on closer inspection, the use of mobile technologies poses a number of challenges for users who are mobile, particularly when moving around on foot. In engaging with such mobile technologies and their envisaged development, we argue that interaction designers must increasingly consider a multitude of perspectives that relate to walking in order to frame design problems appropriately. In this paper, we consider a number of perspectives on walking, and we discuss how these may inspire the design of mobile technologies. Drawing on insights from non-representational theory, we develop a partial vocabulary with which to engage with qualities of pedestrian mobility, and we outline how taking more mindful approaches to walking may enrich and inform the design space of handheld technologies.
