2,094 research outputs found

    Nomadic input on mobile devices: the influence of touch input technique and walking speed on performance and offset modeling

    In everyday life, people use their mobile phones on the go, at different walking speeds and with different touch input techniques. Unfortunately, much of the published research in mobile interaction does not quantify the influence of these variables. In this paper, we analyze the influence of walking speed, gait pattern, and input technique on commonly used performance parameters such as error rate, accuracy, and tapping speed, and we compare the results to the static condition. We examine the influence of these factors on the machine-learned offset model used to correct user input, and we make design recommendations. The results show that all performance parameters degraded when the subject started to move, for all input techniques. Index-finger pointing techniques demonstrated better overall performance than thumb-pointing techniques. The influence of gait phase on tap event likelihood and accuracy was demonstrated for all input techniques and all walking speeds. Finally, it was shown that the offset model built on static data did not perform as well as models inferred from dynamic data, which indicates the speed-specific nature of the models. Models identified using specific input techniques also did not perform well when tested in other conditions, demonstrating that an offset model's validity is limited to a particular input technique. The model was therefore calibrated using data recorded with the appropriate input technique, at 75% of preferred walking speed, which is the speed to which users spontaneously slow down when they use a mobile device and which presents a tradeoff between accuracy and usability. This led to an increase in accuracy compared to models built on static data: the error rate was reduced by between 0.05% and 5.3% for landscape-based methods and by between 5.3% and 11.9% for portrait-based methods.
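
    The abstract does not give the offset model's exact form; the sketch below only illustrates the general idea, assuming a simple per-axis linear regression that maps raw tap positions to corrected positions. The model class, features, and calibration procedure of the actual paper may differ, and all data here are synthetic.

```python
# Minimal sketch of a touch-offset model: learn a 2D correction that maps
# recorded tap positions to intended target positions, then apply it to new
# taps. Linear model and data are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Calibration data, e.g. recorded at 75% of preferred walking speed with the
# target input technique (here: random synthetic stand-ins, in mm).
targets = rng.uniform(0, 100, size=(500, 2))              # intended (x, y)
taps = targets + rng.normal([1.5, -2.0], 1.0, (500, 2))   # offset + noise

# Fit one regressor per axis: corrected position = f(raw tap position).
model = LinearRegression().fit(taps, targets)

def correct(tap_xy):
    """Apply the learned offset model to a raw tap position."""
    return model.predict(np.atleast_2d(tap_xy))[0]

print(correct([50.0, 50.0]))  # tap shifted back toward the intended target
```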

    Computational Modeling and Experimental Research on Touchscreen Gestures, Audio/Speech Interaction, and Driving

    As humans are exposed to rapidly evolving complex systems, there is a growing need for humans and systems to use multiple communication modalities, such as auditory, vocal (or speech), gestural, or visual channels; it is therefore important to evaluate multimodal human-machine interactions under multitasking conditions so as to improve human performance and safety. However, traditional methods of evaluating human performance and safety rely on experimental settings with human subjects, which are costly and time-consuming to conduct. To minimize the limitations of traditional usability tests, digital human models are often developed and used; they also help us better understand the underlying human mental processes so that safety can be improved and mental overload avoided. In this dissertation research, I combined computational cognitive modeling and experimental methods to study mental processes and to identify differences in human performance/workload across conditions. The computational cognitive models were implemented by extending the Queuing Network-Model Human Processor (QN-MHP) architecture, which enables simulation of human multitask behaviors and multimodal interactions in human-machine systems. Three experiments were conducted to investigate human behaviors in multimodal and multitasking scenarios, addressing three specific research aims: to understand (1) how humans use finger movements to input information on touchscreen devices (touchscreen gestures), (2) how humans use auditory/vocal signals to interact with machines (audio/speech interaction), and (3) how humans drive vehicles (driving controls). Future research applications of computational modeling and experimental research are also discussed. Scientifically, the results of this dissertation make significant contributions to our understanding of the nature of touchscreen gestures, audio/speech interaction, and driving controls in human-machine systems, and of whether they benefit or jeopardize human performance and safety in multimodal, concurrent task environments. Moreover, in contrast to previous models of multitasking scenarios that focus mainly on visual processes, this work develops quantitative models of the combined effects of auditory, tactile, and visual factors on multitasking performance. From a practical perspective, the modeling work conducted in this research may help multimodal interface designers overcome the limitations of traditional usability tests and make quick design comparisons, less constrained by time-consuming factors such as developing prototypes and running human subjects. Furthermore, this research may help identify which elements of multimodal, multitasking scenarios increase workload and completion time, which can be used to reduce the number of accidents and injuries caused by distraction.
PhD, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143903/1/heejinj_1.pd
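
    QN-MHP proper is a full queueing network of many servers; the toy sketch below only illustrates the underlying queueing idea, with stimuli flowing through serial perceptual, cognitive, and motor servers. The service times and structure are illustrative placeholders, not calibrated QN-MHP parameters.

```python
# Minimal sketch of the queueing idea behind QN-MHP-style models: each
# stimulus passes through perceptual -> cognitive -> motor servers, each a
# capacity-1 queue, so task time emerges from service plus waiting.
import random

SERVICE_MS = {"perceptual": 100, "cognitive": 70, "motor": 140}  # toy means
STAGES = ["perceptual", "cognitive", "motor"]

def simulate(arrivals_ms):
    """Return completion times for stimuli arriving at the given times."""
    free_at = {s: 0.0 for s in STAGES}   # when each server becomes idle
    done = []
    for t in arrivals_ms:
        for s in STAGES:
            start = max(t, free_at[s])   # wait if the server is still busy
            t = start + random.expovariate(1 / SERVICE_MS[s])
            free_at[s] = t
        done.append(t)
    return done

random.seed(1)
print(simulate([0, 50, 100]))  # queueing delays grow when stimuli bunch up
```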

    Vulnerability analysis of cyber-behavioral biometric authentication

    Research on cyber-behavioral biometric authentication has traditionally assumed naïve (or zero-effort) impostors who make no attempt to generate sophisticated forgeries of biometric samples. Given the plethora of adversarial technologies on the Internet, it is questionable whether the zero-effort threat model provides a realistic estimate of how these authentication systems would perform in the wake of adversity. To better evaluate the efficiency of these authentication systems, there is a need for research on algorithmic attacks that simulate state-of-the-art threats. To tackle this problem, we took the case of keystroke and touch-based authentication and developed a new family of algorithmic attacks which leverage the intrinsic instability and variability exhibited by users' behavioral biometric patterns. For both fixed-text (password-based) keystroke and continuous touch-based authentication, we: 1) used a wide range of pattern analysis and statistical techniques to examine large repositories of biometric data for weaknesses that adversaries could exploit to break these systems, 2) designed algorithmic attacks whose mechanisms hinge on the discovered weaknesses, and 3) rigorously analyzed the impact of the attacks on the best verification algorithms in the respective research domains. When launched against three high-performance password-based keystroke verification systems, our attacks increased the mean Equal Error Rates (EERs) of the systems by between 28.6% and 84.4% relative to the traditional zero-effort attack. For the touch-based authentication system, the attacks performed even better, increasing the system's mean EER by between 338.8% and 1535.6%, depending on parameters such as the failure-to-enroll threshold and the type of touch gesture subjected to attack. For both keystroke and touch-based authentication, we found a small proportion of users who saw considerably greater performance degradation than others as a result of the attack, as well as a subset of users who were completely immune to the attacks. Our work exposes a previously unexplored weakness of keystroke and touch-based authentication and opens the door to the design of behavioral biometric systems that are resistant to statistical attacks.
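
    For readers unfamiliar with the metric the attacks are evaluated against, below is a minimal sketch of how an Equal Error Rate is computed from match scores. The scores are synthetic; this is not the authors' verification pipeline.

```python
# Minimal sketch of the Equal Error Rate (EER): sweep a decision threshold
# over match scores and find where the false accept rate ~= false reject rate.
import numpy as np

def eer(genuine, impostor):
    """EER for higher-is-more-genuine match scores."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 1000)   # legitimate-user scores
impostor = rng.normal(0.5, 0.15, 1000)  # zero-effort impostor scores
print(f"EER ~ {eer(genuine, impostor):.3f}")
```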

    Suitability of virtual physics and touch gestures in touchscreen user interfaces for critical tasks

    The goal of this research was to examine whether modern touchscreen interaction concepts that are established on consumer electronics devices such as smartphones can be used in time-critical and safety-critical use cases, such as machine control or healthcare appliances. Several prevalent interaction concepts, with and without touch gestures and virtual physics, were tested experimentally in common use cases to assess their efficiency, error rate, and user satisfaction during task completion. Based on the results, design recommendations for list scrolling and horizontal dialog navigation are given.
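
    A minimal sketch of the "virtual physics" list scrolling behavior such work evaluates, assuming a simple flick-plus-friction model: after release, the list keeps moving with the flick velocity, which decays exponentially. The friction constant and frame step are illustrative, not values from the thesis.

```python
# Minimal sketch of kinetic ("virtual physics") scrolling: a flick imparts a
# velocity that decays under per-frame friction until the list comes to rest.
def kinetic_scroll(position, velocity, friction=0.95, dt_ms=16, min_v=0.01):
    """Yield list positions frame by frame until the flick dies out."""
    while abs(velocity) > min_v:
        position += velocity * dt_ms
        velocity *= friction          # exponential decay per frame
        yield position

# A flick released at 0.8 px/ms from position 0:
frames = list(kinetic_scroll(0.0, 0.8))
print(len(frames), round(frames[-1], 1))  # frames until rest, final offset
```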

    WearPut: Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Department of Biomedical Engineering (Human Factors Engineering)
    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, commercially successful smartwatches worn on the wrist drive market growth by sharing the role of smartphones and supporting health management. The emerging Head Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas such as video games, education, simulation, and productivity tools. However, these powerful wearables face challenges in interaction due to the inevitably limited space for input and output imposed by form factors specialized to fit body parts. To complement the constrained interaction experience, many wearable devices still rely on other, larger form factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional interaction devices can constrain the viability of wearable devices in many usage scenarios by tethering users' hands to physical devices. This thesis argues that developing novel human-computer interaction techniques for the specialized wearable form factors is vital for wearables to be reliable standalone products. It seeks to address the issue of constrained interaction experience with novel interaction techniques that exploit finger motions during input on the specialized form factors of wearable devices. Several characteristics of finger input motions promise to increase the expressiveness of input on the physically limited input space of wearable devices. First, finger input techniques are prevalent on many large form factor devices (e.g., touchscreens or physical keyboards) due to their fast and accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or hand tracking system) to detect finger motions, which enables the implementation of novel interaction systems without additional sensors or devices. Third, the specialized form factors of wearable devices create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of the fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. The thesis demonstrates this claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, it explored the comfort range of static and dynamic touch input with angles on the touchscreen of smartwatches. The results showed specific comfort ranges across variations in fingers, finger regions, and poses, due to the unique input context in which the touching hand approaches a small, fixed touchscreen within a limited range of angles. Finger region-aware systems that recognize the flat and the side of the finger were then constructed based on the contact areas on the touchscreen, to enhance the expressiveness of angle-based touch input.
In the second scenario, this thesis revealed the distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to the implementation of finger identification systems for distinguishing two or three fingers. Two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification, which enables increases in the expressiveness of touch input techniques. In addition, this thesis supports the general claim across a range of wearable scenarios by exploring finger input motions in the air. In the third scenario, it investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The results of the observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger-stroke relationships, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems based on finger stroking. Lastly, the thesis examined the viable locations of in-air thumb touch input on virtual targets above the palm. It was confirmed that fast and accurate sequential thumb touches can be achieved at a total of 8 key locations with the built-in hand tracking system of a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs. This thesis argues that the objective and subjective results and the novel interaction techniques across these wearable scenarios support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, the thesis concludes with its contributions, design considerations, and the scope of future research, for future researchers and developers implementing robust finger-based interaction systems on various types of wearable devices.
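
    A minimal sketch of the kind of touch-based finger identification described above, assuming simple contact-profile features and an off-the-shelf classifier. The feature set, sensors, and model of the actual thesis may differ, and the data here are synthetic.

```python
# Minimal sketch: classify which finger produced a touch from its contact
# profile. Features [contact_area_mm2, major_axis, minor_axis] are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
thumb  = rng.normal([80, 12, 9], [8, 1, 1], (200, 3))   # synthetic touches
index  = rng.normal([45,  8, 6], [5, 1, 1], (200, 3))
middle = rng.normal([55,  9, 7], [5, 1, 1], (200, 3))

X = np.vstack([thumb, index, middle])
y = np.array([0] * 200 + [1] * 200 + [2] * 200)          # finger labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[47, 8.2, 6.1]]))  # -> likely the index finger (label 1)
```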

    Grasp-sensitive surfaces

    Grasping objects with our hands allows us to skillfully move and manipulate them. Hand-held tools further extend our capabilities by adapting the precision, power, and shape of our hands to the task at hand. Some of these tools, such as mobile phones or computer mice, already incorporate information processing capabilities. Many other tools may be augmented with small, energy-efficient digital sensors and processors. This allows graspable objects to learn about the user grasping them - and to support the user's goals. For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it - and thus serve as a shortcut to that action. A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case. And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter. This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and on challenges in designing grasp-sensitive user interfaces. It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction. For nearly a century, scientists have analyzed human grasping. My literature review gives an overview of definitions, classifications, and models of human grasping. A small number of studies have investigated grasping in everyday situations. They found a much greater diversity of grasps than described by existing taxonomies. This diversity makes it difficult to directly associate certain grasps with users' goals. In order to structure related work and my own research, I formalize a generic workflow for grasp sensing. It comprises *capturing* sensor values, *identifying* the associated grasp, and *interpreting* the meaning of the grasp. A comprehensive overview of related work shows that implementing grasp-sensitive surfaces is still hard, that researchers are often unaware of related work from other disciplines, and that intuitive grasp interaction has not yet received much attention. In order to address the first issue, I developed three novel sensor technologies designed for grasp-sensitive surfaces. These mitigate one or more limitations of traditional sensing techniques: **HandSense** uses four strategically positioned capacitive sensors for detecting and classifying grasp patterns on mobile phones. The use of custom-built high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors. **FlyEye** uses optical fiber bundles connected to a camera for detecting touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a *relative calibration* algorithm that allows determining the locations of groups of sensors whose arrangement is not known. **TDRtouch** extends Time Domain Reflectometry (TDR), a technique traditionally used for inspecting cable faults, for touch and grasp sensing.
TDRtouch is able to locate touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces. I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects. Furthermore, I discuss challenges in making sense of raw grasp information and categorize possible interactions. Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information. For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object: *Goal* -- what we want to do with the object, *Relationship* -- what we know and feel about the object we want to grasp, *Anatomy* -- hand shape and learned movement patterns, *Setting* -- surrounding and environmental conditions, and *Properties* -- texture, shape, weight, and other intrinsics of the object. I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
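
    A minimal sketch of the capture/identify/interpret workflow the dissertation formalizes. The sensor layout, grasp templates, nearest-centroid matcher, and action mapping are all illustrative placeholders, not the dissertation's implementations.

```python
# Minimal sketch of the grasp-sensing workflow: *capture* sensor values,
# *identify* the associated grasp, *interpret* its meaning as an action.
import numpy as np

GRASP_TEMPLATES = {                  # mean sensor vectors per known grasp
    "phone_call": np.array([0.9, 0.8, 0.1, 0.2]),
    "photo":      np.array([0.2, 0.1, 0.9, 0.8]),
}
ACTIONS = {"phone_call": "open dialer", "photo": "launch camera"}

def capture():
    """Read raw values from (here, simulated) capacitive grasp sensors."""
    return np.array([0.85, 0.75, 0.15, 0.25])

def identify(sensors):
    """Match the sensor vector to the nearest known grasp template."""
    return min(GRASP_TEMPLATES,
               key=lambda g: np.linalg.norm(sensors - GRASP_TEMPLATES[g]))

def interpret(grasp):
    """Map the recognized grasp to an action (other context omitted here)."""
    return ACTIONS[grasp]

print(interpret(identify(capture())))  # -> "open dialer"
```

    Note that, per the dissertation's argument, a real system should combine the identified grasp with other context information rather than trigger the action from grasp data alone.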

    Using pressure input and thermal feedback to broaden haptic interaction with mobile devices

    Pressure input and thermal feedback are two under-researched aspects of touch in mobile human-computer interfaces. Pressure input could provide a wide, expressive range of continuous input for mobile devices. Thermal stimulation could provide an alternative means of conveying information non-visually. This thesis research investigated 1) how accurate pressure-based input on mobile devices could be when the user is walking and provided with only audio feedback, and 2) what forms of thermal stimulation are both salient and comfortable and so could be used to design structured thermal feedback for conveying multi-dimensional information. The first experiment tested control of pressure on a mobile device when sitting and using audio feedback. Targeting accuracy was >= 85% when maintaining 4-6 levels of pressure across 3.5 Newtons, using only audio feedback and a Dwell selection technique. Two further experiments tested control of pressure-based input when walking and found that accuracy remained very high (>= 97%) even when walking and using only audio feedback, when a rate-based input method was used. A fourth experiment tested how well each digit of one hand could apply pressure to a mobile phone individually and in combination with others. Each digit could apply pressure highly accurately, but not equally so, and some performed better in combination than alone. 2- or 3-digit combinations were more precise than 4- or 5-digit combinations. Experiment 5 compared one-handed, multi-digit pressure input using all 5 digits to traditional two-handed multitouch gestures for a combined zooming and rotating map task. Results showed comparable performance, with multitouch being ~1% more accurate but pressure input being ~0.5 s faster overall. Two experiments, one when sitting indoors and one when walking indoors, tested how salient and how subjectively comfortable/intense various forms of thermal stimulation were. Faster or larger changes were more salient, faster to detect, and less comfortable, and cold changes were more salient and faster to detect than warm changes. The two final studies designed two-dimensional structured ‘thermal icons’ that could convey two pieces of information. When indoors, icons were correctly identified with 83% accuracy. When outdoors, accuracy dropped to 69% when sitting and 61% when walking. This thesis provides the first detailed study of how precisely pressure can be applied to mobile devices when walking with audio feedback, and the first systematic study of how to design thermal feedback for interaction with mobile devices in mobile environments.
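
    A minimal sketch of pressure-level targeting with Dwell selection as described above. The level count and force range follow the text; the dwell time, quantization, and update API are assumptions for illustration.

```python
# Minimal sketch: quantize applied force (0-3.5 N) into discrete pressure
# levels and select a level once it has been held for a dwell period.
def make_selector(levels=5, max_newtons=3.5, dwell_ms=1000):
    state = {"level": None, "held_ms": 0}

    def update(force_n, dt_ms):
        """Feed one force sample; return the selected level, or None."""
        level = min(int(force_n / max_newtons * levels), levels - 1)
        if level == state["level"]:
            state["held_ms"] += dt_ms          # still on the same level
        else:
            state["level"], state["held_ms"] = level, 0  # level changed
        return level if state["held_ms"] >= dwell_ms else None

    return update

update = make_selector()
for _ in range(70):                    # ~70 frames at 16 ms ~= 1.1 s hold
    selected = update(2.0, 16)
print(selected)                        # level selected after the dwell
```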

    Designing touch-enabled electronic flight bags in SAR helicopter operations

    In order to benefit from potentially reduced operational costs and crew workload, airlines are increasingly interested in touchscreen-based Electronic Flight Bags (EFBs). This paper focuses on the specific domain of Search and Rescue (SAR) helicopters. A first set of results aimed at exploring and understanding the potential benefits and challenges of an EFB in a SAR environment is presented. A review of related work, operational observations, and interviews with pilots were conducted to understand and specify the use context. Digital Human Modelling (DHM) software was used to determine physical constraints on an EFB in this type of flight deck. A scenario was developed which will be used in future work to define the features, content, and functionality that a SAR pilot may wish to see in an EFB. Initial interface design guidelines developed from this work are presented.

    A musculoskeletal model of the human hand to improve human-device interaction

    Multi-touch tablets and smartphones are now widely used in both workplace and consumer settings. Interacting with these devices requires hand and arm movements that are potentially complex and poorly understood. Experimental studies have revealed differences in performance that could potentially be associated with injury risk. However, the underlying causes of performance differences are often difficult to identify. For example, many patterns of muscle activity can result in similar behavioral output. Muscle activity is one factor contributing to forces in tissues that could contribute to injury. However, experimental measurements of muscle activity and force in humans are extremely challenging. Models of the musculoskeletal system can be used to make specific estimates of neuromuscular coordination and musculoskeletal forces. However, existing models cannot easily describe complex, multi-finger gestures such as those used in multi-touch human-computer interaction (HCI) tasks. We therefore sought to develop a dynamic musculoskeletal simulation capable of estimating internal musculoskeletal loading during multi-touch tasks involving multiple digits of the hand, and to use the simulation to better understand complex multi-touch and gestural movements and potentially guide the design of technologies that reduce injury risk. To accomplish this, we focused on three specific tasks. First, we determined the optimal index-finger muscle attachment points within the context of the established, validated OpenSim arm model, using measured moment arm data taken from the literature. Second, we derived moment arm values from experimentally measured muscle attachments and used these values to determine muscle-tendon paths for both extrinsic and intrinsic muscles of the middle, ring, and little fingers. Finally, we explored differences in hand muscle activation patterns during zooming and rotating tasks on a tablet computer in twelve subjects. Toward this end, our musculoskeletal hand model will help better address neuromuscular coordination, safe gesture performance, and internal loading for multi-touch applications.
Dissertation/Thesis. Doctoral Dissertation, Mechanical Engineering, 201
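
    A minimal sketch of the tendon-excursion relationship underlying the moment-arm work described above: a muscle's moment arm about a joint equals the negative derivative of musculotendon length with respect to joint angle, r = -dL/dθ. The length function below is a toy placeholder, not the OpenSim model's geometry.

```python
# Minimal sketch of the tendon-excursion method: estimate a moment arm by
# numerically differentiating musculotendon length w.r.t. joint angle.
import numpy as np

def mt_length(theta_rad):
    """Toy musculotendon length (m) as a function of joint angle."""
    return 0.25 - 0.012 * np.sin(theta_rad)

def moment_arm(theta_rad, h=1e-5):
    """Central-difference estimate of r = -dL/dtheta (m)."""
    return -(mt_length(theta_rad + h) - mt_length(theta_rad - h)) / (2 * h)

for deg in (0, 30, 60, 90):
    print(deg, round(moment_arm(np.radians(deg)), 4))
# Attachment points would be tuned until such values match measured data.
```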

    Neural and motor basis of inter-individual interactions

    The goal of my Ph.D. work was to investigate the behavioral markers and the brain activities responsible for the emergence of sensorimotor communication. Sensorimotor communication can be defined as a form of communication consisting of flexible exchanges based on bodily signals, which increase the efficiency of inter-individual coordination: for instance, a soccer player carving his movements to inform another player about his intention. This form of interaction is highly dependent on the motor system and the ability to produce appropriate movements, but also on the partner's ability to decode these cues. To tackle these facets of human social interaction, we approached the complexity of the problem by splitting the research activities into two separate lines of research. First, we examined the motor-based human capability to perceive and “read” others' behaviors, focusing on single-subject experiments. The discovery of mirror neurons in monkey premotor cortex in the early nineties (di Pellegrino et al. 1992) motivated a number of human studies on this topic (Rizzolatti and Craighero 2004). The critical finding was that some ventral premotor neurons are engaged during visual presentation of actions performed by conspecifics. More importantly, those neurons were shown to encode also the actual execution of similar actions (i.e., irrespective of who the acting individual is). This phenomenon has been extensively investigated in humans using cortical and corticospinal measures (for reviews see, fMRI: Molenberghs, Cunnington, and Mattingley 2012; TMS: Naish et al. 2014; EEG: Pineda 2008). During single-pulse TMS (over the primary motor cortex), the amplitude of motor evoked potentials (MEPs) provides an index of corticospinal recruitment. During action observation, the modulation of this index follows the changes expected during action execution (Fadiga et al. 1995). However, dozens of studies have been published on this topic and have revealed important inconsistencies. For instance, MEPs have been shown to depend on observed low-level motor features (e.g., kinematic features or electromyography temporal coupling; Gangitano, Mottaghy, and Pascual-Leone 2001; Borroni et al. 2005; Cavallo et al. 2012) as well as on high-level movement properties (e.g., action goals; Cattaneo et al. 2009; Cattaneo et al. 2013). Furthermore, MEP modulations do not seem to be related to the observed effectors (Borroni and Baldissera 2008; Finisguerra et al. 2015; Senna, Bolognini, and Maravita 2014), suggesting their independence from low-level movement features. These contradictions call for new paradigms. Our starting hypothesis is that the organization and function of the mirror mechanism should follow that of the motor system during action execution. Hence, we derived three action observation protocols from classical motor control theories: 1) The first study was motivated by the fact that motor redundancy in action execution does not allow a one-to-one mapping between (single) muscle activation and action goals. Based on that, we showed that the effects of action observation (observation of an actor performing a power versus a precision grasp) are variable at the single-muscle level (MEPs: motor evoked potentials) but robust when evaluating the kinematics of TMS-evoked movements.
Considering that movements are based on the coordination of multiple muscle activations (muscular synergies), MEPs may represent a partial picture of the real corticospinal activation. Conversely, movement kinematics is both the final functional byproduct of muscle coordination and the sole visual feedback that can be extracted from action observation (i.e., muscle recruitment is not visible). We conclude that TMS-evoked kinematics may be more reliable in representing the state of the motor system during action observation. 2) In the second study, we exploited the inter-subject variability inherent to everyday whole-body human actions to evaluate the link between individual motor signatures (or motor styles) and the perception of others' actions. We found no group-level effect but a robust correlation between the individual motor signature recorded during action execution and the subsequent modulations of corticospinal excitability during action observation. However, the results were at odds with a strict version of the direct matching hypothesis, which would predict the opposite pattern. In fact, the more similar the actor's movement was to the observer's individual motor signature, the smaller the MEP amplitude, and vice versa. These results conform to the predictive coding hypothesis, suggesting that during action observation the motor system compares our own way of doing the action (individual motor signature) with the action displayed on the screen (the actor's movement). 3) In the third study, we investigated the neural mechanisms underlying the visual perception of action mistakes. According to a strict version of the direct matching hypothesis, the observer should reproduce the neural activation present during the actual execution of action errors (van Schie et al. 2004). Here, instead of observing an increase in cortical inhibition, we showed an early (120 ms) decrease of intracortical inhibition (short intracortical inhibition) when a mismatch was present between the observed (erroneous) action and the observer's expectation. As proposed by the predictive coding framework, the motor system may be involved in the generation of an error signal, potentially relying on an early decrease of intracortical inhibition within the corticomotor system. The second line of research aimed to investigate how sensorimotor communication flows between agents engaged in a complementary action coordination task. In this regard, the measures of interest were related to muscle activity and/or kinematics, as the recording of TMS-related indexes would be too complicated in a joint-action scenario. 1) In the first study, we exploited the known phenomenon of Anticipatory Postural Adjustments (APAs). APAs are postural adjustments made in anticipation of a self- or externally generated disturbance, in order to compensate for the predicted perturbation and stabilize the current posture. Here we examined how observing someone else lifting an object we hold can affect our own anticipatory postural adjustments of the arm. We showed that visual information alone (joint-action condition), in the absence of an efference copy (present only when the subject unloads the object situated on his hand himself), was not sufficient to fully deploy the needed anticipatory muscular activations. Rather, action observation elicited a dampened APA response that was later augmented by the arrival of congruent tactile feedback.
2) In the second study, we recorded the kinematics of orchestra musicians (one conductor and two lines of violinists). A manipulation was added to perturb the normal flow of information conveyed by the visual channel: the first line of violinists was rotated 180°, and thus faced the second line. Several techniques were used to extract inter-group synchronization (the Granger causality method) and intra-group synchronization (PCA for musicians and autoregression for conductors). The analyses were directed at two kinematic features, hand and head movements, which are central to functionally different actions: the hand is essential for instrumental actions, whereas head movements encode ancillary expressive actions. During the perturbation, we observed a complete reshaping of the whole pattern of communication in the direction of a distribution of leadership between conductor and violinists, especially with regard to head movements. In fact, in the perturbed condition, the second line acts as an informational hub connecting the first line to the conductor they can no longer see. This study evidences different forms of communication (coordination versus synchronization) flowing via different channels (ancillary versus instrumental) on different time scales.
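
    A minimal sketch of the Granger-causality analysis used for inter-group synchronization: test whether one performer's movement helps predict another's beyond that performer's own past. The signals below are synthetic stand-ins for the motion-capture time series, and the lag structure is an assumption.

```python
# Minimal sketch: does the conductor's movement Granger-cause a violinist's?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
conductor = rng.normal(size=n)
violinist = np.zeros(n)
for t in range(2, n):                 # violinist follows conductor at lag 2
    violinist[t] = (0.6 * conductor[t - 2]
                    + 0.3 * violinist[t - 1]
                    + rng.normal(0, 0.5))

# Column order: [effect, candidate cause]; small p-values at some lag mean
# the conductor's past improves prediction of the violinist's movement.
data = np.column_stack([violinist, conductor])
res = grangercausalitytests(data, maxlag=3)   # prints F-tests per lag
print(res[2][0]["ssr_ftest"])                 # (F, p, df_denom, df_num) at lag 2
```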