
    Toward multimodality: gesture and vibrotactile feedback in natural human computer interaction

    In the present work, users' interaction with advanced systems was investigated in different application domains and with respect to different interfaces. The methods employed were carefully devised to respond to the peculiarities of the interfaces under examination, and from the results we extracted a set of recommendations for developers. The first application domain examined was the home. In particular, we addressed the design of a gestural interface for controlling a lighting system embedded into a piece of kitchen furniture. A sample of end users was observed while interacting with a virtual simulation of the interface; based on video analysis of the users' spontaneous behaviors, we derived a set of significant interaction trends. The second application domain involved the exploration of an urban environment in mobility. In a comparative study, a haptic-audio interface and an audio-visual interface were employed to guide users towards landmarks and to provide them with information. We showed that the two systems were equally efficient in supporting the users and were both well received by them. In a navigational task we compared two tactile displays, each embedded in a different wearable device: a glove and a vest. Despite differences in shape and size, both systems successfully directed users to the target. The strengths and flaws of the two devices were pointed out and commented on by users. In a similar context, two devices supporting Augmented Reality technology, a pair of smartglasses and a smartphone, were compared. The experiment allowed us to identify the circumstances favoring the use of the smartglasses or the smartphone. Considered altogether, our findings suggest a set of recommendations for developers of advanced systems. First, we outline the importance of properly involving end users to unveil intuitive interaction modalities for gestural interfaces. We also highlight the importance of giving users the chance to choose the interaction mode that best fits the contextual characteristics, and to adjust the features of every interaction mode. Finally, we outline the potential of wearable devices to support interaction on the move, and the importance of finding a proper balance between the amount of information conveyed to the user and the size of the device.
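    The navigation studies summarised above hinge on mapping a target's relative direction and distance onto vibrotactile cues. As a minimal sketch of that mapping in Python (the tactor count, layout, and intensity law here are illustrative assumptions, not parameters reported in the thesis):

        def select_tactor(heading_deg: float, bearing_deg: float, num_tactors: int = 8) -> int:
            """Map the target's direction relative to the user's heading onto one of
            num_tactors vibration motors arranged evenly around the body
            (tactor 0 = straight ahead, indices increasing clockwise)."""
            relative = (bearing_deg - heading_deg) % 360.0
            sector = 360.0 / num_tactors
            return int((relative + sector / 2.0) // sector) % num_tactors

        def pulse_intensity(distance_m: float, max_range_m: float = 50.0) -> float:
            """Scale vibration intensity with proximity, clamped to [0, 1]."""
            return max(0.0, min(1.0, 1.0 - distance_m / max_range_m))

        # Example: target 30 degrees to the user's right, 10 m away, 8-tactor vest.
        assert select_tactor(0.0, 30.0) == 1
        print(pulse_intensity(10.0))  # 0.8

    Whether cues are delivered through a glove or a vest, the same direction-to-tactor mapping applies; only the number and placement of tactors change.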

    How do mobile devices support clinical work on hospital wards: an investigation of the selection and use of computing devices

    The mobile and information-intensive nature of clinical work in hospital settings presents a critical challenge: how to provide clinicians with access to information at the time and place of need? This challenge is particularly pertinent to decision-makers responsible for the selection of computing devices. Mobile devices are often promoted as a means to meet this challenge, and the existing literature tends to portray the mobility of devices as inherently beneficial. However, evidence clearly demonstrating how mobile devices support clinical work is limited. This research aimed to generate new knowledge contributing to two significant questions: (i) how do decision-makers select computing devices? and (ii) how do mobile devices support clinical work practices? The research was conducted in two stages. In stage one, interviews were conducted with 28 individuals involved in decisions regarding the selection of computing devices for hospital wards. Decision-makers reported a range of factors that influenced device selection; the role of the user, the types of tasks, and the location of tasks, for example, were deemed important. In stage two, a mixed-methods design comprising structured observations, interviews, and field notes was employed. A sample of 38 clinicians, on two wards of a metropolitan hospital, was observed for 90 hours. In total, 4,423 clinical tasks were recorded, capturing key information about the tasks doctors and nurses undertook, where they took place, and the devices used. The findings provide evidence validating core assumptions about mobile devices: namely, that they support clinicians' work by facilitating access to information at patients' bedsides. Notably, mobile devices also supported work away from the bedside and while clinicians were in transit, allowing continuity in work processes. However, mobile devices did not provide the best fit for all tasks, and additional factors, such as the temporal rhythms of the ward and the structure of ward round teams, affected how mobile devices supported work. Integration of findings from the two stages resulted in a detailed list of factors that influence the use of mobile devices on hospital wards. This new evidence provides valuable knowledge to guide the selection of computing devices to support, and potentially optimise, clinical work.
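    To make the structured-observation method concrete, a record of one observed task could look roughly like the sketch below; the field names and example values are assumptions for illustration, since the study's actual coding scheme is not given here.

        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class TaskObservation:
            """One observed clinical task (illustrative fields, not the study's scheme)."""
            timestamp: datetime
            clinician_role: str  # e.g. "doctor" or "nurse"
            task: str            # e.g. "documentation", "medication review"
            location: str        # e.g. "bedside", "corridor", "in transit"
            device: str          # e.g. "workstation on wheels", "tablet", "paper"

        obs = TaskObservation(datetime(2014, 3, 5, 9, 30), "nurse",
                              "medication review", "bedside", "tablet")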

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    Collaboration in Co-located Automotive Applications

    Virtual reality systems offer substantial potential in supporting decision processes based purely on computer-based representations and simulations. The automotive industry is a prime application domain for such technology, since almost all product parts are available as three-dimensional models. The consideration of ergonomic aspects during assembly tasks, the evaluation of human-machine interfaces in the car interior, design decision meetings, and customer presentations serve as but a few examples wherein the benefit of virtual reality technology is obvious. All these tasks require the involvement of a group of people with different areas of expertise. However, current stereoscopic display systems only provide correct 3D images for a single user, while other users see a more or less distorted virtual model. This is a major reason why these systems still face limited acceptance in the automotive industry: they need to be operated by experts who have an advanced understanding of the particular interaction techniques and are aware of the limitations and shortcomings of virtual reality technology. The central idea of this thesis is to investigate the utility of stereoscopic multi-user systems for various stages of the car development process. Such systems provide multiple users with individual and perspectively correct stereoscopic images, which are key features and serve as the premise for the appropriate support of collaborative group processes. The focus of the research is on questions related to various aspects of collaboration in multi-viewer systems, such as verbal communication, deictic reference, embodiments, and collaborative interaction techniques. The results of this endeavor provide scientific evidence that multi-viewer systems improve the usability of VR applications for various automotive scenarios wherein co-located group discussions are necessary. The thesis identifies and discusses the requirements for these scenarios as well as the limitations of applying multi-viewer technology in this context. A particularly important gesture in real-world group discussions is referencing an object by pointing with the hand, and the accuracy that can be expected for such pointing in VR is made evident. A novel two-user seating buck is introduced for the evaluation of ergonomics in a car interior, and the requirements on avatar representations for users sitting in a car are identified. Collaborative assembly tasks require high precision; the novel concept of a two-user prop significantly increases the quality of such a simulation in a virtual environment and allows ergonomists to study the strain on workers during an assembly sequence. These findings contribute toward an increased acceptance of VR technology for collaborative development meetings in the automotive industry and other domains.
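    The multi-viewer premise boils down to computing a separate, head-tracked stereo pair for every participant. A minimal sketch of one such frame follows; the object names (scene.render_from, projector.show, the user fields) are hypothetical placeholders, not the system described in the thesis.

        import numpy as np

        def eye_positions(head_pos, head_right, ipd=0.064):
            """Offset each eye half the interpupillary distance (metres)
            along the tracked head's right axis."""
            head_pos = np.asarray(head_pos, dtype=float)
            right = np.asarray(head_right, dtype=float)
            half = 0.5 * ipd * right / np.linalg.norm(right)
            return head_pos - half, head_pos + half

        def render_multiviewer_frame(users, scene, projector):
            """One frame on a multi-viewer stereo display: every tracked user
            gets an individually correct image pair, multiplexed to their
            shutter glasses."""
            for slot, user in enumerate(users):  # e.g. two to four tracked users
                left, right = eye_positions(user.head_pos, user.head_right)
                image_left = scene.render_from(left, user.view_dir)    # hypothetical API
                image_right = scene.render_from(right, user.view_dir)  # hypothetical API
                projector.show(slot, image_left, image_right)          # per-user time slot

    This per-user rendering is exactly what single-viewer systems omit, and it is why only the tracked user sees an undistorted model there.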

    An Enactive Approach to Technologically Mediated Learning Through Play

    This thesis investigated the application of enactive principles to the design of classroom technologies for young children's learning through play. The study identified the attributes of an enactive pedagogy in order to develop a design framework that accommodates enactive learning processes. From an enactive perspective, the learner is defined as an autonomous agent, capable of adaptation via the recursive consumption of self-generated meaning within the constraints of a social and material world. Adaptation is the parallel development of mind and body that occurs through interaction, which renders knowledge contingent on the environment from which it emerged. Parallel development means that action and perception in learning are as critical as thinking. An enactive approach to design therefore aspires to make the physical and social interaction with technology meaningful to the learning objective, rather than an aside to cognitive tasks. The design framework considered in detail the necessary affordances in terms of interaction, activity, and context. In a further interpretation of enactive principles, this thesis recognised play and pretence as vehicles for designing and evaluating enactive learning and the embodied use of technology. In answering the research question, the interpreted framework was applied as a novel approach to designing and analysing children's engagement with technology for learning, working towards a paradigm where interaction is part of the learning experience. The aspiration for the framework was to inform the design of interaction modalities that allow users to exercise the inherent mechanisms they have for making sense of the world. However, before claiming to support enactive learning processes, there was a question as to whether technologically mediated realities were suitable environments for this framework. Given the emphasis on the physical world and action, it was the intention of the research and design activities to explore whether digital artefacts and spaces were an impoverished reality for enactive learning, or whether digital objects and spaces could afford sufficient 'reality' to be referents in social play behaviours. The project embedded in this research was tasked with creating deployable technologies that could be used in the classroom. Consequently, the framework was applied in practice, whereby the design practice and deployed technologies served as pragmatic tools to investigate the potential for interactive technologies in children's physical, social and cognitive learning. To understand the context, underpin the design framework, and evaluate the impact of any technological interventions in school life, the design practice was informed by ethnographic methodologies. The design process responded to cascading findings from phased research activities. The initial fieldwork located meaning-making activities within the classroom, with a view to re-appropriating situated and familiar practices. In the next stage of the design practice, this formative analysis determined the objectives of the participatory sessions, which in turn contributed to the creation of technologies suitable for an inquiry into enactive learning. The final technologies used standard school equipment with bespoke software, enabling children to engage with real-time compositing and tracking applications installed in the classrooms' role-play spaces.
    The evaluation of the play space technologies in the wild revealed that, under certain conditions, there was evidence of embodied presence in the children's social, physical and affective behaviour, illustrating how mediated realities can extend physical spaces. These findings suggest that attention to meaningful interaction, a presence in the environment as a result of an active role, and a social presence, as outlined in the design framework, can lead to the emergence of observable enactive learning processes. As the design framework was applied, these principles could be examined and revised. Two notable revisions to the design framework, in light of the applied practice, were: (1) a key affordance for meaningful action to emerge required opportunities for direct and immediate engagement; and (2) a situated awareness of the self and other inhabitants in the mediated space required support across the spectrum of social interaction. The application of the design framework enabled this investigation to move beyond a theoretical discourse.

    3D Multimodal Interaction with Physically-based Virtual Environments

    The virtual has become a huge field of exploration for researchers: it can assist the surgeon, help the prototyping of industrial objects, simulate natural phenomena, act as a fantastic time machine, or entertain users through games and movies. Far beyond the mere visual rendering of the virtual environment, Virtual Reality aims at literally immersing the user in the virtual world. VR technologies simulate digital environments with which users can interact and, as a result, perceive through different modalities the effects of their actions in real time. The challenges are huge: the user's motions need to be perceived and to have an immediate impact on the virtual world by modifying its objects in real time. In addition, the targeted immersion of the user is not only visual: auditory and haptic feedback need to be taken into account, merging all the sensory modalities of the user into a multimodal response. The global objective of my research activities is to improve 3D interaction with complex virtual environments by proposing novel approaches for physically-based and multimodal interaction. I have laid the foundations of my work on designing interactions with complex virtual worlds, where complexity refers to higher demands on the characteristics of the virtual environments. My research can be described within three main research axes inherent to the 3D interaction loop: (1) the physically-based modeling of the virtual world, to take into account the complexity of virtual object behavior, topology modifications, and object interactions; (2) multimodal feedback, for combining the sensory modalities into a global response from the virtual world to the user; and (3) the design of body-based 3D interaction techniques and devices, for establishing the interfaces between the user and the virtual world. All these contributions can be gathered in a general framework spanning the 3D interaction loop. By improving all the components of this framework, I aim at proposing approaches that could be used in future virtual reality applications, and also more generally in other areas such as medical simulation, gesture training, robotics, virtual prototyping for industry, or web content.
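    The three axes meet in the 3D interaction loop itself: sense the user, step the physically-based world, and route the world's response to every sensory channel. A minimal sketch of such a loop, assuming hypothetical tracker, physics, and displays interfaces (none of these names come from the text):

        import time

        def interaction_loop(tracker, physics, displays, dt=0.001):
            """One pass per dt: sense the user, step the physically-based
            world, then feed back visual, haptic, and audio responses."""
            while displays.active():
                pose = tracker.read()                   # user's head/hand motions
                physics.apply_user_input(pose)          # user acts on virtual objects
                contacts = physics.step(dt)             # object behaviour, topology changes
                displays.visual.draw(physics.state())   # visual feedback
                displays.haptic.render(contacts)        # haptics commonly runs near 1 kHz
                displays.audio.play(contacts)           # contact sounds
                time.sleep(dt)

    In practice the visual channel would run at a lower rate than the haptic one; the single-rate loop above only sketches the ordering of the stages.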

    The design, development and evaluation of cross-platform mobile applications and services supporting social accountability monitoring

    Local government processes require meaningful and effective participation from both citizens and their governments in order to remain truly democratic. This project investigates the use of mobile phones as a tool for supporting this participation. MobiSAM, a system which aims to enhance the Social Accountability Monitoring (SAM) methodology at local government level, has been designed and implemented. The research presented in this thesis examines tools and techniques for the development of cross-platform client applications, allowing access to the MobiSAM service across heterogeneous mobile platforms, handsets and interaction styles. Particular attention is paid to providing an easily navigated user interface (UI), as well as offering clear and concise visualisation capabilities. Depending on the host device, interactivity is also included within these visualisations, potentially helping provide further insight into the visualised data. Guided by the results obtained from a comprehensive baseline study of the Grahamstown area, steps are taken to lower the barrier of entry to using the MobiSAM service, potentially maximising its market reach. These include extending client application support to all identified mobile platforms (including feature phones); providing multi-language UIs (in English, isiXhosa and Afrikaans); and ensuring client application data usage is kept to a minimum. The particular strengths of a given device are also leveraged, such as its camera capabilities and built-in Global Positioning System (GPS) module, potentially allowing for more effective engagement with local municipalities. Additionally, a Short Message Service (SMS) gateway is developed, allowing all Global System for Mobile Communications (GSM) compatible handsets access to the MobiSAM service via traditional SMS. Following an iterative, user-centred design process, a thorough evaluation of the client application is performed to gather feedback relating to the navigation and visualisation capabilities, the results of which are used to further refine its design. A comparative usability evaluation using two different versions of the cross-platform client application is also undertaken, highlighting the perceived memorability, learnability and satisfaction of each. Results from the evaluation reveal which version of the client application is to be deployed during future pilot studies.
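    An SMS gateway of this kind typically parses a keyword command out of the message body and replies within the 160-character limit of a single SMS. The sketch below illustrates the idea only; the command names and reply texts are invented for illustration and are not MobiSAM's actual protocol.

        def handle_sms(sender: str, body: str) -> str:
            """Parse a plain-text SMS into a command and build a reply that
            fits one 160-character message (illustrative commands only)."""
            parts = body.strip().split()
            if not parts:
                return "Usage: RESULTS <topic> | REPORT <topic> <text>"
            command, args = parts[0].upper(), parts[1:]
            if command == "RESULTS" and args:
                return f"Latest results for '{args[0]}': ..."[:160]
            if command == "REPORT" and len(args) >= 2:
                topic, text = args[0], " ".join(args[1:])
                return f"Report on '{topic}' ({len(text)} chars) logged for {sender}."[:160]
            return "Unknown command. Send HELP for options."

        # Example: print(handle_sms("+27831234567", "results water"))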

    Kinematics and control of precision grip grasping

    This thesis is about the kind of signals used in our central nervous system for guiding skilled motor behavior. In the first two projects, a currently very influential theory on the flow of visual information inside our brain was tested. According to A. D. Milner and Goodale (1995), there exist two largely independent visual streams. The dorsal stream is supposed to transmit visual information for the guidance of action; the ventral stream is thought to generate a conscious percept of the environment. The streams are said to use different parts of the visual information and to differ in temporal characteristics. Namely, the dorsal stream is proposed to have a lower sensitivity for color and a more rapid decay of information than the ventral stream. In the first project, the role of chromatic information in action guidance was probed. We let participants grasp colored stimuli which varied in luminance. Critically, some of these stimuli were completely isoluminant with the background. These stimuli thus could only be discriminated from their surroundings by means of chromatic contrast, a poor input signal for the dorsal stream. Nevertheless, our participants were perfectly able to guide their grip to these targets as well. In the second project, the temporal characteristics of the two streams were probed. For a certain group of neurological patients, it has been argued that they are able to switch from dorsal to ventral control when visual information is removed. These optic ataxia patients are normally quite bad at executing visually guided movements such as pointing or grasping. Different researchers, however, demonstrated that their accuracy does improve when there is a delay between target presentation and movement execution. Using different delay times and pointing movements, Himmelbach and Karnath (2005) had shown that this improvement increases linearly with longer delays. We aimed at a replication of this result and a generalization to precision grip movements. Our results from two patients, however, did not show any improvement in grasping due to longer delay times. In pointing, an effect was found only in one of the patients and only in one of several measures of pointing accuracy. Taken together, the results of the first two projects do not support the idea of two independent visual streams and are more in line with the idea of a single visual representation of target objects. The third project aimed at closing a gap in existing model approaches to precision grip kinematics. The available models need the target points of a movement as an input on which they can operate. From the literature on human and robotic grasping, we extracted the most plausible set of rules for grasp point selection. We created objects suitable to put these rules into conflict with each other, and thereby estimated the individual contribution of each rule. We validated the model by predicting grasp points on a completely novel set of objects. Our straightforward approach showed a very good performance in predicting the preferred contact points of human actors.
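    The grasp-point model combines several selection rules into one score per candidate contact pair. The sketch below shows the weighted-sum idea only; the two rules and their weights are illustrative stand-ins, not the rules or fitted weights from the thesis.

        import numpy as np

        def grasp_score(p_thumb, p_finger, centre_of_mass, weights=(1.0, 0.5)):
            """Score a candidate pair of contact points; higher is better.
            Illustrative rules: torque balance and grip aperture."""
            p_thumb, p_finger = np.asarray(p_thumb, float), np.asarray(p_finger, float)
            midpoint = 0.5 * (p_thumb + p_finger)
            # Rule 1: the grasp axis should pass near the centre of mass.
            com_offset = np.linalg.norm(midpoint - centre_of_mass)
            # Rule 2: prefer smaller grip apertures (within the hand's reach).
            aperture = np.linalg.norm(p_finger - p_thumb)
            w_com, w_aperture = weights
            return -(w_com * com_offset + w_aperture * aperture)

        def best_grasp(candidates, centre_of_mass):
            """Pick the contact-point pair with the highest combined rule score."""
            return max(candidates, key=lambda c: grasp_score(c[0], c[1], centre_of_mass))

        # Example: two candidate grasps on a unit cube centred at the origin;
        # the pair whose axis passes through the centre of mass wins.
        cands = [([-0.5, 0, 0], [0.5, 0, 0]), ([-0.5, 0.4, 0], [0.5, 0.4, 0])]
        print(best_grasp(cands, np.zeros(3)))

    Putting rules into conflict, as the thesis describes, amounts to designing objects where no candidate satisfies all rules at once, so the fitted weights reveal each rule's relative contribution.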