8 research outputs found

    Game-Based Simulation and Study of Pedestrian-Automated Vehicle Interactions

    We identify the need for enhanced pedestrian–vehicle simulation tools and build such a tool to explore the interaction between pedestrian "players" and virtual human-driven and automated vehicles across different scenarios set in an urban environment. We first review contemporary research tools and then propose the design and development of a new desktop application that facilitates pedestrian-point-of-view research. We then conduct a three-step user experience experiment in which a small number of participants answer questions before and after using the application to interact with virtual human-driven and automated vehicles in diverse road-crossing scenarios. Behavioral results observed in virtual environments, especially when motivated by consequence, tend to simulate real life sufficiently well to inform design choices. From the simulation, we gained valuable insights into human–vehicle interactions. Upon completing this preliminary testing, we iterated on the tool's design and ultimately conducted an 89-participant study of human–vehicle interactions for three scenarios taking place in a virtual environment. Our tool raised participant awareness of autonomous vehicles and their capabilities and limitations, an important step in overcoming public distrust of AVs. We additionally saw that participants trust humans and technology less as drivers than in other contexts, and that pedestrians feel safer around vehicles with autonomy indicators. Further, we note that study participants increasingly feel safe with automated vehicles as exposure increases. These preliminary results, as well as the efficacy of the tool's design, may inform future socio-technical design for automated vehicles and their human interactions.

    Visuotactile Sensors with Emphasis on GelSight Sensor: A Review

    This review paper focuses on vision- and touch-based sensors known as visuotactile sensors. The study of visuotactile sensation and perception dates back centuries and has become a multidisciplinary field pursued by philosophers, psychologists, biologists, engineers, technologists, and roboticists in the fields of haptics, machine vision, and artificial intelligence. To the best of our knowledge, the earliest recorded visuotactile sensor was applied neither to robotics nor to hand or finger imprint analysis, but to recording the foot pressure distribution of a walking or standing human, an instrument known as a pedobarograph. Our review presents the literature on visuotactile sensors that led to a high-resolution, miniature, pedobarograph-like sensor known as the GelSight sensor. Moreover, this review covers the architecture, techniques, hardware, and software development of the GelSight sensor since 2009, with its applications in haptics, robotics, and computer vision.

    Dynamic Pose Tracking Performance Evaluation of HTC Vive Virtual Reality System

    Virtual reality tracking devices are rapidly becoming the go-to system for cost-effective motion tracking across communities such as robotics, biomechanics, sports, rehabilitation, and motion simulation. This article focuses on the spatial tracking performance of the HTC Vive lighthouse tracking system (VLTS) devices (tracker, controller, and head-mounted display). A comprehensive literature survey on the performance analysis of VLTS is presented, along with its shortcomings in terms of spatial tracking evaluation. Two key limitations are identified: for static cases, the lack of standard procedures and criteria, and for dynamic cases, the absence of spatial tracking studies altogether. We address the first by assessing VLTS using the optical tracking system standard specified by ASTM International, and the second by extending that standard to determine the upper velocity limit for reliable tracking. The findings are substantiated with trajectories of human wrist motion. Each evaluation's results are systematically analyzed with statistical hypothesis tests and criteria fulfillment. A Comau NS16, an industrial serial robot, was used as the ground-truth motion generator due to its repeatability and six-degree-of-freedom workspace. One of the major reasons that more generalized spatial tracking studies do not exist is that tracking performance depends heavily on the setup configuration, work volume, environment, etc. Thus, guidelines for configuring VLTS, and the approach adapted from the ASTM standard for evaluating VLTS in custom applications using our reported findings for both static and dynamic cases, are included in the appendix.
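
    The dynamic evaluation described above hinges on comparing time-synchronised Vive poses against robot ground truth and relating error to speed. Below is a minimal Python sketch of that idea; the array layout, the RMS-error metric, and the speed binning are illustrative assumptions, not the paper's exact ASTM-based procedure.

```python
import numpy as np

def tracking_error_by_speed(t, p_vive, p_robot, bin_width=0.1):
    """Bin RMS position error (m) by ground-truth speed (m/s).

    t       : (N,) sample times in seconds
    p_vive  : (N, 3) tracker positions expressed in the robot frame
    p_robot : (N, 3) ground-truth positions from the serial robot
    """
    err = np.linalg.norm(p_vive - p_robot, axis=1)   # per-sample position error
    vel = np.gradient(p_robot, t, axis=0)            # ground-truth velocity
    speed = np.linalg.norm(vel, axis=1)
    edges = np.arange(0.0, speed.max() + bin_width, bin_width)
    idx = np.digitize(speed, edges)                  # assign samples to speed bins
    # The lowest speed bin whose RMS error exceeds a chosen threshold would
    # mark the upper-velocity limit for reliable tracking.
    return {float(edges[i - 1]): float(np.sqrt(np.mean(err[idx == i] ** 2)))
            for i in np.unique(idx)}
```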

    Virtual reality obstacle crossing: adaptation, retention and transfer to the physical world

    Virtual reality (VR) paradigms are increasingly being used in movement and exercise sciences with the aim of enhancing motor function and stimulating motor adaptation in healthy and pathological conditions. Locomotor training based in VR may be promising for motor skill learning, with transfer of VR skills to the physical world in turn required to benefit functional activities of daily life. This PhD project examines locomotor adaptations to repeated VR obstacle crossing in healthy young adults, as well as transfer to the untrained limb and the physical world, and retention of the learned skills. To this end, the thesis comprises three studies using controlled VR obstacle crossing interventions during treadmill walking. In the first and second studies we investigated adaptation to crossing unexpectedly appearing virtual obstacles, with and without feedback about crossing performance, and its transfer to the untrained leg. In the third study we investigated transfer of virtual obstacle crossing to physical obstacles of similar size to the virtual ones, which appeared at the same time point within the gait cycle. We also investigated whether the learned skills could be retained in each environment over one week. In all studies participants walked on a treadmill while wearing a VR headset that represented their body as an avatar via real-time synchronised optical motion capture. Participants had to cross virtual and/or physical obstacles with and without feedback about their crossing performance; where applicable, feedback was provided from motion capture immediately after virtual obstacle crossing. Toe clearance, margin of stability, and sagittal-plane lower extremity joint angles were calculated for the crossing legs to analyse adaptation, transfer, and retention of obstacle crossing performance. The main outcomes of the first and second studies were that crossing multiple virtual obstacles increased participants' dynamic stability and led to a nonlinear adaptation of toe clearance that was enhanced by visual feedback about crossing performance. However, independent of the use of feedback, no transfer to the untrained leg was detected. Moreover, despite significant and rapid adaptive changes in locomotor kinematics with repeated VR obstacle crossing, the results of the third study revealed limited transfer of learned skills from virtual to physical obstacles. Lastly, despite full retention over one week in the virtual environment, we found only partial retention when crossing a physical obstacle while walking on the treadmill. In summary, the findings of this PhD project confirm that repeated VR obstacle perturbations can effectively stimulate locomotor skill adaptations. However, these are not transferable to the untrained limb, irrespective of enhanced awareness and feedback. Moreover, the current data provide evidence that, despite significant adaptive changes in locomotion kinematics with repeated practice of obstacle crossing under VR conditions, transfer to and retention in the physical environment are limited. It may be that perception–action coupling in the virtual environment, and thus sensorimotor coordination, differs from the physical world, potentially inhibiting transfer and retention between the two conditions. Accordingly, VR-based locomotor skill training paradigms need to be considered carefully if they are to replace training in the physical world.
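
    The margin-of-stability outcome mentioned above is commonly computed from the extrapolated centre of mass (XCoM) of Hof et al. The thesis does not spell out its implementation, so the following Python sketch of an anteroposterior margin of stability is only an illustration: the variable names, the toe-marker boundary, and the pendulum-length choice are assumptions.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def margin_of_stability(com_pos, com_vel, toe_pos, leg_length):
    """Anteroposterior MoS = base-of-support boundary - XCoM (positive = stable).

    com_pos, com_vel : anteroposterior CoM position (m) and velocity (m/s)
    toe_pos          : anteroposterior position of the lead-limb toe marker (m)
    leg_length       : pendulum length (m), e.g. trochanter height (assumed)
    """
    omega0 = np.sqrt(G / leg_length)      # inverted-pendulum eigenfrequency
    xcom = com_pos + com_vel / omega0     # extrapolated centre of mass (Hof et al.)
    return toe_pos - xcom
```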

    Motion Generation and Planning System for a Virtual Reality Motion Simulator: Development, Integration, and Analysis

    In the past five years, the advent of virtual reality devices has significantly influenced research on immersion in virtual worlds. In addition to visual input, motion cues play a vital role in the sense of presence and the level of engagement in a virtual environment. This thesis aims to develop a motion generation and planning system for the SP7 motion simulator. The SP7 is a parallel robotic manipulator in a 6RSS-R configuration. The motion generation system must produce accurate motion data that match the visual and audio signals. In this research, two system workflows have been developed: the first for creating custom visual, audio, and motion cues, and the second for extracting the required motion data from an existing game or simulation. Motion data from the motion generation system are unbounded, while motion simulator movements are limited. The motion planning system, commonly known as the motion cueing algorithm, is used to create an effective illusion within the limited capabilities of the motion platform. Appropriate and effective motion cues can be achieved through a proper understanding of human motion perception, in particular the functioning of the vestibular system. A classical motion cueing algorithm was developed using models of the semicircular canals and otoliths, and its procedural implementation is described in this thesis. We integrated all components to turn this robotic mechanism into a VR motion simulator. In general, the performance of a motion simulator is measured by the quality of the motion perceived on the platform by the user. Accordingly, a novel methodology for systematic subjective evaluation of the SP7 with a panel of jurors was developed to assess the quality of motion perception. Based on the evaluation results, key issues with the current configuration of the SP7 were identified. Minor issues were rectified on the fly and so are not reported extensively in this thesis. Two major issues are addressed in depth: parameter tuning of the motion cueing algorithm and motion compensation of the visual signal in virtual reality devices. The first was resolved by developing a tuning strategy based on an abstraction-layer concept derived from a novel technique for objective assessment of the motion cueing algorithm. The second was traced to a calibration problem in the Vive lighthouse tracking system, so a thorough experimental study was performed to obtain an optimally calibrated environment. This was achieved by benchmarking the dynamic position tracking performance of the Vive lighthouse tracking system against an industrial serial robot as a ground-truth system. With these issues resolved, a general-purpose virtual reality motion simulator has been developed that can create custom visual, audio, and motion cues and execute motion planning for a robotic manipulator under human motion perception constraints.
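
    As a rough illustration of the classical motion cueing idea described above: high-pass filtering passes transient acceleration onsets to the platform within its workspace, while low-pass "tilt coordination" reproduces sustained acceleration by tilting the platform into gravity below the rotation-perception threshold. The following Python sketch of a single longitudinal channel is a generic textbook form; the cutoffs, orders, gains, and tilt limit are assumptions, not the SP7's tuned parameters.

```python
import numpy as np
from scipy import signal

G = 9.81    # m/s^2
FS = 100.0  # motion-data sample rate, Hz (assumed)

def classical_washout(a_x, gain=0.5, hp_cut=1.0, lp_cut=0.3):
    """Longitudinal channel: returns (platform acceleration command in m/s^2,
    pitch tilt command in rad) from vehicle acceleration a_x."""
    b_hp, a_hp = signal.butter(2, hp_cut, "highpass", fs=FS)
    b_lp, a_lp = signal.butter(2, lp_cut, "lowpass", fs=FS)
    a_platform = signal.lfilter(b_hp, a_hp, gain * a_x)   # transient onset cue
    a_sustained = signal.lfilter(b_lp, a_lp, gain * a_x)  # sustained component
    # Tilt so that gravity's component along x reproduces the sustained cue,
    # clipped to small angles to stay below the rotation-perception threshold.
    tilt = np.clip(np.arcsin(np.clip(a_sustained / G, -1.0, 1.0)), -0.2, 0.2)
    return a_platform, tilt
```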

    Fine-grained Haptics: Sensing and Actuating Haptic Primary Colours (force, vibration, and temperature)

    This thesis discusses the development of a multimodal, fine-grained visual-haptic system for teleoperation and robotic applications. The system is primarily composed of two complementary components: an input device known as the HaptiTemp sensor (combining "haptics" and "temperature"), a novel thermosensitive GelSight-like sensor, and an output device, an untethered multimodal fine-grained haptic glove. The HaptiTemp sensor is a visuotactile sensor that can sense the haptic primary colours: force, vibration, and temperature. It has novel switchable UV markers that can be made visible using UV LEDs. The switchable-marker feature is a real novelty of the HaptiTemp because it allows tactile information to be analysed from gel deformation without impairing the ability to classify or recognise images; it resolves the trade-off between marker density and capturing high-resolution images with a single sensor. The HaptiTemp sensor can measure vibrations by counting the number of blobs or pulses detected per unit time using a blob detection algorithm. For the first time, temperature detection has been incorporated into a GelSight-like sensor, making the HaptiTemp a haptic primary colours sensor. The HaptiTemp sensor also performs rapid temperature sensing, with a 643 ms response time over the 31°C to 50°C temperature range. This fast temperature response is comparable to the withdrawal reflex response in humans; this is the first time a sensor can trigger a sensory impulse that mimics a human reflex in the robotics community. The HaptiTemp sensor can also perform simultaneous temperature sensing and image classification using a machine vision camera, the OpenMV Cam H7 Plus, a capability that has not previously been reported or demonstrated by any tactile sensor. The HaptiTemp sensor can be used in teleoperation because it can transmit tactile analysis and image classification results over wireless communication. The HaptiTemp sensor is the closest thing to human skin in tactile sensing, tactile pattern recognition, and rapid temperature response.
    To feel what the HaptiTemp sensor is touching from a distance, a corresponding output device, an untethered multimodal haptic hand wearable, was developed to actuate the haptic primary colours sensed by the HaptiTemp sensor. This wearable communicates wirelessly and provides fine-grained cutaneous feedback for feeling the edges or surfaces of the tactile images captured by the HaptiTemp sensor. It has gradient kinesthetic force feedback that can restrict finger movements based on the force estimated by the HaptiTemp sensor: a retractable string from an ID badge holder, equipped with mini servos that control the stiffness of the wire, is attached to each fingertip. Vibrations detected by the HaptiTemp sensor can be actuated by the tapping motion of the tactile pins or by a buzzing mini-vibration motor. There is also a tiny annular Peltier device, or ThermoElectric Generator (TEG), with a mini-vibration motor, forming thermo-vibro feedback in the palm area that can be activated by a 'hot' or 'cold' signal from the HaptiTemp sensor. The haptic primary colours can also be embedded in a VR environment and actuated by the multimodal hand wearable.
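
    The vibration measurement described above, counting detected blobs or pulses per unit time, can be sketched generically with OpenCV's stock blob detector. The pulse criterion (a rising edge in per-frame blob count) and the detector defaults in this Python sketch are assumptions for illustration, not the HaptiTemp pipeline.

```python
import cv2

def vibration_frequency(frames, fps, min_blobs=5):
    """Estimate vibration frequency (Hz) as pulses per second, where a 'pulse'
    is a frame whose blob count rises past min_blobs (illustrative criterion)."""
    detector = cv2.SimpleBlobDetector_create()  # default parameters (assumed)
    pulses, was_active, n_frames = 0, False, 0
    for frame in frames:                        # frames: grayscale uint8 images
        n_frames += 1
        active = len(detector.detect(frame)) >= min_blobs
        if active and not was_active:           # rising edge = one pulse
            pulses += 1
        was_active = active
    return pulses * fps / n_frames if n_frames else 0.0
```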
    A VR application was developed to demonstrate rapid tactile actuation of edges, allowing the user to feel the contours of virtual objects. Collision detection scripts were embedded to activate the corresponding actuator in the multimodal haptic hand wearable whenever the tactile matrix simulator or hand avatar in VR collides with a virtual object. The TEG also becomes warm or cold depending on the virtual object the participant touches. Tests were conducted to explore virtual objects in 2D and 3D environments using a Leap Motion controller and a VR headset (Oculus Quest 2). Moreover, fine-grained cutaneous feedback was developed to feel the edges or surfaces of a tactile image, such as the tactile images captured by the HaptiTemp sensor, or to actuate tactile patterns on 2D or 3D virtual objects. The prototype resembles an exoskeleton glove with 16 tactile actuators (tactors) on each fingertip, 80 tactile pins in total, made from commercially available P20 Braille cells. Each tactor can be controlled individually, enabling the user to feel the edges or surfaces of images such as the high-resolution tactile images captured by the HaptiTemp sensor. This hand wearable can be used to enhance the immersive experience in a virtual reality environment. The tactors can be actuated in a tapping manner, creating a form of vibration feedback distinct from the buzzing vibration produced by a mini-vibration motor. The tactile pin height can also be varied, creating a gradient of pressure on the fingertip. Finally, the integration of the high-resolution HaptiTemp sensor and the untethered multimodal, fine-grained haptic hand wearable is presented, forming a visuotactile system for sensing and actuating the haptic primary colours. Force, vibration, and temperature sensing tests, with corresponding force, vibration, and temperature actuation tests, demonstrated a unified visual-haptic system. Aside from sensing and actuating haptic primary colours, touching the edges or surfaces of the tactile images captured by the HaptiTemp sensor was carried out using the fine-grained cutaneous feedback of the haptic hand wearable.
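
    As an illustration of driving the individually controllable tactors, the Python sketch below downsamples a grayscale tactile-image patch to one fingertip's 4x4 pin grid (16 tactors, as described above) and thresholds it into raise/lower commands. The resize step and threshold value are hypothetical; the actual glove drives P20 Braille cells through its own firmware.

```python
import cv2

def fingertip_pin_pattern(tactile_patch, threshold=128):
    """Map a 2-D uint8 tactile-image patch to a 4x4 boolean pin pattern
    (True = raise pin) for one fingertip's 16 tactors."""
    grid = cv2.resize(tactile_patch, (4, 4), interpolation=cv2.INTER_AREA)
    return grid >= threshold   # high intensity taken as a raised edge/surface
```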

    Proceedings of the 9th Arab Society for Computer Aided Architectural Design (ASCAAD) international conference 2021 (ASCAAD 2021): architecture in the age of disruptive technologies: transformation and challenges.

    The ASCAAD 2021 conference theme is Architecture in the age of disruptive technologies: transformation and challenges. The theme addresses the gradual shift in computational design away from prototypical morphogenetic-centered associations in architectural discourse. This imminent shift of focus is increasingly stirring debate in the architectural community and is provoking a much-needed critical questioning of the role of computation in architecture: from a sole embodiment and enactment of technical dimensions into one that deliberately pursues and embraces the humanities as an ultimate aspiration.