
    Autoencoding sensory substitution

    Tens of millions of people live blind, and their number is ever increasing. Visual-to-auditory sensory substitution (SS) encompasses a family of cheap, generic solutions that assist the visually impaired by conveying visual information through sound. The required SS training is lengthy: months of effort are necessary to reach a practical level of adaptation. There are two reasons for the tedious training process: the elongated substituting audio signal, and the disregard for the compressive characteristics of the human hearing system. To overcome these obstacles, we developed a novel class of SS methods by training deep recurrent autoencoders for image-to-sound conversion. We successfully trained deep learning models on different datasets to execute visual-to-auditory stimulus conversion. By constraining the visual space, we demonstrated the viability of shortened substituting audio signals, while proposing mechanisms, such as the integration of computational hearing models, to optimally convey visual features in the substituting stimulus as perceptually discernible auditory components. We tested our approach in two separate cases. In the first experiment, the author went blindfolded for 5 days while performing SS training on hand posture discrimination. The second experiment assessed the accuracy of reaching movements towards objects on a table. In both test cases, above-chance-level accuracy was attained after a few hours of training. Our novel SS architecture broadens the horizon of rehabilitation methods engineered for the visually impaired. Further improvements on the proposed model should yield faster rehabilitation of the blind and, as a consequence, wider adoption of SS devices.
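The core mapping described above, compressing an image into a short, structured audio sequence with a recurrent decoder, can be illustrated with a minimal sketch. This is not the authors' published model: the framework (PyTorch), the layer sizes, and the mel-spectrogram-style output are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' code): a convolutional image encoder paired
# with a recurrent decoder that emits a short spectrogram-like sequence,
# trained end to end as an image-to-sound mapper. Names and sizes are assumed.
import torch
import torch.nn as nn

class ImageToSound(nn.Module):
    def __init__(self, latent_dim=128, sound_steps=32, mel_bands=64):
        super().__init__()
        self.sound_steps = sound_steps
        # Image encoder: 64x64 grayscale input -> latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 8x8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        # Recurrent decoder: unroll the latent into sound_steps audio frames.
        self.decoder_rnn = nn.GRU(latent_dim, 256, batch_first=True)
        self.to_frame = nn.Linear(256, mel_bands)

    def forward(self, images):
        z = self.encoder(images)                             # (B, latent_dim)
        z_seq = z.unsqueeze(1).repeat(1, self.sound_steps, 1)
        frames, _ = self.decoder_rnn(z_seq)                  # (B, T, 256)
        return self.to_frame(frames)                         # (B, T, mel_bands)

model = ImageToSound()
dummy_images = torch.randn(8, 1, 64, 64)
spectrogram = model(dummy_images)
print(spectrogram.shape)  # torch.Size([8, 32, 64])
```

In the approach described in the abstract, the audio output would additionally be shaped by a computational hearing model so that visual features land on perceptually discernible auditory components.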

    Fine-grained Haptics: Sensing and Actuating Haptic Primary Colours (force, vibration, and temperature)

    This thesis discusses the development of a multimodal, fine-grained visual-haptic system for teleoperation and robotic applications. This system is primarily composed of two complementary components: an input device known as the HaptiTemp sensor (combining “Haptics” and “Temperature”), which is a novel thermosensitive GelSight-like sensor, and an output device, an untethered multimodal fine-grained haptic glove. The HaptiTemp sensor is a visuotactile sensor that can sense the haptic primary colours: force, vibration, and temperature. It has novel switchable UV markers that can be made visible using UV LEDs. The switchable-marker feature is a key novelty of the HaptiTemp because it allows tactile information from gel deformation to be analysed without impairing the ability to classify or recognise images. The use of switchable markers in the HaptiTemp sensor resolves the trade-off between marker density and capturing high-resolution images with a single sensor. The HaptiTemp sensor can measure vibrations by counting the number of blobs or pulses detected per unit time using a blob detection algorithm. For the first time, temperature detection was incorporated into a GelSight-like sensor, making the HaptiTemp a haptic primary colours sensor. The HaptiTemp sensor can also perform rapid temperature sensing, with a 643 ms response time over the 31°C to 50°C temperature range. This fast temperature response is comparable to the withdrawal reflex response in humans. This is the first time in the robotics community that a sensor can trigger a sensory impulse mimicking a human reflex. The HaptiTemp sensor can also perform simultaneous temperature sensing and image classification using a machine vision camera, the OpenMV Cam H7 Plus; this capability of simultaneous sensing and image classification has not previously been reported or demonstrated by any tactile sensor. The HaptiTemp sensor can be used in teleoperation because it can transmit tactile analysis and image classification results over wireless communication. The HaptiTemp sensor is the closest thing to human skin in tactile sensing, tactile pattern recognition, and rapid temperature response. In order to feel what the HaptiTemp sensor is touching from a distance, a corresponding output device, an untethered multimodal haptic hand wearable, was developed to actuate the haptic primary colours sensed by the HaptiTemp sensor. This wearable communicates wirelessly and has fine-grained cutaneous feedback for feeling the edges or surfaces of the tactile images captured by the HaptiTemp sensor. The untethered multimodal haptic hand wearable has gradient kinesthetic force feedback that can restrict finger movements based on the force estimated by the HaptiTemp sensor: a retractable string from an ID badge holder, equipped with mini-servos that control the stiffness of the wire, is attached to each fingertip to restrict finger movements. Vibrations detected by the HaptiTemp sensor can be actuated by the tapping motion of the tactile pins or by a buzzing mini-vibration motor. There is also a tiny annular Peltier device, or ThermoElectric Generator (TEG), with a mini-vibration motor, forming thermo-vibro feedback in the palm area that can be activated by a ‘hot’ or ‘cold’ signal from the HaptiTemp sensor. The haptic primary colours can also be embedded in a VR environment and actuated by the multimodal hand wearable.
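The vibration-sensing mechanism, counting blobs or pulses per unit time in the marker images, can be sketched roughly as follows. This is not the thesis code: it uses OpenCV's SimpleBlobDetector on a generic camera stream, whereas the actual sensor runs its blob detection on the OpenMV Cam H7 Plus; the camera index, detector defaults, and one-second counting window are assumptions.

```python
# Illustrative sketch of blob/pulse counting for vibration estimation.
# Thresholds, camera index, and frame rate are assumptions; the real sensor
# uses the OpenMV Cam H7 Plus rather than a desktop OpenCV pipeline.
import time
import cv2

detector = cv2.SimpleBlobDetector_create()
cap = cv2.VideoCapture(0)          # camera observing the gel/marker layer

pulse_count = 0
window_s = 1.0                     # count pulses over a 1-second window
t_start = time.time()

while time.time() - t_start < window_s:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints = detector.detect(gray)
    # Treat each frame in which markers (blobs) appear as one detected pulse.
    if keypoints:
        pulse_count += 1

cap.release()
print(f"Estimated vibration rate: {pulse_count / window_s:.1f} pulses/s")
```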
A VR application was developed to demonstrate rapid tactile actuation of edges, allowing the user to feel the contours of virtual objects. Collision detection scripts were embedded to activate the corresponding actuator in the multimodal haptic hand wearable whenever the tactile matrix simulator or hand avatar in VR collides with a virtual object. The TEG also gets warm or cold depending on the virtual object the participant has touched. Tests were conducted to explore virtual objects in 2D and 3D environments using Leap Motion control and a VR headset (Oculus Quest 2). Moreover, fine-grained cutaneous feedback was developed to feel the edges or surfaces of a tactile image, such as the tactile images captured by the HaptiTemp sensor, or to actuate tactile patterns in 2D or 3D virtual objects. The prototype resembles an exoskeleton glove with 16 tactile actuators (tactors) on each fingertip, 80 tactile pins in total, made from commercially available P20 Braille cells. Each tactor can be controlled individually to enable the user to feel the edges or surfaces of images, such as the high-resolution tactile images captured by the HaptiTemp sensor. This hand wearable can be used to enhance the immersive experience in a virtual reality environment. The tactors can be actuated in a tapping manner, creating a distinct form of vibration feedback as compared to the buzzing vibration produced by a mini-vibration motor. The tactile pin height can also be varied, creating a gradient of pressure on the fingertip. Finally, the integration of the high-resolution HaptiTemp sensor and the untethered multimodal, fine-grained haptic hand wearable is presented, forming a visuotactile system for sensing and actuating haptic primary colours. Force, vibration, and temperature sensing tests with corresponding force, vibration, and temperature actuating tests have demonstrated a unified visual-haptic system. Aside from sensing and actuating haptic primary colours, touching the edges or surfaces of the tactile images captured by the HaptiTemp sensor was carried out using the fine-grained cutaneous feedback of the haptic hand wearable.
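One way the fine-grained cutaneous display described above might be driven, reducing a tactile image to pin states for 16 tactors per fingertip (80 pins in total), is sketched below. The block-averaging, the threshold, and the function name are illustrative assumptions, not the thesis' actual driver code.

```python
# Sketch: map a grayscale tactile image onto five 4x4 tactor patches
# (16 pins per fingertip, 80 pins total). Names and thresholds are assumed.
import numpy as np

def image_to_pin_states(tactile_image, threshold=0.5):
    """Map a (H, W) grayscale tactile image in [0, 1] to five 4x4 pin grids."""
    fingers = np.array_split(tactile_image, 5, axis=1)  # one strip per finger
    pin_states = []
    for strip in fingers:
        # Downsample the strip to a 4x4 patch by block averaging.
        sh, sw = strip.shape[0] // 4, strip.shape[1] // 4
        patch = strip[:sh * 4, :sw * 4].reshape(4, sh, 4, sw).mean(axis=(1, 3))
        pin_states.append(patch > threshold)  # True = raise pin
    return pin_states  # list of five (4, 4) boolean arrays

demo = np.random.rand(64, 80)
for i, grid in enumerate(image_to_pin_states(demo)):
    print(f"finger {i}: {int(grid.sum())} pins raised")
```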

    KEER2022

    Pre-title: KEER2022. Diversities. Resource described: 25 July 2022.

    Affecting Fundamental Transformation in Future Construction Work Through Replication of the Master-Apprentice Learning Model in Human-Robot Worker Teams

    Construction robots continue to be increasingly deployed on construction sites to assist human workers in various tasks to improve safety, efficiency, and productivity. Due to the recent and ongoing growth in robot capabilities and functionalities, humans and robots are now able to work side-by-side and share workspaces. However, due to inherent safety and trust-related concerns, human-robot collaboration is subject to strict safety standards that require robot motion and forces to be sensitive to proximate human workers. In addition, construction robots are required to perform construction tasks in unstructured and cluttered environments. The tasks are quasi-repetitive, and robots need to handle unexpected circumstances arising from loose tolerances and discrepancies between as-designed and as-built work. It is therefore impractical to pre-program construction robots or apply optimization methods to determine robot motion trajectories for the performance of typical construction work. This research first proposes a new taxonomy for human-robot collaboration on construction sites, comprising five levels (Pre-Programming, Adaptive Manipulation, Imitation Learning, Improvisatory Control, and Full Autonomy), and identifies the gaps in knowledge transfer between humans and assisting robots. To address the identified gaps, this research focuses on three key studies: enabling construction robots to estimate their pose ubiquitously within the workspace (Pose Estimation), robots learning to perform construction tasks from human workers (Learning from Demonstration), and robots synchronizing their work plans with human collaborators in real time (Digital Twin). First, this dissertation investigates the use of cameras as a novel sensor system for estimating the pose of large-scale robotic manipulators relative to the job site. A deep convolutional network human pose estimation algorithm was adapted and fused with sensor-based poses to provide real-time, uninterrupted 6-DOF pose estimates of the manipulator’s components. The network was trained with image datasets collected from a robotic excavator in the laboratory and conventional excavators on construction sites. The proposed system yielded an uninterrupted, centimeter-level-accuracy pose estimation system for articulated construction robots. Second, this dissertation investigates Robot Learning from Demonstration (LfD) methods to teach robots how to perform quasi-repetitive construction tasks, such as the ceiling tile installation process. LfD methods have the potential to teach robots specific tasks through human demonstration, such that the robots can then perform the same tasks under different conditions. A visual LfD method and a trajectory LfD method were developed, incorporating a context translation model, reinforcement learning, and a generalized cylinders with orientation approach to generate the control policy for the robot to perform subsequent tasks. Evaluation results in the Gazebo robotics simulator confirm the promise and applicability of the LfD methods for teaching robot apprentices to perform quasi-repetitive tasks on construction sites. Third, this dissertation explores a safe working environment for human workers and robots. Robot simulations in online Digital Twins can be used to extend designed construction models, such as BIM (Building Information Models), to the construction phase for real-time monitoring of robot motion planning and control.
A bi-directional communication system was developed to bridge robot simulations and physical robots in construction and digital fabrication. Empirical studies showed high accuracy of pose synchronization between the physical and virtual robots, demonstrating the potential for ensuring safety during proximate human-robot co-work. PhD dissertation, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169666/1/cjliang_1.pd
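The pose-estimation study above fuses deep-network camera estimates with sensor-based poses so that the 6-DOF estimate remains uninterrupted when the camera view degrades. A minimal sketch of that fusion idea follows; the simple weighted blend and the dropout fallback are assumptions for illustration, not the dissertation's actual fusion algorithm.

```python
# Illustrative sketch of blending camera-based (deep network) pose estimates
# with sensor-based poses so the 6-DOF stream stays uninterrupted.
# The fixed weighting and variable names are assumptions, not the thesis method.
import numpy as np

def fuse_pose(camera_pose, sensor_pose, camera_valid, alpha=0.6):
    """camera_pose, sensor_pose: 6-vectors [x, y, z, roll, pitch, yaw]."""
    sensor_pose = np.asarray(sensor_pose, dtype=float)
    if not camera_valid:
        # Camera estimate dropped out (occlusion, dust, lighting): fall back
        # to the sensor-based pose so the estimate stays uninterrupted.
        return sensor_pose
    # Simple weighted blend; adequate only when the two estimates are close.
    return alpha * np.asarray(camera_pose, dtype=float) + (1 - alpha) * sensor_pose

camera = [2.10, 0.52, 1.31, 0.01, 0.00, 1.57]
sensor = [2.05, 0.50, 1.29, 0.02, 0.01, 1.55]
print(fuse_pose(camera, sensor, camera_valid=True))
```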

    Selected Papers from the 5th International Electronic Conference on Sensors and Applications

    This Special Issue comprises selected papers from the proceedings of the 5th International Electronic Conference on Sensors and Applications, held on 15–30 November 2018, on sciforum.net, an online platform for hosting scholarly e-conferences and discussion groups. In this 5th edition of the electronic conference, contributors were invited to provide papers and presentations from the field of sensors and applications at large, resulting in a wide variety of excellent submissions and topic areas. Papers which attracted the most interest on the web or that provided a particularly innovative contribution were selected for publication in this collection. These peer-reviewed papers are published with the aim of rapid and wide dissemination of research results, developments, and applications. We hope this conference series will grow rapidly in the future and become recognized as a new way and venue by which to (electronically) present new developments related to the field of sensors and their applications.

    New Research in Children with Neurodevelopmental Disorders

    This book collects recent research in the field of care for neurodevelopmental disorders, emphasizing transdisciplinary work in clinical, educational and family contexts. It presents an opportunity to learn about the impact of participation on children and adolescents with neurodevelopmental disorders. In particular, new therapeutic approaches are presented for children and adolescents with autism spectrum disorder, attention-deficit/hyperactivity disorder, or motor coordination disorders.

    Proceedings of the 19th Sound and Music Computing Conference

    Proceedings of the 19th Sound and Music Computing Conference - June 5-12, 2022 - Saint-Étienne (France). https://smc22.grame.f