
    Visual Flow-based Programming Plugin for Brain Computer Interface in Computer-Aided Design

    Over the last half century, the main application of Brain Computer Interfaces (BCIs) has been controlling wheelchairs and neural prostheses or generating text and commands for people with restricted mobility. Despite the potential of BCIs to provide a new form of environmental interaction, applications in computer-aided design have received very limited attention. In this paper we introduce the development and application of Neuron, a novel BCI tool that enables designers with little experience in neuroscience or computer programming to access neurological data along with established metrics relevant to design, to create BCI interaction prototypes with both digital on-screen objects and physical devices, and to evaluate designs based on neurological information while recording measurements for further analysis. After discussing the development of the BCI tool, the article presents its capabilities through two case studies, along with a brief evaluation of the tool's performance and a discussion of implications, limitations, and future improvements.
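
    A minimal sketch of the flow-based idea behind a tool like Neuron: small processing nodes wired together as source -> filter -> action. The engagement metric, node names, and threshold below are hypothetical stand-ins, not the tool's actual API.

    # Hypothetical flow-based BCI pipeline: source -> smoothing filter -> action sink.
    import itertools
    import random

    def eeg_source():
        """Stand-in for a live EEG stream: yields a fake engagement score in [0, 1]."""
        while True:
            yield random.random()

    def smooth(stream, alpha=0.2):
        """Exponential-moving-average node; noisy BCI metrics are usually filtered."""
        value = None
        for sample in stream:
            value = sample if value is None else alpha * sample + (1 - alpha) * value
            yield value

    # Sink node: trigger a design action whenever smoothed engagement is high.
    for value in itertools.islice(smooth(eeg_source()), 200):
        if value > 0.65:
            print(f"engagement={value:.2f} -> highlight the selected design object")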

    Anticipatory reactions to erotic stimuli: An exploration into psychic ability

    The current study investigated psi ability (precognition) based on Bem's (2011) experiments. The study used a computer-based program that tested for the prediction of the future position of erotic stimuli, using erotic and non-erotic images. Sensation seeking and cortisol were explored as moderators of psi ability. Participants provided saliva samples at the beginning and end of the study as a measure of cortisol. It was predicted that participants would detect the future position of erotic images significantly more often than chance, and more often than for non-erotic images. Additionally, it was predicted that those who scored high in sensation seeking would show greater psi ability than those with lower sensation seeking scores. Further, an inverse relationship with baseline salivary cortisol was predicted, such that the higher the sensation seeking score, the lower the baseline cortisol level. Results indicated that participants did not predict the future position of erotic images significantly more often than chance. Furthermore, no relationship was found between sensation seeking, psi ability, and baseline cortisol levels.
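
    The central statistical question above reduces to a binomial test of hit rate against chance. A minimal sketch, with invented counts (the study's actual trial numbers are not given here):

    # Did participants pick the future position of erotic images more often than
    # the 50% expected by guessing? Counts below are hypothetical.
    from scipy.stats import binomtest

    hits, trials = 263, 512  # invented: correct guesses out of total erotic trials
    result = binomtest(hits, trials, p=0.5, alternative="greater")
    print(f"hit rate = {hits / trials:.3f}, one-sided p = {result.pvalue:.3f}")
    # A p-value well above 0.05, as the study reports, is consistent with chance.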

    The role of haptic communication in dyadic collaborative object manipulation tasks

    Intuitive and efficient physical human-robot collaboration relies on the mutual observability of the human and the robot, i.e. the two entities being able to interpret each other's intentions and actions. This is addressed by a myriad of methods involving human sensing and intention decoding, as well as human-robot turn-taking and sequential task planning. However, physical interaction establishes a rich channel of communication through forces, torques, and haptics in general, which is often overlooked in industrial implementations of human-robot interaction. In this work, we investigate the role of haptics in collaborative physical tasks between humans, to identify how to integrate physical communication in human-robot teams. We present a task in which a ball must be balanced at a target position on a board, either bimanually by one participant or dyadically by two participants, with and without haptic information. The task requires the two sides to coordinate with each other in real time to balance the ball at the target. We found that with training, the completion time and the number of velocity peaks of the ball decreased, and that participants gradually became consistent in their braking strategy. Moreover, we found that the presence of haptic information improved performance (decreased completion time) and led to an increase in overall cooperative movements. Overall, our results show that humans can better coordinate with one another when haptic feedback is available. These results also highlight the likely importance of haptic communication in physical human-robot interaction, both as a tool to infer human intentions and to make robot behaviour interpretable to humans.
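
    The "number of velocity peaks" smoothness metric mentioned above can be sketched as follows; the trajectory is synthetic and the sampling rate and peak threshold are assumptions, since the study's exact criteria are not stated:

    # Count velocity peaks of the ball from a sampled trajectory.
    import numpy as np
    from scipy.signal import find_peaks

    dt = 0.01                                 # assumed 100 Hz sampling
    t = np.arange(0, 5, dt)
    pos = 0.3 * np.exp(-t) * np.sin(4 * t)    # synthetic ball position on the board (m)

    speed = np.abs(np.gradient(pos, dt))      # speed profile of the ball
    peaks, _ = find_peaks(speed, height=0.05) # ignore tiny ripples below 5 cm/s

    print(f"trial duration: {t[-1]:.1f} s, velocity peaks: {len(peaks)}")
    # Fewer velocity peaks indicate smoother, better-coordinated balancing.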

    Development of an automated robot vision component handling system

    Thesis (M. Tech. (Engineering: Electrical)) -- Central University of Technology, Free State, 2013.
    In industry, automation is used to optimize production, improve product quality, and increase profitability. By properly implementing automation systems, the risk of injury to workers can be minimized. Robots are used in many low-level tasks to perform repetitive, undesirable, or dangerous work, and can carry out a task with higher precision and accuracy, lowering errors and material waste. Machine Vision makes use of cameras, lighting, and software to perform visual inspections that a human would normally do, and is useful in applications where repeatability, high speed, and accuracy are important. This study concentrates on the development of a dedicated robot vision system that automatically places components exiting a conveyor system onto Automatic Guided Vehicles (AGVs). A personal computer (PC) controls the automated system. Software modules were developed for image processing in the Machine Vision system, as well as for controlling a Cartesian robot, and these modules were integrated into a real-time system. The vision system determines each part's position and orientation: the orientation data are used to rotate a gripper, and the position data are used by the Cartesian robot to position the gripper over the part. Hardware for controlling the gripper, pneumatics, and safety systems was developed. The automated system's hardware was integrated using different communication protocols, namely DeviceNet (Cartesian robot), RS-232 (gripper), and FireWire (camera).
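
    A hedged sketch of the pose-estimation step described above, using standard OpenCV calls (assumes OpenCV 4.x; the image path, threshold choice, and largest-blob assumption are placeholders):

    # Find the part in a camera frame and recover position and orientation.
    import cv2

    img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)   # frame from the camera (placeholder path)
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    part = max(contours, key=cv2.contourArea)            # assume the largest blob is the part
    (cx, cy), (w, h), angle = cv2.minAreaRect(part)      # centre, size, rotation in degrees

    # The centre drives the Cartesian robot; the angle drives the gripper rotation.
    print(f"move robot to ({cx:.1f}, {cy:.1f}) px, rotate gripper {angle:.1f} deg")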

    Decoding illusory self-location from activity in the human hippocampus

    Decades of research have demonstrated a role for the hippocampus in spatial navigation and in episodic and spatial memory. However, empirical evidence linking hippocampal activity to the perceptual experience of being physically located at a particular place in the environment is lacking. In this study, we used a multisensory out-of-body illusion to perceptually ‘teleport’ six healthy participants between two different locations in the scanner room during high-resolution functional magnetic resonance imaging (fMRI). The participants were fitted with MRI-compatible head-mounted displays that changed their first-person visual perspective to that of a pair of cameras placed in one of two corners of the scanner room. To elicit the illusion of being physically located in this position, we delivered synchronous visuo-tactile stimulation in the form of an object moving toward the cameras coupled with touches applied to the participant’s chest. Asynchronous visuo-tactile stimulation did not induce the illusion and served as a control condition. We found that illusory self-location could be successfully decoded from patterns of activity in the hippocampus in all of the participants in the synchronous condition (P < 0.05). At the group level, the decoding accuracy was significantly higher in the synchronous than in the asynchronous condition (P = 0.012). These findings associate hippocampal activity with the perceived location of the bodily self in space, suggesting that the human hippocampus is involved not only in spatial navigation and memory but also in the construction of our sense of bodily self-location.
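
    The decoding analysis described above is a standard multivoxel pattern classification. A minimal sketch with random stand-in data (real analyses use per-trial beta estimates and leave-one-run-out cross-validation):

    # Classify illusory self-location (place A vs. B) from hippocampal patterns.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(48, 300))   # 48 trials x 300 hippocampal voxels (fake data)
    y = np.repeat([0, 1], 24)        # 0 = location A, 1 = location B

    acc = cross_val_score(SVC(kernel="linear"), X, y, cv=6)
    print(f"mean decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
    # Above-chance accuracy in the synchronous condition is the key result.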

    Active Velocity Estimation using Light Curtains via Self-Supervised Multi-Armed Bandits

    To navigate in an environment safely and autonomously, robots must accurately estimate where obstacles are and how they move. Instead of using expensive traditional 3D sensors, we explore the use of a much cheaper, faster, and higher-resolution alternative: programmable light curtains. Light curtains are a controllable depth sensor that senses only along a surface that the user selects. We adapt a probabilistic method based on particle filters and occupancy grids to explicitly estimate the position and velocity of 3D points in the scene using partial measurements made by light curtains. The central challenge is to decide where to place the light curtain to accurately perform this task. We propose multiple curtain placement strategies guided by maximizing information gain and verifying predicted object locations. Then, we combine these strategies using an online learning framework. We propose a novel self-supervised reward function that evaluates the accuracy of current velocity estimates using future light curtain placements. We use a multi-armed bandit framework to intelligently switch between placement policies in real time, outperforming fixed policies. We develop a full-stack navigation system that uses position and velocity estimates from light curtains for downstream tasks such as localization, mapping, path planning, and obstacle avoidance. This work paves the way for controllable light curtains to accurately, efficiently, and purposefully perceive and navigate complex and dynamic environments.
    Project website: https://siddancha.github.io/projects/active-velocity-estimation/
    Comment: 9 pages (main paper), 3 pages (references), 9 pages (appendix)
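
    The policy-switching idea above can be sketched with a simple adversarial bandit such as EXP3. The two placement policies and the reward function below are hypothetical stand-ins for the paper's self-supervised reward, not its actual implementation:

    # EXP3 bandit choosing between curtain placement strategies online.
    import math
    import random

    policies = ["maximize_info_gain", "verify_predictions"]
    weights = [1.0, 1.0]
    gamma = 0.1  # exploration rate

    def self_supervised_reward(policy):
        """Stand-in for scoring velocity estimates against a future curtain placement."""
        return random.betavariate(3, 2) if policy == "verify_predictions" else random.betavariate(2, 2)

    for step in range(1000):
        total = sum(weights)
        probs = [(1 - gamma) * w / total + gamma / len(policies) for w in weights]
        arm = random.choices(range(len(policies)), weights=probs)[0]
        reward = self_supervised_reward(policies[arm])
        weights[arm] *= math.exp(gamma * reward / (probs[arm] * len(policies)))  # EXP3 update
        top = max(weights)
        weights = [w / top for w in weights]  # normalize to keep weights bounded

    print({p: round(w, 2) for p, w in zip(policies, weights)})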