
    CES-513 Stages for Developing Control Systems using EMG and EEG Signals: A survey

    Bio-signals such as EMG (electromyography), EEG (electroencephalography), EOG (electrooculography), and ECG (electrocardiography) have recently been deployed to develop control systems that improve the quality of life of disabled and elderly people. This technical report reviews the current deployment of these state-of-the-art control systems and explains some of the challenging issues involved. In particular, the stages for developing EMG- and EEG-based control systems are categorized, namely data acquisition, data segmentation, feature extraction, classification, and controller. Some related bio-control applications are outlined. Finally, a brief conclusion is given.
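
    The five stages named in this survey can be sketched as a minimal signal-processing pipeline. All function names, window sizes, and the threshold classifier below are illustrative assumptions, not the survey's method; real systems use trained classifiers and dedicated bio-signal libraries.

```python
# Illustrative sketch of the survey's five stages: acquisition,
# segmentation, feature extraction, classification, controller.
# Parameters and functions are hypothetical.
import numpy as np

def segment(signal, window=256, overlap=128):
    """Data segmentation: split a 1-D signal into overlapping windows."""
    step = window - overlap
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, step)]

def extract_features(win):
    """Two classic time-domain EMG features:
    mean absolute value and zero-crossing count."""
    mav = np.mean(np.abs(win))
    zc = np.sum(np.diff(np.sign(win)) != 0)
    return np.array([mav, zc])

def classify(features, threshold=0.5):
    """Toy threshold classifier standing in for a trained model."""
    return "activate" if features[0] > threshold else "rest"

# Pipeline: acquired signal -> windows -> features -> controller commands
raw = np.random.randn(1024)          # stands in for acquired EMG data
commands = [classify(extract_features(w)) for w in segment(raw)]
```

    In a real system the `commands` stream would drive the controller stage, e.g. a wheelchair or prosthesis interface.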

    Head movements based control of an intelligent wheelchair in an indoor environment

    This paper presents a user-friendly human-machine interface (HMI) for hands-free control of an electric powered wheelchair (EPW). Its two operation modes are based on head movements: Mode 1 uses only one head movement to give the commands, while Mode 2 employs four head movements. An EEG device, namely the Emotiv EPOC, is deployed in this HMI to obtain the head movement information of users. The proposed HMI is compared with joystick control of an EPW in an indoor environment. The experimental results show that Control Mode 2 can be operated reliably at a fast speed, achieving a mean time of 67.90 seconds for the two subjects. Control Mode 1, although it needs only one head movement, has inferior performance, with a mean time of 153.20 seconds for the two subjects. It is clear that the proposed HMI can effectively replace traditional joystick control for disabled and elderly people.

    Proper motions of the HH1 jet

    We describe a new method for determining proper motions of extended objects, and a pipeline developed for the application of this method. We then apply this method to an analysis of four epochs of [S II] HST images of the HH 1 jet (covering a period of ∼20 yr). We determine the proper motions of the knots along the jet, and reconstruct the past ejection velocity time-variability (assuming ballistic knot motions). This reconstruction shows an "acceleration" of the ejection velocities of the jet knots, with higher velocities at more recent times. This acceleration will result in an eventual merging of the knots in ∼450 yr, at a distance of ∼80″ from the outflow source, close to the present-day position of HH 1.
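
    The ballistic-motion argument behind the knot-merging prediction can be illustrated with a one-line kinematic calculation: a faster knot ejected later overtakes a slower knot ahead of it. The positions and proper motions below are illustrative numbers, not the HH 1 measurements.

```python
# Ballistic (constant-velocity) knot merging: the trailing faster knot
# catches the leading slower knot. All numbers are illustrative.
def merge_time(x_slow, v_slow, x_fast, v_fast):
    """Time until the trailing faster knot catches the leading slower one,
    assuming constant velocities along the jet axis."""
    assert v_fast > v_slow and x_fast < x_slow
    return (x_slow - x_fast) / (v_fast - v_slow)

# e.g. knots at 40" and 10" from the source, moving at 0.08 and 0.15 "/yr
t = merge_time(40.0, 0.08, 10.0, 0.15)   # merging time in years
x_merge = 10.0 + 0.15 * t                # merging position in arcsec
```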

    Head movement and facial expression-based human-machine interface for controlling an intelligent wheelchair

    This paper presents a human-machine interface (HMI) for hands-free control of an electric powered wheelchair (EPW) based on head movements and facial expressions, detected using the gyroscope and the 'Cognitiv suite' of an Emotiv EPOC device, respectively. The proposed HMI provides two control modes: 1) control mode 1 uses four head movements to display in its graphical user interface the control command that the user wants to execute, and one facial expression to confirm its execution; 2) control mode 2 employs two facial expressions for turning and forward motion, and one head movement for stopping the wheelchair. Both control modes therefore offer hands-free control of the wheelchair. Two subjects used the two control modes to operate a wheelchair in an indoor environment. Five facial expressions were tested in order to determine whether users can employ different facial expressions for executing the commands. The experimental results show that the proposed HMI is reliable for operating the wheelchair safely.

    Bi-modal Human Machine Interface for Controlling an Intelligent Wheelchair

    This paper presents a bi-modal human-machine interface (HMI) alternative for hands-free control of an electric powered wheelchair (EPW) by means of head movements and facial expressions, detected using the gyroscope and the Cognitiv suite of the Emotiv EPOC sensor, respectively. By employing the Cognitiv suite, users can choose their most comfortable facial expressions. Three head movements are used to stop the wheelchair and display the turning commands in the graphical interface (GI) of the HMI, while two facial expressions are employed to move the wheelchair forward and to confirm execution of the turning command displayed on the GI. As a result, the user is free to turn his/her head while the wheelchair is being controlled, without executing an undesired command. Two subjects tested the proposed HMI by operating a wheelchair in an indoor environment. Furthermore, five facial expressions were tested in order to determine whether users can employ different facial expressions for executing the control commands on the wheelchair. The preliminary experiments reveal that our HMI is reliable for operating the wheelchair. © 2013 IEEE

    A Graph Representation Composed of Geometrical Components for Household Furniture Detection by Autonomous Mobile Robots

    This study proposes a framework to detect and recognize household furniture using autonomous mobile robots. The proposed methodology is based on the analysis and integration of geometric features extracted from 3D point clouds. A relational graph is constructed from those features to model and recognize each piece of furniture, and a set of sub-graphs corresponding to different partial views allows matching the robot's perception with partial furniture models. A reduced set of geometric features is employed: horizontal planes, vertical planes, and the legs of the furniture. These features are characterized through properties such as height, planarity, and area. A fast and linear method for the detection of some geometric features is proposed, based on histograms of 3D points acquired from an RGB-D camera onboard the robot. Similarity measures for geometric features and graphs are proposed as well. Our proposal has been validated in home-like environments with two different mobile robotic platforms, and partially on 3D samples from a database.
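
    The histogram-based detection of horizontal planes can be sketched as follows: peaks in the histogram of point heights (z-coordinates) suggest horizontal surfaces such as table tops or seats. The bin width, the peak threshold, and the synthetic point cloud below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of histogram-based horizontal-plane detection from a 3D point
# cloud. Bin width and peak threshold are illustrative choices.
import numpy as np

def horizontal_plane_heights(points, bin_width=0.02, min_fraction=0.1):
    """Return candidate plane heights from an (N, 3) point cloud.

    A z-histogram bin holding at least `min_fraction` of all points is
    taken as evidence of a horizontal plane at that height."""
    z = points[:, 2]
    bins = np.arange(z.min(), z.max() + bin_width, bin_width)
    hist, edges = np.histogram(z, bins=bins)
    peaks = np.nonzero(hist >= min_fraction * len(z))[0]
    return [(edges[i] + edges[i + 1]) / 2 for i in peaks]

# Synthetic cloud: a floor at z ~ 0 and a table top at z ~ 0.75 m
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(0, 2, 500), rng.uniform(0, 2, 500),
                         rng.normal(0.0, 0.005, 500)])
table = np.column_stack([rng.uniform(0, 1, 300), rng.uniform(0, 1, 300),
                         rng.normal(0.75, 0.005, 300)])
heights = horizontal_plane_heights(np.vstack([floor, table]))
```

    A single pass over the points builds the histogram, which is what makes this kind of detector linear in the number of points.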

    Design and Implementation of Composed Position/Force Controllers for Object Manipulation

    In the design of a controller for grasping objects with a robotic manipulator, there are two key problems: accurately finding the position of the object to be grasped, and applying the appropriate force with each finger so that the object is handled properly without undesirable movement during manipulation. A proportional-integral-derivative (PID) controller is widely used for grasping in robotics; however, its main shortcomings are its sensitivity to controller gains, sluggish response, and high starting overshoot. This research presents three coupled (position/force) controllers for object manipulation using an assembled robotic manipulator (i.e., a gripper attached to a robotic arm mounted on a mobile robot). Specifically, an angular gripper was employed in this study, composed of two independent fingers with a piezoelectric force sensor attached to each fingertip. The main contributions of this study are the designs and implementations of three controllers: a classic PID controller, a type-I fuzzy controller, and a type-II fuzzy controller. These three controllers were used to grasp an object properly (position) and apply an equivalent force with each finger (force).
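
    The classic PID baseline mentioned above has a standard discrete form. The gains, time step, and toy first-order plant below are illustrative, not the study's values or its gripper model.

```python
# Minimal discrete PID controller, the classic baseline form.
# Gains and the plant model are illustrative.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """One control step: P on the error, I on its running sum,
        D on its finite difference."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Toy plant: a finger whose position integrates the commanded velocity
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
pos = 0.0
for _ in range(2000):
    pos += pid.update(1.0, pos) * 0.01   # drive position toward 1.0
```

    The integral term removes steady-state offset, while the derivative term damps the response; the sensitivity to these gains is precisely the shortcoming the fuzzy controllers aim to address.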

    Predicting collisions: time-to-contact forecasting based on probabilistic segmentation and system identification

    The time-to-contact (TTC) estimate is mainly used in robot navigation in order to detect potential danger from obstacles in the environment. A key requirement for a robotic system is to perform its tasks promptly. Several approaches have been proposed to estimate a reliable TTC in order to avoid collisions in real time; nevertheless, they are time-consuming because scene characteristics must be computed in every frame. This paper presents an approach to estimate TTC using monocular vision, based on the change in the size of the obstacles over time, so that the robotic system may react promptly to its environment. Our approach collects information from a few observations of an obstacle, then finds the behavior of its movement through an online recursive modeling process, and finally forecasts its upcoming positions. We segment the obstacles using probabilistic hidden Markov chains. Our proposal is compared to a classical color segmentation approach on two real image sequences, each composed of 210 frames. Our results show that our proposal obtained smoother segmentations than the traditional color-based approach.
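
    The size-change idea can be made concrete with the standard monocular relation TTC ≈ s / (ds/dt), where s is the obstacle's apparent size in the image. The synthetic sizes and the simple finite difference below are illustrative; the paper's online recursive identification step is not reproduced here.

```python
# Monocular TTC from apparent size: for an obstacle closing at constant
# speed, TTC ~ s / (ds/dt). Synthetic data; finite difference stands in
# for the paper's recursive modeling.
def time_to_contact(sizes, dt):
    """Estimate TTC (seconds) from the last two apparent sizes."""
    ds = (sizes[-1] - sizes[-2]) / dt
    return sizes[-1] / ds

# Apparent size s(t) = w / (d0 - v*t) for a closing obstacle:
# width w = 1 m, initial distance d0 = 10 m, speed v = 1 m/s, 30 fps
dt = 1 / 30
sizes = [1.0 / (10.0 - 1.0 * k * dt) for k in range(3)]
ttc = time_to_contact(sizes, dt)   # close to the ~10 s remaining
```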