80 research outputs found

    Mechatronic Systems

    Mechatronics, the synergistic blend of mechanics, electronics, and computer science, has evolved over the past twenty-five years, leading to a novel stage of engineering design. By integrating the best design practices with the most advanced technologies, mechatronics aims to realize high-quality products while substantially reducing manufacturing time and costs. Mechatronic systems are manifold and range from machine components, motion generators, and power-producing machines to more complex devices such as robotic systems and transportation vehicles. With its twenty chapters, which collect contributions from many researchers worldwide, this book provides an excellent survey of recent work in the field of mechatronics, with applications in various fields such as robotics, medical and assistive technology, human-machine interaction, unmanned vehicles, manufacturing, and education. We would like to thank all the authors, who have invested a great deal of time to write such interesting chapters, which we are sure will be valuable to the readers. Chapters 1 to 6 deal with applications of mechatronics for the development of robotic systems. Medical and assistive technologies and human-machine interaction systems are the topic of chapters 7 to 13. Chapters 14 and 15 concern mechatronic systems for autonomous vehicles. Chapters 16 to 19 deal with mechatronics in manufacturing contexts. Chapter 20 concludes the book, describing a method for introducing mechatronics education in schools.

    Shared Control for Wheelchair Interfaces

    Independent mobility is fundamental to the quality of life of people with impairment. Most people with severe mobility impairments, whether congenital, e.g., from cerebral palsy, or acquired, e.g., from spinal cord injury, are prescribed a wheelchair. A small yet significant number of people are unable to use a typical powered wheelchair controlled with a joystick. Instead, some of these people require alternative interfaces, such as a head-array or sip/puff switch, to drive their powered wheelchairs. However, these alternative interfaces do not work for everyone and often cause frustration, fatigue, and collisions. This thesis develops a novel technique to help improve the usability of some of these alternative interfaces, in particular the head-array and sip/puff switch. Control is shared between a powered wheelchair user, using an alternative interface, and a powered wheelchair fitted with sensors. This shared control then produces a resulting motion that is close to what the user desires to do but is also safe. A path planning algorithm on the wheelchair is implemented using techniques from mobile robotics. The output of the path planning algorithm and the user's command are then both modelled as random variables, and these random variables are blended in a joint probability distribution, where the final velocity sent to the wheelchair is the one that maximises the joint probability. The performance of this probabilistic approach to blending the user's inputs with the output of a path planner is benchmarked against the most common form of shared control, linear blending. The benchmarking consists of several experiments with end users, both in a simulated world and in the real world. The thesis concludes that probabilistic shared control provides safer motion than traditional shared control for difficult tasks and hard-to-use interfaces.
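
    The abstract does not spell out the blending formulation, but under the common assumption that both the user's command and the planner's output are modelled as Gaussian random variables, the velocity maximising their joint density has a closed form: the precision-weighted mean. A minimal sketch contrasting this with linear blending, with all variable names and noise values purely illustrative:

```python
import numpy as np

def linear_blend(v_user, v_planner, alpha=0.5):
    """Traditional linear blending: a fixed-weight average of both commands."""
    return alpha * v_user + (1 - alpha) * v_planner

def probabilistic_blend(mu_user, cov_user, mu_planner, cov_planner):
    """Velocity maximising the product of two Gaussian command densities.

    The maximiser of N(v; mu_user, cov_user) * N(v; mu_planner, cov_planner)
    is the precision-weighted mean of the two commands.
    """
    prec_user = np.linalg.inv(cov_user)
    prec_planner = np.linalg.inv(cov_planner)
    cov = np.linalg.inv(prec_user + prec_planner)
    return cov @ (prec_user @ mu_user + prec_planner @ mu_planner)

# Illustrative (linear, angular) velocity commands and noise levels.
v_user = np.array([0.8, 0.3])     # noisy command from a head-array
v_plan = np.array([0.5, -0.1])    # safe velocity suggested by the path planner
cov_user = np.diag([0.2, 0.5])    # interface trusted less on the angular axis
cov_plan = np.diag([0.05, 0.05])  # planner trusted more, e.g. near obstacles

print(linear_blend(v_user, v_plan))                             # fixed compromise
print(probabilistic_blend(v_user, cov_user, v_plan, cov_plan))  # confidence-weighted
```

    Unlike the fixed-weight linear blend, the probabilistic result shifts automatically towards whichever source is more confident on each axis, which is one way to read the thesis's claim of safer motion with hard-to-use interfaces.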

    DEVELOPMENT AND ASSESSMENT OF ADVANCED ASSISTIVE ROBOTIC MANIPULATORS USER INTERFACES

    Assistive Robotic Manipulators (ARMs) have been shown to improve self-care and increase independence among people with severe upper-extremity disabilities. Mounted on the side of an electric powered wheelchair, an ARM may provide manipulation assistance such as picking up objects, eating, drinking, dressing, reaching out, or opening doors. However, existing assessment tools are inconsistent between studies, time-consuming, and of unclear clinical effectiveness. Therefore, in this research, we have developed an ADL task board evaluation tool that provides standardized, efficient, and reliable assessment of ARM performance. Among powered wheelchair users and able-bodied controls using two commercial ARM user interfaces, a joystick and a keypad, we found statistical differences between the two interfaces' performance, but no statistical difference in cognitive loading. The ADL task board demonstrated performance highly correlated with an existing functional assessment tool, the Wolf Motor Function Test. Through this study, we have also identified barriers and limits in current commercial user interfaces and developed smartphone and assistive sliding-autonomy user interfaces that yield improved performance. Testing results from our smartphone manual interface revealed statistically faster performance, and the assistive sliding-autonomy interface helped seamlessly correct the errors seen with autonomous functions. The ADL task performance evaluation tool may help clinicians and researchers better assess ARM user interfaces and evaluate the efficacy of customized user interfaces to improve performance. The smartphone manual interface demonstrated improved performance, and the sliding-autonomy framework showed enhanced success with tasks without recalculating path planning and recognition.
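
    As a rough illustration of the kind of analysis the abstract describes (a paired comparison of two interfaces plus a correlation with the Wolf Motor Function Test), the sketch below uses placeholder numbers, not data from the study; the specific tests chosen are assumptions:

```python
import numpy as np
from scipy import stats

# Placeholder task-completion times (seconds); NOT data from the study.
joystick_times = np.array([12.1, 15.4, 9.8, 20.2, 14.7, 11.3])
keypad_times = np.array([18.6, 21.0, 14.2, 27.9, 19.5, 16.8])
wmft_scores = np.array([55, 48, 61, 40, 50, 58])  # hypothetical WMFT scores

# Paired comparison of the two interfaces on the same participants.
t_stat, p_val = stats.ttest_rel(joystick_times, keypad_times)
print(f"paired t-test: t={t_stat:.2f}, p={p_val:.3f}")

# Rank correlation between task-board times and the functional assessment.
rho, p_rho = stats.spearmanr(joystick_times, wmft_scores)
print(f"Spearman rho={rho:.2f}, p={p_rho:.3f}")
```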

    Aerospace medicine and biology: A cumulative index to a continuing bibliography (supplement 384)

    This publication is a cumulative index to the abstracts contained in Supplements 372 through 383 of Aerospace Medicine and Biology: A Continuing Bibliography. It includes seven indexes: subject, personal author, corporate source, foreign technology, contract number, report number, and accession number.

    GRASP News Volume 9, Number 1

    A report of the General Robotics and Active Sensory Perception (GRASP) Laboratory.

    Rehabilitation Engineering

    Population ageing has major consequences and implications for all areas of daily life, as well as for economic growth, savings, investment and consumption, labour markets, pensions, property, and care from one generation to another. Health and related care, family composition and lifestyle, housing, and migration are also affected. Given the rapid ageing of the population and the further increase expected in the coming years, an important problem that has to be faced is the corresponding increase in chronic illness, disability, and loss of functional independence endemic to the elderly (WHO 2008). For this reason, novel methods of rehabilitation and care management are urgently needed. This book covers many rehabilitation support systems and robots developed for the upper limbs, the lower limbs, and the visually impaired. Beyond the upper limbs, lower-limb research is also discussed, such as a motorized footrest for electric powered wheelchairs and a standing assistance device.

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, creating a misalignment between the respective internal models of the robot and the human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot perspective, in turn, requires an awareness of human "intent", so a clustering framework composed of a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair, and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality, and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
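
    The abstract does not detail the deep generative clustering framework, but the overall idea (embed windows of user commands into a latent space, then cluster the latents as candidate intents) can be sketched minimally. Below, PCA is only a placeholder for the learnt deep generative encoder, and all data is synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Synthetic stand-in data: windows of 2-D joystick commands
# (windows x timesteps x axes), flattened into feature vectors.
rng = np.random.default_rng(0)
forward = rng.normal([0.8, 0.0], 0.1, size=(100, 20, 2))  # "drive forward"
turning = rng.normal([0.2, 0.6], 0.1, size=(100, 20, 2))  # "turn"
windows = np.concatenate([forward, turning]).reshape(200, -1)

# PCA stands in for the learnt deep generative encoder: both map each
# command window to a low-dimensional latent representation.
latents = PCA(n_components=3).fit_transform(windows)

# Cluster the latent space; each cluster is a candidate "intent".
gmm = GaussianMixture(n_components=2, random_state=0).fit(latents)
print(gmm.predict(latents[:5]), gmm.predict(latents[-5:]))  # two distinct clusters
```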

    GRASP News: Volume 9, Number 1

    The past year at the GRASP Lab has been an exciting and productive period. As always, innovation and technical advancement arising from past research has led to unexpected questions and fertile areas for new research. New robots, new mobile platforms, new sensors and cameras, and new personnel have all contributed to the breathtaking pace of change. Perhaps the most significant change is the trend towards multi-disciplinary projects, most notably the multi-agent project (see inside for details on this and all the other new and ongoing projects). This issue of GRASP News covers the developments for the year 1992 and the first quarter of 1993.

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMIs) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people.
    Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g., arm motion) and finer grasping (e.g., hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g., inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time.
    This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm which are generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, accuracies of 95.6% were achieved in 5-gesture classification.
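
    A hedged sketch of the classification step named above: the snippet trains the two classifiers the abstract mentions (LDA and SVM) on synthetic stand-in features. Real features would be extracted from windowed MMG signals, and the printed accuracies are not the thesis results:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in features: one vector per gesture window, e.g. an RMS
# amplitude per MMG channel. Real features would come from the sensor bands.
rng = np.random.default_rng(1)
n_per_class, n_classes, n_channels = 40, 12, 6
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_channels))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Cross-validated accuracy for the two classifiers named in the abstract.
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.1%}")  # accuracy on synthetic data, not thesis results
```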
    It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis additionally established that arm pose changes the measured signal, and it introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement in orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution that is able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment.
    Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
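
    The thesis's new orientation estimation algorithm is not described in this abstract. As an assumed baseline, a standard single-axis complementary filter illustrates the underlying idea of IMU orientation estimation: fuse drift-prone gyroscope integration with an absolute but noisy accelerometer gravity reference:

```python
import numpy as np

def complementary_filter(gyro, accel, dt, alpha=0.98):
    """Single-axis tilt estimate fusing gyroscope and accelerometer.

    gyro:  (N,) angular rates in rad/s (drifts if integrated alone)
    accel: (N, 2) gravity components used as an absolute angle reference
    """
    theta = 0.0
    estimates = np.empty(len(gyro))
    for i in range(len(gyro)):
        gyro_angle = theta + gyro[i] * dt                   # integrate rate
        accel_angle = np.arctan2(accel[i, 0], accel[i, 1])  # gravity reference
        theta = alpha * gyro_angle + (1 - alpha) * accel_angle
        estimates[i] = theta
    return estimates

# Example: 1 s at 100 Hz while rotating at a constant 0.3 rad/s.
dt, n = 0.01, 100
gyro = np.full(n, 0.3)
true_angle = np.cumsum(gyro) * dt
accel = np.stack([np.sin(true_angle), np.cos(true_angle)], axis=1)
print(complementary_filter(gyro, accel, dt)[-1])  # close to the true 0.3 rad
```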

    Hybrid wheelchair controller for handicapped and quadriplegic patients

    In this dissertation, a hybrid wheelchair controller for handicapped and quadriplegic patients is proposed. The system has two sub-controllers: a voice controller and a head tilt controller. The system aims to help quadriplegic, handicapped, elderly, and paralyzed patients control a robotic wheelchair using voice commands and head movements instead of a traditional joystick controller. The multi-input design makes the system more flexible in adapting to the available body signals. The low-cost design is taken into consideration, as it allows more patients to use this system.
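
    The abstract does not specify how the two sub-controllers are arbitrated. One plausible, purely hypothetical scheme is to let discrete voice commands override proportional head-tilt steering, with a dead-band to reject small head movements; every name and constant below is an illustrative assumption:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical arbitration between the two sub-controllers described:
# discrete voice commands and proportional head-tilt steering.
VOICE_COMMANDS = {"forward": (0.5, 0.0), "back": (-0.3, 0.0),
                  "left": (0.0, 0.5), "right": (0.0, -0.5), "stop": (0.0, 0.0)}

@dataclass
class HybridController:
    deadband_deg: float = 5.0   # small head movements ignored as noise
    gain: float = 0.01          # velocity per degree of tilt

    def update(self, voice: Optional[str], pitch_deg: float,
               roll_deg: float) -> Tuple[float, float]:
        """Return a (linear, angular) velocity command. A recognised voice
        command overrides head tilt, so a spoken "stop" always wins."""
        if voice in VOICE_COMMANDS:
            return VOICE_COMMANDS[voice]
        lin = self.gain * pitch_deg if abs(pitch_deg) > self.deadband_deg else 0.0
        ang = -self.gain * roll_deg if abs(roll_deg) > self.deadband_deg else 0.0
        return (lin, ang)

ctrl = HybridController()
print(ctrl.update(None, pitch_deg=15.0, roll_deg=-8.0))   # tilt drives the chair
print(ctrl.update("stop", pitch_deg=15.0, roll_deg=0.0))  # voice overrides tilt
```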
    • …