Eye and Voice-Controlled Human Machine Interface System for Wheelchairs Using Image Gradient Approach
Authors
Saba Anwer
Shahid Ikramullah Butt
Imran Khan Niazi
Amit N. Pujari
Moaz Sarwar
Muhammad Shafique
Hajrah Sultan
Asim Waris
Muhammad Hamza Zafar
Publication date
26 September 2020
Publisher
MDPI AG
DOI
Abstract
© 2020 The Author(s). This is an open access article distributed under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Rehabilitative mobility aids are used extensively by physically impaired people. Efforts are being made to develop human-machine interfaces (HMIs) that manipulate biosignals to better control electromechanical mobility aids, especially wheelchairs. Generating precise control commands (move forward, left, right, backward, and stop) from biosignals in an appropriate HMI is the real challenge, as people with a high level of disability (e.g., quadriplegia or paralysis) are unable to drive conventional wheelchairs. This paper therefore introduces a novel system, driven by optical signals, that addresses the needs of this physically impaired population. The system is divided into two parts: the first comprises the detection of eyeball movements and the processing of the optical signal; the second encompasses the mechanical assembly module, i.e., control of the wheelchair through the motor-driving circuitry. A web camera captures real-time images, which are processed on a Raspberry Pi running a Linux operating system. To make the system more congenial and reliable, a voice-controlled mode is also incorporated into the wheelchair. To appraise the system's performance, a basic wheelchair skill test (WST) was carried out: basic skills such as movement on plain and rough surfaces in the forward and reverse directions, plus turning capability, were analysed for comparison with other existing wheelchair setups on the basis of control mechanism, compatibility, design model, and usability in diverse conditions. The system operates with an average response time of 3 s in eye-control mode and 3.4 s in voice-control mode.

Peer reviewed. Final published version.
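The abstract describes mapping detected eyeball positions to drive commands. The paper's actual image-gradient pipeline is not reproduced on this page, so the sketch below is only a simplified illustration of the general idea: locate the dark pupil in a grayscale eye image (here via an intensity-weighted centroid rather than the authors' gradient method) and translate its horizontal position into a wheelchair command. The function name, thresholds, and command labels are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gaze_direction(frame, left_thresh=0.4, right_thresh=0.6):
    """Classify gaze as 'left', 'forward', or 'right' from the horizontal
    position of the darkest region (the pupil) in a grayscale eye image.

    NOTE: simplified stand-in for the paper's gradient-based detection;
    thresholds are illustrative, not taken from the paper.
    """
    # Invert intensities so the dark pupil carries the highest weight.
    weights = float(frame.max()) - frame.astype(float)
    total = weights.sum()
    if total == 0:
        # Uniform image: no pupil found, default to a safe command.
        return "forward"
    cols = np.arange(frame.shape[1])
    # Intensity-weighted mean column approximates the pupil centre.
    centre = (weights.sum(axis=0) * cols).sum() / total
    pos = centre / (frame.shape[1] - 1)  # normalise to [0, 1]
    if pos < left_thresh:
        return "left"
    if pos > right_thresh:
        return "right"
    return "forward"
```

In a complete system of the kind the abstract outlines, each returned label would be forwarded to the motor-driving circuitry; a dark blob on the left of the frame yields "left", one on the right yields "right".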
Available Versions

University of Hertfordshire Research Archive
oai:uhra.herts.ac.uk:2299/2319...
Last updated on 05/10/2020

Multidisciplinary Digital Publishing Institute
oai:mdpi.com:/1424-8220/20/19/...
Last updated on 21/10/2022

University of Hertfordshire Research Archive
oai:uhra.herts.ac.uk:2299/2352...
Last updated on 28/11/2020

University of Hertfordshire Research Archive
oai:uhra.herts.ac.uk:2299/2320...
Last updated on 05/10/2020

Aberdeen University Research
oai:aura.abdn.ac.uk:2164/15193
Last updated on 05/10/2020

VBN
oai:pure.atira.dk:publications...
Last updated on 24/11/2020