78 research outputs found

    The Tongue Enables Computer and Wheelchair Control for People with Spinal Cord Injury

    The Tongue Drive System (TDS) is a wireless and wearable assistive technology designed to allow individuals with severe motor impairments such as tetraplegia to access their environment using voluntary tongue motion. Previous TDS trials used a magnetic tracer temporarily attached to the top surface of the tongue with tissue adhesive. We investigated TDS efficacy for controlling a computer and driving a powered wheelchair in two groups of able-bodied subjects and a group of volunteers with spinal cord injury (SCI) at C6 or above. All participants received a magnetic tongue barbell and used the TDS for five to six consecutive sessions. Performance was compared for the TDS versus a keypad and for the TDS versus a sip-and-puff device (SnP), using accepted measures of speed and accuracy. All performance measures improved over the course of the trial. The gap between keypad and TDS performance narrowed for able-bodied subjects. Despite the participants with SCI already being familiar with the SnP, their performance measures were up to three times better with the TDS than with the SnP and continued to improve. TDS flexibility and the inherent characteristics of the human tongue enabled individuals with high-level motor impairments to access computers and drive wheelchairs at speeds that were faster than those of traditional assistive technologies but with comparable accuracy.
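To make the speed-and-accuracy comparison concrete, the sketch below computes a Fitts'-law-style throughput and a hit-rate accuracy from pointing trials. The metric choice and the trial data are assumptions for illustration only; the study's actual measures and protocol are not reproduced here.

```python
# Minimal sketch (assumption: a Fitts'-law-style pointing task is used to
# quantify "speed and accuracy"; the study's exact protocol may differ).
import math
from statistics import mean

def throughput(distance_px, width_px, movement_time_s):
    """Fitts' throughput in bits/s: index of difficulty divided by movement time."""
    index_of_difficulty = math.log2(distance_px / width_px + 1)  # Shannon formulation
    return index_of_difficulty / movement_time_s

def summarize(trials):
    """trials: list of dicts with keys distance, width, time, hit (bool)."""
    tp = mean(throughput(t["distance"], t["width"], t["time"]) for t in trials)
    accuracy = mean(1.0 if t["hit"] else 0.0 for t in trials)
    return {"throughput_bps": round(tp, 2), "accuracy": accuracy}

# Hypothetical session data comparing TDS and sip-and-puff (SnP) trials
tds_trials = [{"distance": 400, "width": 50, "time": 1.2, "hit": True},
              {"distance": 300, "width": 40, "time": 1.0, "hit": True}]
snp_trials = [{"distance": 400, "width": 50, "time": 3.5, "hit": True},
              {"distance": 300, "width": 40, "time": 3.1, "hit": False}]
print(summarize(tds_trials), summarize(snp_trials))
```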

    The development of a SmartAbility Framework to enhance multimodal interaction for people with reduced physical ability.

    Assistive technologies are an evolving market due to the number of people worldwide who have conditions resulting in reduced physical ability (also known as disability). Various classification schemes exist to categorise disabilities, as well as government legislation to ensure equal opportunities within the community. However, there is a notable absence of a process for mapping physical conditions to technologies in order to improve Quality of Life for this user group. This research is characterised primarily under the Human Computer Interaction (HCI) domain, although aspects of Systems of Systems (SoS) and Assistive Technologies have been applied. The thesis focuses on examples of multimodal interactions leading to the development of a SmartAbility Framework that aims to assist people with reduced physical ability by utilising their abilities to suggest interaction mediums and technologies. The framework was developed through a predominantly interpretivist methodology consisting of a variety of research methods, including state-of-the-art literature reviews, requirements elicitation, feasibility trials and controlled usability evaluations to compare multimodal interactions. The developed framework was subsequently validated through the involvement of the intended user community and domain experts, and supported by a concept demonstrator incorporating the SmartATRS case study. The aim and objectives of this research were achieved through the following key outputs and findings:
    - A comprehensive state-of-the-art literature review focussing on physical conditions and their classifications, HCI concepts relevant to multimodal interaction (Ergonomics of human-system interaction, Design For All and Universal Design), SoS definition and analysis techniques involving System of Interest (SoI), and currently-available products with potential uses as assistive technologies.
    - A two-phased requirements elicitation process applying surveys and semi-structured interviews to elicit the daily challenges of people with reduced physical ability, their interests in technology and the requirements for assistive technologies, obtained through collaboration with a manufacturer.
    - Findings from feasibility trials involving monitoring brain activity using an electroencephalograph (EEG), tracking facial features through Tracking Learning Detection (TLD), applying iOS Switch Control to track head movements and investigating smartglasses.
    - Results of controlled usability evaluations comparing multimodal interactions with the technologies deemed feasible from the trials. The user community of people with reduced physical ability was involved throughout the process to maximise the usefulness of the data obtained.
    - An initial SmartDisability Framework developed from the results and observations ascertained through requirements elicitation, feasibility trials and controlled usability evaluations, which was validated through an approach of semi-structured interviews and a focus group.
    - An enhanced SmartAbility Framework addressing the SmartDisability validation feedback by reducing the number of elements, using simplified and positive terminology and incorporating concepts from Quality Function Deployment (QFD).
    - A final consolidated version of the SmartAbility Framework that was validated through semi-structured interviews with additional domain experts and addressed all key suggestions.
The results demonstrated that it is possible to map technologies to people with physical conditions by considering the actions they can perform independently, without external support or significant physical effort. This led to the realisation that the term ‘disability’ carries a negative connotation that can be avoided through use of the phrase ‘reduced physical ability’. It is important to promote this rationale to the wider community through exploitation of the framework. This requires a SmartAbility smartphone application to be developed that allows users to input their abilities so that recommendations of interaction mediums and technologies can be provided.
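A minimal sketch of the ability-to-technology mapping that the envisaged SmartAbility application would perform: users declare their abilities and receive ranked interaction mediums. The ability labels, the mapping table, and the scoring rule are hypothetical illustrations, not the framework's actual elements.

```python
# Minimal sketch of mapping declared user abilities to candidate interaction mediums.
# The ability names, medium names, and scoring rule are hypothetical, for illustration only.
ABILITY_TO_MEDIUMS = {
    "head_movement": ["iOS Switch Control (head tracking)", "smartglasses"],
    "facial_movement": ["facial-feature tracking (TLD)"],
    "speech": ["voice control"],
    "touch": ["touchscreen", "joystick"],
}

def recommend(user_abilities):
    """Rank interaction mediums by how many of the user's abilities support them."""
    scores = {}
    for ability in user_abilities:
        for medium in ABILITY_TO_MEDIUMS.get(ability, []):
            scores[medium] = scores.get(medium, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

print(recommend({"head_movement", "speech"}))
```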

    Non-Intrusive Subscriber Authentication for Next Generation Mobile Communication Systems

    The last decade has witnessed massive growth in both the technological development and the consumer adoption of mobile devices such as mobile handsets and PDAs. The recent introduction of wideband mobile networks has enabled the deployment of new services with access to traditionally well protected personal data, such as banking details or medical records. Secure user access to this data has, however, remained a function of the mobile device's authentication system, which is only protected from masquerade abuse by the traditional PIN, originally designed to protect against telephony abuse. This thesis presents novel research in relation to advanced subscriber authentication for mobile devices. The research began by assessing the threat of masquerade attacks on such devices by way of a survey of end users. This revealed that the current methods of mobile authentication remain extensively unused, leaving terminals highly vulnerable to masquerade attack. Further investigation revealed that, in the context of the more advanced wideband-enabled services, users are receptive to many advanced authentication techniques and principles, including the discipline of biometrics, which naturally lends itself to the area of advanced subscriber-based authentication. To address the requirement for a more personal authentication capable of being applied in a continuous context, a novel non-intrusive biometric authentication technique was conceived, drawn from the discrete disciplines of biometrics and Auditory Evoked Responses. The technique forms a hybrid multi-modal biometric in which variations in the behavioural stimulus of the human voice (due to the propagation effects of acoustic waves within the human head) are used to verify the identity of a user. The resulting approach is known as the Head Authentication Technique (HAT). Evaluation of the HAT authentication process is realised in two stages. Firstly, the generic authentication procedures of registration and verification are automated within a prototype implementation. Secondly, a HAT demonstrator is used to evaluate the authentication process through a series of experimental trials involving a representative user community. The results from the trials confirm that multiple HAT samples from the same user exhibit a high degree of correlation, yet samples between users exhibit a high degree of discrepancy. Statistical analysis of the prototype's performance realised early system error rates of FNMR = 6% and FMR = 0.025%. The results clearly demonstrate the authentication capabilities of this novel biometric approach and the contribution this new work can make to the protection of subscriber data in next generation mobile networks.
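To illustrate how the reported FNMR and FMR figures are defined, the sketch below computes both rates from genuine and impostor match scores at a fixed decision threshold. The scores and threshold are hypothetical and do not reproduce the HAT prototype's scoring.

```python
# Minimal sketch: false match rate (FMR) and false non-match rate (FNMR)
# from similarity scores at a fixed decision threshold. Scores and threshold
# are hypothetical; the HAT prototype's scoring method is not reproduced here.
def error_rates(genuine_scores, impostor_scores, threshold):
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fmr, fnmr

genuine = [0.91, 0.88, 0.95, 0.72, 0.90]   # same-user comparisons
impostor = [0.31, 0.42, 0.28, 0.55, 0.47]  # different-user comparisons
fmr, fnmr = error_rates(genuine, impostor, threshold=0.7)
print(f"FMR={fmr:.3%}, FNMR={fnmr:.3%}")
```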

    Earables: Wearable Computing on the Ears

    Headphones have become widely adopted by consumers because they offer private audio channels, for example for listening to music, watching the latest films while commuting, or making hands-free phone calls. Thanks to this clear primary purpose, headphones have already achieved greater adoption than other wearables such as smartglasses. In recent years, a new class of wearables known as "earables" has emerged. These devices are designed to be worn in or around the ears and contain various sensors that extend the functionality of headphones. The spatial proximity of earables to important anatomical structures of the human body offers an excellent platform for sensing a wide range of properties, processes, and activities. Although some progress has already been made in earables research, their potential is currently not being fully exploited. The aim of this dissertation is therefore to provide new insights into the possibilities of earables by exploring advanced sensing approaches that enable the detection of previously inaccessible phenomena. By introducing novel hardware and algorithms, this dissertation aims to push the boundaries of what is achievable with earables and ultimately to establish them as a versatile sensing platform for augmenting human capabilities. To lay a sound foundation for the dissertation, this work synthesizes the state of the art in ear-based sensing and presents a uniquely comprehensive taxonomy based on 271 relevant publications. By connecting low-level sensing principles with higher-level phenomena, the dissertation then summarizes work from several areas, including (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. Building on existing research in physiological monitoring and health with earables, this dissertation presents advanced algorithms, statistical evaluations, and empirical studies to demonstrate the feasibility of measuring respiratory rate and detecting episodes of increased coughing frequency using in-ear accelerometers and gyroscopes. These novel sensing capabilities underline the potential of earables to promote a healthier lifestyle and enable proactive healthcare. Furthermore, this dissertation introduces an innovative eye-tracking approach called "earEOG" intended to facilitate activity recognition. By systematically evaluating electrode potentials measured around the ears with a modified headphone, the dissertation opens a new avenue for measuring gaze direction that is less intrusive and more comfortable than previous approaches. In addition, a regression model is introduced to predict absolute changes in gaze angle based on earEOG. This development opens up new possibilities for research that integrates seamlessly into everyday life and enables deeper insights into human behavior.
This work further shows how the unique form factor of earables can be combined with sensing to detect novel phenomena. To improve the interaction capabilities of earables, this dissertation introduces a discreet input technique called "EarRumble", which relies on voluntary control of the tensor tympani muscle in the middle ear. The dissertation provides insights into the prevalence, usability, and comfort of EarRumble, together with practical applications in two real-world scenarios. The EarRumble approach extends the ear from a purely receptive organ into one that can not only receive signals but also produce output signals. In essence, the ear is employed as an additional interactive medium that enables hands-free and eyes-free communication between human and machine. EarRumble introduces an interaction technique that users described as "magical and almost telepathic" and reveals substantial untapped potential in the earables domain. Building on the preceding results from the various application areas and research findings, the dissertation culminates in an open hardware and software platform for earables called "OpenEarable". OpenEarable comprises a range of advanced sensing capabilities suitable for a variety of ear-based research applications while remaining easy to manufacture. This lowers the entry barriers to ear-based sensing research, and OpenEarable thus helps to unlock the full potential of earables. In addition, the dissertation contributes fundamental design guidelines and reference architectures for earables. Through this research, the dissertation closes the gap between fundamental research on ear-based sensing and its practical use in real-world scenarios. In summary, the dissertation delivers new usage scenarios, algorithms, hardware prototypes, statistical evaluations, empirical studies, and design guidelines to advance the field of earable computing. Moreover, it expands the traditional scope of headphones by turning these audio-focused devices into a platform that offers a wide range of advanced sensing capabilities for capturing properties, processes, and activities. This reorientation enables earables to establish themselves as a significant wearable category, and the vision of earables as a versatile sensing platform for augmenting human capabilities becomes increasingly real.
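As an illustration of the kind of in-ear inertial sensing described above, the sketch below estimates respiratory rate from an accelerometer axis by band-pass filtering to typical breathing frequencies and counting peaks. The filter design, parameters, and synthetic signal are assumptions; this is not the dissertation's algorithm.

```python
# Minimal sketch (assumption): estimating respiratory rate from an in-ear
# accelerometer axis by band-pass filtering to typical breathing frequencies
# and counting peaks. Illustrative only, not the dissertation's method.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def respiratory_rate_bpm(accel_axis, fs_hz):
    """accel_axis: 1-D acceleration samples; fs_hz: sampling rate in Hz."""
    low, high = 0.1, 0.5                      # roughly 6-30 breaths per minute
    b, a = butter(2, [low / (fs_hz / 2), high / (fs_hz / 2)], btype="band")
    filtered = filtfilt(b, a, accel_axis)
    peaks, _ = find_peaks(filtered, distance=fs_hz * 2)  # >= 2 s between breaths
    duration_min = len(accel_axis) / fs_hz / 60
    return len(peaks) / duration_min

fs = 50
t = np.arange(0, 60, 1 / fs)
# Synthetic chest-motion-like signal at 0.25 Hz (about 15 breaths/min) plus noise
signal = 0.02 * np.sin(2 * np.pi * 0.25 * t) + 0.005 * np.random.randn(t.size)
print(round(respiratory_rate_bpm(signal, fs), 1))
```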

    INDUSTRIAL SAFETY USING AUGMENTED REALITY AND ARTIFICIAL INTELLIGENCE

    Industrialization brought benefits to the development of societies, albeit at the cost of the safety of industrial workers. Industrial operators were often severely injured or lost their lives during the working process. Causes include cuts or lacerations from moving machine parts, and burns or scalds from touching or mishandling thermal, electrical, and chemical objects. Fatigue, distraction, or inattention can exacerbate the risk of industrial accidents. These accidents can cause service downtime of manufacturing machinery, leading to lower productivity and significant financial losses. Therefore, regulations and safety measures were formulated and are overseen by government and local authorities. Safety measures include effective training of workers, inspection of the workplace, safety rules, safeguarding, and safety warning systems. For instance, safeguarding prevents contact with hazardous moving parts by isolating or stopping them, whereas a safety warning system detects accident risks and issues an alert. Warning systems have mostly consisted of fixed detection sensors and alerting systems. Mobile alerting devices can be gadgets such as phones, tablets, smartwatches, or smart glasses. Smart goggles can be utilized for industrial safety to protect, detect, and warn about potential risks. Adopting new technologies such as augmented reality and artificial intelligence can enhance the safety of workers in industry. Augmented reality systems developed for head-mounted displays can extend workers’ perception of the environment. Artificial intelligence utilizing state-of-the-art sensors can improve industrial safety by making workers aware of potential hazards in the environment. For instance, thermal or infrared sensors can detect hot objects in the workplace. Built-in infrared sensors in smart glasses can detect the state of attention of users. Using smart glasses, potential hazards can be conveyed to industrial workers through various modalities, such as auditory, visual, or tactile cues. We have developed advanced safety systems for industrial workers. Our approach incorporates eye tracking, spatial mapping, and thermal imaging. By utilizing eye tracking, we are able to identify instances of user inattention, while spatial mapping allows us to analyze the user’s behavior and surroundings. Furthermore, the integration of thermal imaging enables us to detect hot objects within the user’s field of view. The first system we developed is a warning system that harnesses augmented reality and artificial intelligence. This system issues alerts and presents holographic warnings to counteract instances of inattention or distraction. By utilizing visual cues and immersive technology, we aim to proactively prevent accidents and promote worker safety. The second safety system we designed integrates a third-party thermal imaging system into smart glasses. Through this integration, our safety system overlays false-color holograms onto hot objects, enabling workers to easily identify and avoid potential hazards. To evaluate the effectiveness of our systems, we conducted comprehensive experiments with human participants. These experiments involved both qualitative and quantitative measurements, and we further conducted semi-structured interviews with the participants to gather their insights.
The results and subsequent discussions from our experiments have provided valuable insights for the future implementation of safety systems. Through this research, we envision the continued advancement and refinement of safety technologies to further enhance worker safety in industrial settings.
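A minimal sketch of the thermal-overlay idea: hot regions are flagged by thresholding a per-pixel temperature frame and tinting them in the operator's view. The threshold value and array-based overlay are hypothetical simplifications of the smart-glasses system described above.

```python
# Minimal sketch (assumption): flagging hot regions in a thermal frame by
# thresholding per-pixel temperatures and producing a false-color overlay,
# analogous to the hologram overlay described above. Not the deployed system.
import numpy as np

HOT_THRESHOLD_C = 60.0  # hypothetical temperature considered hazardous

def hot_object_mask(thermal_frame_c):
    """thermal_frame_c: 2-D array of per-pixel temperatures in degrees Celsius."""
    return thermal_frame_c >= HOT_THRESHOLD_C

def overlay_false_color(rgb_frame, mask, color=(255, 0, 0)):
    """Tint masked pixels so hot objects stand out in the operator's view."""
    out = rgb_frame.copy()
    out[mask] = color
    return out

thermal = np.random.uniform(20, 90, size=(4, 4))       # synthetic thermal frame
rgb = np.zeros((4, 4, 3), dtype=np.uint8)               # synthetic camera frame
print(overlay_false_color(rgb, hot_object_mask(thermal))[..., 0])
```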

    Novel Bidirectional Body - Machine Interface to Control Upper Limb Prosthesis

    Objective. The journey of a bionic prosthesis user is characterized by the opportunities and limitations involved in adopting a device (the prosthesis) that should enable activities of daily living (ADL). Within this context, experiencing a bionic hand as a functional (and, possibly, embodied) limb constitutes the premise for mitigating the risk of its abandonment through continuous use of the device. To achieve such a result, different aspects must be considered to make the artificial limb an effective support for carrying out ADLs. Among them, intuitive and robust control is fundamental to improving the quality of life of amputees using upper limb prostheses. Still, as artificial proprioception is essential to perceive prosthesis movement without constant visual attention, a good control framework may not be enough to restore practical functionality to the limb. To overcome this, bidirectional communication between the user and the prosthesis has recently been introduced and is a requirement of utmost importance in the development of prosthetic hands. Indeed, closing the control loop between the user and the prosthesis by providing artificial sensory feedback is a fundamental step towards the complete restoration of the lost sensory-motor functions. Within my PhD work, I proposed the development of a more controllable and sensitive human-like hand prosthesis, i.e., the Hannes prosthetic hand, to improve its usability and effectiveness. Approach. To achieve the objectives of this thesis, I developed a modular and scalable software and firmware architecture to control the Hannes multi-Degree-of-Freedom (DoF) prosthetic system and to fit all users’ needs (hand aperture, wrist rotation, and wrist flexion in different combinations). On top of this, I developed several Pattern Recognition (PR) algorithms to translate electromyographic (EMG) activity into complex movements. However, stability and repeatability were still unmet requirements in multi-DoF upper limb systems; hence, I started by investigating different strategies to produce more robust control. To do this, EMG signals were collected from trans-radial amputees using an array of up to six sensors placed over the skin. Secondly, I developed a vibrotactile system to implement haptic feedback to restore proprioception and create a bidirectional connection between the user and the prosthesis. Similarly, I implemented object stiffness detection to restore a tactile sensation that connects the user with the external world. This closed loop between EMG control and vibration feedback is essential to implementing a Bidirectional Body-Machine Interface that strongly impacts amputees’ daily lives. For each of these three activities, (i) implementation of robust pattern recognition control algorithms, (ii) restoration of proprioception, and (iii) restoration of the feeling of the grasped object's stiffness, I performed a study in which data from healthy subjects and amputees were collected in order to demonstrate the efficacy and usability of my implementations. In each study, I evaluated both the algorithms and the subjects’ ability to use the prosthesis by means of the F1-score (offline) and the Target Achievement Control (TAC) test (online). With this test, I analyzed the error rate, path efficiency, and time efficiency in completing different tasks. Main results.
Among the several methods tested for pattern recognition, Non-Linear Logistic Regression (NLR) proved to be the best algorithm in terms of F1-score (99%, robustness), and the offline analyses determined that a minimum of four electrodes is needed for it to function. Further, I demonstrated that its low computational burden allowed its implementation and integration on a microcontroller running at a sampling frequency of 300 Hz (efficiency). Finally, the online implementation allowed the subject to simultaneously control the DoFs of the Hannes prosthesis in a bioinspired and human-like way. In addition, I performed further tests with the same NLR-based control by endowing it with closed-loop proprioceptive feedback. In this scenario, the TAC test yielded an error rate of 15% and a path efficiency of 60% in experiments where no other sources of information were available (no visual and no audio feedback). These results demonstrate an improvement in the controllability of the system with an impact on user experience. Significance. The obtained results confirm the hypothesis that the implemented closed-loop approach improves the robustness and efficiency of prosthetic control. The bidirectional communication between the user and the prosthesis is capable of restoring the lost sensory functionality, with promising implications for direct translation into clinical practice.
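A minimal sketch of the EMG pattern-recognition step: mean-absolute-value features over short windows fed to a multinomial logistic-regression classifier. The feature choice, window length, synthetic data, and scikit-learn usage are assumptions and do not reproduce the NLR implementation running on the Hannes firmware.

```python
# Minimal sketch (assumptions: mean-absolute-value features over 150 ms windows,
# scikit-learn logistic regression; not the NLR firmware used on Hannes).
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 300                # sampling frequency in Hz, as reported in the abstract
WIN = int(0.15 * FS)    # 150 ms window
N_CHANNELS = 4          # minimum electrode count reported in the offline analyses

def mav_features(emg_window):
    """Mean absolute value per channel for one window of shape (WIN, N_CHANNELS)."""
    return np.mean(np.abs(emg_window), axis=0)

rng = np.random.default_rng(0)
# Hypothetical training data: windows labelled with intended hand/wrist movements.
X = np.array([mav_features(rng.normal(scale=0.1 * (label + 1), size=(WIN, N_CHANNELS)))
              for label in (0, 1, 2) for _ in range(50)])
y = np.repeat([0, 1, 2], 50)  # e.g. rest, hand close, wrist rotation

clf = LogisticRegression(max_iter=1000).fit(X, y)
new_window = rng.normal(scale=0.2, size=(WIN, N_CHANNELS))
print(clf.predict([mav_features(new_window)]))
```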

    Wearable and Nearable Biosensors and Systems for Healthcare

    Biosensors and systems in the form of wearables and “nearables” (i.e., everyday sensorized objects with transmitting capabilities, such as smartphones) are rapidly evolving for use in healthcare. Unlike conventional approaches, these technologies can enable seamless or on-demand physiological monitoring, anytime and anywhere. Such monitoring can help transform healthcare from the current reactive, one-size-fits-all, hospital-centered approach into a future proactive, personalized, decentralized structure. Wearable and nearable biosensors and systems have been made possible through integrated innovations in sensor design, electronics, data transmission, power management, and signal processing. Although much progress has been made in this field, many open challenges remain for the scientific community, especially for applications requiring high accuracy. This book contains the 12 papers that constituted a recent Special Issue of Sensors sharing the same title. The aim of the initiative was to provide a collection of state-of-the-art investigations on wearables and nearables in order to stimulate technological advances and the use of the technology to benefit healthcare. The topics covered by the book offer both depth and breadth pertaining to wearable and nearable technology. They include new biosensors and data transmission techniques; studies on accelerometers, signal processing, and cardiovascular monitoring; clinical applications; and validation of commercial devices.