
    Wearable and IoT technologies application for physical rehabilitation

    This research consists of the development of an IoT physical rehabilitation solution based on wearable devices, combining a set of smart gloves and a smart headband for natural interaction with a set of VR therapeutic serious games developed on the Unity 3D gaming platform. The system supports training sessions for hand and finger motor rehabilitation. Data acquisition is performed by an Arduino Nano microcontroller whose ADC is connected to analog measurement channels implemented with piezo-resistive force sensors, and which is connected to an IMU module via I2C. Data communication uses the Bluetooth wireless protocol. The smart headband, designed to act as a first-person controller in the game scenes, collects the patient's head rotation; this value is applied to the player avatar's head, bringing the user closer to the virtual environment in a semi-immersive way. The acquired data are stored and processed on a remote server, which helps the physiotherapist evaluate the patients' performance across the different physical activities of a rehabilitation session through a mobile application developed for game configuration and result visualization. The use of serious games allows a patient with motor impairments to perform exercises in a highly interactive and non-intrusive way, based on different virtual reality scenarios, increasing motivation during the rehabilitation process. The system supports an unlimited number of training sessions and makes it possible to view historical values and compare the results of different sessions for an objective assessment of rehabilitation outcomes. Metrics associated with upper-limb exercises were also defined to characterize the patient's movement during a session.
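    As a concrete illustration of the acquisition chain described above, the following Python sketch shows how a host application might parse the glove's Bluetooth serial stream. The port name and the CSV packet layout (five force readings followed by yaw, pitch, roll) are assumptions for illustration only; the actual firmware framing is not specified in the abstract.

        # Minimal host-side sketch for reading the glove's Bluetooth stream.
        # Assumes a hypothetical CSV packet format "f1,f2,f3,f4,f5,yaw,pitch,roll\n"
        # arriving on a Bluetooth serial port; requires the pyserial package.
        import serial

        PORT = "/dev/rfcomm0"   # assumption: Bluetooth serial device on Linux
        BAUD = 9600

        def read_samples(port=PORT, baud=BAUD):
            """Yield (forces, orientation) tuples parsed from the incoming stream."""
            with serial.Serial(port, baud, timeout=1) as link:
                while True:
                    line = link.readline().decode("ascii", errors="ignore").strip()
                    fields = line.split(",")
                    if len(fields) != 8:
                        continue  # skip malformed or partial packets
                    values = [float(v) for v in fields]
                    forces = values[:5]        # five piezo-resistive finger sensors (ADC counts)
                    orientation = values[5:]   # IMU yaw, pitch, roll in degrees
                    yield forces, orientation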

    Vision-based techniques for gait recognition

    Global security concerns have driven a proliferation of video surveillance devices. Intelligent surveillance systems seek to discover possible threats automatically and raise alerts. Being able to identify the surveyed subject helps determine its threat level. The current generation of devices provides digital video data to be analysed for time-varying features that assist in the identification process. Commonly, people queue up to access a facility and approach a video camera in full frontal view. In this environment, a variety of biometrics are available: for example, gait, which includes temporal features such as stride period. Gait can be measured unobtrusively at a distance. The video data will also include face features, which are short-range biometrics. In this way, one can combine biometrics naturally using a single set of data. In this paper we survey current techniques for gait recognition and modelling, together with the environments in which the research was conducted. We also discuss in detail the issues arising from deriving gait data, such as perspective and occlusion effects, together with the associated computer vision challenges of reliably tracking human movement. After highlighting these issues and challenges of gait processing, we discuss frameworks that combine gait with other biometrics. We then motivate a novel paradigm in biometrics-based human recognition: the use of the fronto-normal view of gait as a far-range biometric combined with biometrics operating at near range.
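    As one concrete example of the temporal gait features mentioned above, the sketch below estimates stride period from a one-dimensional gait signal (such as silhouette width per frame) via autocorrelation. This is a common baseline technique, not a specific method from the surveyed literature, and the plausible-period bounds are illustrative assumptions.

        # Estimate the dominant stride period of a periodic gait signal.
        import numpy as np

        def stride_period(signal, fps, min_period_s=0.6, max_period_s=2.0):
            """Return the dominant stride period in seconds."""
            x = np.asarray(signal, dtype=float)
            x = x - x.mean()                                    # remove the DC offset
            ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation at lags >= 0
            lo = int(min_period_s * fps)                        # shortest plausible stride
            hi = min(int(max_period_s * fps), len(ac) - 1)      # longest plausible stride
            lag = lo + int(np.argmax(ac[lo:hi]))                # strongest repeat in that range
            return lag / fps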

    Design and Development of ReMoVES Platform for Motion and Cognitive Rehabilitation

    Exergames have recently gained popularity and scientific credibility in the field of assistive computing technology for human well-being. The ReMoVES platform, developed by the author, provides motor and cognitive exergames to be performed by elderly or disabled people in conjunction with traditional rehabilitation. Data acquisition during exercise takes place through Microsoft Kinect, Leap Motion, and a touchscreen monitor. The therapist receives feedback on patients' activity over time in order to assess their weaknesses and correct inaccurate movement patterns. This work describes the technical characteristics of the ReMoVES platform, designed to be deployed at multiple sites such as rehabilitation centers or the patient's home while relying on a centralized data collection server. The system includes 15 exergames, developed from scratch by the author, that promote motor and cognitive activity through patient entertainment. The ReMoVES platform differs from similar solutions in its automatic data processing features in support of the therapist. Three methods are presented: one based on classic data analysis, one on Support Vector Machine classification, and one on Recurrent Neural Networks. The results show that gaming sessions performed adequately can be distinguished from those containing incorrect movements with an accuracy of up to 92%. The system has been used with real patients, and the resulting dataset is made available to the scientific community, with the aim of encouraging data sharing and laying the foundations for comparisons between similar studies.
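    The Support Vector Machine approach mentioned above can be sketched in Python with scikit-learn: per-session feature vectors are classified as adequate versus incorrect movement. The features, labels, and parameters here are placeholders for illustration, not the platform's actual pipeline.

        # Hedged sketch of SVM-based session classification.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X: one row per session, e.g. [mean speed, range of motion, smoothness, ...]
        # y: 1 = adequate performance, 0 = incorrect movements (therapist labels)
        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 6))                   # placeholder features
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # placeholder labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X_tr, y_tr)
        print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")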

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    Grasp-sensitive surfaces

    Grasping objects with our hands allows us to skillfully move and manipulate them. Hand-held tools further extend our capabilities by adapting the precision, power, and shape of our hands to the task at hand. Some of these tools, such as mobile phones or computer mice, already incorporate information processing capabilities. Many other tools may be augmented with small, energy-efficient digital sensors and processors. This allows graspable objects to learn about the user grasping them, and to support the user's goals. For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it, and thus serve as a shortcut to that action. A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case. And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter. This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and on challenges in designing grasp-sensitive user interfaces. It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction.
    For nearly a century, scientists have analyzed human grasping. My literature review gives an overview of definitions, classifications, and models of human grasping. A small number of studies have investigated grasping in everyday situations. They found a much greater diversity of grasps than described by existing taxonomies. This diversity makes it difficult to directly associate certain grasps with users' goals. In order to structure related work and my own research, I formalize a generic workflow for grasp sensing. It comprises *capturing* sensor values, *identifying* the associated grasp, and *interpreting* the meaning of the grasp. A comprehensive overview of related work shows that implementing grasp-sensitive surfaces is still hard, that researchers are often unaware of related work from other disciplines, and that intuitive grasp interaction has not yet received much attention.
    To address the first issue, I developed three novel sensor technologies designed for grasp-sensitive surfaces. Each mitigates one or more limitations of traditional sensing techniques: **HandSense** uses four strategically positioned capacitive sensors for detecting and classifying grasp patterns on mobile phones. The use of custom-built high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors. **FlyEye** uses optical fiber bundles connected to a camera for detecting touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a *relative calibration* algorithm that determines the locations of groups of sensors whose arrangement is not known. **TDRtouch** extends Time Domain Reflectometry (TDR), a technique traditionally used for inspecting cable faults, to touch and grasp sensing. TDRtouch is able to locate touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces. I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects.
    Furthermore, I discuss challenges in making sense of raw grasp information and categorize possible interactions. Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information. For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object: *Goal* -- what we want to do with the object, *Relationship* -- what we know and feel about the object we want to grasp, *Anatomy* -- hand shape and learned movement patterns, *Setting* -- surrounding and environmental conditions, and *Properties* -- texture, shape, weight, and other intrinsics of the object. I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
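    A minimal sketch of the HandSense-style classification step follows, assuming four capacitive readings per grasp and a placeholder nearest-neighbour classifier; the grasp patterns and the recognizer are illustrative and do not reproduce the dissertation's actual method.

        # Classify a grasp from four capacitive sensor readings.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        # Each row: normalized readings from the four strategically placed sensors.
        train_X = np.array([
            [0.9, 0.8, 0.1, 0.1],   # hypothetical right-handed grasp pattern
            [0.1, 0.1, 0.9, 0.8],   # hypothetical left-handed grasp pattern
            [0.5, 0.5, 0.5, 0.5],   # hypothetical two-handed grasp pattern
        ])
        train_y = ["right", "left", "both"]

        clf = KNeighborsClassifier(n_neighbors=1).fit(train_X, train_y)
        print(clf.predict([[0.85, 0.75, 0.15, 0.05]]))  # -> ['right']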

    Wearables for Movement Analysis in Healthcare

    Quantitative movement analysis is widely used in clinical practice and research to investigate movement disorders objectively and comprehensively. Conventionally, body segment kinematic and kinetic parameters are measured in gait laboratories using marker-based optoelectronic systems, force plates, and electromyographic systems. Although such analyses are considered accurate, the need for dedicated laboratories, high costs, and dependency on trained users sometimes limit their use in clinical practice. A variety of compact wearable sensors are available today and have allowed researchers and clinicians to pursue applications in which individuals are monitored in their homes and in community settings within different fields of study, such as movement analysis. Wearable sensors may thus contribute to the implementation of quantitative movement analysis even in out-patient settings, reducing evaluation times and providing objective, quantifiable data on patients' capabilities, unobtrusively and continuously, for clinical purposes.
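    As a small example of the kind of out-of-laboratory analysis wearables enable, the sketch below detects step events by peak-picking the magnitude of a body-worn accelerometer signal. The thresholds are illustrative assumptions, not clinically validated values.

        # Detect step events from a body-worn accelerometer.
        import numpy as np
        from scipy.signal import find_peaks

        def step_times(acc_xyz, fs, min_step_interval_s=0.4):
            """acc_xyz: (N, 3) samples in m/s^2; fs in Hz. Returns event times in s."""
            mag = np.linalg.norm(acc_xyz, axis=1)     # magnitude is orientation-robust
            mag = mag - mag.mean()                    # remove the gravity offset
            peaks, _ = find_peaks(
                mag,
                height=mag.std(),                     # assumption: peaks above 1 SD
                distance=int(min_step_interval_s * fs),
            )
            return peaks / fs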

    Measuring Human Motor Performance (Ihmisen motorisen suorituskyvyn mittaaminen)

    In this thesis, a novel metric for measuring human motor performance is presented. The intuition derives from Fitts' law, but unlike Fitts' law, the metric generalizes to continuous, free-form, full-body motion that is reproducible. The applications of interest lie in human-computer interaction (HCI), kinesiology, sports science, and user authentication. As background, the thesis presents Fitts' law and its use as an evaluation tool in HCI; its extensions and restrictions are briefly presented. Since human motion is captured through sensor devices using different techniques, the choice of sensor is important, as it affects the data available for evaluating the metric. Motion data acquisition is described for the different sensor systems that were used or tried out in the experiments. As the sensor space usually contains redundant motion information compared to the user's inherent motion space, the information needs to be reduced through unsupervised machine learning techniques. For preprocessing, Principal Component Analysis (PCA), Probabilistic PCA, and the Gaussian Process Latent Variable Model (GP-LVM) were used for dimensionality reduction, and Canonical Time Warping (CTW) was used for temporal alignment. Throughput was then calculated using mutual information. The metric was evaluated and assessed experimentally. Classical ballet was used as reference data, with throughputs ranging from 213 to 590 bps using GP-LVM. Comparisons to Fitts' law were made with a cyclical tapping task. Bimanual in-air gesturing was used to examine some well-known motor-perceptual phenomena, and the metric showed responsiveness to laterality and perceptual distraction. Several diagnostics were also performed, and the problems of the framework were assessed.
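    For reference, the classical Fitts' law throughput that the thesis metric generalizes can be computed as below, using the Shannon formulation of the index of difficulty; the example numbers are illustrative only.

        # Fitts' law throughput for a pointing/tapping movement.
        import math

        def index_of_difficulty(distance, width):
            """Shannon formulation: ID = log2(D/W + 1), in bits."""
            return math.log2(distance / width + 1)

        def throughput(distance, width, movement_time_s):
            """Throughput in bits per second for one movement."""
            return index_of_difficulty(distance, width) / movement_time_s

        # A 256 mm reach to a 16 mm target completed in 0.8 s:
        print(f"{throughput(256, 16, 0.8):.2f} bps")  # ID = log2(17) ~ 4.09 bits -> ~5.11 bps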