66 research outputs found

    Hand pose recognition using a consumer depth camera

    [no abstract]

    Automatic Segmentation of Pressure Images Acquired in a Clinical Setting

    One of the major obstacles to pressure ulcer research is the difficulty of accurately measuring the mechanical loading of specific anatomical sites. A human motion analysis system capable of automatically segmenting a patient's body into high-risk areas can greatly improve the ability of researchers and clinicians to understand how pressure ulcers develop in a hospital environment. This project developed automated computational methods and algorithms to analyze pressure images acquired in a hospital setting. The algorithm achieved 99% overall accuracy for the classification of pressure images into three pose classes (left lateral, supine, and right lateral). An applied kinematic model estimated the overall pose of the patient. Accuracy depended on the body site, with the sacrum, left trochanter, and right trochanter achieving 87-93%. This project reliably segments pressure images into high-risk regions of interest.
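    The pose-classification step above can be illustrated with a minimal sketch. This toy classifier uses left/right load asymmetry of the pressure map as a single hand-crafted feature; the real system uses trained classifiers and a kinematic body model, and the threshold and maps below are invented for illustration.

```python
# Toy pose classifier for a pressure image (hypothetical heuristic, NOT the
# paper's algorithm): compare total load on the left vs. right half of the mat.
def classify_pose(pressure, threshold=0.2):
    """pressure: 2D list (rows x cols) of non-negative load values.
    Returns 'left lateral', 'supine', or 'right lateral'."""
    cols = len(pressure[0])
    mid = cols // 2
    left = sum(sum(row[:mid]) for row in pressure)
    right = sum(sum(row[mid:]) for row in pressure)
    total = left + right
    if total == 0:
        return "supine"  # empty mat: fall back to the most common class
    asym = (left - right) / total  # +1.0 means all load on the left half
    if asym > threshold:
        return "left lateral"
    if asym < -threshold:
        return "right lateral"
    return "supine"

# Invented toy pressure maps: load concentrated left, centered, and right.
left_map = [[5, 5, 0, 0], [5, 5, 0, 0]]
supine_map = [[1, 3, 3, 1], [1, 3, 3, 1]]
right_map = [[0, 0, 5, 5], [0, 0, 5, 5]]
```

A trained classifier would replace the single asymmetry feature with many per-region features, but the input/output contract is the same.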

    Nonlinear probabilistic estimation of 3-D geometry from images

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1997. Includes bibliographical references (p. 159-164). By Ali Jerome Azarbayejani, Ph.D.

    Human Activity Recognition and Control of Wearable Robots

    Wearable robotics has gained huge popularity in recent years due to its wide applications in the rehabilitation, military, and industrial fields. Weakness of the skeletal muscles in the aging population, and neurological injuries such as stroke and spinal cord injury, severely limit the ability of these individuals to perform daily activities. There is therefore increasing interest in developing wearable robots that assist the elderly and patients with disabilities in motion assistance and rehabilitation. In the military and industrial sectors, wearable robots can increase the productivity of soldiers and workers. A wearable robot must maintain smooth interaction with the user while operating in complex environments with minimal user effort, so recognizing the user's activity (e.g., walking or jogging) in real time becomes essential for providing activity-appropriate assistance. This dissertation proposes two real-time human activity recognition algorithms, the intelligent fuzzy inference (IFI) algorithm and the amplitude omega (Aω) algorithm, to identify stationary and locomotion activities. The IFI algorithm uses knee angle and ground contact force (GCF) measurements from four inertial measurement units (IMUs) and a pair of smart shoes, whereas the Aω algorithm is based on thigh angle measurements from a single IMU. The dissertation also addresses the problem of online tuning of virtual impedance for an assistive robot based on real-time gait and activity measurement data, to personalize assistance for different users. An automatic impedance tuning (AIT) approach is presented for a knee assistive device (KAD), in which the IFI algorithm provides real-time activity measurements.
    The dissertation further proposes an adaptive oscillator method, the amplitude omega adaptive oscillator (AωAO) method, for HeSA (hip exoskeleton for superior augmentation) to provide bilateral hip assistance during locomotion. The Aω algorithm is integrated into the adaptive oscillator method to make the approach robust across locomotion activities. Experiments on healthy subjects validate the efficacy of the proposed activity recognition algorithms and control strategies. Both activity recognition algorithms exhibited high classification accuracy with short update times. The AIT results demonstrated that the KAD assistive torque was smoother and the EMG signal of the vastus medialis was reduced, compared with constant-impedance and finite-state-machine approaches. The AωAO method showed real-time learning of locomotion activity signals for three healthy subjects wearing HeSA. To understand the influence of assistive devices on the inherent dynamic gait stability of the human, a stability analysis is performed using stability metrics derived from dynamical systems theory to evaluate unilateral knee assistance applied to healthy participants.
    Dissertation/Thesis: Doctoral Dissertation, Aerospace Engineering, 201
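    The general principle behind adaptive-oscillator assistance can be sketched with a generic frequency-adaptive phase oscillator that locks onto a periodic gait signal. The update rule, gains, and input signal below are illustrative, not the dissertation's AωAO implementation.

```python
import math

# Generic frequency-adaptive phase oscillator (a sketch of the principle,
# NOT the dissertation's AwAO method): the oscillator's frequency adapts
# until it matches the frequency of a measured periodic input.
def adapt_frequency(signal_freq_hz, omega0, K=5.0, dt=0.002, T=60.0):
    """Track sin(2*pi*signal_freq_hz*t) and return the learned angular
    frequency in rad/s, averaged over the last 10% of the run."""
    theta, omega = 0.0, omega0
    steps = int(T / dt)
    tail = []
    for i in range(steps):
        F = math.sin(2.0 * math.pi * signal_freq_hz * i * dt)  # periodic input
        coupling = K * F * math.sin(theta)
        theta += (omega - coupling) * dt  # phase locks to the input
        omega -= coupling * dt            # frequency adapts toward the input
        if i >= int(0.9 * steps):
            tail.append(omega)
    return sum(tail) / len(tail)

# Start 30% above the true angular frequency (2*pi rad/s) and let it adapt.
learned = adapt_frequency(signal_freq_hz=1.0, omega0=2.0 * math.pi * 1.3)
```

Once locked, the oscillator's phase provides a continuous estimate of where the user is in the gait cycle, which is what makes timing assistive torque possible.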

    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Electrical activity produced by muscles during voluntary movement reflects the firing patterns of the relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control is apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information across electrode sites remains an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an accompanying training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms.
    Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce training burden.
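    Both the classical pattern-recognition baselines and the ANN decoders discussed above typically consume windowed time-domain features of the EMG signal. A minimal sketch of such a front end (window length, step, and the toy signal are illustrative choices, not taken from the papers):

```python
import math

# Sliding-window time-domain EMG features: mean absolute value (MAV) and
# root mean square (RMS), two standard inputs to myoelectric classifiers.
def window_features(emg, win=4, step=2):
    """Return a list of (MAV, RMS) feature pairs over overlapping windows."""
    feats = []
    for start in range(0, len(emg) - win + 1, step):
        w = emg[start:start + win]
        mav = sum(abs(x) for x in w) / win
        rms = math.sqrt(sum(x * x for x in w) / win)
        feats.append((mav, rms))
    return feats

# Toy single-channel EMG trace; real decoders stack these features
# across many electrode channels before classification or regression.
feats = window_features([0.0, 1.0, -1.0, 0.0, 2.0, -2.0, 0.0, 1.0])
```

For multi-channel HD-sEMG, the same windowing is applied per channel and the features are concatenated (or, for CNNs, the raw windows are fed in directly as images).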

    Mitigating the effect of covariates in face recognition

    Current face recognition systems capture faces of cooperative individuals in a controlled environment as part of the face recognition process. It is therefore possible to control lighting, pose, background, and image quality. In real-world applications, however, we have to deal with both ideal and imperfect data, and the performance of current face recognition systems degrades in such non-ideal and challenging cases. This research focuses on designing algorithms to mitigate the effect of covariates in face recognition.
    To address the challenge of facial aging, an age transformation algorithm is proposed that registers two face images and minimizes the aging variations. Unlike the conventional method, the gallery face image is transformed with respect to the probe face image, and facial features are extracted from the registered gallery and probe face images. Variations due to disguise change visual perception, alter actual data, make pertinent facial information disappear, mask features to varying degrees, or introduce extraneous artifacts into the face image. To recognize face images with variations due to age progression and disguise, a granular face verification approach is designed that uses a dynamic feed-forward neural architecture to extract 2D log-polar Gabor phase features at different granularity levels. The granular levels provide non-disjoint spatial information, which is combined using the proposed likelihood-ratio-based Support Vector Machine match score fusion algorithm. The face verification algorithm is validated on five face databases, including the Notre Dame face database, the FG-Net face database, and three disguise face databases.
    The information in visible-spectrum images is compromised by improper illumination, whereas infrared images provide invariance to illumination and expression. A multispectral face image fusion algorithm is proposed to address the variations in illumination. The Support Vector Machine based image fusion algorithm learns the properties of the multispectral face images at different resolution and granularity levels to determine optimal information and combines them to generate a fused image. Experiments on the Equinox and Notre Dame multispectral face databases show that the proposed algorithm outperforms existing algorithms. We next propose a face mosaicing algorithm to address the challenge of pose variation. The mosaicing algorithm generates a composite face image during enrollment using the evidence provided by frontal and semi-profile face images of an individual. Face mosaicing obviates the need to store multiple face templates representing multiple poses of a user's face. Experiments conducted on three different databases indicate that face mosaicing offers significant benefits by accounting for the pose variations commonly observed in face images.
    Finally, the concept of online learning is introduced to address the problem of classifier re-training and update. A learning scheme for the Support Vector Machine is designed to train the classifier in online mode, enabling it to update the decision hyperplane to account for newly enrolled subjects. On a heterogeneous near-infrared face database, a case study using Principal Component Analysis and C2 feature algorithms shows that the proposed online classifier significantly improves verification performance in terms of both accuracy and computational time.
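    The likelihood-ratio idea underlying the match score fusion above can be sketched without the SVM stage: model each matcher's genuine and impostor score distributions and sum the per-matcher log-likelihood ratios. The Gaussian score models and all numbers below are invented for illustration, not the research's learned densities.

```python
import math

# Plain likelihood-ratio score fusion (a sketch of the principle; the
# proposed method additionally feeds such ratios into an SVM).
def gaussian_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def fused_log_lr(scores, genuine_params, impostor_params):
    """scores: one match score per matcher.
    *_params: per-matcher (mean, std) of the genuine / impostor score model.
    Returns the summed log-likelihood ratio; > 0 favors 'genuine'."""
    llr = 0.0
    for s, (gm, gs), (im, is_) in zip(scores, genuine_params, impostor_params):
        llr += math.log(gaussian_pdf(s, gm, gs)) - math.log(gaussian_pdf(s, im, is_))
    return llr

# Two hypothetical matchers: genuine scores cluster near 0.8, impostor near 0.3.
gen = [(0.8, 0.1), (0.75, 0.15)]
imp = [(0.3, 0.1), (0.35, 0.15)]
```

Summing log-ratios treats the matchers as conditionally independent; the SVM stage in the proposed algorithm relaxes exactly that assumption.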

    Machine Learning for Biomedical Application

    Biomedicine is a multidisciplinary branch of medical science encompassing many scientific disciplines, e.g., biology, biotechnology, bioinformatics, and genetics; moreover, it covers various medical specialties. In recent years this field has developed rapidly, generating large amounts of data through (among other things) the processing, analysis, and recognition of a wide range of biomedical signals and images obtained with increasingly advanced medical imaging devices. Analyzing these data requires advanced computational methods, including those of artificial intelligence and, in particular, machine learning. This work summarizes the Special Issue “Machine Learning for Biomedical Application”, briefly outlining selected applications of machine learning in the processing, analysis, and recognition of biomedical data, mostly biosignals and medical images.

    High-quality face capture, animation and editing from monocular video

    Digitization of virtual faces in movies requires complex capture setups and extensive manual work to produce superb animations and video-realistic editing. This thesis pushes the boundaries of the digitization pipeline by proposing automatic algorithms for high-quality 3D face capture and animation, as well as photo-realistic face editing. These algorithms reconstruct and modify faces in 2D videos recorded in uncontrolled scenarios and illumination. In particular, advances in three main areas offer solutions for the lack of depth and overall uncertainty in video recordings. First, contributions in capture include model-based reconstruction of detailed, dynamic 3D geometry that exploits optical and shading cues, multilayer parametric reconstruction of accurate 3D models in unconstrained setups based on inverse rendering, and regression-based 3D lip shape enhancement from high-quality data. Second, advances in animation are video-based face reenactment based on robust appearance metrics and temporal clustering, performance-driven retargeting of detailed facial models in sync with audio, and the automatic creation of personalized controllable 3D rigs. Finally, advances in plausible photo-realistic editing are dense face albedo capture and mouth interior synthesis using image warping and 3D teeth proxies. High-quality results attained on challenging application scenarios confirm the contributions and show great potential for the automatic creation of photo-realistic 3D faces.
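    The model-based capture described above rests on parametric face models whose core operation is simple: a deformed mesh is the mean shape plus weighted blendshape offsets. A minimal sketch, assuming an invented two-vertex "mesh" and a hypothetical "smile" blendshape (not the thesis's models):

```python
# Toy linear face model of the kind used in model-based monocular face
# reconstruction: vertices = mean shape + sum of weighted blendshape offsets.
def evaluate_face(mean_shape, blendshapes, weights):
    """mean_shape: list of (x, y, z) vertices; blendshapes: list of per-vertex
    offset meshes; weights: one coefficient per blendshape."""
    out = [list(v) for v in mean_shape]
    for w, shape in zip(weights, blendshapes):
        for vi, (dx, dy, dz) in enumerate(shape):
            out[vi][0] += w * dx
            out[vi][1] += w * dy
            out[vi][2] += w * dz
    return [tuple(v) for v in out]

mean = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [(0.0, 0.1, 0.0), (0.0, -0.1, 0.0)]  # hypothetical expression offset
face = evaluate_face(mean, [smile], [0.5])   # apply the expression at half strength
```

Inverse rendering then amounts to searching for the weights (and pose, lighting, albedo) that make the rendered model best explain the observed video frame.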

    Stochastic optimization and interactive machine learning for human motion analysis

    The analysis of human motion from visual data is a central issue in the computer vision research community: it enables a wide range of applications, yet it remains a challenging problem in unconstrained scenarios and general conditions. Human motion analysis is used in the entertainment industry for movie and videogame production, and in medical applications for rehabilitation and biomechanical studies. It is also used for human-computer interaction in any kind of environment, and for big data analysis of social networks such as YouTube or Flickr, to mention some of its use cases. In this thesis we study human motion analysis techniques with a focus on their application in smart room environments. That is, we study methods that support the analysis of people's behavior in the room, allow interaction with computers in a natural manner and, in general, introduce computers into human activity environments to enable new kinds of services in an unobtrusive mode. The thesis is structured in two parts, addressing the problem of 3D pose estimation from multiple views and the recognition of gestures using range sensors. First, we propose a generic framework for hierarchically layered particle filtering (HPF), specially suited for motion capture tasks. Human motion capture generally involves tracking or optimization of high-dimensional state vectors, where one also has to deal with multi-modal probability density functions. HPF overcomes this by means of multiple passes through sub-state-space variables. Based on the HPF framework, we then propose a method to estimate the anthropometry of the subject, which ultimately yields a human body model adjusted to the subject.
    Moreover, we introduce a new weighting function strategy for approximate partitioning of observations (APO) and a method (DD-HPF) that employs body part detections to improve particle propagation and weight evaluation, both integrated within the HPF framework. The second part of this thesis centers on the detection of gestures, focusing on the problem of reducing the annotation and training effort required to train a specific gesture detector. To reduce this effort, we propose a solution based on online random forests that allows training in real time while receiving new data in sequence. The main aspect that makes the solution effective is the method we propose for collecting hard negative examples while training the forests: the detector trained up to the current frame is tested on that frame, and samples are then collected based on the detector's response so that they are more relevant for training. In this manner, training is more effective in terms of the number of annotated frames required.
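    The building block that HPF applies hierarchically over sub-state spaces is the standard particle-filter iteration: predict, weight, resample. A minimal generic sketch on a 1-D state with Gaussian motion and observation models (all illustrative choices, not the thesis's motion-capture setup):

```python
import math
import random

# One generic particle filter iteration: predict, weight, resample.
def pf_step(particles, observation, rng, motion_std=0.2, obs_std=0.5):
    # 1. Predict: diffuse each particle with motion noise.
    predicted = [p + rng.gauss(0.0, motion_std) for p in particles]
    # 2. Weight: Gaussian likelihood of the observation given each particle.
    weights = [math.exp(-((p - observation) ** 2) / (2.0 * obs_std ** 2))
               for p in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample: draw a new particle set in proportion to the weights.
    return rng.choices(predicted, weights=weights, k=len(predicted))

rng = random.Random(0)
particles = [rng.uniform(-5.0, 5.0) for _ in range(500)]
for _ in range(10):
    particles = pf_step(particles, observation=2.0, rng=rng)
estimate = sum(particles) / len(particles)  # posterior mean, near 2.0
```

In HPF, instead of one filter over the full high-dimensional pose vector, several such passes are run over sub-state spaces (e.g., torso first, then limbs), which keeps the particle count tractable.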

    BIOMETRIC TECHNOLOGIES FOR AMBIENT INTELLIGENCE

    Ambient Intelligence (AmI) refers to an environment capable of recognizing and responding to the presence of different individuals in a seamless, unobtrusive and often invisible way. In this environment, people are surrounded by intelligent, intuitive interfaces embedded in all kinds of objects. The goals of AmI are to provide greater user-friendliness, more efficient service support, user empowerment, and support for human interactions. Examples of AmI scenarios are smart cities, smart homes, smart offices, and smart hospitals. In AmI, biometric technologies represent enabling technologies for designing personalized services for individuals or groups of people. Biometrics is the science of establishing the identity of an individual or a class of people based on their physical or behavioral attributes. Common applications include security checks, border control, access control to physical places, and authentication to electronic devices. In AmI, biometric technologies should work in uncontrolled and less-constrained conditions with respect to traditional biometric technologies. Furthermore, in many application scenarios it may be necessary to adopt covert and non-cooperative technologies. In these non-ideal conditions the biometric samples frequently present poor quality, and state-of-the-art biometric technologies can obtain unsatisfactory performance.
    There are two possible ways to improve the applicability and diffusion of biometric technologies in AmI. The first consists in designing novel biometric technologies robust to samples acquired in noisy and non-ideal conditions. The second consists in designing novel multimodal biometric approaches able to take advantage of all the sensors placed in a generic environment, in order to achieve high recognition accuracy and to permit continuous or periodic authentication in an unobtrusive manner. The first goal of this thesis is to design innovative, less-constrained biometric systems able to improve the quality of human-machine interaction in different AmI environments with respect to state-of-the-art technologies. The second goal is to design novel approaches to improve the applicability and integration of heterogeneous biometric systems in AmI. In particular, the thesis considers technologies based on fingerprint, face, voice, and multimodal biometrics. This thesis presents the following innovative research studies:
    • a method for text-independent speaker identification in AmI applications;
    • a method for age estimation from non-ideal samples acquired in AmI scenarios;
    • a privacy-compliant cohort normalization technique to increase the accuracy of already deployed biometric systems;
    • a technology-independent multimodal fusion approach to combine heterogeneous traits in AmI scenarios;
    • a multimodal continuous authentication approach for AmI applications.
    The designed biometric technologies have been tested on different biometric datasets (both public and collected in our laboratory) simulating the acquisitions performed in AmI applications. The results proved the feasibility of the studied approaches and showed that the proposed methods effectively increase the accuracy, applicability, and usability of biometric technologies in AmI with respect to the state of the art.
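    The technology-independent multimodal fusion idea can be sketched at the score level: normalize each matcher's raw score to a common range, then combine with a weighted sum. The min-max ranges and weights below are illustrative, not the thesis's learned values.

```python
# Score-level multimodal fusion sketch: min-max normalization per modality
# followed by a weighted sum (one simple, technology-independent scheme).
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] given its observed score range."""
    return max(0.0, min(1.0, (score - lo) / (hi - lo)))

def fuse(scores, ranges, weights):
    """scores: raw score per modality; ranges: (lo, hi) per modality;
    weights: non-negative, summing to 1. Returns a fused score in [0, 1]."""
    return sum(w * min_max_normalize(s, lo, hi)
               for s, (lo, hi), w in zip(scores, ranges, weights))

# Hypothetical fingerprint (0-100), face (0-1), and voice (-10 to 10) scores.
fused = fuse([80.0, 0.9, 5.0],
             [(0.0, 100.0), (0.0, 1.0), (-10.0, 10.0)],
             [0.4, 0.3, 0.3])
```

Because each modality is reduced to a normalized score before combination, the scheme works regardless of which sensing technology produced the score, which is the sense in which such fusion is technology-independent.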