8 research outputs found

    Dynamic hand gesture recognition based on 3D pattern assembled trajectories

    Over the past few years, advances in commercial 3D sensors have substantially promoted research on dynamic hand gesture recognition. At the same time, whole-body gesture recognition has attracted increasing attention since the emergence of Kinect-like sensors. Both research topics deal with human-made motions and are likely to face similar challenges. In this paper, our aim is thus to evaluate the applicability of an action-recognition feature set for modeling dynamic hand gestures using skeleton data. Furthermore, existing datasets are often composed of pre-segmented gestures performed with a single hand only. We therefore collected a more challenging dataset, which contains unsegmented streams of 13 hand gesture classes performed with either one hand or two hands. Our approach is first evaluated on an existing dataset, namely the DHG dataset, and then on our collected dataset. Better results than previous approaches are reported.
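The skeleton-based feature modeling described above can be illustrated with a toy per-frame descriptor. This is a generic sketch only, not the paper's actual feature set: pairwise joint distances are one common pose descriptor in skeleton-based recognition, and `joint_distances` is a hypothetical name.

```python
import math

def joint_distances(frame):
    """Pairwise Euclidean distances between skeleton joints in one frame.

    `frame` is a list of (x, y, z) joint positions; the flattened distance
    vector serves as a simple, view-invariant pose descriptor.
    """
    feats = []
    for i in range(len(frame)):
        for j in range(i + 1, len(frame)):
            feats.append(math.dist(frame[i], frame[j]))
    return feats

# Toy 3-joint "hand" frame: wrist, palm, fingertip on a line.
frame = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 2.0, 0.0)]
print(joint_distances(frame))  # [1.0, 2.0, 1.0]
```

A gesture trajectory would then be the sequence of such per-frame vectors over time, which a temporal classifier can consume.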

    Development of an automatic assessment system for post-stroke motor function using depth sensors

    The Stroke Impairment Assessment Set (SIAS) is used to evaluate bodily function after stroke. In daily clinical practice, SIAS is scored against fuzzy linguistic rules by visual inspection, without special equipment, which leaves room for personal arbitrariness: different physical therapists may reach different results for the same client. A quantitative evaluation system is needed to avoid such inter-rater differences. Motion capture systems have in fact been applied as quantitative measurement methods, but they are expensive and impose a heavy burden on both clients and operators. Because SIAS is defined by linguistic fuzzy rules and is already in routine clinical use, it is difficult to replace it with a fully automatic, unified evaluation method. It is therefore necessary to build a system with adjustable parameters, so that each physical therapist can tune it to approximate their own visual measurement and judgment.
    In this study, a new quantitative evaluation system for SIAS is developed using low-cost portable depth sensors, namely Kinect and Leap Motion. The system comprises three categories: (1) Kinect applications using the body-joint detection function, (2) Leap Motion applications for finger detection, and (3) depth-image applications that detect body features which joint detection cannot capture properly. In (1), algorithms are developed for the paralysis motor-function tests: the knee-mouth test, hip flexion test, knee extension test, and foot-pat test. In (2), systems are developed for the finger test and the visuospatial perception test. In (3), angle-based evaluation systems are developed from depth data for the trunk-function tests (abdominal strength and verticality) and for range-of-motion measurement of the shoulder and ankle joints. Experiments were conducted with healthy young adults, and validation experiments were also carried out with the actual target population of elderly and hemiplegic persons. The measurements and judgments produced by the system agreed closely with those of physical therapists. Even when SIAS is administered by physical therapists in the traditional way, the system can supply numerical data to support their judgments. This prototype can thus serve as a newly proposed quantitative evaluation system, and may also be applied to quantitative evaluation methods other than SIAS.
    Konan University, academic year Reiwa 1 (2019)
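The angle measurements used in categories (1) and (3) can be sketched as a three-joint angle computed from detected joint positions. This is a minimal illustration under assumed coordinates, not the system's actual code; `joint_angle` and the example joints are hypothetical.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, between segments b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))

# Fully extended knee: hip, knee, ankle roughly collinear.
hip, knee, ankle = (0.0, 1.0, 0.0), (0.0, 0.5, 0.0), (0.0, 0.0, 0.0)
print(round(joint_angle(hip, knee, ankle)))  # 180
```

With Kinect or Leap Motion, `a`, `b`, `c` would come from the sensor's joint-tracking output; a therapist-adjustable threshold on the resulting angle could then implement the tunable scoring the abstract describes.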

    Deep Recurrent Networks for Gesture Recognition and Synthesis

    It is hard to overstate the importance of gesture-based interfaces in many applications nowadays. The adoption of such interfaces stems from the opportunities they create for incorporating natural and fluid user interactions. This highlights the importance of having gesture recognizers that are not only accurate but also easy to adopt. The ever-growing popularity of machine learning has prompted many application developers to integrate automatic methods of recognition into their products. On the one hand, deep learning often tops the list of the most powerful and robust recognizers. These methods have been consistently shown to outperform all other machine learning methods in a variety of tasks. On the other hand, deep networks can be overwhelming to use for a majority of developers, requiring a lot of tuning and tweaking to work as expected. Additionally, these networks are infamous for their requirement for large amounts of training data, further hampering their adoption in scenarios where labeled data is limited. In this dissertation, we aim to bridge the gap between the power of deep learning methods and their adoption into gesture recognition workflows. To this end, we introduce two deep network models for recognition. These models are similar in spirit, but target different application domains: one is designed for segmented gesture recognition, while the other is suitable for continuous data, tackling segmentation and recognition problems simultaneously. The distinguishing characteristic of these networks is their simplicity, small number of free parameters, and their use of common building blocks that come standard with any modern deep learning framework, making them easy to implement, train and adopt. Through evaluations, we show that our proposed models achieve state-of-the-art results in various recognition tasks and application domains spanning different input devices and interaction modalities. 
    We demonstrate that the infamy of deep networks, owing to their demand for powerful hardware and large amounts of data, is an unfair assessment. On the contrary, we show that in the absence of such data, our proposed models can be trained quickly while achieving competitive recognition accuracy. Next, we explore the problem of synthetic gesture generation: a measure often taken to address the shortage of labeled data. We extend our proposed recognition models and demonstrate that the same models can be used in a Generative Adversarial Network (GAN) architecture for synthetic gesture generation. Specifically, we show that our original recognizer can serve as the discriminator in such frameworks, while a slightly modified version can act as the gesture generator. We then formulate a novel loss function for our gesture generator, which entirely replaces the need for a discriminator network in our generative model, thereby significantly reducing the complexity of our framework. Through evaluations, we show that our model is able to improve the recognition accuracy of multiple recognizers across a variety of datasets. Through user studies, we additionally show that human evaluators frequently mistake our synthetic samples for real ones, indicating that our synthetic samples are visually realistic. Additional resources for this dissertation (such as demo videos and public source code) are available at https://www.maghoumi.com/dissertatio

    Irish Machine Vision and Image Processing Conference, Proceedings
