
    Analyse der Spontanmotorik im 1. Lebensjahr: Markerlose 3-D-Bewegungserfassung zur Früherkennung von Entwicklungsstörungen (Analysis of spontaneous motor activity in the first year of life: marker-free 3-D motion capture for early detection of developmental disorders)

    Children with motor development disorders benefit greatly from early intervention. Early diagnosis in pediatric preventive care (examinations U2–U5) can be improved by automated screening. Current approaches to automated motion analysis, however, are expensive, require substantial technical support, and cannot be used in broad clinical practice. Here we present an inexpensive, marker-free video analysis tool for infants, the Kinematic Motion Analysis Tool (KineMAT), which digitizes 3-D movements of the entire body over time and thereby enables automated analysis in the future. Three-minute video sequences of spontaneously moving infants were recorded with a commercially available depth-imaging (RGB-D, red-green-blue-depth) camera and aligned with a virtual infant body model, the Skinned Multi-Infant Linear (SMIL) model. The resulting virtual reconstruction allows arbitrary measurements to be carried out in 3-D with high precision. We demonstrate the method on seven infants with different diagnoses; a selection of possible movement parameters was quantified and related to diagnosis-specific movement abnormalities. KineMAT and the SMIL model allow reliable, three-dimensional measurement of spontaneous activity in infants with a very low error rate. Based on machine-learning algorithms, KineMAT can be trained to recognize pathological spontaneous motor activity automatically. It is inexpensive and easy to use, and it can be developed into a screening tool for pediatric preventive care.
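    The pipeline described above, fitting the SMIL body model to RGB-D frames and then quantifying movement, ends in a step that reduces per-frame 3-D joint positions to movement parameters. The sketch below illustrates only that last step under assumed inputs: the joint-trajectory array, the joint count, and the chosen statistics (mean speed, speed variability, range of motion) are hypothetical examples, not the authors' actual parameter set.

```python
# Minimal sketch (not the KineMAT implementation): once a body model such as SMIL
# has been fitted to every depth frame, each joint has a 3-D position per frame.
# Simple spontaneous-motility statistics can then be derived from these trajectories.
import numpy as np

def movement_parameters(joints_xyz: np.ndarray, fps: float = 30.0) -> dict:
    """joints_xyz: hypothetical array of shape (n_frames, n_joints, 3), in metres."""
    # Frame-to-frame displacement of every joint.
    disp = np.linalg.norm(np.diff(joints_xyz, axis=0), axis=2)    # (n_frames-1, n_joints)
    speed = disp * fps                                            # metres per second
    # Range of motion: extent of the box each joint sweeps during the recording.
    rom = joints_xyz.max(axis=0) - joints_xyz.min(axis=0)         # (n_joints, 3)
    return {
        "mean_speed_per_joint": speed.mean(axis=0),               # average speed of each joint
        "speed_variability_per_joint": speed.std(axis=0),         # irregularity of the movement
        "range_of_motion_per_joint": np.linalg.norm(rom, axis=1), # diagonal of the swept box
    }

if __name__ == "__main__":
    # Synthetic demo: 3 minutes at 30 fps, 24 joints, random-walk trajectories.
    rng = np.random.default_rng(0)
    fake_joints = rng.normal(scale=0.02, size=(3 * 60 * 30, 24, 3)).cumsum(axis=0) * 0.001
    params = movement_parameters(fake_joints)
    print(params["mean_speed_per_joint"].shape)  # (24,)
```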

    Intelligent Sensors for Human Motion Analysis

    The book, "Intelligent Sensors for Human Motion Analysis," contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects related to the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems

    A rigged model of the breast for preoperative surgical planning

    In breast surgical practice, drawing is part of the preoperative planning procedure and is essential for a successful operation. In this study, we design a pipeline to assist surgeons with patient-specific breast surgical drawings. We use a deformable torso model containing the surgical patterns to match any breast surface scan. To be compatible with surgical time constraints, we build an articulated model through a skinning process coupled with shape deformers to enable fast registration. On the one hand, the scalable bones of the skinning account for pose and morphological variations among patients; on the other hand, pre-designed artistic blendshapes span a linear space that captures anatomical variations. We then apply meaningful constraints to the model to find a trade-off between precision and speed. The experiments were conducted on 7 patients, in 2 different poses (prone and supine), with breast sizes ranging from 36A to 42C (US/UK bra sizing). The acquisitions were obtained with the Structure Sensor depth camera, and each breast scan was acquired in less than 1 minute. The result is a registration method that converges within a few seconds (3 at most), reaching a mean absolute error of 2.3 mm for mesh registration and 8.0 mm for breast anatomical landmarks. Compared to the existing literature, our model can be personalized and does not require any database. Finally, the registered model can be used to transfer surgical reference patterns onto any patient in any position.
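    The deformable model described above combines two linear layers: scalable-bone skinning and pre-designed blendshapes. The sketch below outlines how such a deformation could be composed; all array shapes and names (deform_template, skin_weights, bone_transforms, blendshapes, shape_coeffs) are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of a blendshape layer followed by linear blend skinning with
# scalable bones. Shapes and names are assumed for illustration only.
import numpy as np

def deform_template(verts, skin_weights, bone_transforms, blendshapes, shape_coeffs):
    """
    verts:           (V, 3) template vertices
    skin_weights:    (V, B) per-vertex bone weights, each row summing to 1
    bone_transforms: (B, 4, 4) per-bone affine transforms (rotation, translation, scale)
    blendshapes:     (S, V, 3) per-shape vertex offsets
    shape_coeffs:    (S,) blendshape activations
    """
    # 1. Blendshape layer: add a linear combination of offsets to the template.
    shaped = verts + np.tensordot(shape_coeffs, blendshapes, axes=1)        # (V, 3)

    # 2. Skinning layer: transform every vertex by each bone, then blend by weights.
    homo = np.concatenate([shaped, np.ones((shaped.shape[0], 1))], axis=1)  # (V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)[..., :3]     # (B, V, 3)
    skinned = np.einsum('vb,bvi->vi', skin_weights, per_bone)               # (V, 3)
    return skinned
```

    A registration procedure would then optimize the bone transforms and blendshape coefficients, under the constraints mentioned above, so that the deformed template matches the patient's breast scan.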

    When I Look into Your Eyes: A Survey on Computer Vision Contributions for Human Gaze Estimation and Tracking

    The automatic detection of eye positions, their temporal consistency, and their mapping onto a line of sight in the real world (to determine where a person is looking) is reported in the scientific literature as gaze tracking. This has become a very active topic in computer vision over the last decades, with a surprising and continuously growing number of application fields. A long journey has been made since the first pioneering works, and the continuous search for more accurate solutions has been further boosted in the last decade, when deep neural networks revolutionized the whole machine-learning area, gaze tracking included. In this arena, it is increasingly useful to find guidance in survey and review articles that collect the most relevant works and lay out the pros and cons of existing techniques, also by introducing a precise taxonomy. Such manuscripts allow researchers and practitioners to choose the best way to move toward their application or scientific goals. Holistic and technology-specific surveys exist in the literature (even if not up to date), but, unfortunately, there is no overview discussing how the great advancements in computer vision have impacted gaze tracking. This work is an attempt to fill that gap; it also introduces a wider point of view that leads to a new taxonomy (extending the consolidated ones) by considering gaze tracking as a more comprehensive task that aims at estimating the gaze target from different perspectives: from the eye of the beholder (first-person view), from an external camera framing the beholder, from a third-person view looking at the scene in which the beholder is placed, and from an external view independent of the beholder.