
    Range of motion measurements based on depth camera for clinical rehabilitation

    Dissertation for the degree of Master in Biomedical Engineering. In clinical rehabilitation, biofeedback increases the patient's motivation, which makes it one of the most effective motor rehabilitation mechanisms. In this field it is very helpful for the patient, and even for the therapist, to know the level of success and performance of the training process. Human motion tracking can provide relevant information for this purpose. Existing lab-based three-dimensional (3D) motion capture systems are capable of providing this information in real time. However, these systems still present some limitations when used in rehabilitation processes involving biofeedback. A new depth camera, the Microsoft Kinect™, was recently developed that overcomes the limitations associated with lab-based movement analysis systems. This depth camera is easy to use, inexpensive and portable. The aim of this work is to introduce a system into clinical practice for Range of Motion (ROM) measurements, using the Kinect™ sensor and providing real-time biofeedback. For this purpose, the ROM measurements were computed using the joint spatial coordinates provided by the official Microsoft Kinect™ Software Development Kit (SDK) and also using our own algorithm. The results were compared with data from a triaxial accelerometer, used as reference. The upper-limb movements studied were abduction, flexion/extension and internal/external rotation with the arm at 90 degrees of elevation. With our algorithm the Mean Error (ME) was less than 1.5 degrees for all movements. Only in abduction did the Kinect™ Skeleton Tracking obtain comparable data; in the other movements its ME increased by an order of magnitude. Given the potential benefits, our method can be a useful tool for ROM measurements in clinics.
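    The abstract describes computing ROM from the 3D joint coordinates the Kinect SDK reports. A minimal sketch of that idea (not the authors' actual algorithm) is the standard angle-between-segments computation: the angle at a joint is recovered from the vectors to its two neighbouring joints. The joint positions below are hypothetical values in metres.

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` between the segments joint->proximal
    and joint->distal, e.g. shoulder abduction from hip/shoulder/elbow."""
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against tiny floating-point excursions outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical 3D joint positions: trunk vertical, arm abducted to 90 deg.
hip, shoulder, elbow = [0.0, 0.0, 2.0], [0.0, 0.5, 2.0], [0.45, 0.5, 2.0]
print(round(joint_angle(hip, shoulder, elbow), 1))  # 90.0
```

    A real pipeline would read the per-frame joint positions from the SDK's skeleton stream and smooth them before computing the angle.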

    Feature extraction of non-uniform food products using RGB and RGB-D data combined with shape models

    This research paper investigates a new 3D handling process in the food industry: bin picking. Machines only function effectively when the input of product is physically well organised, well structured, and consistent. At many stages in a typical production line, foodstuffs are physically arranged as they move through a machine or piece of equipment; however, this order is then lost again as products are ejected onto conveyors, bulked together into bins for transport, or taken off-line for chilled storage. Bin picking is generally not a solved problem even for manufacturing parts and, unlike food ordering processes such as pick-and-place operations and vibratory feeders, this handling operation has not yet been applied in the food industry either. A new approach is presented using the Microsoft Kinect™ sensor and Active Shape Models. By combining this device, which captures both an RGB and an RGB-D image, with a flexible shape model, it is possible to identify non-uniform food products that have a high variation in shape. The methodology of the system is presented, and the experiments demonstrate its feasibility.

    Workflow-based Context-aware Control of Surgical Robots

    Surgical assistance systems such as medical robots have enhanced the capabilities of medical procedures in the last decades. This work presents a new perspective on the use of workflows with surgical robots in order to improve the technical capabilities and the ease of use of such systems. This is accomplished by a 3D perception system for the supervision of the surgical operating room and a workflow-based controller that monitors the surgical process using workflow-tracking techniques.

    Using a serious game to assess spatial memory in children and adults

    Short-term spatial memory has traditionally been assessed using visual stimuli, but not auditory stimuli. In this paper, we design and test a serious game with auditory stimuli for assessing short-term spatial memory. Interaction is achieved through gestures (by raising the arms). The auditory stimuli are emitted by smart devices placed at different locations. A total of 70 participants (32 children and 38 adults) took part in the study. The outcomes obtained with our game were compared with traditional methods. The results indicated that the adults' outcomes in the game were significantly better than those of the children, which is consistent with the assumption that this ability increases continuously during maturation. Correlations were found between our game and traditional methods, suggesting its validity for assessing spatial memory. The results indicate that both groups easily learn how to perform the task and are good at recalling the locations of sounds emitted from different positions. With regard to satisfaction with our game, the children's mean scores were higher for nearly all of the questions, and the mean scores for all of the questions except one were greater than 4 on a scale from 1 to 5, showing the participants' satisfaction with the game. The results suggest that our game promotes engagement and allows spatial memory to be assessed in an ecological way.

    Low cost 3D scanning using off-the-shelf video gaming peripherals

    Digitization of specimens is becoming an ever more important part of palaeontology, both for archival and research purposes. The advent of mainstream hardware containing depth sensors and RGB cameras, used primarily for interacting with video games, in conjunction with an open platform used by developers, has led to an abundance of highly affordable technology with which to digitize specimens. Here, the Microsoft® Kinect™ is used to digitize specimens of varying sizes in order to demonstrate the potential applications of the technology to palaeontologists. The resulting digital models are compared with models produced using photogrammetry. Although the Kinect™ generally records morphology at a lower resolution, and thus captures less detail than photogrammetric techniques, it offers advantages in speed of data acquisition and in the generation of a completed mesh in real time at the point of data collection. Whilst it is therefore limited in archival applications, the ease of use and low cost, driven by strong market competition, make this technology an enticing alternative for studies where rapid digitization of general morphology is desired.

    Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI

    This article belongs to the Special Issue "Sensors and Wireless Sensor Networks for Novel Concepts of Things, Interfaces and Applications in Smart Spaces". Recent advances in technologies for capturing video data have opened a vast number of new application areas in visual sensor networks. Among them, the incorporation of light wave cameras into Ambient Intelligence (AmI) environments provides more accurate tracking capabilities for activity recognition. Although the performance of tracking algorithms has quickly improved, the symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This lack of representation prevents taking advantage of the semantic quality of the information provided by new sensors. This paper advocates the introduction of a part-based representational level in cognitive-based systems in order to accurately represent the knowledge delivered by these novel sensors. The paper also reviews the theoretical and practical issues of part-whole relationships, proposing a specific taxonomy for computer vision approaches. General part-based patterns for the human body, together with transitive part-based representation and inference, are incorporated into a previous ontology-based framework to enhance scene interpretation in the area of video-based AmI. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application for the elaboration of live market research. This work was supported in part by Projects CICYT TIN2011-28620-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
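    The transitive part-based inference the abstract refers to can be illustrated independently of any ontology language: if x is part of y and y is part of z, then x is part of z. The sketch below, with a hypothetical human-body taxonomy, computes that transitive closure over a direct part-of relation; an OWL reasoner would derive the same facts from a transitive object property.

```python
def part_closure(direct_parts):
    """Transitive closure of a part-of relation: part_of(x, y) and
    part_of(y, z) implies part_of(x, z). Input maps whole -> direct parts."""
    closure = {whole: set(parts) for whole, parts in direct_parts.items()}
    changed = True
    while changed:  # iterate until no new part-of facts are derived
        changed = False
        for whole, parts in closure.items():
            derived = set()
            for p in parts:
                derived |= closure.get(p, set())
            if not derived <= parts:
                parts |= derived
                changed = True
    return closure

# Hypothetical body taxonomy of the kind such an ontology might encode.
body = {"body": {"arm", "head"}, "arm": {"hand"}, "hand": {"finger"}}
print(sorted(part_closure(body)["body"]))  # ['arm', 'finger', 'hand', 'head']
```

    In the paper's setting, this lets a query for "person visible" succeed when only a part (e.g. a hand) has been tracked by the camera.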

    Natural user interfaces for interdisciplinary design review using the Microsoft Kinect

    As markets demand engineered products faster, waiting on the cyclical design processes of the past is not an option. Instead, industry is turning to concurrent design and interdisciplinary teams. When these teams collaborate, engineering CAD tools play a vital role in conceptualizing and validating designs. These tools require significant user investment to master, due to challenging interfaces and an overabundance of features, challenges that often prohibit team members from using them to explore designs. This work presents a method that allows users to interact with a design using intuitive gestures and head tracking, all while keeping the model in a CAD format. Specifically, Siemens' Teamcenter® Lifecycle Visualization Mockup (Mockup) was used to display design geometry while modifications were made through a set of gestures captured by a Microsoft Kinect™ in real time. This proof-of-concept program allowed a user to rotate the scene, activate Mockup's immersive menu, move the immersive wand, and manipulate the view based on head position. This work also evaluates gesture usability and task completion time for the proof-of-concept system. A cognitive model evaluation method was used to evaluate the premise that gesture-based user interfaces are easier to use and faster to learn than a traditional mouse-and-keyboard interface. Using a cognitive model analysis tool allowed the rapid testing of interaction concepts without the significant overhead of user studies and full development cycles. The analysis demonstrated that the Kinect™ is a feasible interaction mode for CAD/CAE programs. In addition, it pointed out limitations in the gesture interface's ability to compete, time-wise, with easily accessible customizable menu options.

    Integrated Measurement System of Postural Angle and Electromyography Signal for Manual Materials Handling Assessment

    Ergonomics practitioners and engineers require an integrated measurement system that allows them to study the interaction of work posture and muscle effort in manual materials handling (MMH) tasks so that strenuous postures and muscle strain can be avoided. However, far too little attention has been paid to developing an integrated measurement system of work posture and muscle activity for assessing MMH tasks. The aim of this study was to develop and test a prototype of an integrated system for measuring the work posture angles and electromyography (EMG) signals of a worker performing MMH tasks. Microsoft Visual Studio, a 3D camera (Microsoft Kinect), Advancer Technologies muscle sensors and a data acquisition device (NI DAQ USB-6000) were used to develop the integrated postural angle and EMG signal measurement system. Additionally, a graphical user interface was created to enable users to assess work posture and muscle effort simultaneously. Based on the testing results, this study concluded that the patterns of the EMG signals depend on the postural angles, which is consistent with the findings of established works. Further study is required to enhance the validity, reliability and usability of the prototype so that it may help ergonomics practitioners and engineers assess work posture and muscle effort in MMH tasks.
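    Relating EMG patterns to postural angles typically starts from an amplitude envelope of the raw EMG trace, which can then be paired frame-by-frame with the camera's angle stream. A minimal sketch of that preprocessing step, assuming a hypothetical 1 kHz sampling rate and a synthetic stand-in signal rather than real sensor data:

```python
import numpy as np

def emg_rms(signal, fs, window_s=0.25):
    """Windowed RMS envelope of a raw EMG signal: square, average over a
    sliding window of `window_s` seconds, then take the square root."""
    n = max(1, int(window_s * fs))
    squared = np.asarray(signal, float) ** 2
    kernel = np.ones(n) / n
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

fs = 1000                                        # assumed sampling rate (Hz)
emg = np.sin(np.linspace(0, 20 * np.pi, fs))     # synthetic stand-in trace
envelope = emg_rms(emg, fs)
print(envelope.shape)                            # (1000,)
```

    Each envelope sample can then be stored alongside the posture angle captured at the same timestamp, which is the pairing the integrated system's GUI would display.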

    Stereo and ToF Data Fusion by Learning from Synthetic Data

    Time-of-Flight (ToF) sensors and stereo vision systems are both capable of acquiring depth information, but they have complementary characteristics and issues. A more accurate representation of the scene geometry can be obtained by fusing the two depth sources. In this paper we present a novel framework for data fusion where the contribution of the two depth sources is controlled by confidence measures that are jointly estimated using a Convolutional Neural Network. The two depth sources are fused enforcing the local consistency of depth data, taking into account the estimated confidence information. The deep network is trained using a synthetic dataset, and we show how the classifier is able to generalize to different data, obtaining reliable estimations not only on synthetic data but also on real-world scenes. Experimental results show that the proposed approach increases the accuracy of the depth estimation on both synthetic and real data and that it is able to outperform state-of-the-art methods.
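    The core fusion idea, stripped of the CNN and the local-consistency optimization, is a per-pixel confidence-weighted combination of the two depth maps. The sketch below uses hand-picked confidence values as a stand-in for the network's estimates; it illustrates the weighting principle only, not the paper's full method.

```python
import numpy as np

def fuse_depth(d_tof, d_stereo, c_tof, c_stereo, eps=1e-6):
    """Per-pixel confidence-weighted average of two depth maps.
    `eps` avoids division by zero where both confidences vanish."""
    weight_sum = c_tof + c_stereo + eps
    return (c_tof * d_tof + c_stereo * d_stereo) / weight_sum

d_tof = np.array([[2.0, 2.1], [2.0, 5.0]])     # ToF depth map (m)
d_stereo = np.array([[2.2, 2.1], [2.0, 3.0]])  # stereo depth map (m)
c_tof = np.array([[0.9, 0.5], [0.5, 0.1]])     # hypothetical confidences
c_stereo = np.array([[0.1, 0.5], [0.5, 0.9]])
print(fuse_depth(d_tof, d_stereo, c_tof, c_stereo))
```

    Where the two sensors disagree (e.g. the bottom-right pixel), the fused value is pulled toward the source the confidences favour, which is exactly the behaviour the learned confidence maps are meant to provide.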