    Non-contact measures to monitor hand movement of people with rheumatoid arthritis using a monocular RGB camera

    Hand movements play an essential role in a person’s ability to interact with the environment. In hand biomechanics, the range of joint motion is a crucial metric for quantifying changes due to degenerative pathologies such as rheumatoid arthritis (RA). RA is a chronic condition in which the immune system mistakenly attacks the joints, particularly those in the hands. Optoelectronic motion capture systems are the gold-standard tools for quantifying these changes but are challenging to adopt outside laboratory settings. Deep learning executed on standard video data can capture RA participants in their natural environments, potentially supporting objectivity in remote consultation. The three main research aims of this thesis were 1) to assess the extent to which current deep learning architectures, which have been validated for quantifying the motion of other body segments, can be applied to hand kinematics using monocular RGB cameras, 2) to localise where in videos the hand motions of interest are to be found, and 3) to assess the validity of 1) and 2) for determining disease status in RA. First, hand kinematics for twelve healthy participants, captured with OpenPose, were benchmarked against those captured using an optoelectronic system, showing acceptable instrument errors below 10°. Then, a gesture classifier was tested to segment video recordings of twenty-two healthy participants, achieving an accuracy of 93.5%. Finally, OpenPose and the classifier were applied to videos of RA participants performing hand exercises to determine disease status. The inferred disease activity agreed with the in-person ground truth in nine out of ten instances, outperforming virtual consultations, which agreed in only six out of ten. These results demonstrate that this approach is more effective than disease-activity estimates made by human experts during video consultations. This work sets the foundation for a tool that RA participants can use to monitor their disease activity from home.
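
    A range-of-motion readout like the one benchmarked here reduces to angles between pose keypoints. Below is a minimal sketch, assuming OpenPose's 21-keypoint hand model (0 = wrist; 5-8 = index-finger MCP, PIP, DIP and tip) and a (21, 2) array of pixel coordinates per frame; the helper names are illustrative, not the thesis's code.

        import numpy as np

        def joint_angle(a, b, c):
            """Unsigned angle in degrees at vertex b, formed by points a-b-c."""
            v1 = np.asarray(a, float) - np.asarray(b, float)
            v2 = np.asarray(c, float) - np.asarray(b, float)
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        def index_pip_flexion(keypoints):
            # Deviation from a straight (180 degree) finger at the index PIP joint.
            return 180.0 - joint_angle(keypoints[5], keypoints[6], keypoints[7])

    Tracking such an angle frame by frame and taking its maximum minus its minimum yields a range-of-motion estimate that can be compared against the optoelectronic reference.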

    Human activity recognition-based path planning for autonomous vehicles

    Human activity recognition (HAR) is a broad research topic in the field of computer science. Improving HAR can lead to massive breakthroughs in humanoid robotics, robots used in medicine, and autonomous vehicles. A system able to recognise a human and their activity without errors or anomalies would lead to safer and more empathetic autonomous systems. In this research, multiple neural network models of varying complexity were investigated. Each model was re-trained on a proposed unique data set, gathered on an automated guided vehicle (AGV) with both the latest and the most modest sensors commonly used on autonomous vehicles. The best model was selected based on its final accuracy for action recognition, and its pipeline was fused with YOLOv3 to enhance human detection. In addition to the pipeline improvement, multiple action direction estimation methods are proposed. © 2020, Springer-Verlag London Ltd., part of Springer Nature
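
    The detector-classifier fusion described here amounts to: detect person boxes, crop them, and classify the action in each crop. A minimal sketch of that logic follows; detect_people and classify_action are hypothetical stand-ins for the paper's YOLOv3 detector and re-trained action model.

        from typing import Callable, List, Tuple
        import numpy as np

        Box = Tuple[int, int, int, int]  # x, y, width, height in pixels

        def recognise_actions(
            frame: np.ndarray,
            detect_people: Callable[[np.ndarray], List[Box]],
            classify_action: Callable[[np.ndarray], str],
        ) -> List[Tuple[Box, str]]:
            """Classify the action inside every detected person box."""
            results = []
            h, w = frame.shape[:2]
            for (x, y, bw, bh) in detect_people(frame):
                # Clamp the box to the image bounds before cropping.
                crop = frame[max(0, y):min(h, y + bh), max(0, x):min(w, x + bw)]
                if crop.size:
                    results.append(((x, y, bw, bh), classify_action(crop)))
            return results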

    Motion and emotion estimation for robotic autism intervention.

    Robots have recently emerged as a novel approach to treating autism spectrum disorder (ASD). A robot can be programmed to interact with children with ASD in order to reinforce positive social skills in a non-threatening environment. In prior work, robots were employed in interaction sessions with ASD children, but their sensory and learning abilities were limited, while a human therapist was heavily involved in “puppeteering” the robot. The objective of this work is to create the next-generation autism robot with several new interactive and decision-making capabilities not found in prior technology. Two of the main features this robot would need are the ability to quantitatively estimate the patient’s motion performance and the ability to correctly classify their emotions. This would allow for the potential diagnosis of autism and help autistic patients practice their skills. Therefore, in this thesis, we engineered components for a human-robot interaction system and confirmed them in experiments with the robots Baxter and Zeno, the sensors Empatica E4 and Kinect, and the open-source pose estimation software OpenPose. The Empatica E4 wristband is a wearable device that collects physiological measurements in real time from a test subject. Measurements were collected from ASD patients during human-robot interaction activities. Using these data and attentiveness labels from a trained coder, a classifier was developed that predicts the patient’s level of engagement. The classifier outputs this prediction to a robot or supervising adult, allowing decisions to be made during intervention activities that keep the attention of the patient with autism. The CMU Perceptual Computing Lab’s OpenPose software package enables body, face, and hand tracking using an RGB camera (e.g., a web camera) or an RGB-D camera (e.g., a Microsoft Kinect). Integrating OpenPose with a robot allows the robot to collect information on user motion intent and perform motion imitation; in this work, we developed such a teleoperation interface with the Baxter robot. Finally, a novel algorithm, Segment-based Online Dynamic Time Warping (SoDTW), and an accompanying metric are proposed to help in the diagnosis of ASD. Social Robot Zeno, a childlike robot developed by Hanson Robotics, was used to test this algorithm and metric. Using the proposed algorithm, it is possible to classify a subject’s motion into different speeds or to use the resulting SoDTW score to evaluate the subject’s abilities.
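
    SoDTW itself is the thesis's contribution and is not reproduced here; as background, the following is a minimal sketch of the classic dynamic time warping distance it extends, for two 1-D motion signals.

        import numpy as np

        def dtw_distance(s, t):
            """Classic DTW distance between two 1-D sequences."""
            n, m = len(s), len(t)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(s[i - 1] - t[j - 1])
                    # Extend the cheapest of the three admissible warping steps.
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

    Because warping absorbs differences in timing, dtw_distance([0, 1, 2, 1], [0, 0, 1, 2, 2, 1]) is 0.0: the two motions match despite being performed at different speeds, which is why a DTW-derived score can grade motion speed and ability.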

    Path planning for self-driving cars based on human activity recognition

    Human activity recognition (HAR) is a broad research topic in the field of computer science. Improving HAR can lead to massive breakthroughs in humanoid robotics, robots used in medicine, and autonomous vehicles. A system able to recognise a human and their activity without errors or anomalies would lead to safer and more empathetic autonomous systems. In this thesis, multiple neural network models of varying complexity were investigated. Each model was re-trained on a proposed unique data set, gathered on an automated guided vehicle (AGV) with both the latest and the most modest sensors commonly used on autonomous vehicles. The best model was selected based on its final accuracy for action recognition, and its pipeline was fused with YOLOv3 to enhance human detection. In addition to the pipeline improvement, multiple action direction estimation methods are proposed. Estimating human actions is a very important aspect of collision-free path planning for self-driving cars.

    Application of Computer Vision and Mobile Systems in Education: A Systematic Review

    The computer vision industry has experienced a significant surge in growth, resulting in numerous promising breakthroughs in computer intelligence. The present review paper outlines the advantages and potential future implications of utilizing this technology in education. A total of 84 research publications have been thoroughly scrutinized and analyzed. The study revealed that computer vision technology integrated with a mobile application is exceptionally useful in monitoring students’ perceptions and mitigating academic dishonesty. Additionally, it facilitates the digitization of handwritten scripts for plagiarism detection and automates attendance tracking to optimize valuable classroom time. Furthermore, several potential applications of computer vision technology have been proposed to enhance students’ learning processes in various faculties, such as engineering and medical science. Moreover, the technology can also aid in creating a safer campus environment by automatically detecting abnormal activities such as ragging, bullying, and harassment.

    LoRA-like Calibration for Multimodal Deception Detection using ATSFace Data

    Recently, deception detection in human videos has become an eye-catching technique with many potential applications. AI models in this domain demonstrate high accuracy but tend to be non-interpretable black boxes. We introduce an attention-aware neural network addressing challenges inherent in video data and deception dynamics. Through its continuous assessment of visual, audio, and text features, the model pinpoints deceptive cues. We employ a multimodal fusion strategy that enhances accuracy; our approach yields a 92% accuracy rate on a real-life trial dataset. Most importantly, the model indicates the attention focus in the videos, providing valuable insights into deception cues. Hence, our method adeptly detects deceit and elucidates the underlying process. We further enriched our study with an experiment involving students answering questions either truthfully or deceitfully, resulting in a new dataset of 309 video clips, named ATSFace. Using this dataset, we also introduce a calibration method, inspired by Low-Rank Adaptation (LoRA), to refine individual-based deception detection accuracy.
    Comment: 10 pages, 9 figures
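
    LoRA-style calibration generally means freezing the pretrained weights and learning a small low-rank correction, here per individual. Below is a minimal PyTorch sketch of such an adapter; the rank and scaling are illustrative assumptions, not the paper's settings.

        import torch
        import torch.nn as nn

        class LoRALinear(nn.Module):
            """A frozen linear layer plus a trainable low-rank update (W + BA)."""
            def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
                super().__init__()
                self.base = base
                for p in self.base.parameters():
                    p.requires_grad = False  # only the adapter is calibrated
                self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
                self.B = nn.Parameter(torch.zeros(base.out_features, rank))
                self.scale = alpha / rank

            def forward(self, x):
                # Base output plus the scaled low-rank correction; B starts at
                # zero, so calibration begins exactly at the pretrained model.
                return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale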

    System for detecting joint angles in upper-limb movements for physiotherapy assessment using computer vision.

    The objective was to determine the functional joint mobility angles of the upper limb in elbow flexion-extension and shoulder abduction-adduction, for physical therapy assessment of patients at the Centro de Rehabilitación de la Universidad Técnica del Norte, using a 2D computer-vision joint position estimation algorithm. This work focuses on the development of a system for measuring upper-limb joint angles for physiotherapy assessment using computer vision, based on a human pose estimation algorithm that locates the joint points forming the limbs in a 2D schematic of the patient, shown on video and in real time. The system detects the shoulder and elbow joint points, allowing the user to evaluate the different arcs of movement of the upper limbs in the body planes (frontal and sagittal), thus obtaining a clear reading of the user's ranges of motion in real time, as well as a database that supports the patient's evaluation. Functional tests of the system were carried out under the supervision of an expert in physiotherapy and established its validity and reliability. The tests took place at the Centro de Rehabilitación de la Universidad Técnica del Norte, with the participation of 9 patients, including students and administrative staff of different ages, heights, and pathologies. The test results are objectively coherent, demonstrating that the system is valid and reliable at 92.60% after comparing manual goniometer measurements with those of the proposed joint angle measurement system, which provided real-time data and facilitated patient evaluation.
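
    Both measurements come down to angles between 2-D keypoint vectors. Below is a minimal sketch, assuming COCO-style body keypoints given as (x, y) pixel pairs; the helper names are illustrative, not the system's code.

        import numpy as np

        def angle_between(v1, v2):
            """Unsigned angle in degrees between two 2-D vectors."""
            v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        def elbow_flexion(shoulder, elbow, wrist):
            # 180 degrees = fully extended arm; smaller values indicate flexion.
            return angle_between(np.subtract(shoulder, elbow), np.subtract(wrist, elbow))

        def shoulder_abduction(hip, shoulder, elbow):
            # Angle of the upper arm relative to the trunk line (shoulder -> hip):
            # 0 with the arm at the side, about 90 when abducted horizontally.
            return angle_between(np.subtract(hip, shoulder), np.subtract(elbow, shoulder))

    Per-frame angles like these can then be compared against manual goniometer readings, as in the validation described above.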