
    Movement Pattern Recognition in Physical Rehabilitation - Cognitive Motivation-based IT Method and Algorithms

    In this paper, a solution is presented to support both existing and future movement rehabilitation applications. The presented method combines the advantages of human-computer interaction-based movement therapy with the cognitive properties of intelligent decision-making systems. With this solution, therapy can be fully adapted to the patients' needs and conditions while maintaining their sense of success, thereby keeping them motivated. The development of HCI interfaces keeps pace with the growth of users' needs, but the available technologies have limitations that can reduce the effectiveness of modern input devices such as the Kinect sensor or similar sensors. In this article, several newly developed and modified methods are introduced to overcome these limitations; they fully adapt movement pattern recognition to the users' skills. The main goal is to apply these methods in movement rehabilitation, where the supervising therapist can personalize the rehabilitation exercises using the Distance Vector-based Gesture Recognition (DVGR), Reference Distance-based Synchronous/Asynchronous Movement Recognition (RDSMR/RDAMR) and Real-Time Adaptive Movement Pattern Classification (RAMPC) methods.
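    The abstract does not spell out the DVGR, RDSMR/RDAMR or RAMPC procedures, so the following is only a minimal sketch of the general idea of a distance-based acceptance check with a therapist-set tolerance; the function name, joint layout and tolerance value are hypothetical, not the authors' implementation.

```python
# Minimal sketch of a distance-based gesture check (illustrative only, not the
# paper's DVGR/RAMPC algorithms). A gesture sample is assumed to be a set of 3D
# joint positions in metres, and a therapist-chosen tolerance defines acceptance.
import numpy as np

def within_tolerance(reference_pose: np.ndarray,
                     user_pose: np.ndarray,
                     tolerance_m: float = 0.1) -> bool:
    """Accept the user's pose if every joint lies within tolerance_m metres
    of the corresponding reference joint (Euclidean distance)."""
    distances = np.linalg.norm(reference_pose - user_pose, axis=1)
    return bool(np.all(distances <= tolerance_m))

# Example: three joints (e.g. wrist, elbow, shoulder), coordinates in metres
reference = np.array([[0.00, 1.20, 0.50], [0.10, 1.00, 0.40], [0.20, 0.80, 0.30]])
attempt   = np.array([[0.03, 1.18, 0.52], [0.12, 1.05, 0.38], [0.19, 0.83, 0.31]])
print(within_tolerance(reference, attempt, tolerance_m=0.1))  # True
```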

    The Cognitive Motivation-based APBMR Algorithm in Physical Rehabilitation

    This article presents a new, alternative method of gesture recognition that uses the cognitive properties of intelligent decision-making systems to support the rehabilitation of people with disabilities: the Asynchronous Prediction-Based Movement Recognition (APBMR) algorithm. The algorithm “predicts” the user's next movement by evaluating the previous three, with the goal of maintaining motivation. Based on the prediction, it creates acceptance domains and decides whether the next user-input gesture can be considered the same movement. For this, the APBMR algorithm uses six mean techniques: the Arithmetic, Geometric, Harmonic, Contraharmonic, Quadratic and Cubic means. Besides presenting this new method, the purpose of this article is to evaluate which mean technique to use with the three different acceptance domains. The authors evaluated the algorithm in real time on a general and an advanced computer, tested it by predicting from a file, and compared it to one of their earlier works. The tests were done by four groups of users, each group performing four gestures. After analyzing the results, the authors concluded that the Contraharmonic mean technique gives the best average gesture acceptance rates in the ±0.05 m and ±0.1 m acceptance domains, while the Arithmetic mean technique provides the best average gesture acceptance rate in the ±0.15 m acceptance domain when using the APBMR algorithm.
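    A minimal sketch of the prediction-and-acceptance step as it is described in the abstract, under the assumption that each of the six means is applied per coordinate axis to the last three samples and that a gesture is accepted if it falls within the predicted value plus or minus the acceptance domain; function and variable names are hypothetical.

```python
# Illustrative sketch of mean-based prediction with an acceptance domain,
# following the abstract's description (not the authors' exact APBMR code).
import numpy as np

def predict(last_three: np.ndarray, technique: str = "contraharmonic") -> np.ndarray:
    """Predict the next sample from the previous three using a per-axis mean."""
    x = np.asarray(last_three, dtype=float)          # shape (3, dims)
    if technique == "arithmetic":
        return x.mean(axis=0)
    if technique == "geometric":                     # assumes positive values
        return np.prod(x, axis=0) ** (1.0 / len(x))
    if technique == "harmonic":                      # assumes non-zero values
        return len(x) / np.sum(1.0 / x, axis=0)
    if technique == "contraharmonic":
        return np.sum(x ** 2, axis=0) / np.sum(x, axis=0)
    if technique == "quadratic":                     # root mean square
        return np.sqrt(np.mean(x ** 2, axis=0))
    if technique == "cubic":
        return np.cbrt(np.mean(x ** 3, axis=0))
    raise ValueError(f"unknown technique: {technique}")

def accept(predicted: np.ndarray, observed: np.ndarray, domain_m: float = 0.10) -> bool:
    """Accept the observed sample if every axis is within +/- domain_m of the prediction."""
    return bool(np.all(np.abs(predicted - observed) <= domain_m))

history = np.array([[0.40, 1.10, 0.55],
                    [0.42, 1.12, 0.56],
                    [0.45, 1.15, 0.58]])             # previous three samples (metres)
next_sample = np.array([0.47, 1.16, 0.60])
prediction = predict(history, "contraharmonic")
print(accept(prediction, next_sample, domain_m=0.10))
```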

    How to develop serious games for social and cognitive competence of children with learning difficulties

    This paper describes experiences, from requirements gathering to design and implementation, gained in the European project 'Intelligent Serious Games for Social and Cognitive Competence'. The main goal of the project is to develop serious games for the social and cognitive competence of children with learning difficulties. The aim of these games is to teach youth with mild disabilities social skills, basic skills, key cognitive competence skills and work skills. Using these interactive mobile games and 3D simulations helps the social integration and personal development of these children and youth. The project uses serious games and 3D simulations so that teaching and learning become interesting, playful, attractive and efficient.

    Using Analytics to Identify When Course Materials Are Accessed Relative to Online Exams during Digital Education

    Face-to-face education changed to blended or distance teaching due to the COVID-19 pandemic. Since education took a digital format, it can be investigated when course materials are accessed relative to online exams: are they opened before exams or during them? Four subjects were chosen for investigation at the University of Pannonia: one theoretical, one practical, and two that are both theoretical and practical. Two groups of non-repeater 2nd-semester students and two groups of non-repeater 5th-semester students attended these classes. Slides were uploaded to the university’s Moodle system, while videos were uploaded to YouTube, and their analytics were used for the investigation. The analyses were conducted over five groups of days relative to the exam day. According to the results, students studied throughout the semester for the normal exam in most cases, while they studied a day before the supplementary one. For cheating, the 2nd-semester students used significantly more slides, while 5th-semester students used significantly more videos. Even with cheating, the students in their 2nd semester received marks that were significantly worse, by 26.06%, than those of the students in their 5th semester.
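    The abstract does not state the boundaries of the five day groups, so the following sketch only illustrates the kind of bucketing involved: each material-access date is classified by its distance in days from the exam day. The cut-offs and group labels below are illustrative assumptions, not the study's definitions.

```python
# Sketch of grouping access events by days-before-exam (boundaries are assumed).
from datetime import date

def day_group(access_day: date, exam_day: date) -> str:
    """Classify an access event by how many days before (or on) the exam it occurred."""
    delta = (exam_day - access_day).days
    if delta <= 0:
        return "on exam day or later"
    if delta == 1:
        return "day before exam"
    if delta <= 7:
        return "exam week"
    if delta <= 30:
        return "exam month"
    return "earlier in the semester"

exam = date(2021, 6, 1)
accesses = [date(2021, 3, 12), date(2021, 5, 24), date(2021, 5, 31), date(2021, 6, 1)]
for access in accesses:
    print(access, "->", day_group(access, exam))
```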

    Identification of Markers in Challenging Conditions for People with Visual Impairment Using Convolutional Neural Network

    People with visual impairment face many difficulties in their daily activities. Several studies have been conducted to find smart solutions using mobile devices that help people with visual impairment perform tasks. This paper focuses on using assistive technology to help people with visual impairment navigate indoors using markers. The essential steps of a typical navigation system are identifying the current location, finding the shortest path to the destination, and navigating safely to the destination using navigation feedback. In this research, the authors propose a system that helps people with visual impairment navigate indoors using markers. In this system, the authors redefined the identification step as a classification problem and used convolutional neural networks to identify markers. The main contributions of this paper are: (1) a system to help people with visual impairment in indoor navigation using markers; (2) a comparison of QR codes with Aruco markers showing that Aruco markers work better; (3) a convolutional neural network implemented and simplified to detect candidate markers in challenging conditions and improve response time; (4) a comparison of the proposed model with another model showing that it gives better accuracy for training and testing.
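    To illustrate framing marker identification as an image-classification problem, the following is a minimal convolutional-network sketch. The architecture, input size and class count are assumptions for illustration only and are not the authors' simplified model.

```python
# Minimal CNN classifier sketch for marker identification (illustrative assumptions).
import tensorflow as tf

NUM_MARKER_CLASSES = 10    # hypothetical number of distinct markers
INPUT_SHAPE = (64, 64, 1)  # hypothetical grayscale input crops

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=INPUT_SHAPE),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # low-level edge features
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # marker pattern features
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_MARKER_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would use labelled marker crops, e.g.:
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```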

    Suitability of the Kinect Sensor and Leap Motion Controller—A Literature Review

    As the need for sensors increases with the rise of virtual reality, augmented reality and mixed reality, the purpose of this paper is to evaluate the suitability of the two Kinect devices and the Leap Motion Controller. In evaluating suitability, the authors focused on the state of the art, device comparison, accuracy, precision, existing gesture recognition algorithms and the price of the devices. The aim of this study is to give insight into whether these devices could substitute for more expensive sensors in industry or on the market. While in general the answer is yes, it is not as simple as it seems: there are significant differences between the devices, even between the two Kinects, such as different measurement ranges, error distributions on each axis, and depth precision that changes with distance.

    The Effects of Display Parameters and Devices on Spatial Ability Test Times

    The effects of display parameters and devices on spatial ability test times in virtual environments are examined. Before the investigation, the completion times of 240 and 61 students were measured, using an LG desktop display and the Gear VR, respectively. The virtual environment also logged the following randomized display parameters: virtual camera type, field of view, rotation, contrast ratio, whether shadows were turned on, and the display device used. The completion times were analyzed using regression analysis methods. Except for the virtual camera type, every factor has a significant influence on the test completion times. After grouping the remaining factors into pairs, triplets, quartets, and quintets, the following can be concluded: the combination of a 75° field of view, a 45° camera rotation, and a 3:1 contrast ratio yields the largest increase in completion times, with an estimate of 420.88 s, even when this combination is inside a quartet or a quintet. Significant decreases in completion times exist up to variable quartets (the largest being −106.29 s on average); however, the significance disappears among variable quintets. The occurrences of factors were also investigated: an undefined field of view, a 0° camera rotation, the Gear VR, a 7:1 contrast ratio, and turned-on shadows are the factors that occur in most of the significant combinations. These are the factors that frequently and significantly influence completion times.
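    A minimal sketch of the kind of regression setup described above: completion time modelled as a function of categorical display factors with ordinary least squares. The data below is synthetic and the factor levels, effect sizes and column names are illustrative assumptions; the study's dataset and exact regression procedure are not reproduced.

```python
# Sketch: OLS regression of completion time on categorical display factors
# using synthetic full-factorial data (illustrative only).
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for fov, rot, contrast, shadows, device in itertools.product(
        ["60", "75", "90"], ["0", "45"], ["3:1", "7:1"], ["on", "off"], ["LG", "GearVR"]):
    completion = 300.0 + rng.normal(0, 20)
    if fov == "75" and rot == "45" and contrast == "3:1":
        completion += 60.0  # synthetic slowdown for one factor combination
    rows.append({"completion_time": completion,
                 "field_of_view": fov, "camera_rotation": rot,
                 "contrast_ratio": contrast, "shadows": shadows, "device": device})
df = pd.DataFrame(rows)

# Every display factor treated as categorical.
model = smf.ols(
    "completion_time ~ C(field_of_view) + C(camera_rotation) "
    "+ C(contrast_ratio) + C(shadows) + C(device)",
    data=df,
).fit()
print(model.summary())
```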