
    Recognition of Sports Exercises Using Inertial Sensor Technology

    Supervised learning, a sub-discipline of machine learning, enables the recognition of correlations between input variables (features) and associated outputs (classes) and the application of these correlations to previously unseen data sets. Beyond typical application areas such as speech and image recognition, applications are also emerging in the sports and fitness sector. The purpose of this work was to implement a workflow for the automated recognition of sports exercises in the Matlab® programming environment and to compare different model structures. First, the acquisition of the sensor signals provided over the local network and their processing were implemented. The required functionality included the interpolation of lossy time series, the labelling of the performed activity intervals and, in part, the generation of sliding windows with statistical features. The preprocessed data were used to train classifiers and artificial neural networks (ANN), whose hyperparameters were iteratively optimised for the data structure to be learned. The most reliable models were then trained on an enlarged data set, validated, and compared with regard to the achieved performance. In addition to the usual evaluation metrics such as F1 score and accuracy, the temporal behaviour of the class assignments was displayed graphically, which enabled conclusions about potential causes of misclassification. Misclassifications occurred mainly in the transition regions between classes and in exercises with insufficient or clearly deviating execution. The best overall accuracy, achieved with an ANN on the enlarged data set, was 93.7%.
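
    As a concrete illustration of the preprocessing step described above, the Python sketch below interpolates a lossy time series and generates sliding windows with statistical features. The sampling rate, window length, overlap, and column names are illustrative assumptions, not the paper's exact configuration (which was implemented in Matlab®).

```python
# Minimal sketch of the preprocessing described above: interpolating a
# lossy time series, then emitting sliding windows with statistical
# features. Sampling rate, window length, and feature set are assumed.
import numpy as np
import pandas as pd

def make_windows(df: pd.DataFrame, fs=50, win_s=2.0, overlap=0.5):
    """Interpolate gaps, then emit per-window statistical features."""
    # Fill dropped samples by linear interpolation over the index.
    df = df.interpolate(method="linear").bfill().ffill()

    win = int(win_s * fs)              # samples per window
    step = int(win * (1.0 - overlap))  # hop size, here 50% overlap
    rows = []
    for start in range(0, len(df) - win + 1, step):
        w = df.iloc[start:start + win]
        feats = {}
        for axis in ("ax", "ay", "az"):  # assumed column names
            feats[f"{axis}_mean"] = w[axis].mean()
            feats[f"{axis}_std"] = w[axis].std()
            feats[f"{axis}_min"] = w[axis].min()
            feats[f"{axis}_max"] = w[axis].max()
        rows.append(feats)
    return pd.DataFrame(rows)
```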

    Increasing Performance of Multiclass Ensemble Gradient Boost Using the Newton-Raphson Parameter in Physical Activity Classification

    Modern smartphones carry a variety of sensors that can be used to recognize human physical activity when the phone is placed on the body. For classifying human activities, the best performance is obtained with machine learning methods, whereas statistical methods such as logistic regression give poor results. The weakness of logistic regression in classifying human activities can, however, be corrected with an ensemble technique. This paper proposes applying the Multiclass Ensemble Gradient Boost technique to improve the performance of logistic-regression classification of human activities such as walking, running, climbing stairs, and descending stairs. The results show that the Multiclass Ensemble Gradient Boost classifier with Newton-Raphson parameter estimation improved the accuracy of logistic regression by 29.11%.
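
    The following hedged Python sketch illustrates the comparison the abstract describes: a plain logistic-regression baseline versus a multiclass gradient-boosting ensemble. scikit-learn's GradientBoostingClassifier minimises the multinomial log-loss with Newton-style second-order leaf updates, in the spirit of the Newton-Raphson parameter estimation named above; the data here are synthetic stand-ins, not the paper's activity dataset.

```python
# Baseline logistic regression vs. multiclass gradient boosting on
# synthetic 4-class data standing in for the four activities above.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=12,
                           n_informative=8, n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
boosted = GradientBoostingClassifier(n_estimators=200).fit(X_tr, y_tr)

print("logistic regression:", accuracy_score(y_te, baseline.predict(X_te)))
print("gradient boosting:  ", accuracy_score(y_te, boosted.predict(X_te)))
```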

    A new system to detect coronavirus social distance violation

    In this paper, a novel solution to avoid new infections is presented. Instead of tracing users’ locations, the presence of nearby individuals is detected by analysing voices, and people’s faces are detected by the camera. To this end, two Android applications were implemented. The first uses the camera to detect people’s faces whenever the user answers or makes a phone call; the Firebase platform is used to detect the faces captured by the camera, determine their size, and estimate their distance to the phone. The second application uses voice biometrics to distinguish the user’s voice from unknown speakers, building a neural network model from five samples of the user’s voice. This feature is activated only while the user is browsing the Internet or using other applications, in order to prevent undesired contacts. Currently, patient tracking is performed by geolocation or Bluetooth connections. Although face detection and voice recognition are existing methods, this paper aims to integrate both in a single device. The application cannot violate privacy, since it neither saves the data used to carry out the detection nor associates the data with individuals.
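
    A minimal sketch of the distance-estimation idea: given the bounding box reported by a face detector such as Firebase ML Kit, the pinhole-camera model relates apparent face width to distance. The focal length and average face width below are assumed constants for illustration, not values from the paper (which targets Android rather than Python).

```python
# Pinhole-camera distance estimate from a detected face bounding box.
# Both constants are illustrative assumptions.
AVG_FACE_WIDTH_CM = 15.0   # assumed average human face width
FOCAL_LENGTH_PX = 1000.0   # assumed camera focal length in pixels

def estimate_distance_cm(face_width_px: float) -> float:
    """Distance ~ (real width * focal length) / apparent width."""
    return AVG_FACE_WIDTH_CM * FOCAL_LENGTH_PX / face_width_px

# A face 200 px wide would be estimated at ~75 cm from the phone.
print(f"{estimate_distance_cm(200):.0f} cm")
```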

    A collaborative healthcare framework for shared healthcare plan with ambient intelligence

    The rapid proliferation of Internet of Things (IoT) devices has driven the development of collaborative healthcare frameworks to support the next-generation healthcare industry with quality medical care. This paper presents a generalized collaborative framework, the collaborative shared healthcare plan (CSHCP), for cognitive health and fitness assessment of people using ambient intelligence and machine learning techniques. CSHCP supports daily physical activity recognition, monitoring, and assessment, and generates a shared healthcare plan based on collaboration among different stakeholders: doctors, patient guardians, and close community circles. The proposed framework shows promising outcomes compared to existing studies. Furthermore, it enhances team communication, coordination, and long-term management of healthcare information, providing people with more efficient and reliable shared healthcare plans.
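
    As a hypothetical illustration of the CSHCP workflow, the sketch below maps a daily activity-recognition summary to a plan note shared with the stakeholders named above. All field names, thresholds, and roles are illustrative assumptions, not the framework's actual design.

```python
# Hypothetical sketch: an activity summary feeds a rule-based
# assessment whose output is shared among stakeholders.
from dataclasses import dataclass

@dataclass
class DailySummary:
    steps: int
    active_minutes: int

def assess(summary: DailySummary) -> str:
    # Assumed guideline of ~30 active minutes per day (illustrative).
    if summary.active_minutes >= 30:
        return "on track: maintain current activity plan"
    return "below target: adjust the shared plan"

stakeholders = ["doctor", "guardian", "community circle"]
note = assess(DailySummary(steps=3500, active_minutes=18))
for s in stakeholders:
    print(f"share with {s}: {note}")
```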

    Analyzing the Effectiveness and Contribution of Each Axis of Tri-Axial Accelerometer Sensor for Accurate Activity Recognition

    Recognizing human physical activities from streaming smartphone sensor readings is essential for the successful realization of a smart environment, and physical activity recognition is an active research topic for providing users adaptive services via smart devices. Existing physical activity recognition methods fall short of fast and accurate recognition. This paper proposes an approach to recognize physical activities using only two axes of the smartphone accelerometer and investigates the effectiveness and contribution of each axis to the recognition of physical activities. To implement the approach, accelerometer data for daily-life activities were collected and labelled from 12 participants. Three machine learning classifiers were then trained on the collected dataset to predict the activities. The proposed approach yields more promising results than existing techniques and provides a strong rationale for the effectiveness and contribution of each accelerometer axis to activity recognition. To ensure the reliability of the model, the approach was also evaluated on the standard, publicly available WISDM dataset and compared against state-of-the-art studies. It achieved 93% weighted accuracy with a Multilayer Perceptron (MLP) classifier, almost 13% higher than existing methods.
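
    The axis-contribution analysis could be reproduced along the lines of the following hedged sketch: train the same classifier on every subset of accelerometer axes and compare cross-validated accuracy. The data here are synthetic stand-ins for the paper's dataset; the paper reports an MLP performing best.

```python
# Per-axis ablation: fit the same MLP on each axis subset and compare.
from itertools import combinations
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))  # columns: x-, y-, z-axis features (toy)
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # toy activity labels

axes = {"x": 0, "y": 1, "z": 2}
for r in (1, 2, 3):
    for subset in combinations(axes, r):
        cols = [axes[a] for a in subset]
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                            random_state=0)
        score = cross_val_score(clf, X[:, cols], y, cv=3).mean()
        print(f"axes {subset}: accuracy {score:.2f}")
```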

    Artificial Intelligence for Cognitive Health Assessment: State-of-the-Art, Open Challenges and Future Directions

    The subjectivity and inaccuracy of in-clinic Cognitive Health Assessments (CHA) have led many researchers to explore ways to automate the process, making it more objective and meeting the needs of the healthcare industry. Artificial Intelligence (AI) and machine learning (ML) have emerged as the most promising approaches to automating CHA. In this paper, we explore the background of CHA and review the extensive recent research in this domain to provide a comprehensive survey of the state of the art. In particular, a careful selection of significant works published in the literature is reviewed to elaborate a range of enabling technologies and AI/ML techniques used for CHA, including conventional supervised and unsupervised machine learning, deep learning, reinforcement learning, natural language processing, and image processing. Furthermore, we provide an overview of various means of data acquisition and the benchmark datasets. Finally, we discuss open issues and challenges in using AI and ML for CHA, along with possible solutions. In summary, this paper presents CHA tools, lists data acquisition methods for CHA, reviews technological advancements and the use of AI for CHA, and discusses open issues and challenges in the domain. We hope this first-of-its-kind survey will contribute significantly to identifying research gaps in the complex and rapidly evolving interdisciplinary field of mental health.

    Human Activity Recognition (HAR) Using Wearable Sensors and Machine Learning

    Humans engage in a wide range of simple and complex activities. Human Activity Recognition (HAR) is typically framed as a classification problem in computer vision and pattern recognition: recognizing various human activities. Recent technological advancements, the miniaturization of electronic devices, and the deployment of cheaper and faster data networks have propelled environments augmented with contextual and real-time information, such as smart homes and smart cities. These context-aware environments, alongside smart wearable sensors, have opened the door to numerous opportunities for adding value and personalized services for citizens. Vision-based and sensor-based HAR find diverse applications in healthcare, surveillance, sports, event analysis, Human-Computer Interaction (HCI), rehabilitation engineering, and occupational science, among others, resulting in significantly improved human safety and quality of life. Despite being an active research area for decades, HAR still faces challenges in gesture complexity, computational cost and energy consumption on small devices, and data annotation limitations.

    In this research, we investigate methods to sufficiently characterize and recognize complex human activities, with the aim of improving recognition accuracy, reducing computational cost and energy consumption, and creating a research-grade sensor data repository to advance research and collaboration. We examine the feasibility of detecting natural human gestures in common daily activities. Specifically, we utilize smartwatch accelerometer data and structured local context attributes, applying AI algorithms to determine the complex gesture activities of medication-taking, smoking, and eating.

    This dissertation is centered on modeling human activity and applying machine learning to automatically detect specific activities from smartwatch accelerometer data. Our work stands out as the first to model human activity from wearable sensors with a linguistic representation of grammar and syntax, deriving clear semantics for complex activities whose alphabet comprises atomic activities. We apply machine learning to learn and predict complex human activities, and demonstrate the use of one of our unified models to recognize two activities with a smartwatch: medication-taking and smoking.

    Another major part of this dissertation addresses HAR activity misalignment through edge-based computing at the points of data origination, leading to improved rapid data annotation, albeit under assumptions of subject fidelity in demarcating gesture start and end sections. Lastly, the dissertation describes a theoretical framework for implementing a library of shareable human activities. The results of this work can be applied to build a rich portal of usable human activity models, easily installable on handheld mobile devices such as phones or smart wearables, to assist human agents in discerning activities of daily living; this is akin to a social network of human gestures or capability models. The goal of such a framework is to put the power of HAR into the hands of everyday users and democratize the service to the public by enabling persons with special skills to share their skills or abilities through downloadable, usable trained models.
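
    As a hypothetical sketch of the grammar-based modeling described above, complex activities can be treated as words over an alphabet of atomic gestures. The gesture symbols and patterns below are illustrative, not the dissertation's actual grammar.

```python
# Complex activities as patterns over an alphabet of atomic gestures.
# Assumed symbols: R = raise hand to mouth, P = pause at mouth,
# L = lower hand. Patterns are illustrative only.
import re

GRAMMAR = {
    "medication-taking": re.compile(r"R P L"),
    "smoking": re.compile(r"(R P L ?){3,}"),  # repeated hand-to-mouth
}

def parse(gesture_stream: str) -> list[str]:
    """Return the complex activities whose pattern matches the stream."""
    return [name for name, pat in GRAMMAR.items()
            if pat.search(gesture_stream)]

print(parse("R P L"))              # ['medication-taking']
print(parse("R P L R P L R P L"))  # both patterns match here
```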