
    FACTORS ASSOCIATED WITH MUSCULOSKELETAL DISORDERS AMONG CLEANING SERVICE WORKERS AT RSUP DR. WAHIDIN SUDIROHUSODO MAKASSAR

    ABSTRACT: Musculoskeletal disorders are among the most frequently encountered disorders in nearly every type of work, whether light, moderate, or heavy. This study aimed to identify the factors associated with musculoskeletal disorders among cleaning service workers at RSUP Dr. Wahidin Sudirohusodo Makassar in 2013. The study was an analytic observational study with a cross-sectional design, and the sample was drawn using exhaustive sampling. The results show that, among cleaning service workers at RSUP Dr. Wahidin Sudirohusodo Makassar in 2013, the prevalence of severe musculoskeletal complaints was 49.1% and of mild complaints 50.9%. Age (p = 0.000 < 0.05), sex (p = 0.051), length of employment (p = 0.000 < 0.05), and working posture (p = 0.000 < 0.05) were associated with musculoskeletal disorders, whereas daily working hours (p = 0.686 > 0.05) showed no association. An ergonomic working posture should be maintained while performing work in order to reduce both mild and severe musculoskeletal disorders, and workers in the older age group or with more than 3 years of employment should pay particular attention to their physical fitness.
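    The abstract reports p-values for associations between categorical worker characteristics and complaint severity but does not name the statistical test used. As an illustration only, the sketch below shows how such a cross-tabulated association could be checked with a chi-square test of independence; the contingency counts and variable grouping are hypothetical and are not data from the study.

```python
# Hypothetical illustration: testing whether age group is associated with
# severity of musculoskeletal complaints using a chi-square test.
# The counts below are invented for the example; they are NOT study data.
from scipy.stats import chi2_contingency

# Rows: age group (younger, older); columns: complaint severity (mild, severe)
table = [
    [20, 8],   # younger workers: 20 mild, 8 severe (hypothetical)
    [9, 20],   # older workers:   9 mild, 20 severe (hypothetical)
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")

# An association would be inferred when p < 0.05, mirroring the
# significance threshold used in the abstract.
```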

    Robust CNN architecture for classification of reach and grasp actions from neural correlates: an edge device perspective

    Brain-computer interface (BCI) systems traditionally use machine learning (ML) algorithms that require extensive signal processing and feature extraction. Deep learning (DL)-based convolutional neural networks (CNNs) have recently achieved state-of-the-art electroencephalogram (EEG) signal classification accuracy, but CNN models are complex and computationally intensive, making them difficult to port to edge devices for mobile and efficient BCI systems. To address this problem, a lightweight CNN architecture for efficient EEG signal classification is proposed. The proposed model combines a convolution layer, which extracts spatial features from the signal, with a separable convolution layer, which extracts spatial features from each channel. For evaluation, the performance of the proposed model is compared with that of three models from the literature, EEGNet, DeepConvNet, and EffNet, on two embedded devices, the Nvidia Jetson Xavier NX and the Jetson Nano. The results of a multivariate two-way ANOVA (MANOVA) show a significant difference between the accuracies of the ML baselines and the proposed model. Among the DL models, the proposed model, EEGNet, DeepConvNet, and EffNet achieved average accuracies (mean ± standard deviation) of 92.44 ± 4.30, 90.76 ± 4.06, 92.89 ± 4.23, and 81.69 ± 4.22, respectively. In terms of inference time, the proposed model performs better than the other models on both devices, achieving 1.9 s on the Jetson Xavier NX and 16.1 s on the Jetson Nano. For power consumption, the proposed model shows significant MANOVA results (p < 0.05) on both the Jetson Nano and the Xavier NX. Overall, the results show that the proposed model provides improved classification with lower power consumption and inference time on embedded platforms.
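    The abstract describes only the general pattern of the architecture (a standard convolution followed by a separable convolution feeding a small classifier). The sketch below illustrates that pattern in Keras; the input shape, filter counts, kernel sizes, and number of classes are assumptions for illustration and are not the authors' published configuration.

```python
# Minimal sketch of a lightweight EEG classifier in the spirit of the
# abstract: a standard convolution across electrodes followed by a
# depthwise-separable convolution along time. All hyperparameters here
# (shapes, filters, kernels, classes) are assumed, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

N_CHANNELS = 12   # assumed number of EEG electrodes
N_SAMPLES = 256   # assumed samples per trial window
N_CLASSES = 2     # e.g. reach vs. grasp

model = models.Sequential([
    layers.Input(shape=(N_CHANNELS, N_SAMPLES, 1)),
    # Standard convolution: mixes information across electrodes (spatial features).
    layers.Conv2D(8, kernel_size=(N_CHANNELS, 1), activation="elu"),
    layers.BatchNormalization(),
    # Separable convolution: cheap per-channel filtering along the time axis.
    layers.SeparableConv2D(16, kernel_size=(1, 16), padding="same", activation="elu"),
    layers.AveragePooling2D(pool_size=(1, 4)),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(N_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()  # a small parameter count is the point for edge deployment
```

    Separable convolutions keep the parameter and multiply-accumulate counts low, which is the usual rationale for this layer choice when targeting devices such as the Jetson Nano.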

    Eye and Voice-Controlled Human Machine Interface System for Wheelchairs Using Image Gradient Approach

    © 2020 The Author(s). This is an open access article distributed under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Rehabilitative mobility aids are used extensively by physically impaired people. Efforts are being made to develop human machine interfaces (HMIs) that manipulate biosignals to better control electromechanical mobility aids, especially wheelchairs. Creating precise control commands such as move forward, left, right, backward, and stop via biosignals in an appropriate HMI is the central challenge, as people with a high level of disability (quadriplegia, paralysis, etc.) are unable to drive conventional wheelchairs. Therefore, a novel system driven by optical signals, addressing the needs of such a physically impaired population, is introduced in this paper. The system is divided into two parts: the first comprises detection of eyeball movements together with processing of the optical signal, and the second encompasses the mechanical assembly module, i.e., control of the wheelchair through motor driving circuitry. A web camera is used to capture real-time images, and the processor is a Raspberry Pi running the Linux operating system. To make the system more congenial and reliable, a voice-controlled mode is incorporated in the wheelchair. To appraise the system's performance, a basic wheelchair skill test (WST) was carried out. Basic skills such as movement on plain and rough surfaces in the forward and reverse directions and turning capability were analyzed for comparison with other existing wheelchair setups on the basis of controlling mechanisms, compatibility, design models, and usability in diverse conditions. The system operates successfully with an average response time of 3 s for the eye-controlled mode and 3.4 s for the voice-controlled mode.
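    The title names an "image gradient approach" to eye control but the abstract does not spell out the algorithm. Below is a minimal, self-contained sketch of one well-known gradient-based pupil localization idea (a simplified Timm-Barth-style objective) together with a hypothetical mapping from pupil position to wheelchair commands; the thresholds, grid step, and command mapping are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: locate the pupil centre in a grayscale eye patch by
# finding the point whose displacement vectors align best with the image
# gradients, then map the horizontal position to a command.
# Parameters and thresholds are assumptions for this example only.
import cv2
import numpy as np

def pupil_center_by_gradients(eye_gray):
    """Estimate the pupil centre via a simplified gradient-alignment score."""
    eye = cv2.GaussianBlur(eye_gray, (5, 5), 0).astype(np.float64)
    gx = cv2.Sobel(eye, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(eye, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    # Keep only strong gradients (likely the iris/sclera boundary), normalised.
    mask = mag > np.mean(mag)
    gx = np.where(mask, gx / (mag + 1e-9), 0.0)
    gy = np.where(mask, gy / (mag + 1e-9), 0.0)

    h, w = eye.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best_score, best_c = -1.0, (w // 2, h // 2)
    # Evaluate candidate centres on a coarse grid to keep the sketch cheap.
    for cy in range(0, h, 4):
        for cx in range(0, w, 4):
            dx, dy = xs - cx, ys - cy
            norm = np.hypot(dx, dy) + 1e-9
            score = np.mean(np.maximum(0.0, (dx * gx + dy * gy) / norm) ** 2)
            if score > best_score:
                best_score, best_c = score, (cx, cy)
    return best_c

def command_from_center(cx, eye_width):
    """Map the horizontal pupil position to a command (hypothetical thresholds)."""
    ratio = cx / eye_width
    if ratio < 0.4:
        return "LEFT"
    if ratio > 0.6:
        return "RIGHT"
    return "FORWARD"
```

    In a full pipeline the eye region would first be cropped from the webcam frame (e.g. with a face/eye detector) before this step, and the resulting command would be debounced before being sent to the motor driver.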