Walking-assistive devices require adaptive control methods to ensure smooth
transitions between various modes of locomotion. For this purpose, detecting
human locomotion modes (e.g., level walking or stair ascent) in advance is
crucial for improving the intelligence and transparency of such robotic
systems. This study proposes Deep-STF, a unified end-to-end deep learning model
designed for integrated feature extraction in spatial, temporal, and frequency
dimensions from surface electromyography (sEMG) signals. Our model enables
accurate and robust continuous prediction of nine locomotion modes and 15
transitions at varying prediction time intervals, ranging from 100 to 500 ms.
In addition, we introduce the concept of 'stable prediction time' as a
distinct metric to quantify prediction efficiency. This term refers to the
duration during which consistent and accurate predictions of mode transitions
are made, measured from the time of the fifth correct prediction to the
occurrence of the critical event leading to the task transition. This
distinction between stable prediction time and prediction time is vital as it
underscores our focus on the precision and reliability of mode transition
predictions. Experimental results showcased Deep-STF's state-of-the-art prediction
performance across diverse locomotion modes and transitions, relying solely on
sEMG data. When forecasting 100 ms ahead, Deep-STF surpassed CNN and other
machine learning techniques, achieving an outstanding average prediction
accuracy of 96.48%. Even with an extended 500 ms prediction horizon, accuracy
only marginally decreased to 93.00%. The average stable prediction times for
detecting upcoming transitions ranged from 28.15 to 372.21 ms across the
100-500 ms prediction horizons.

Comment: 10 pages, 7 figures
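The stable prediction time metric described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the "fifth correct prediction" means the fifth consecutive correct prediction of the upcoming mode before the critical event, and that timestamps are in milliseconds; the function name and signature are hypothetical.

```python
def stable_prediction_time(timestamps, predictions, true_label, event_time):
    """Return the stable prediction time in ms, or None if never stable.

    Hypothetical reading of the metric: the duration from the fifth
    consecutive correct prediction of the upcoming locomotion mode
    (`true_label`) to the critical event at `event_time` that triggers
    the transition. Predictions at or after the event are ignored.
    """
    streak = 0
    for t, pred in zip(timestamps, predictions):
        if t >= event_time:
            break  # only predictions made before the transition count
        if pred == true_label:
            streak += 1
            if streak == 5:
                return event_time - t  # fifth correct prediction reached
        else:
            streak = 0  # an incorrect prediction resets stability
    return None
```

For example, with predictions sampled every 10 ms where the first sample is wrong and the rest correctly anticipate stair ascent before an event at 100 ms, the fifth consecutive correct prediction occurs at 50 ms, giving a stable prediction time of 50 ms.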