This paper presents a simple, low-cost system for recognizing spoken commands with a piezoelectric contact microphone to control prosthetic or exoskeleton devices. Unlike conventional air-conduction microphones, the contact microphone picks up vibrations directly from the skin near the throat, which minimizes interference from ambient noise and improves privacy. The captured analog signal was amplified with an LM386 module and digitized by an ESP32 microcontroller at a sampling rate of 1 kHz. Data were recorded for three classes: “OPEN,” “CLOSE,” and general noise or silence, and processed with a sliding-window approach using mean-centering and light data augmentation. A 1D Convolutional Neural Network (CNN) was trained on these segments to classify the commands in real time, achieving a validation accuracy of up to 95.74% across multiple training sessions. Real-time classification was also implemented, displaying both the input waveform and the predicted output with confidence scores. These results demonstrate that contact-microphone-based speech recognition can be a practical and efficient method for hands-free control in assistive technology.
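As a rough sketch of the sliding-window preprocessing described above, the segmentation and per-window mean-centering could be implemented as follows. The window length and stride used here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def segment(signal, win_len=256, stride=64):
    """Slice a 1-D signal into overlapping windows and mean-center each.

    win_len and stride are hypothetical parameters; the abstract does
    not specify the window size or hop used in the actual system.
    """
    windows = []
    for start in range(0, len(signal) - win_len + 1, stride):
        w = signal[start:start + win_len].astype(np.float32)
        windows.append(w - w.mean())  # remove per-window DC offset
    return np.stack(windows)

# Example: one second of a 1 kHz signal with a DC offset
sig = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.5
X = segment(sig)  # each row is one zero-mean window fed to the CNN
```

Each resulting window would then be passed to the 1D CNN as a fixed-length input; mean-centering removes the DC offset introduced by the analog amplifier stage.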