This paper introduces a novel system design leveraging Vuzix Blade 2 smart glasses to enhance the mobility and independence of visually impaired individuals. The study critically examines existing assistive navigation and object detection technologies, identifying their limitations and gaps. The proposed system integrates real-time object detection, distance estimation, and optical character recognition (OCR), providing auditory feedback through a robust, efficient pipeline, and enhances the independence and safety of visually impaired individuals, particularly when navigating university campuses. A dataset comprising 15,951 annotated images from the university campus was used for training and evaluation. A comparative analysis of three YOLOv8 models (YOLOv8-N, YOLOv8-S, and YOLOv8-M) was conducted, balancing accuracy and computational efficiency to optimise system performance. The pipeline offers a scalable framework for inclusive AR- and AI-based assistive systems. Results show high object detection accuracy (precision: 0.90, recall: 0.83). Distance estimation was validated using a geometric size-based calculation that relates the pixel width of a detected object to the calibrated focal length and the object's known real-world dimensions, achieving an average absolute error of 0.33 m. Results demonstrate the system's capability to detect obstacles within 1 m, provide reliable distance estimation, and convert text into speech, validating its potential for real-world applications. This study emphasises the significant role of AI-driven solutions in advancing assistive technologies, paving the way for more accessible and inclusive navigation systems. Compared with recent assistive systems such as Smart Cane, OrCam MyEye, and IrisVision, the proposed system demonstrates superior integration of detection, text recognition, and real-time feedback within a lightweight wearable device.
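To make the size-based distance estimator concrete, the sketch below shows the standard pinhole-camera relation that the abstract describes: the distance D to an object of known real-world width W follows from the calibrated focal length f (in pixels) and the object's bounding-box width w (in pixels) as D = f · W / w. This is a minimal illustration under that assumption, not the paper's exact implementation; the focal length, object class, and widths used in the example are hypothetical placeholders.

```python
def estimate_distance(real_width_m: float,
                      focal_length_px: float,
                      bbox_width_px: float) -> float:
    """Estimate distance to an object of known real-world width.

    From the pinhole projection w_px = f_px * W / D, solving for D
    gives D = f_px * W / w_px.
    """
    return focal_length_px * real_width_m / bbox_width_px


if __name__ == "__main__":
    # Hypothetical example: a 0.80 m wide door whose detected bounding
    # box spans 640 pixels, with a calibrated focal length of 800 px.
    d = estimate_distance(real_width_m=0.80,
                          focal_length_px=800.0,
                          bbox_width_px=640.0)
    print(f"Estimated distance: {d:.2f} m")  # -> 1.00 m
```

In a pipeline like the one described, the bounding-box width would come from the YOLOv8 detector and the real-world width from a per-class lookup table, with the focal length obtained once through camera calibration.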