Mastery level, attitudes, and interest of Kolej Kemahiran Tinggi MARA students toward the English language subject
This study was conducted to identify the mastery level, attitudes, and interest of Kolej Kemahiran Tinggi MARA Sri Gading students toward English. The study is descriptive in design, better known as a survey method. A total of 325 Diploma in Construction Technology students from Kolej Kemahiran Tinggi MARA in the Batu Pahat district were selected as the sample. Data obtained through a questionnaire instrument were analysed to obtain means, standard deviations, and Pearson correlation coefficients to examine relationships among the findings, while frequencies and percentages were used to measure student mastery. The findings show that the students' mastery of English is at a moderate level, and that the main factor influencing English mastery is interest, followed by attitude. The Pearson correlation results also show a significant relationship between attitude and English mastery and between interest and English mastery: the more positive the students' attitudes and interest toward the teaching and learning of English, the higher their achievement. It is hoped that the results of this study will help students improve their mastery of English by cultivating a positive attitude and greater interest in the language, and that the study will serve as a guide for parties involved in future research.
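The study's core statistic, the Pearson correlation coefficient between attitude or interest scores and English mastery, can be computed as in the minimal sketch below; the scores used are made up for illustration and are not the study's data:

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical attitude scores (1-5 Likert means) vs. English mastery marks
attitude = [3.2, 4.1, 2.8, 4.5, 3.9]
mastery = [55, 72, 48, 80, 65]
print(pearson_r(attitude, mastery))  # a value near +1 indicates a strong positive relationship
```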
SLAM for Visually Impaired People: A Survey
In recent decades, several assistive technologies for visually impaired and
blind (VIB) people have been developed to improve their ability to navigate
independently and safely. At the same time, simultaneous localization and
mapping (SLAM) techniques have become sufficiently robust and efficient to be
adopted in the development of assistive technologies. In this paper, we first
report the results of an anonymous survey conducted with VIB people to
understand their experience and needs; we focus on digital assistive
technologies that help them with indoor and outdoor navigation. Then, we
present a literature review of assistive technologies based on SLAM. We discuss
proposed approaches and indicate their pros and cons. We conclude by presenting
future opportunities and challenges in this domain.
Comment: 26 pages, 5 tables, 3 figures
An indoor navigation architecture using variable data sources for blind and visually impaired persons
Contrary to outdoor positioning and navigation systems, there is no equivalent global solution for indoor environments. Usually, the deployment of an indoor positioning system must be adapted case by case, according to the infrastructure and the objective of the localization. A particularly delicate case concerns persons who are blind or visually impaired. A robust and easy-to-use indoor navigation solution would be extremely useful, but it would also be particularly difficult to develop, given the special requirements of a system that would have to be more accurate and user-friendly than a general solution. This paper presents a contribution to this subject by proposing a hybrid indoor positioning system that adapts to the surrounding indoor structure and deals with different types of signals to increase accuracy. This would permit lower deployment costs, since deployment could be done gradually, beginning with the likely existing Wi-Fi infrastructure to obtain fair accuracy, and moving up to high accuracy using visual tags and NFC tags where necessary and possible.
Comparative analysis of computer-vision and BLE technology based indoor navigation systems for people with visual impairments
Background: A considerable number of indoor navigation systems have been proposed to inform people with visual impairments (VI) about their surroundings. These systems leverage several technologies, such as computer vision, Bluetooth Low Energy (BLE), and other techniques, to estimate the position of a user in indoor areas. Computer-vision based systems use several techniques, including matching pictures, classifying captured images, and recognizing visual objects or visual markers. BLE-based systems utilize BLE beacons attached in the indoor areas as sources of radio-frequency signals to localize the position of the user. Methods: In this paper, we examine the performance and usability of two computer-vision based systems and a BLE-based system. The first is a computer-vision based system, called CamNav, that uses a trained deep learning model to recognize locations; the second, called QRNav, utilizes visual markers (QR codes) to determine locations. A field test with 10 blindfolded users was conducted while using the three navigation systems. Results: The results obtained from the navigation experiment and the feedback from blindfolded users show that the QRNav and CamNav systems are more efficient than the BLE-based system in terms of accuracy and usability. The error in the BLE-based application is more than 30% higher compared to the computer-vision based systems, CamNav and QRNav. Conclusions: The developed navigation systems are able to provide reliable assistance for the participants during real-time experiments. Some of the participants required minimal external assistance while moving through the junctions in the corridor areas. Computer-vision technology demonstrated its superiority over BLE technology in assistive systems for people with visual impairments. 2019 The Author(s).
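As a rough illustration of how a BLE-based system of this kind can localize a user, the sketch below converts beacon RSSI readings to distances with a log-distance path-loss model and snaps the user to the nearest beacon. The beacon names, positions, calibrated `tx_power`, and path-loss exponent are all assumed values for illustration, not details from the paper:

```python
def rssi_to_distance(rssi, tx_power=-59, n=2.0):
    """Log-distance path-loss model: estimate distance (m) from RSSI (dBm).

    tx_power is the calibrated RSSI at 1 m; n is the environment's
    path-loss exponent. Both are assumed values here.
    """
    return 10 ** ((tx_power - rssi) / (10 * n))

def nearest_beacon(readings, beacon_positions):
    """Locate the user at the position of the closest (strongest) beacon."""
    best = min(readings, key=lambda b: rssi_to_distance(readings[b]))
    return beacon_positions[best]

# Hypothetical beacons at known corridor positions (x, y in metres)
positions = {"b1": (0.0, 0.0), "b2": (5.0, 0.0), "b3": (10.0, 0.0)}
readings = {"b1": -75, "b2": -62, "b3": -80}  # measured RSSI in dBm
print(nearest_beacon(readings, positions))  # → (5.0, 0.0), b2 is strongest
```

Real deployments typically smooth RSSI over time and trilaterate over several beacons; nearest-beacon snapping is the simplest variant.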
Assessment of Audio Interfaces for use in Smartphone Based Spatial Learning Systems for the Blind
Recent advancements in the field of indoor positioning and mobile computing promise the development of smartphone-based indoor navigation systems. Currently, the preliminary implementations of such systems only use visual interfaces, meaning that they are inaccessible to blind and low-vision users. According to the World Health Organization, about 39 million people in the world are blind. This necessitates the development and evaluation of non-visual interfaces for indoor navigation systems that support safe and efficient spatial learning and navigation behavior. This thesis research has empirically evaluated several different approaches through which spatial information about the environment can be conveyed through audio. In the first experiment, blindfolded participants standing at an origin in a lab learned the distance and azimuth of target objects that were specified by four audio modes. The first three modes were perceptual interfaces and did not require cognitive mediation on the part of the user. The fourth mode was a non-perceptual mode in which object descriptions were given via spatial language using clock-face angles. After learning the targets through the four modes, the participants spatially updated the position of the targets and localized them by walking to each of them from two indirect waypoints. The results indicate that the hand-motion-triggered mode was better than the head-motion-triggered mode and comparable to the auditory snapshot mode. In the second experiment, blindfolded participants learned target object arrays with two spatial audio modes and a visual mode. In the first mode head tracking was enabled, whereas in the second mode hand tracking was enabled. In the third mode, serving as a control, the participants were allowed to learn the targets visually. We again compared spatial updating performance with these modes and found no significant performance differences between them.
These results indicate that we can develop 3D audio interfaces on sensor-rich, off-the-shelf smartphone devices without the need for expensive head-tracking hardware. Finally, a third study evaluated room-layout learning performance by blindfolded participants with an Android smartphone. Three perceptual modes and one non-perceptual mode were tested for cognitive map development. As expected, the perceptual interfaces performed significantly better than the non-perceptual, language-based mode in an allocentric pointing judgment and in overall subjective rating. In sum, the perceptual interfaces led to better spatial learning performance and higher user ratings, and there was no significant difference between cognitive maps developed through spatial audio based on tracking the user's head or hand. These results have important implications, as they support the development of accessible, perceptually driven interfaces for smartphones.
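The non-perceptual mode described above conveys targets via spatial language with clock-face angles. A minimal sketch of that mapping follows; the function names and message format are our own, not the thesis's code:

```python
def clockface(azimuth_deg):
    """Convert an azimuth (degrees clockwise from straight ahead)
    to a clock-face direction, 12 o'clock being straight ahead."""
    hour = round((azimuth_deg % 360) / 30) % 12
    return f"{12 if hour == 0 else hour} o'clock"

def describe(target, azimuth_deg, distance_m):
    """Spatial-language description of a target, as a screen reader might speak it."""
    return f"{target}, {distance_m:.0f} metres, at {clockface(azimuth_deg)}"

print(describe("door", 90, 3))   # → door, 3 metres, at 3 o'clock
print(describe("chair", 350, 2)) # → chair, 2 metres, at 12 o'clock
```

Unlike the perceptual interfaces, this mapping requires the listener to mentally convert the clock hour back into an angle, which is the cognitive mediation the thesis contrasts against.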
Indoor navigation for the visually impaired : enhancements through utilisation of the Internet of Things and deep learning
Wayfinding and navigation are essential aspects of independent living that rely heavily on the sense of vision. Walking in a complex building requires knowing one's exact location to find a suitable path to the desired destination, avoiding obstacles, and monitoring orientation and movement along the route. People who do not have access to sight-dependent information, such as that provided by signage, maps and environmental cues, can encounter challenges in achieving these tasks independently. They can rely on assistance from others or maintain their independence by using assistive technologies and the resources provided by smart environments. Several solutions have adapted technological innovations to address indoor navigation over the last few years. However, there remains a significant lack of a complete solution that meets the navigation requirements of visually impaired (VI) people; no single technology can resolve all the navigation difficulties they face. A hybrid solution using Internet of Things (IoT) devices and deep learning techniques to discern the patterns of an indoor environment may help VI people gain the confidence to travel independently. This thesis aims to improve the independence and enhance the journeys of VI people in indoor settings with the proposed smartphone-based framework. The thesis proposes a novel framework, Indoor-Nav, to provide a VI-friendly path that avoids obstacles and to predict the user's position. Its components include Ortho-PATH, Blue Dot for VI People (BVIP), and a deep learning-based indoor positioning model. The work establishes a novel collision-free pathfinding algorithm, Orth-PATH, to generate a VI-friendly path by sensing a grid-based indoor space. Further, to ensure correct movement, BVIP uses beacons and a smartphone to monitor the movements and relative position of the moving user.
In dark areas without external devices, the research tests the feasibility of using sensory information from a smartphone with a pre-trained, regression-based deep learning model to predict the user's absolute position. The work accomplishes a diverse range of simulations and experiments to confirm the performance and effectiveness of the proposed framework and its components. The results show that Indoor-Nav is the first pathfinding algorithm of its kind to provide a novel path reflecting the needs of VI people. The approach designs a path alongside walls while avoiding obstacles, and this research benchmarks the approach against other popular pathfinding algorithms. Further, this research develops a smartphone-based application to test the trajectories of a moving user in an indoor environment.
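The described behaviour of generating a collision-free path that runs alongside walls on a sensed grid can be illustrated with a breadth-first search that expands wall-adjacent cells first. This is an illustrative sketch only, under our own assumptions about the grid encoding, and not the thesis's actual Orth-PATH algorithm:

```python
from collections import deque

def wall_hugging_path(grid, start, goal):
    """BFS over free cells ('.') that prefers cells adjacent to a wall ('#'),
    so the shortest path found tends to run alongside walls.
    The grid boundary also counts as a wall. Illustrative sketch only."""
    rows, cols = len(grid), len(grid[0])
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def free(r, c):
        return 0 <= r < rows and 0 <= c < cols and grid[r][c] == "."

    def near_wall(r, c):
        return any(not free(r + dr, c + dc) for dr, dc in moves)

    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            break
        steps = [(r + dr, c + dc) for dr, dc in moves]
        # Expand wall-adjacent neighbours first so ties favour hugging walls.
        for nxt in sorted(steps, key=lambda p: not near_wall(*p)):
            if free(*nxt) and nxt not in came_from:
                came_from[nxt] = (r, c)
                frontier.append(nxt)
    if goal not in came_from:
        return None  # goal unreachable
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

grid = ["....",
        ".##.",
        "...."]
print(wall_hugging_path(grid, (0, 0), (2, 3)))
```

Because BFS expands in layers, the result is still a shortest collision-free path; the wall preference only breaks ties between equally short routes.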
Exploring the Use of Wearables to Enable Indoor Navigation for Blind Users
One of the challenges that people with visual impairments (VI) have to have to confront daily, is navigating independently through foreign or unfamiliar spaces.Navigating through unfamiliar spaces without assistance is very time consuming and leads to lower mobility. Especially in the case of indoor environments where the use of GPS is impossible, this task becomes even harder.However, advancements in mobile and wearable computing pave the path to new cheap assistive technologies that can make the lives of people with VI easier.Wearable devices have great potential for assistive applications for users who are blind as they typically feature a camera and support hands and eye free interaction. Smart watches and heads up displays (HUDs), in combination with smartphones, can provide a basis for development of advanced algorithms, capable of providing inexpensive solutions for navigation in indoor spaces. New interfaces are also introduced making the interaction between users who are blind and mo-bile devices more intuitive.This work presents a set of new systems and technologies created to help users with VI navigate indoor environments. The first system presented is an indoor navigation system for people with VI that operates by using sensors found in mo-bile devices and virtual maps of the environment. The second system presented helps users navigate large open spaces with minimum veering. Next a study is conducted to determine the accuracy of pedometry based on different body placements of the accelerometer sensors. Finally, a gesture detection system is introduced that helps communication between the user and mobile devices by using sensors in wearable devices
A comparative study in real-time scene sonification for visually impaired people
In recent years, with the development of depth cameras and scene detection algorithms, a wide variety of electronic travel aids for visually impaired people have been proposed. However, it is still challenging to convey scene information to visually impaired people efficiently. In this paper, we propose three different auditory-based interaction methods, i.e., depth image sonification, obstacle sonification, and path sonification, which convey raw depth images, obstacle information and path information, respectively, to visually impaired people. The three sonification methods are compared comprehensively through a field experiment attended by twelve visually impaired participants. The results show that the sonification of high-level scene information, such as the direction of a pathway, is easier to learn and adapt to, and is more suitable for point-to-point navigation. In contrast, through the sonification of low-level scene information, such as raw depth images, visually impaired people can understand the surrounding environment more comprehensively. Furthermore, no single interaction method is best suited for all participants, and visually impaired individuals need a period of time to find the interaction method that suits them best. Our findings highlight the features and differences of the three scene detection algorithms and the corresponding sonification methods. The results provide insights into the design of electronic travel aids, and the conclusions can also be applied in other fields, such as sound feedback in virtual reality applications.
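As an illustration of the kind of mapping an obstacle-sonification method might use, the sketch below maps an obstacle's direction and distance to stereo pan, pitch, and beep rate. The parameter ranges are assumed for illustration and are not taken from the paper:

```python
def sonify_obstacle(azimuth_deg, distance_m, max_distance=5.0):
    """Map an obstacle to simple audio parameters (illustrative sketch):
    - pan:   -1 (full left) .. +1 (full right), from azimuth in [-90, 90]
    - pitch: closer obstacles sound higher (220-880 Hz)
    - rate:  closer obstacles beep faster (1-10 beeps/s)
    """
    closeness = max(0.0, 1.0 - min(distance_m, max_distance) / max_distance)
    pan = max(-1.0, min(1.0, azimuth_deg / 90.0))
    pitch_hz = 220 + 660 * closeness
    beeps_per_s = 1 + 9 * closeness
    return pan, pitch_hz, beeps_per_s

print(sonify_obstacle(45, 1.0))   # obstacle ahead-right, fairly close
print(sonify_obstacle(0, 10.0))   # obstacle far ahead: lowest pitch, slowest rate
```

A depth-image sonification would instead drive many such streams at once (one per image region), which is exactly the low-level versus high-level trade-off the experiment measures.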