519 research outputs found

    Proficiency level, attitude and interest of Kolej Kemahiran Tinggi MARA students towards the English language subject

    This study was conducted to identify the proficiency level, attitude and interest of students at Kolej Kemahiran Tinggi MARA Sri Gading towards the English language. The study is descriptive in design, better known as a survey method. A total of 325 Diploma in Construction Technology students from Kolej Kemahiran Tinggi MARA in the Batu Pahat district were selected as the sample. Data obtained through a questionnaire instrument were analysed to obtain means, standard deviations and the Pearson correlation coefficient in order to examine relationships in the findings, while frequencies and percentages were used to measure student proficiency. The findings show that the students' English proficiency is at a moderate level, and that the main factor influencing English proficiency is interest, followed by attitude. The Pearson correlation analysis also shows a significant relationship between attitude and English proficiency and between interest and English proficiency. The study indicates that the more positive students' attitude and interest towards the teaching and learning of English, the higher their achievement. The findings are expected to help students improve their English proficiency by cultivating a positive attitude and strengthening their interest in English, and to serve as a guide for the parties involved in future research.
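    As a rough illustration of the descriptive statistics and Pearson correlation analysis mentioned in this abstract, the sketch below computes means, standard deviations and correlation coefficients; the data are random placeholders, not the study's actual questionnaire responses.

```python
# Minimal sketch of survey-style descriptive statistics and Pearson correlation.
# The scores below are random placeholders, not the study's real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
attitude = rng.uniform(1, 5, size=325)                  # Likert-style scores
interest = rng.uniform(1, 5, size=325)
proficiency = 0.5 * attitude + 0.5 * interest + rng.normal(0, 0.5, size=325)

for name, scores in [("attitude", attitude), ("interest", interest)]:
    print(name, "mean:", round(scores.mean(), 2), "SD:", round(scores.std(ddof=1), 2))
    r, p = stats.pearsonr(scores, proficiency)          # relationship with proficiency
    print(name, "vs proficiency: r =", round(r, 2), ", p =", round(p, 4))
```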

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions. Autonomous vehicles are at the heart of the developments that propel intelligent mobility. Due to the high dimensionality and complexity of real-world environments, data-driven solutions need to become commonplace in intelligent mobility, as it is near impossible to manually program decision-making logic for every eventuality. While recent developments in data-driven solutions such as deep learning allow machines to learn effectively from large datasets, applications of these techniques within safety-critical systems such as driverless cars remain scarce. Autonomous vehicles need to be able to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research is heavily focused on road and highway environments and has discounted pedestrianized areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions can be robust and reliable. Only then will machines be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks of intelligent mobility. Specifically, the thesis investigates multimodal sensor data fusion, machine learning, multimodal deep representation learning, and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive driver policy and therefore make autonomous decisions. To facilitate the autonomous decisions necessary to derive safe driving algorithms, we present algorithms for free-space detection and human activity recognition. Driving these decision-making algorithms are specific datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset. The datasets were collected using an autonomous platform designed and developed in-house as part of this research activity. The proposed framework for free-space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera). It uses an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments. The proposed free-space detection algorithm enables an autonomous vehicle to self-learn, evolve and adapt to environments never encountered before. The results illustrate that the online learning mechanism is superior to one-off training of deep neural networks, which requires large datasets to generalize to unfamiliar surroundings. The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence. This is imperative within the spectrum of intelligent mobility, where an autonomous vehicle should be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging sensors. The proposed algorithm leverages multimodality by using the camera data to identify humans and segment the region of interest in the point-cloud data. The corresponding 3-dimensional data are converted to a Fisher Vector representation before being classified by a deep Convolutional Neural Network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3%. When compared to an alternative point-cloud classifier, PointNet [1], [2], the proposed framework outperformed it on all classes. The developed autonomous testbed for data collection and algorithm validation, as well as the multimodal data-driven solutions for driverless cars, are the major contributions of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by amplifying the development of intelligent driverless vehicles.
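    The free-space detection framework described above pairs an uncertain camera stream with ultrasonic ranging and updates its model online. As a rough illustration only, the sketch below shows one way an uncertainty-gated, self-labelling online update could look; the sensor interface, thresholds and the simple linear classifier are assumptions for illustration, not the thesis's actual implementation.

```python
# Minimal sketch of uncertainty-driven online free-space learning.
# All sensor interfaces and thresholds are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()                     # incremental linear classifier
classes = np.array([0, 1])                  # 0 = occupied, 1 = free space
initialised = False

def ultrasound_label(distance_m, threshold_m=1.5, tolerance_m=0.2):
    """Use the ultrasonic range to self-label a camera patch when the
    reading is unambiguous; return None when uncertainty is too high."""
    if distance_m > threshold_m + tolerance_m:
        return 1                            # confidently free
    if distance_m < threshold_m - tolerance_m:
        return 0                            # confidently occupied
    return None                             # ambiguous: skip this sample

def online_update(patch_features, distance_m):
    """One step of the online loop: label with ultrasound, update the model."""
    global initialised
    label = ultrasound_label(distance_m)
    if label is None:
        return
    X = patch_features.reshape(1, -1)
    y = np.array([label])
    if not initialised:
        model.partial_fit(X, y, classes=classes)
        initialised = True
    else:
        model.partial_fit(X, y)

# Example: feed a stream of (features, range) pairs as the vehicle drives.
rng = np.random.default_rng(0)
for _ in range(100):
    online_update(rng.normal(size=16), rng.uniform(0.3, 3.0))
```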

    Multi-sensor fusion for automated guided vehicle positioning

    This thesis presents a positioning system for Automated Guided Vehicles (AGVs). An AGV is a mobile robot that follows a wire or magnetic tape in the floor to navigate from one point to another in a workspace. AGVs serve in industrial settings to convey materials and products around the manufacturing facility or warehouse, so that manufacturing time and the amount of labour can be reduced. In contrast, the restriction of its movement to the guidance path is considered its main weakness. In order for the AGV to move freely without a guidance path, it is essential to know the current position before navigating to the target place, and the position then has to be updated during movement. For mobile robot positioning and path tracking, two basic techniques are usually used: relative and absolute positioning. Relative positioning techniques are based on measuring the distance travelled by the robot and accumulating it onto its initial position to estimate the current position, which leads to drift error over time. A digital compass, the Global Positioning System (GPS) and landmark-based positioning are examples of absolute positioning techniques, in which the robot position is estimated from a single reading. Absolute positioning does not suffer from drift error, but the system cost is high in the case of landmarks, and GPS suffers from signal blockage inside buildings. The developed positioning system is based on odometry, an accelerometer and a digital compass for path tracking. RFID landmarks installed at predefined positions and an ultrasonic GPS are used to eliminate the drift error in the position estimated from odometry and the accelerometer. A radio-frequency module transfers the sensor readings from the mobile robot to a host PC running a LabVIEW program, which implements the positioning algorithm and a graphical display of the robot position. The experiments conducted illustrate that the developed sensor-fusion positioning system can be integrated with an AGV to replace the ordinary guidance system, giving the AGV flexibility in task manipulation in industrial applications.
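    As a rough illustration of combining relative and absolute positioning as described above, the sketch below dead-reckons from encoder distance and compass heading and corrects the drifted estimate at a landmark; the interfaces and the simple blend-on-landmark correction are assumptions, not the thesis design.

```python
# Minimal sketch of dead-reckoning with landmark correction.
import math

class FusedPositioner:
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y               # estimated position in metres

    def dead_reckon(self, wheel_distance_m, compass_heading_rad):
        """Relative update: accumulate encoder distance along the compass
        heading. This is where drift accumulates over time."""
        self.x += wheel_distance_m * math.cos(compass_heading_rad)
        self.y += wheel_distance_m * math.sin(compass_heading_rad)

    def landmark_fix(self, landmark_xy, weight=1.0):
        """Absolute update: pull the estimate toward a known RFID landmark
        (weight=1.0 resets the estimate; smaller values blend the two)."""
        lx, ly = landmark_xy
        self.x = (1 - weight) * self.x + weight * lx
        self.y = (1 - weight) * self.y + weight * ly

# Example: drive forward with a small heading error, then correct at a landmark.
p = FusedPositioner()
for _ in range(50):
    p.dead_reckon(0.1, math.radians(2))     # 10 cm steps, slightly off-heading
p.landmark_fix((5.0, 0.0), weight=0.8)      # RFID tag at a known position
print(round(p.x, 2), round(p.y, 2))
```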

    Actuators and sensors for application in agricultural robots: A review

    In recent years, with the rapid development of science and technology, agricultural robots have gradually begun to replace humans in various agricultural operations, changing traditional agricultural production methods. Not only is labor input reduced, but production efficiency can also be improved, which contributes to the development of smart agriculture. This paper reviews the core technologies used for agricultural robots in non-structured environments. In addition, we review the technological progress of drive systems, control strategies, end-effectors, robotic arms, environmental perception and other related systems. This research shows that, in a non-structured agricultural environment, by using cameras and light detection and ranging (LiDAR), as well as ultrasonic and satellite navigation equipment, and by integrating sensing, transmission, control and operation, different types of actuators can be innovatively designed and developed to drive the advance of agricultural robots and to meet the delicate and complex requirements of agricultural products as operational objects, such that better productivity and standardization of agriculture can be achieved. In summary, agricultural production is developing toward a data-driven, standardized and unmanned approach, with smart agriculture supported by actuator-driven agricultural robots. This paper concludes with a summary of the main existing technologies and challenges in the development of actuators for applications in agricultural robots, and an outlook regarding the primary development directions of agricultural robots in the near future.

    A Cognitive Approach to Mobile Robot Environment Mapping and Path Planning

    This thesis presents a novel neurophysiologically based navigation system which uses less memory and power than other neurophysiologically based systems, as well as than traditional navigation systems performing similar tasks. This is accomplished by emulating the rodent's specialized navigation and spatial awareness brain cells, as found in and around the hippocampus and entorhinal cortex, at a higher level of abstraction than previously used neural representations. Specifically, the focus of this research is on replicating place cells, boundary cells, head direction cells and grid cells using data structures and logic driven by each cell's interpreted behavior. This method is used along with a unique multimodal source model for place cell activation to create a cognitive map. Path planning is performed using a combination of Euclidean distance path checking, goal memory, and the A* algorithm. Localization is accomplished using simple, low-power sensors, such as a camera, ultrasonic sensors, motor encoders and a gyroscope. The place code data structures are initialized as the mobile robot finds goal locations and other unique locations, and are then linked as paths between goal locations as goals are found during exploration. The place code creates a hybrid cognitive map of metric and topological data. In doing so, much less memory is needed to represent the robot's roaming environment compared to traditional mapping methods, such as occupancy grids. A comparison of the memory and processing savings is presented, as well as of the functional similarities of our design to the rodent's specialized navigation cells.
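    As a rough illustration of the planning step described above, the sketch below runs A* with a Euclidean heuristic over a small topological graph of "place" nodes anchored at metric coordinates; the graph, coordinates and costs are hypothetical, not the thesis's actual place code.

```python
# Minimal sketch of A* over a topological graph of place nodes.
import heapq
import math

# Each place node: (x, y) metric anchor; edges: neighbour -> traversal cost.
places = {"A": (0, 0), "B": (2, 0), "C": (2, 2), "G": (4, 2)}
edges = {"A": {"B": 2.0}, "B": {"A": 2.0, "C": 2.0},
         "C": {"B": 2.0, "G": 2.0}, "G": {"C": 2.0}}

def heuristic(n, goal):
    """Euclidean distance between the metric anchors of two places."""
    (x1, y1), (x2, y2) = places[n], places[goal]
    return math.hypot(x2 - x1, y2 - y1)

def a_star(start, goal):
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in edges[node].items():
            new_g = g + cost
            if new_g < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_g
                heapq.heappush(frontier,
                               (new_g + heuristic(nxt, goal), new_g, nxt, path + [nxt]))
    return None

print(a_star("A", "G"))   # ['A', 'B', 'C', 'G']
```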

    Applications of Intelligent Vision in Low-Cost Mobile Robots

    With the development of intelligent information technology, we have entered an era of 5G and AI. Mobile robots embody both of these technologies, and as such play an important role in future developments. However, the development of perception vision in consumer-grade, low-cost mobile robots is still in its infancy. With the growing popularity of edge computing technology, high-performance vision perception algorithms are expected to be deployed on low-power edge computing chips. Within the context of low-cost mobile robotic solutions, a robot intelligent vision system is studied and developed in this thesis. The thesis proposes and designs the overall framework of the higher-level intelligent vision system. The core system includes automatic robot navigation and obstacle object detection. The core algorithms are deployed on a low-power embedded platform. The thesis analyzes and investigates deep learning neural network algorithms for obstacle object detection in intelligent vision systems. By comparing a variety of open-source object detection neural networks on high-performance hardware platforms and considering the constraints of the target hardware platform, a suitable neural network algorithm is selected. The thesis combines the characteristics and constraints of the low-power hardware platform to further optimize the selected neural network. It introduces the minimum mean square error (MMSE) and moving-average min-max algorithms in the quantization process to reduce the accuracy loss of the quantized model. The results show that the optimized neural network achieves a 20-fold improvement in inference performance on the RK3399PRO hardware platform compared to the original network. The thesis concludes with the application of the above modules and systems to a higher-level intelligent vision system for a low-cost disinfection robot, with further optimization for the hardware platform. The test results show that, while achieving the basic service functions, the robot can accurately identify obstacles ahead and locate and navigate in real time, which greatly enhances the perception capability of the low-cost mobile robot.
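    As a rough illustration of the moving-average min/max idea mentioned above, the sketch below tracks smoothed activation ranges over calibration batches and derives 8-bit quantization parameters; the momentum value and the affine-scale details are assumptions, not the thesis's exact scheme.

```python
# Minimal sketch of moving-average min/max calibration for 8-bit quantization.
import numpy as np

class MovingAvgMinMaxObserver:
    """Track smoothed activation ranges across calibration batches, then
    derive an 8-bit affine quantization scale and zero point."""
    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.min_val = None
        self.max_val = None

    def observe(self, batch):
        b_min, b_max = float(batch.min()), float(batch.max())
        if self.min_val is None:
            self.min_val, self.max_val = b_min, b_max
        else:
            m = self.momentum
            self.min_val = m * self.min_val + (1 - m) * b_min   # smooth outliers
            self.max_val = m * self.max_val + (1 - m) * b_max

    def qparams(self, num_bits=8):
        qmin, qmax = 0, 2 ** num_bits - 1
        scale = (self.max_val - self.min_val) / (qmax - qmin)
        zero_point = int(round(qmin - self.min_val / scale))
        return scale, zero_point

# Example calibration over random "activation" batches.
obs = MovingAvgMinMaxObserver()
rng = np.random.default_rng(0)
for _ in range(20):
    obs.observe(rng.normal(0, 2, size=(32, 64)))
scale, zp = obs.qparams()
print(round(scale, 4), zp)
```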

    Analysis of GPS and UWB positioning system for athlete tracking

    In recent years, wearable performance monitoring systems have become increasingly popular in competitive sports. Wearable devices can provide vital information, including distance covered, velocity, change of direction and acceleration, which can be used to improve athlete performance and prevent injuries. Tracking technology that monitors the movement of an athlete is an important element of sports wearable devices. For tracking, the cheapest option is to use global positioning system (GPS) data; however, its large margins of error are a major concern in many sports. Consequently, indoor positioning systems (IPS) have become popular in sports in recent years, where the ultra-wideband (UWB) positioning sensor is now being used for tracking. IPS promises much higher accuracy but, unlike GPS, it requires a longer set-up time and costs significantly more. In this research, we investigate the suitability of the UWB-based localisation technique for wearable sports performance monitoring systems. We implemented a hardware set-up for both positioning sensors, the UWB and the GPS-based (both 10 Hz and 1 Hz) localisation systems, and then monitored their accuracy in 2D and 3D side by side for the sport of tennis. Our gathered data show a major drawback in the UWB-based localisation system. To address this drawback we introduce an artificial intelligence model, which shows some promising results.
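    As a rough illustration of the tracking metrics mentioned above, the sketch below derives distance covered and instantaneous speed from a stream of 2D position fixes at a fixed sampling rate; the data and rates are hypothetical examples, not measurements from this study.

```python
# Minimal sketch of deriving distance covered and speed from position fixes.
import numpy as np

def movement_metrics(positions_m, rate_hz):
    """positions_m: (N, 2) array of x, y fixes in metres at a fixed rate."""
    steps = np.diff(positions_m, axis=0)            # displacement per sample
    step_dist = np.linalg.norm(steps, axis=1)       # metres per sample
    total_distance = step_dist.sum()
    speed = step_dist * rate_hz                     # metres per second
    return total_distance, speed

# Example: the same straight 10 m run sampled at 10 Hz and at 1 Hz.
t10 = np.linspace(0, 10, 101)                       # 10 s at 10 Hz
track10 = np.column_stack([t10, np.zeros_like(t10)])
t1 = np.linspace(0, 10, 11)                         # 10 s at 1 Hz
track1 = np.column_stack([t1, np.zeros_like(t1)])

d10, v10 = movement_metrics(track10, 10)
d1, v1 = movement_metrics(track1, 1)
print(round(d10, 2), round(v10.mean(), 2))          # 10.0 m, ~1.0 m/s
print(round(d1, 2), round(v1.mean(), 2))            # coarser sampling, same run
```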

    Real-time Simultaneous Localization And Mapping Of Mobile Robots

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2008. The aim of this study is the localization and mapping of unknown indoor environments using a mobile robot equipped with various sensors. The mobile robot interacts with its surroundings using infrared and ultrasonic sensors. Ultrasonic sensors are cheap and effective, but they also suffer from problems arising from their structure; these problems are reduced to a low level during the study. Infrared sensors perform accurate measurements at close range and are therefore used for collision-avoidance safety purposes. The environment map is generated using ultrasonic range finders and a digital compass. In addition, motors with encoders are used to track the localization of the robot. The localization of the robot and the accuracy of the mapping depend largely on the sensors and actuators used in the design. The sensors and actuators were selected according to their sizes, accuracies and interfaces to the microprocessor. Data measured by the sensors are received and processed by the microprocessor and then sent to a remote computer via RF communication for the more complex computation, data storage, localization and map generation.
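    As a rough illustration of mapping from ultrasonic range readings at a known robot pose, the sketch below marks a detected obstacle in a simple grid map using the range, the compass heading and the odometry-derived position; the grid representation, resolution and single-cell update are assumptions for illustration, not the thesis's actual method.

```python
# Minimal sketch of marking a grid cell from one ultrasonic range reading.
import math
import numpy as np

GRID_SIZE = 100          # 100 x 100 cells
RESOLUTION = 0.05        # 5 cm per cell
grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.int8)   # 0 unknown, 1 occupied

def world_to_cell(x_m, y_m):
    return int(y_m / RESOLUTION), int(x_m / RESOLUTION)

def mark_obstacle(robot_x, robot_y, heading_rad, range_m):
    """Place the detected obstacle at the end of the ultrasonic beam,
    measured from the robot pose given by odometry and the compass."""
    ox = robot_x + range_m * math.cos(heading_rad)
    oy = robot_y + range_m * math.sin(heading_rad)
    row, col = world_to_cell(ox, oy)
    if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
        grid[row, col] = 1

# Example: robot at (1.0 m, 1.0 m), facing east, obstacle reported at 0.8 m.
mark_obstacle(1.0, 1.0, math.radians(0), 0.8)
print(np.argwhere(grid == 1))   # [[20 36]]
```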

    The high frequency flexural ultrasonic transducer for transmitting and receiving ultrasound in air

    Flexural ultrasonic transducers are robust, low-cost sensors that are typically used in industry for distance ranging, proximity sensing and flow measurement. The operating frequencies of currently available commercial flexural ultrasonic transducers are usually below 50 kHz. Higher operating frequencies would be particularly beneficial for measurement accuracy and detection sensitivity. In this paper, design principles of High Frequency Flexural Ultrasonic Transducers (HiFFUTs), guided by classical plate theory and finite element analysis, are reported. The results show that the diameter of the piezoelectric disc element attached to the flexing plate of the HiFFUT has a significant influence on the transducer's resonant frequency, and that the optimal diameter for a HiFFUT transmitter alone is different from that for a pitch-catch ultrasonic system consisting of both a HiFFUT transmitter and a receiver. By adopting an optimal piezoelectric diameter, the HiFFUT pitch-catch system can produce an ultrasonic signal amplitude greater than that of a non-optimised system by an order of magnitude. The performance of a prototype HiFFUT is characterised through electrical impedance analysis, laser Doppler vibrometry and pressure-field microphone measurement, before the performance of two new HiFFUTs in a pitch-catch configuration is compared with that of commercial transducers. The prototype HiFFUT can operate efficiently at a frequency of 102.1 kHz as either a transmitter or a receiver, with comparable output amplitude, wider bandwidth and higher directivity than commercially available transducers of similar construction.
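    For reference, the classical (Kirchhoff) plate theory that the abstract cites gives the flexural resonant frequency of a clamped circular plate in the following standard form; the symbols (plate radius a, thickness h, density rho, Young's modulus E, Poisson's ratio nu, and the clamped-plate eigenvalue lambda) are the usual textbook quantities and are not taken from the paper itself.

```latex
% Standard clamped circular plate resonance from classical plate theory.
f_{mn} = \frac{\lambda_{mn}^{2}}{2\pi a^{2}} \sqrt{\frac{D}{\rho h}},
\qquad
D = \frac{E h^{3}}{12\,(1-\nu^{2})},
\qquad
\lambda^{2} \approx 10.22 \ \text{(fundamental mode, fully clamped edge)}
```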