Mastery level, attitude, and interest of Kolej Kemahiran Tinggi MARA students towards the English language subject
This study was conducted to identify the level of English mastery, attitude, and interest of students at Kolej Kemahiran Tinggi MARA Sri Gading. The study is descriptive in design, better known as a survey method. A total of 325 Diploma in Construction Technology students from the Kolej Kemahiran Tinggi MARA in the Batu Pahat district were selected as the sample. Data collected through a questionnaire instrument were analysed to obtain means, standard deviations, and Pearson correlation coefficients in order to examine the relationships in the findings, while frequencies and percentages were used to measure the students' mastery. The results show that the students' English mastery is at a moderate level, and that the main factor influencing it is interest, followed by attitude. The Pearson correlation results also show a significant relationship between attitude and English mastery and between interest and English mastery: the more positive the students' attitude towards, and interest in, the teaching and learning of English, the higher their achievement. It is hoped that these findings will help students improve their English mastery by cultivating a positive attitude and strengthening their interest in the language, and that the study will serve as a guide for parties involved in future research.
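The statistical analysis the abstract describes (means, standard deviations, and Pearson correlation between attitude/interest and mastery) can be sketched in a few lines. The scores below are hypothetical Likert-scale values for illustration only, not data from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical Likert-scale scores (1-5): interest vs. English mastery
interest = [4, 5, 3, 4, 2, 5, 3, 4]
mastery = [3, 5, 2, 4, 2, 4, 3, 4]
r = pearson_r(interest, mastery)  # positive r suggests the reported relationship
```

A value of r near +1 would correspond to the significant positive relationship the study reports between interest and mastery.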
Towards Odor-Sensitive Mobile Robots
J. Monroy, J. Gonzalez-Jimenez, "Towards Odor-Sensitive Mobile Robots", Electronic Nose Technologies and Advances in Machine Olfaction, IGI Global, pp. 244--263, 2018, doi:10.4018/978-1-5225-3862-2.ch012
Preprint version, with the publisher's permission.
Of all the components of a mobile robot, its sensorial system is undoubtedly among the most critical ones when operating in real environments. Until now, these sensorial systems have mostly relied on range sensors (laser scanners, sonar, active triangulation) and cameras. While electronic noses have barely been employed, they can provide complementary sensory information that is vital for some applications, just as smell is for humans. This chapter analyzes the motivation for providing a robot with gas-sensing capabilities and reviews some of the hurdles that are preventing smell from achieving the importance of other sensing modalities in robotics. The achievements made so far are reviewed to illustrate the current status of the three main fields within robotic olfaction: the classification of volatile substances, the spatial estimation of gas dispersion from sparse measurements, and the localization of the gas source within a known environment.
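One of the three fields named above, spatial estimation of gas dispersion from sparse measurements, is commonly approached with kernel-based gas distribution mapping. The following is a minimal sketch of that general idea; the kernel width, readings, and grid are illustrative assumptions, not taken from the chapter:

```python
import math

def kernel_gas_map(readings, grid, sigma=0.5):
    """Estimate gas concentration at grid cells from sparse point readings
    using a Gaussian-kernel weighted average (a common GDM baseline)."""
    estimates = []
    for gx, gy in grid:
        wsum, csum = 0.0, 0.0
        for rx, ry, c in readings:
            # Weight each reading by its distance to the grid cell
            w = math.exp(-((gx - rx) ** 2 + (gy - ry) ** 2) / (2 * sigma ** 2))
            wsum += w
            csum += w * c
        estimates.append(csum / wsum if wsum > 0 else 0.0)
    return estimates

# Sparse readings: (x, y, concentration); grid cells to interpolate
readings = [(0.0, 0.0, 1.0), (2.0, 0.0, 0.2)]
grid = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
estimates = kernel_gas_map(readings, grid)
```

Cells near a reading inherit its concentration, while cells between readings get a distance-weighted blend, which is what makes sparse measurements usable as a continuous map.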
High-Precision Localization Using Ground Texture
Location-aware applications play an increasingly critical role in everyday
life. However, satellite-based localization (e.g., GPS) has limited accuracy
and can be unusable in dense urban areas and indoors. We introduce an
image-based global localization system that is accurate to a few millimeters
and performs reliable localization both indoors and outdoors. The key idea is to
capture and index distinctive local keypoints in ground textures. This is based
on the observation that ground textures including wood, carpet, tile, concrete,
and asphalt may look random and homogeneous, but all contain cracks, scratches,
or unique arrangements of fibers. These imperfections are persistent, and can
serve as local features. Our system incorporates a downward-facing camera to
capture the fine texture of the ground, together with an image processing
pipeline that locates the captured texture patch in a compact database
constructed offline. We demonstrate the capability of our system to robustly,
accurately, and quickly locate test images on various types of outdoor and
indoor ground surfaces.
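The "capture and index distinctive local keypoints" step implies descriptor matching against an offline database. A generic way to do this is nearest-neighbour matching with Lowe's ratio test, sketched below; this is an illustration of the general technique, not the authors' pipeline, and real systems use high-dimensional binary or float descriptors rather than these 2-D toys:

```python
def match_descriptors(query, database, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.
    Descriptors are tuples of floats; returns (query_idx, db_idx) pairs
    whose best match is clearly better than the second-best match."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    matches = []
    for qi, q in enumerate(query):
        # Rank database entries by squared distance to the query descriptor
        order = sorted(range(len(database)), key=lambda di: dist2(q, database[di]))
        best, second = order[0], order[1]
        # Accept only unambiguous matches (ratio test on squared distances)
        if dist2(q, database[best]) < (ratio ** 2) * dist2(q, database[second]):
            matches.append((qi, best))
    return matches

# Toy 2-D "descriptors": the first query clearly matches db entry 1,
# the second is equidistant from two entries and is rejected as ambiguous
db = [(0.0, 0.0), (10.0, 10.0), (20.0, 0.0)]
queries = [(10.1, 9.9), (5.0, 5.0)]
matches = match_descriptors(queries, db)
```

The ratio test is what lets a system built on nearly-homogeneous textures like asphalt reject ambiguous matches and keep only the distinctive cracks and scratches.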
Sparse 3D Point-cloud Map Upsampling and Noise Removal as a vSLAM Post-processing Step: Experimental Evaluation
Monocular vision-based simultaneous localization and mapping (vSLAM) is one of the most challenging problems in mobile robotics and computer vision. In this work we study post-processing techniques applied to sparse 3D point-cloud maps obtained by feature-based vSLAM algorithms. Map post-processing is split into two major steps: 1) noise and outlier removal and 2) upsampling. We evaluate different combinations of known algorithms for outlier removal and upsampling on datasets of real indoor and outdoor environments and identify the most promising combination. We further use it to convert a point-cloud map, obtained by a real UAV performing an indoor flight, into a 3D voxel grid (octo-map) potentially suitable for path planning.
Comment: 10 pages, 4 figures; camera-ready version of the paper for the 3rd International Conference on Interactive Collaborative Robotics (ICR 2018).
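A standard form of the noise/outlier-removal step is statistical outlier removal, as found in point-cloud libraries such as PCL and Open3D. The sketch below is a naive O(n²) illustration of that idea under assumed parameters, not the authors' evaluated implementation:

```python
import math

def remove_statistical_outliers(points, k=3, std_ratio=1.0):
    """Statistical outlier removal for a point cloud: drop points whose mean
    distance to their k nearest neighbours exceeds the cloud-wide
    mean + std_ratio * std of those distances."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Mean distance from each point to its k nearest neighbours
    mean_knn = []
    for i, p in enumerate(points):
        ds = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        mean_knn.append(sum(ds[:k]) / k)

    mu = sum(mean_knn) / len(mean_knn)
    sd = math.sqrt(sum((d - mu) ** 2 for d in mean_knn) / len(mean_knn))
    thresh = mu + std_ratio * sd
    return [p for p, d in zip(points, mean_knn) if d <= thresh]

# Dense cluster near the origin plus one far-away spurious point
cloud = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0), (0.1, 0.1, 0), (10, 10, 10)]
filtered = remove_statistical_outliers(cloud)
```

Production pipelines use spatial indices (k-d trees) instead of the brute-force neighbour search shown here, but the filtering criterion is the same.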
Navigation without localisation: reliable teach and repeat based on the convergence theorem
We present a novel concept for teach-and-repeat visual navigation. The
proposed concept is based on a mathematical model, which indicates that in
teach-and-repeat navigation scenarios, mobile robots do not need to perform
explicit localisation. Instead, a mobile robot which repeats a
previously taught path can simply `replay' the learned velocities, while using
its camera information only to correct its heading relative to the intended
path. To support our claim, we establish a position error model of a robot,
which traverses a taught path by only correcting its heading. Then, we outline
a mathematical proof which shows that this position error does not diverge over
time. Based on the insights from the model, we present a simple monocular
teach-and-repeat navigation method. The method is computationally efficient, it
does not require camera calibration, and it can learn and autonomously traverse
arbitrarily-shaped paths. In a series of experiments, we demonstrate that the
method can reliably guide mobile robots in realistic indoor and outdoor
conditions, and can cope with imperfect odometry, landmark deficiency,
illumination variations and naturally-occurring environment changes.
Furthermore, we provide the navigation system and the datasets gathered at
http://www.github.com/gestom/stroll_bearnav.
Comment: The paper will be presented at IROS 2018 in Madrid.
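The heading-only correction described above can be caricatured as: replay the taught velocities and steer in proportion to a robust estimate of the horizontal image shift between the taught and current camera views. The gain and pixel values in this sketch are illustrative assumptions, not parameters from the paper:

```python
def heading_correction(matched_shifts, gain=0.01):
    """Teach-and-repeat style heading correction: steer by the median
    horizontal shift (in pixels) of features matched between the taught
    and current camera images. The median is robust to wrong matches;
    gain maps pixels to an angular rate (illustrative value)."""
    shifts = sorted(matched_shifts)
    n = len(shifts)
    median = shifts[n // 2] if n % 2 else (shifts[n // 2 - 1] + shifts[n // 2]) / 2
    return -gain * median  # steer opposite to the observed image shift

# Matched features drifted ~8 px to the right, plus one outlier match
omega = heading_correction([7, 8, 8, 9, 120])
```

Because only the heading is corrected and no map-relative pose is ever computed, this kind of controller needs no camera calibration, which matches the method's stated properties.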
Viewfinder: final activity report
The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources.
The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces and semi-autonomous robot navigation.
The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them; that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the entire system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground and of the robots and human rescue workers therein.