Mastery level, attitudes and interest of Kolej Kemahiran Tinggi MARA students towards the English language subject
This study was conducted to identify the mastery level, attitudes and interest of students at Kolej Kemahiran Tinggi MARA Sri Gading towards the English language. The study is descriptive in design, better known as a survey method. A total of 325 Diploma in Construction Technology students from Kolej Kemahiran Tinggi MARA in the Batu Pahat district were selected as the sample. Data obtained through a questionnaire instrument were analysed to obtain means, standard deviations, and Pearson correlation coefficients to examine the relationships in the findings, while frequencies and percentages were used to measure the students' mastery. The results show that the students' mastery of English is at a moderate level, and that the main factor influencing English mastery is interest, followed by attitude. The Pearson correlation results also show a significant relationship between attitude and English mastery and between interest and English mastery: the more positive students' attitudes and interest towards the teaching and learning of English, the higher their achievement. It is hoped that the results of this study will help students improve their mastery of English by cultivating a positive attitude and strengthening their interest in the language, and that the study will serve as a guide for parties involved in future research.
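The correlational analysis described in this abstract (means, standard deviations, Pearson correlation) can be sketched in a few lines. The scores below are invented for illustration and are not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Illustrative Likert-style attitude scores and English achievement marks
attitude = [3, 4, 2, 5, 4, 3, 5, 2]
marks = [55, 68, 48, 80, 70, 60, 78, 50]
r = pearson_r(attitude, marks)
print(f"r = {r:.3f}")  # a strong positive association in this toy data
```

A significant positive r, as the study reports, is what the last two assertions of this kind of analysis rest on.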
Human robot interaction in a crowded environment
Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered to be laborious, unsafe, or repetitive. Vision based human robot interaction is a major component of HRI, with which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3].
Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting the navigation commands. To this end, it is necessary to associate the gesture to the correct person and automatic reasoning is required to extract the most probable location of the person who has initiated the gesture. In this thesis, we have proposed a practical framework for addressing the above issues. It attempts to achieve a coarse level understanding about a given environment before engaging in active communication. This includes recognizing human robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate if people present are engaged with each other or their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb or, if an individual is receptive to the robot's interaction, it may approach the person.
Finally, if the user is moving in the environment, it can analyse further to understand if any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine potential intentions. For improving system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
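The cue-fusion idea described above can be illustrated with a minimal naive-Bayes sketch. The cues and likelihood values below are invented for illustration; the thesis's actual Bayesian network and its contextual-feedback mechanism are richer than this:

```python
import math

# Hypothetical cue likelihoods (P(cue | engaged), P(cue | not engaged));
# the real network's structure and parameters are not reproduced here.
CUES = {
    "facing_robot": (0.8, 0.3),
    "waving_gesture": (0.7, 0.1),
    "talking_to_peer": (0.2, 0.6),
}

def engagement_posterior(observed, prior=0.5):
    """Naive-Bayes fusion of binary visual cues into P(engaged | cues)."""
    log_odds = math.log(prior / (1 - prior))
    for cue, present in observed.items():
        p_e, p_ne = CUES[cue]
        if not present:  # use complements for absent cues
            p_e, p_ne = 1 - p_e, 1 - p_ne
        log_odds += math.log(p_e / p_ne)
    return 1 / (1 + math.exp(-log_odds))

p = engagement_posterior({"facing_robot": True,
                          "waving_gesture": True,
                          "talking_to_peer": False})
print(f"P(engaged) = {p:.2f}")
```

A person facing the robot and waving pushes the posterior towards engagement; a person conversing with a peer pushes it down, which is the basis for the "best not to disturb" decision above.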
Towards autonomous control of surgical instruments using adaptive-fusion tracking and robot self-calibration
The ability to track surgical instruments in real time is crucial for autonomous Robotic Assisted Surgery (RAS). Recently, the fusion of visual and kinematic data has been proposed to track surgical instruments. However, these methods assume that both sensors are equally reliable, and cannot successfully handle cases where there are significant perturbations in one of the sensors' data. In this paper, we address this problem by proposing an enhanced fusion-based method. The main advantage of our method is that it can adjust the fusion weights to adapt to sensor perturbations and failures. Another problem is that, before performing an autonomous task, these robots have to be repeatedly recalibrated by a human for each new patient to estimate the transformations between the different robotic arms. To address this problem, we propose a self-calibration algorithm that empowers the robot to autonomously calibrate the transformations by itself at the beginning of the surgery. We applied our fusion and self-calibration algorithms to autonomous ultrasound tissue scanning and showed that the robot achieved stable ultrasound imaging when using our method. Our performance evaluation shows that the proposed method outperforms the state of the art in both normal and challenging situations.
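The adaptive weighting idea can be illustrated with a toy heuristic: weight each sensor by the inverse of its recent jitter, so a perturbed stream is automatically down-weighted. This is a sketch of the general principle, not the paper's algorithm, and the streams below are invented:

```python
import numpy as np

def adaptive_fuse(visual, kinematic, win=5, eps=1e-6):
    """Fuse two 1-D tool-position streams, weighting each sensor by the
    inverse of its recent jitter (variance of first differences) so that
    a perturbed or noisy sensor is automatically down-weighted."""
    v, k = np.asarray(visual, float), np.asarray(kinematic, float)
    fused = np.empty_like(v)
    fused[0] = 0.5 * (v[0] + k[0])
    for t in range(1, len(v)):
        lo = max(0, t - win)
        jitter_v = np.var(np.diff(v[lo:t + 1])) + eps
        jitter_k = np.var(np.diff(k[lo:t + 1])) + eps
        w = (1 / jitter_v) / (1 / jitter_v + 1 / jitter_k)
        fused[t] = w * v[t] + (1 - w) * k[t]
    return fused

# Kinematic stream is smooth; the visual stream suffers a transient glitch
t = np.linspace(0, 1, 50)
kin = t.copy()
vis = t + np.where((t > 0.4) & (t < 0.6), 0.5, 0.0)
out = adaptive_fuse(vis, kin)
```

When the visual glitch begins, its first differences spike and its weight collapses, so the fused estimate stays close to the kinematic track, which is the behaviour the paper's adaptive fusion aims for.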
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists in the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
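The "de-facto standard formulation" referred to above is maximum a posteriori estimation over a factor graph of poses and measurements. A deliberately tiny 1-D sketch with invented numbers, anchoring the first pose and solving the resulting linear least-squares problem:

```python
import numpy as np

# Toy 1-D pose graph: 4 poses, odometry factors between consecutive
# poses, and one loop-closure factor between pose 0 and pose 3.
odometry = [(0, 1, 1.1), (1, 2, 0.9), (2, 3, 1.05)]  # (i, j, measured x_j - x_i)
loop = [(0, 3, 3.0)]

def solve_pose_graph(n, factors):
    """Linear least-squares MAP estimate with pose 0 anchored at x = 0."""
    rows, rhs = [], []
    a = np.zeros(n)
    a[0] = 1.0
    rows.append(a)          # prior on the first pose removes gauge freedom
    rhs.append(0.0)
    for i, j, z in factors:
        a = np.zeros(n)
        a[i], a[j] = -1.0, 1.0
        rows.append(a)
        rhs.append(z)
    A, b = np.array(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

x = solve_pose_graph(4, odometry + loop)
print(np.round(x, 3))  # the loop closure spreads odometry drift over the chain
```

Real SLAM back-ends solve the same kind of problem with nonlinear factors over SE(2)/SE(3) poses and sparse solvers, but the structure is the one shown here.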
A surgical system for automatic registration, stiffness mapping and dynamic image overlay
In this paper we develop a surgical system using the da Vinci research kit
(dVRK) that is capable of autonomously searching for tumors and dynamically
displaying the tumor location using augmented reality. Such a system has the
potential to quickly reveal the location and shape of tumors and visually
overlay that information to reduce the cognitive overload of the surgeon. We
believe that our approach is one of the first to incorporate state-of-the-art
methods in registration, force sensing and tumor localization into a unified
surgical system. First, the preoperative model is registered to the
intra-operative scene using a Bingham distribution-based filtering approach. An
active level set estimation is then used to find the location and the shape of
the tumors. We use a recently developed miniature force sensor to perform the
palpation. The estimated stiffness map is then dynamically overlaid onto the
registered preoperative model of the organ. We demonstrate the efficacy of our
system by performing experiments on phantom prostate models with embedded stiff
inclusions.

Comment: International Symposium on Medical Robotics (ISMR 2018).
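The stiffness-mapping step can be illustrated with a toy linear-spring model (stiffness = force / indentation). The grid and readings below are invented, and the actual system uses active level set estimation rather than exhaustive probing:

```python
import numpy as np

def stiffness_map(samples, grid_shape):
    """Build a stiffness map from palpation samples.
    Each sample is (row, col, force_N, indentation_m); local stiffness is
    estimated as force / indentation, a simple linear-spring model used
    here only to illustrate the mapping step."""
    grid = np.full(grid_shape, np.nan)
    for r, c, force, depth in samples:
        grid[r, c] = force / depth
    return grid

# Toy palpation grid: one stiff inclusion at cell (1, 1)
samples = [
    (0, 0, 0.5, 0.005), (0, 1, 0.5, 0.005),
    (1, 0, 0.5, 0.005), (1, 1, 0.5, 0.001),  # same force, less indentation
]
m = stiffness_map(samples, (2, 2))
print(m)  # the (1, 1) cell shows ~5x the stiffness of its neighbours
```

Thresholding or contouring such a map is what yields the inclusion boundary that the system then overlays on the registered preoperative model.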
Image-guided Landmark-based Localization and Mapping with LiDAR
Mobile robots must be able to determine their position to operate effectively in diverse
environments. The presented work proposes a system that integrates LiDAR and camera sensors
and utilizes the YOLO object detection model to identify objects in the robot's surroundings. The
system, developed in ROS, groups detected objects into triangles, utilizing them as landmarks to
determine the robot's position. The triangulation step generates a set of nonlinear equations, which are solved using the Levenberg-Marquardt algorithm.
The presented work comprehensively discusses the study, design, and implementation of the proposed system. The investigation begins with an overview of current SLAM techniques. Next, the system design considers the requirements for localization and mapping tasks, together with an analysis comparing the proposed approach to contemporary SLAM methods. Finally, we evaluate the system's effectiveness and accuracy through experiments in the Gazebo simulation environment, which allows control over the various disturbances that a real scenario can introduce.
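The triangulation step described in the first paragraph can be sketched as a nonlinear least-squares problem over range residuals. The landmark coordinates and measured ranges below are invented, and a fixed damping factor stands in for the usual adaptive Levenberg-Marquardt schedule:

```python
import numpy as np

# Hypothetical landmark triangle (e.g. detected object centroids) and the
# ranges the robot measures to each vertex; not the thesis's actual data.
landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_pose = np.array([1.0, 1.0])
ranges = np.linalg.norm(landmarks - true_pose, axis=1)

def solve_lm(p, lam=1e-3, iters=50):
    """Damped Gauss-Newton (fixed-damping Levenberg-Marquardt) on the
    range residuals r_i = |l_i - p| - d_i."""
    for _ in range(iters):
        diff = p - landmarks                 # (3, 2)
        dist = np.linalg.norm(diff, axis=1)  # predicted ranges
        r = dist - ranges                    # residuals
        J = diff / dist[:, None]             # Jacobian d r_i / d p
        H = J.T @ J + lam * np.eye(2)        # damped normal equations
        p = p - np.linalg.solve(H, J.T @ r)
    return p

est = solve_lm(np.array([2.5, 2.5]))
print(np.round(est, 3))  # converges to the true position near (1, 1)
```

With three or more landmarks in general position the system is over-determined, which is why the thesis solves it iteratively rather than in closed form.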
Flight Dynamics-based Recovery of a UAV Trajectory using Ground Cameras
We propose a new method to estimate the 6-dof trajectory of a flying object
such as a quadrotor UAV within a 3D airspace monitored using multiple fixed
ground cameras. It is based on a new structure from motion formulation for the
3D reconstruction of a single moving point with known motion dynamics. Our main
contribution is a new bundle adjustment procedure which in addition to
optimizing the camera poses, regularizes the point trajectory using a prior
based on motion dynamics (or specifically flight dynamics). Furthermore, we can
infer the underlying control input sent to the UAV's autopilot that determined
its flight trajectory.
Our method requires neither perfect single-view tracking nor appearance
matching across views. For robustness, we allow the tracker to generate
multiple detections per frame in each video. The true detections and the data
association across videos are estimated using robust multi-view triangulation
and subsequently refined during our bundle adjustment procedure. Quantitative
evaluation on simulated data and experiments on real videos from indoor and
outdoor scenes demonstrate the effectiveness of our method.
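The dynamics-regularized estimation idea can be illustrated in one dimension: fit a trajectory to noisy per-frame observations with a second-difference (constant-acceleration) prior. This is a toy stand-in for the paper's full bundle adjustment, with invented data:

```python
import numpy as np

def recover_trajectory(obs, lam=10.0):
    """Estimate a 1-D point trajectory from noisy per-frame observations,
    regularized by a constant-acceleration (second-difference) prior -- a
    simple analogue of a motion-dynamics prior in bundle adjustment."""
    n = len(obs)
    A = [np.eye(n)]            # data term: x_t should match obs_t
    b = [np.asarray(obs, float)]
    D2 = np.zeros((n - 2, n))  # prior term: x_{t-1} - 2 x_t + x_{t+1} ~ 0
    for t in range(n - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]
    A.append(np.sqrt(lam) * D2)
    b.append(np.zeros(n - 2))
    x, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return x

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 40)
truth = 5 * t - 4.9 * t**2  # ballistic-like vertical motion
noisy = truth + rng.normal(0, 0.2, t.size)
smooth = recover_trajectory(noisy)
```

The prior pulls the estimate towards physically plausible motion, which is the same role the flight-dynamics regularizer plays in the paper's bundle adjustment over camera poses and the 3D point track.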
- …