Mastery level, attitude and interest of Kolej Kemahiran Tinggi MARA students towards the English language subject
This study was conducted to identify the mastery level, attitude and interest of students at Kolej Kemahiran Tinggi MARA Sri Gading towards English. The study is descriptive in design, better known as the survey method. A total of 325 Diploma in Construction Technology students from Kolej Kemahiran Tinggi MARA in the Batu Pahat district were selected as the sample. Data collected through a questionnaire instrument were analysed to obtain means, standard deviations and Pearson correlation coefficients, the latter to examine relationships among the findings, while frequencies and percentages were used to measure student mastery. The results show that the students' mastery of English is at a moderate level, and that the main factor influencing that mastery is interest, followed by attitude. The Pearson correlation results also show a significant relationship between attitude and English mastery, and between interest and English mastery: the more positive the students' attitude towards, and interest in, the teaching and learning of English, the higher their achievement. It is hoped that these findings will help students improve their mastery of English by cultivating a positive attitude and strengthening their interest in the language, and that the study will serve as a guide for parties undertaking future research.
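The statistic the study leans on is standard. As a minimal sketch, with entirely hypothetical attitude and proficiency scores (the study's actual data are not reproduced here), the Pearson correlation coefficient can be computed as:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: attitude scores (1-5 Likert) vs. English test scores
attitude = [3.1, 4.0, 2.5, 3.8, 4.5]
proficiency = [55, 70, 48, 66, 78]
print(round(pearson_r(attitude, proficiency), 3))  # → 0.997
```

A value near +1, as in this toy data, is what the abstract's "significant positive relationship" would look like; in practice one would also test the coefficient's significance against the sample size.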
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper
serves simultaneously as a position paper and a tutorial for SLAM users. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
Fast 3D cluster tracking for a mobile robot using 2D techniques on depth images
Simultaneous detection and tracking of users is an issue at the core of human-robot interaction (HRI). Several methods exist and give good results; many use image processing techniques on images provided by the camera. The increasing presence of range-imaging cameras on mobile robots (such as structured-light devices like the Microsoft Kinect) allows us to develop image processing on depth maps. In this article, a fast and lightweight algorithm is presented for the detection and tracking of 3D clusters, using classic 2D techniques such as edge detection and connected components applied to the depth maps. The recognition of clusters is made using their 2D shape. An algorithm for the compression of depth maps has been specifically developed, allowing the distribution of the whole processing among several computers. The algorithm is then applied to a mobile robot for chasing an object selected by the user. The algorithm is coupled with laser-based tracking to make up for the narrow field of view of the range-imaging camera. The workload created by the method is light enough to enable its use even with processors of limited capability. Extensive experimental results are given to verify the usefulness of the proposed method. This work was supported by the Spanish MICINN (Ministry of Science and Innovation) through the project "Applications of Social Robots" (Aplicaciones de los Robots Sociales).
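The core idea above (edges in the depth map separate 3D clusters, which are then labelled by connected components) can be sketched in miniature. The following is a toy illustration, not the paper's implementation: a hypothetical depth-jump threshold `max_jump` plays the role of the edge detector, and a plain flood fill plays the role of the connected-components pass:

```python
from collections import deque

def depth_clusters(depth, max_jump=0.2):
    """Label 4-connected regions of a depth map (list of rows of metres).
    A depth discontinuity larger than max_jump acts as an edge that
    separates clusters, so near objects split off from the far background."""
    h, w = len(depth), len(depth[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx]:
                continue  # pixel already belongs to a cluster
            current += 1
            labels[sy][sx] = current
            q = deque([(sy, sx)])
            while q:  # flood fill across smooth (non-edge) neighbours
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny][nx]
                            and abs(depth[ny][nx] - depth[y][x]) <= max_jump):
                        labels[ny][nx] = current
                        q.append((ny, nx))
    return labels, current

# Toy 4x6 depth map: a near object (1.0 m) in front of a far wall (3.0 m)
dmap = [
    [3.0, 3.0, 3.0, 3.0, 3.0, 3.0],
    [3.0, 1.0, 1.0, 3.0, 3.0, 3.0],
    [3.0, 1.0, 1.0, 3.0, 3.0, 3.0],
    [3.0, 3.0, 3.0, 3.0, 3.0, 3.0],
]
labels, n = depth_clusters(dmap)
print(n)  # → 2 clusters: the wall and the object
```

In a real pipeline the labelled clusters would then be matched frame-to-frame by their 2D shape, which is the tracking step the abstract describes.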
Robot guidance using machine vision techniques in industrial environments: A comparative review
In the factory of the future, most operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, and to complement the information provided by other sensors to improve their positioning accuracy. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in industry, and now for robot guidance. Choosing which type of vision system to use depends strongly on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future work.
Middleware platform for distributed applications incorporating robots, sensors and the cloud
Cyber-physical systems in the factory of the future
will consist of cloud-hosted software governing an agile
production process executed by autonomous mobile robots
and controlled by analyzing the data from a vast number of
sensors. CPSs thus operate on a distributed production floor
infrastructure and the set-up continuously changes with each
new manufacturing task. In this paper, we present our OSGi-based
middleware that abstracts the deployment of service-based
CPS software components on the underlying distributed
platform comprising robots, actuators, sensors and the cloud.
Moreover, our middleware provides specific support to develop
components based on artificial neural networks, a technique that
recently became very popular for sensor data analytics and robot
actuation. We demonstrate a system where a robot takes actions
based on the input from sensors in its vicinity.
Development of new intelligent autonomous robotic assistant for hospitals
Continuous technological development in modern societies has increased the quality of life and average life-span of people. This imposes an extra burden on the current healthcare infrastructure, which also creates the opportunity for developing new, autonomous, assistive robots to help alleviate this extra workload.
The research question explored the extent to which a prototypical robotic platform can be created and how it may be implemented in a hospital environment with the aim to assist the hospital staff with daily tasks, such as guiding patients and visitors, following patients to ensure safety, and making deliveries to and from rooms and workstations.
In terms of major contributions, this thesis outlines five domains of the development of an actual robotic assistant prototype. Firstly, a comprehensive schematic design is presented in which mechanical, electrical, motor control and kinematics solutions have been examined in detail. Next, a new method has been proposed for assessing the intrinsic properties of different flooring types using machine learning to classify mechanical vibrations. Thirdly, the technical challenge of enabling the robot to simultaneously map and localise itself in a dynamic environment has been addressed, whereby leg detection is introduced to ensure that, whilst mapping, the robot is able to distinguish between people and the background. The fourth contribution is the integration of geometric collision prediction into stabilised dynamic navigation methods, thus optimising the navigation ability to update real-time path planning in a dynamic environment. Lastly, the problem of detecting gaze at long distances has been addressed by means of a new eye-tracking hardware solution which combines infra-red eye tracking and depth sensing.
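The second contribution, classifying floor types from mechanical vibrations, can be illustrated in miniature. Everything below is a hypothetical stand-in (synthetic vibration signals, naive DFT band-energy features, a nearest-centroid rule), not the thesis's actual features or learner:

```python
import math

def spectrum_energy(signal, bands=4):
    """Naive DFT magnitudes pooled into a few frequency bands; a toy
    stand-in for spectral vibration features."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    size = max(1, len(mags) // bands)
    return [sum(mags[i:i + size]) for i in range(0, size * bands, size)]

def nearest_centroid(features, centroids):
    """Classify a feature vector by Euclidean distance to per-floor centroids."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Synthetic training signals: carpet damps to a slow rumble, tile rings high
n = 32
carpet = [math.sin(2 * math.pi * 1 * t / n) for t in range(n)]
tile = [math.sin(2 * math.pi * 10 * t / n) for t in range(n)]
centroids = {"carpet": spectrum_energy(carpet), "tile": spectrum_energy(tile)}

probe = [0.9 * x for x in tile]  # a new, slightly attenuated tile reading
print(nearest_centroid(spectrum_energy(probe), centroids))  # → tile
```

The point of the sketch is only the shape of the pipeline the thesis describes: turn raw vibration into spectral features, then let a learned model map features to a floor label.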
The research serves both to provide a template for the development of comprehensive mobile assistive-robot solutions, and to address some of the inherent challenges currently present in introducing autonomous assistive robots in hospital environments.