1,762 research outputs found

    Proficiency level, attitude and interest of Kolej Kemahiran Tinggi MARA students toward the English language subject

    This study was conducted to identify the proficiency level, attitude and interest of Kolej Kemahiran Tinggi MARA Sri Gading students toward English. The study is descriptive in design, better known as a survey method. A total of 325 Diploma in Construction Technology students from Kolej Kemahiran Tinggi MARA in the Batu Pahat district were selected as the sample. Data obtained through a questionnaire instrument were analysed to obtain means, standard deviations, and Pearson correlation coefficients in order to examine relationships in the findings, while frequencies and percentages were used to measure student proficiency. The findings show that the students' English proficiency is at a moderate level, and that the main factor influencing English proficiency is interest, followed by attitude. The Pearson correlation results also show a significant relationship between attitude and English proficiency and between interest and English proficiency. The study indicates that the more positive the students' attitude toward and interest in the teaching and learning of English, the higher their achievement. The results of this study are expected to help students improve their English proficiency by cultivating a positive attitude and strengthening their interest in English. It is also hoped that this study can serve as a guide for the parties involved in future research
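
    As a rough illustration of the analysis described in this abstract (means, standard deviations, and Pearson correlation between attitude/interest and proficiency), the sketch below computes those statistics on hypothetical data; the variable names and values are assumptions for illustration, not figures from the study.

```python
# Hypothetical illustration of the descriptive and correlational analysis
# described in the abstract (means, standard deviations, Pearson r).
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-student Likert-scale means (1-5) and test scores (0-100).
attitude = [3.8, 4.2, 2.9, 3.5, 4.0, 3.1]
interest = [4.1, 4.5, 2.7, 3.6, 4.3, 3.0]
proficiency = [62, 71, 48, 58, 69, 51]

for name, scores in [("attitude", attitude), ("interest", interest)]:
    print(f"{name}: mean={mean(scores):.2f}, sd={stdev(scores):.2f}, "
          f"r with proficiency={pearson_r(scores, proficiency):.2f}")
```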

    A Networking Framework for Multi-Robot Coordination

    Autonomous robots operating in real environments need to be able to interact with a dynamic world populated with objects, people, and, in general, other agents. The current generation of autonomous robots, such as the ASIMO robot by Honda or the QRIO by Sony, has shown impressive performance in the mechanics and control of movement; moreover, recent literature reports encouraging results about the capability of such robots to represent themselves with respect to a dynamic external world, to plan future actions, and to evaluate the resulting situations in order to make new plans. However, when multiple robots are supposed to operate together, coordination and communication issues arise; while noteworthy results have been achieved with respect to the control of a single robot, novel issues arise when the actions of one robot influence another's behavior. The increase in computational power available to systems nowadays makes it feasible, and even convenient, to organize them into a single distributed computing environment in order to exploit the synergy among different entities. This is especially true for robot teams, where cooperation is supposed to be the most natural scheme of operation, especially when robots are required to operate in highly constrained scenarios, such as inhospitable sites, remote sites, or indoor environments where strict constraints on intrusiveness must be respected. In this case, computations will be inherently network-centric, and to meet the need for communication inside robot collectives, an efficient network infrastructure must be put into place; once a proper communication channel is established, multiple robots may benefit from interaction with each other in order to achieve a common goal. The framework presented in this paper adopts a composite networking architecture, in which a hybrid wireless network, composed of commonly available WiFi devices and the more recently developed wireless sensor networks, operates as a whole in order both to provide a communication backbone for the robots and to extract useful information from the environment. The ad-hoc WiFi backbone allows robots to exchange coordination information among themselves, while also carrying measurements collected from the surrounding environment that are useful for localization or simple data-gathering purposes. The proposed framework is called RoboNet, and it extends a previously developed robotic tour guide application (Chella et al., 2007) to a multi-robot setting; our system allows a team of robots to enhance their perceptive capabilities through coordination obtained via a hybrid communication network; moreover, the same infrastructure allows robots to exchange information so as to coordinate their actions in order to achieve a global common goal. The working scenario considered in this paper consists of a museum setting, where guided tours are to be automatically managed. The museum is arranged both chronologically and topographically, but the sequence of findings to be visited can be rearranged depending on user queries, making a sort of dynamic virtual labyrinth with various itineraries. Therefore, the robots are able to guide visitors both in prearranged tours and in interactive tours built on the fly depending on the interaction with the visitor: robots are able to rebuild the virtual connection between findings and, consequently, the path to be followed. This paper is organized as follows.
Section 2 contains some background on multi-robot coordination, and Section 3 describes the underlying ideas and the motivation behind the proposed architecture, whose details are presented in Sections 4, 5, and 6. A realistic application scenario is described in Section 7, and finally our conclusions are drawn in Section 8
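
    The abstract does not specify RoboNet's wire protocol; as a minimal sketch of the kind of coordination traffic such an ad-hoc WiFi backbone could carry, the following broadcasts a robot's pose and relayed sensor readings as a JSON datagram. The message fields, port, and robot identifier are assumptions for illustration, not part of the published framework.

```python
# Minimal sketch of exchanging coordination/sensor messages over an ad-hoc
# network using UDP broadcast. Message schema, port, and robot IDs are
# hypothetical; RoboNet's actual protocol is not described in the abstract.
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 50000)  # assumed port

def broadcast_state(robot_id, pose, sensor_readings):
    """Send this robot's pose and gathered sensor data to all peers."""
    msg = {
        "robot": robot_id,
        "pose": pose,                  # (x, y, heading)
        "sensors": sensor_readings,    # e.g. data relayed from the WSN
        "timestamp": time.time(),
    }
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(json.dumps(msg).encode(), BROADCAST_ADDR)

if __name__ == "__main__":
    broadcast_state("guide-1", (2.5, 7.1, 0.3), {"temperature": 21.4})
```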

    Conceptual spatial representations for indoor mobile robots

    We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process, and which is used for situated dialogue. Finally, we discuss the capabilities of the integrated system
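
    The abstract describes a model composed of map layers at different levels of abstraction, linking spatial structure to functional concepts. The sketch below is one hypothetical way to organize a metric/topological/conceptual layering in code; the class names and fields are assumptions, not the authors' data structures.

```python
# Hypothetical layered spatial representation: places with metric positions,
# topological connectivity, and conceptual categories from place recognition.
from dataclasses import dataclass, field

@dataclass
class Place:
    name: str                                     # e.g. "room_3"
    position: tuple                               # (x, y) in the metric map
    category: str = "unknown"                     # e.g. "kitchen"
    objects: list = field(default_factory=list)   # recognized objects

@dataclass
class ConceptualMap:
    places: dict = field(default_factory=dict)    # name -> Place
    edges: set = field(default_factory=set)       # topological connectivity

    def add_place(self, place):
        self.places[place.name] = place

    def connect(self, a, b):
        self.edges.add(frozenset((a, b)))

    def places_with(self, category):
        """Answer a conceptual query such as 'where is a kitchen?'."""
        return [p for p in self.places.values() if p.category == category]

m = ConceptualMap()
m.add_place(Place("room_1", (0.0, 0.0), "corridor"))
m.add_place(Place("room_2", (4.0, 1.5), "kitchen", ["mug", "kettle"]))
m.connect("room_1", "room_2")
print(m.places_with("kitchen"))
```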

    Navigation, Path Planning, and Task Allocation Framework For Mobile Co-Robotic Service Applications in Indoor Building Environments

    Recent advances in computing and robotics offer significant potential for improved autonomy in the operation and utilization of today’s buildings. Examples of such building environment functions that could be improved through automation include: a) building performance monitoring for real-time system control and long-term asset management; and b) assisted indoor navigation for improved accessibility and wayfinding. To enable such autonomy, algorithms related to task allocation, path planning, and navigation are required as fundamental technical capabilities. Existing algorithms in these domains have primarily been developed for outdoor environments. However, key technical challenges that prevent the adoption of such algorithms to indoor environments include: a) the inability of the widely adopted outdoor positioning method (Global Positioning System - GPS) to work indoors; and b) the incompleteness of graph networks formed based on indoor environments due to physical access constraints not encountered outdoors. The objective of this dissertation is to develop general and scalable task allocation, path planning, and navigation algorithms for indoor mobile co-robots that are immune to the aforementioned challenges. The primary contributions of this research are: a) route planning and task allocation algorithms for centrally-located mobile co-robots charged with spatiotemporal tasks in arbitrary built environments; b) path planning algorithms that take preferential and pragmatic constraints (e.g., wheelchair ramps) into consideration to determine optimal accessible paths in building environments; and c) navigation and drift correction algorithms for autonomous mobile robotic data collection in buildings. The developed methods and the resulting computational framework have been validated through several simulated experiments and physical deployments in real building environments. Specifically, a scenario analysis is conducted to compare the performance of existing outdoor methods with the developed approach for indoor multi-robotic task allocation and route planning. A simulated case study is performed along with a pilot experiment in an indoor built environment to test the efficiency of the path planning algorithm and the performance of the assisted navigation interface developed considering people with physical disabilities (i.e., wheelchair users) as building occupants and visitors. Furthermore, a case study is performed to demonstrate the informed retrofit decision-making process with the help of data collected by an intelligent multi-sensor fused robot that is subsequently used in an EnergyPlus simulation. The results demonstrate the feasibility of the proposed methods in a range of applications involving constraints on both the environment (e.g., path obstructions) and robot capabilities (e.g., maximum travel distance on a single charge). By focusing on the technical capabilities required for safe and efficient indoor robot operation, this dissertation contributes to the fundamental science that will make mobile co-robots ubiquitous in building environments in the near future.
    PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143969/1/baddu_1.pd
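
    As an illustration of path planning under accessibility constraints of the kind mentioned above (e.g., preferring wheelchair ramps over stairs), the sketch below runs Dijkstra's algorithm on a small indoor graph and simply excludes edges flagged as inaccessible. The graph, costs, and flags are hypothetical, not taken from the dissertation.

```python
# Hypothetical accessible-path planning: Dijkstra on an indoor graph whose
# edges carry an "accessible" flag (e.g., ramps vs. stairs).
import heapq

def shortest_accessible_path(graph, start, goal, require_accessible=True):
    """graph: node -> list of (neighbor, cost, accessible) tuples."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, weight, accessible in graph.get(node, []):
            if require_accessible and not accessible:
                continue  # skip stairs etc. for wheelchair users
            if nbr not in visited:
                heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical building graph: lobby connects to the lab via stairs (shorter
# but inaccessible) or via a ramped corridor (longer but accessible).
graph = {
    "lobby": [("stairs", 5, False), ("corridor", 8, True)],
    "stairs": [("lab", 3, False)],
    "corridor": [("lab", 6, True)],
}
print(shortest_accessible_path(graph, "lobby", "lab"))
```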

    Developing a person guidance module for hospital robots

    This dissertation describes the design and implementation of the Person Guidance Module (PGM) that enables the IWARD (Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery) base robot to offer a route guidance service to patients or visitors inside a hospital. A common problem in large hospital buildings today is that people unfamiliar with the building are unable to find their way around. Although a variety of guide robots currently exist on the market and offer a wide range of guidance and related activities, they do not fit into the modular concept of the IWARD project. The PGM features a robust, non-hierarchical sensor fusion approach combining active RFID, stereovision, and Cricket mote sensors for guiding a patient to the X-ray room, or a visitor to a patient’s ward, in every possible scenario in a complex, dynamic and crowded hospital environment. Moreover, the system automatically adjusts the robot's speed to the pace of the follower for physical comfort. Furthermore, the module performs these tasks relying solely on the robot's onboard perceptual resources, without requiring dedicated infrastructure in the environment, in order to limit hardware installation costs. A similarly comprehensive solution on a single platform has remained elusive in the existing literature. The finished module can be connected to any IWARD base robot using quick-change mechanical connections and standard electrical connections. The PGM module box is equipped with a Gumstix embedded computer for all module computing, which is powered up automatically once the module box is inserted into the robot. In line with the general software architecture of the IWARD project, all software modules are developed as Orca2 components and cross-compiled for the Gumstix's XScale processor. To support standardized communication between different software components, the Internet Communications Engine (Ice) is used as middleware. Additionally, plug-and-play capabilities have been developed and incorporated so that the swarm system is aware at all times of which robot is equipped with the PGM. Finally, in several field trials in hospital environments, the person guidance module has shown its suitability for a challenging real-world application as well as the necessary user acceptance
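
    The abstract notes that the robot's speed is adjusted automatically to the follower's pace. A minimal sketch of such a controller, assuming a fused distance estimate to the follower is already available (the RFID/stereovision/Cricket fusion itself is not reproduced here), might look like the following; the gains, target gap, and speed limit are hypothetical.

```python
# Hypothetical follower-pace controller: keep the follower at a comfortable
# distance by scaling the guide robot's forward speed.
TARGET_GAP_M = 1.2      # assumed comfortable following distance
MAX_SPEED = 0.8         # m/s, assumed platform limit
GAIN = 0.6              # proportional gain, assumed

def guide_speed(follower_distance_m, current_speed):
    """Return the new commanded speed given the fused follower distance."""
    # If the follower falls behind (gap grows), slow down; if they close in,
    # speed back up toward the nominal pace.
    error = follower_distance_m - TARGET_GAP_M
    new_speed = current_speed - GAIN * error
    return max(0.0, min(MAX_SPEED, new_speed))

# Example: a follower drifting back from 1.2 m to 2.0 m slows the robot.
speed = 0.6
for gap in (1.2, 1.5, 2.0, 1.3):
    speed = guide_speed(gap, speed)
    print(f"gap={gap:.1f} m -> speed={speed:.2f} m/s")
```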

    Heterogeneous context-aware robots providing a personalized building tour

    Existing robot guides offer a tour of a building, such as a museum or science centre, to one or more visitors. Usually the tours are predefined and lack support for dynamic interactions between the different robots. This paper focuses on the distributed collaboration of multiple heterogeneous robots (receptionist, companion) guiding visitors through a building. Semantic techniques support the formal definition of tour topics, the available content on a specific topic, and the robot and person profiles including interests and acquired knowledge. The robot guides select topics depending on their participants' interests and prior knowledge. Whenever one guide moves into the proximity of another, the guides automatically exchange participants, maximizing the number of topics that are interesting to each participant. Robot collaboration is realized through a software module that allows a robot to transparently include behaviours performed by other robots in its own set of behaviours. The multi-robot visitor guide application is integrated into an extended distributed heterogeneous robot team, using a receptionist robot that was not originally designed to cooperate with the guides. Evaluation of the implemented algorithms shows 90% coverage of the content relevant to the participants
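
    As an illustration of how a guide might select topics given participants' interests and prior knowledge, the sketch below scores candidate topics by how many interested participants have not yet covered them. The scoring rule and the sample data are hypothetical and stand in for the semantic reasoning described in the paper.

```python
# Hypothetical topic selection: prefer topics that interest the most
# participants who have not already covered them.
def rank_topics(topics, participants):
    """participants: list of dicts with 'interests' and 'known' sets."""
    def score(topic):
        return sum(
            1
            for p in participants
            if topic in p["interests"] and topic not in p["known"]
        )
    return sorted(topics, key=score, reverse=True)

participants = [
    {"interests": {"impressionism", "sculpture"}, "known": {"sculpture"}},
    {"interests": {"impressionism"}, "known": set()},
    {"interests": {"modern art"}, "known": set()},
]
print(rank_topics(["impressionism", "sculpture", "modern art"], participants))
# -> 'impressionism' ranks first: two interested participants, none covered it
```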

    Autonomous Tour Guide Robot using RF Module

    Tourism and tourist locations are a major source of income for any country's economy. A good tourist location can help improve the economic standing of a place, so adequate measures to improve the tourism sector are important. The best way to see and enjoy a place is to explore it on our own, because what we discover ourselves stays with us; this is the main idea behind this project. This paper presents a tour guide robot that takes tourists through a location of interest using a robot navigator. The device is controlled by an Arduino module. The user pairs with the device through a mobile application and starts the robot; once started, it takes the user to the different locations and informs them of the details or significance of each location. The device is also equipped with a panic button and a heart-rate sensor that help inform the authorities if an emergency situation arises
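
    The abstract sketches the robot's control flow at a high level: pair with the app, visit the stops in sequence, announce each stop, and raise an alert if the panic button or heart-rate sensor indicates an emergency. The following sketch mirrors that flow in Python with stubbed sensor and motion calls; the actual system runs on an Arduino, and none of these function names, stops, or thresholds come from the paper.

```python
# Hypothetical tour-guide control loop mirroring the flow described above.
# Sensor reads and motion commands are stubbed; the real system is Arduino-based.
import random

TOUR_STOPS = [
    ("Main Gate", "Built in 1890, the original entrance to the site."),
    ("Fountain Court", "A popular gathering spot with a marble fountain."),
]
HEART_RATE_ALARM_BPM = 140  # assumed threshold

def read_heart_rate():      # stub for the heart-rate sensor
    return random.randint(60, 100)

def panic_pressed():        # stub for the panic button
    return False

def alert_authorities(reason):
    print(f"EMERGENCY: {reason} - notifying authorities")

def run_tour():
    for name, description in TOUR_STOPS:
        print(f"Navigating to {name}...")
        # ...drive to the stop, then announce it to the user...
        print(f"Arrived at {name}: {description}")
        if panic_pressed():
            alert_authorities("panic button pressed")
            return
        if read_heart_rate() > HEART_RATE_ALARM_BPM:
            alert_authorities("abnormal heart rate detected")
            return
    print("Tour complete.")

if __name__ == "__main__":
    run_tour()
```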