12,774 research outputs found

    Mastery level, attitude, and interest of Kolej Kemahiran Tinggi MARA students towards the English language subject

    This study was conducted to identify the mastery level, attitude, and interest of Kolej Kemahiran Tinggi MARA Sri Gading students towards English. The study is descriptive in design, better known as a survey method. A total of 325 Diploma in Construction Technology students from Kolej Kemahiran Tinggi MARA in the Batu Pahat district were selected as the sample. Data obtained through a questionnaire instrument were analysed to obtain means, standard deviations, and the Pearson correlation coefficient in order to examine relationships among the findings, while frequencies and percentages were used to measure the students' mastery. The findings show that the students' mastery of English is at a moderate level, and that the main factor influencing English mastery is interest, followed by attitude. The Pearson correlation results also show a significant relationship between attitude and English mastery and between interest and English mastery: the more positive the students' attitude and interest towards the teaching and learning of English, the higher their achievement. These findings are expected to help students improve their mastery of English by cultivating a positive attitude and strengthening their interest in the language, and to serve as a guide for the parties involved in future research.
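    As a concrete illustration of the descriptive and correlational analysis described above, the sketch below computes means, standard deviations, and Pearson correlations between attitude/interest scores and English proficiency. The data, variable names, and scoring scale are hypothetical placeholders, not the study's actual instrument; scipy is assumed to be available.

```python
# Illustrative sketch of the descriptive/correlational analysis described above.
# The variable names and sample data are hypothetical, not the study's instrument.
import numpy as np
from scipy import stats

# Hypothetical per-student scores: 5-point Likert means for attitude and interest,
# and an English proficiency score out of 100.
attitude    = np.array([3.2, 4.1, 2.8, 3.9, 4.4, 3.0, 3.7])
interest    = np.array([3.5, 4.3, 2.6, 4.0, 4.6, 2.9, 3.8])
proficiency = np.array([55., 72., 48., 66., 78., 50., 63.])

for name, scores in [("attitude", attitude), ("interest", interest)]:
    r, p = stats.pearsonr(scores, proficiency)          # Pearson correlation and p-value
    print(f"{name}: mean={scores.mean():.2f}, sd={scores.std(ddof=1):.2f}, r={r:.2f}, p={p:.3f}")
```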

    Operation of an Intelligent Space Using Heterogeneous Sensors

    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, August 2014. Beom Hee Lee. A new approach to multi-sensor operation is presented for an intelligent space based on heterogeneous vision sensors and robots equipped with an infrared (IR) sensor. The intelligent space is a system that exists in the robots' task space, supports the robots' missions, and can take control of the robots in particular situations. The conventional intelligent space consists solely of static cameras; adopting multiple heterogeneous sensors, together with a technique for operating them, is required to extend its capabilities. First, this dissertation presents the sub-systems for each sensor group in the proposed intelligent space. The vision sensors consist of two groups: static (fixed) cameras and dynamic (pan-tilt) cameras. Each sub-system can detect and track the robots. The sub-system using static cameras localizes the robot with a high degree of accuracy; in this system, a handoff method based on the world-to-pixel transformation is proposed so that the multiple static cameras interwork. The sub-system using dynamic cameras is designed to provide various views without losing the robot from view; in this system, a handoff method is proposed that uses the predicted positions of the robot, the relationships among the cameras, and the relationship between the robot and the camera so that the multiple dynamic cameras interwork. The robot system localizes itself using an IR sensor and IR tags; the IR sensor can localize the robot even when the illumination of the environment is low. For robust tracking, a sensor selection method is proposed that exploits the advantages of these sensors under environmental changes in the task space. For the selection method, we define an interface protocol among the sub-systems, sensor priorities, and selection criteria. The proposed method is suitable for a real-time system, since it has a lower computational cost than sensor fusion methods. The performance of each sensor group is verified through various experiments.
In addition, multi-sensor operation using the proposed sensor selection method is experimentally verified in environments with occlusion and low illumination.
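    The dissertation's interface protocol, sensor priorities, and selection criteria are not reproduced in the abstract; the sketch below only illustrates the general shape of a priority-based sensor selection among static cameras, dynamic cameras, and an onboard IR sensor. All names, priorities, and validity checks are assumptions made for illustration.

```python
# Hypothetical priority-based sensor selection: prefer static cameras, fall back to
# dynamic cameras, then the onboard IR sensor. The priorities and validity criteria
# here are illustrative placeholders, not the dissertation's actual protocol.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SensorReport:
    group: str                                # "static", "dynamic", or "ir"
    position: Optional[Tuple[float, float]]   # estimated robot position, None if lost
    illumination_ok: bool                     # vision groups need sufficient illumination

PRIORITY = ["static", "dynamic", "ir"]        # assumed ordering, highest accuracy first

def select_estimate(reports: list) -> Optional[Tuple[str, Tuple[float, float]]]:
    """Return (group, position) from the highest-priority sensor group with a valid estimate."""
    by_group = {r.group: r for r in reports}
    for group in PRIORITY:
        r = by_group.get(group)
        if r is None or r.position is None:
            continue                          # this group lost the robot (e.g. occlusion)
        if group in ("static", "dynamic") and not r.illumination_ok:
            continue                          # cameras are unusable in low illumination
        return group, r.position
    return None                               # no sensor group has a valid estimate
```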

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
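    For reference, the formulation commonly called the de-facto standard for SLAM is maximum a posteriori (MAP) estimation over a factor graph; a compact, generic statement (notation ours, not taken verbatim from the paper, and assuming independent Gaussian measurement noise and a flat prior) is:

```latex
% MAP estimation over a factor graph: the variables X (robot trajectory and map)
% are chosen to best explain the measurements Z = {z_k}, each with measurement
% model h_k(.) and information matrix \Omega_k under a Gaussian noise assumption.
\begin{align}
X^{*} &= \operatorname*{arg\,max}_{X}\; p(X \mid Z)
       = \operatorname*{arg\,max}_{X}\; \prod_{k} p\bigl(z_{k} \mid X_{k}\bigr) \\
      &= \operatorname*{arg\,min}_{X}\; \sum_{k} \bigl\lVert h_{k}(X_{k}) - z_{k} \bigr\rVert^{2}_{\Omega_{k}}
\end{align}
```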

    Robotic Wireless Sensor Networks

    In this chapter, we present a literature survey of an emerging, cutting-edge, and multi-disciplinary field of research at the intersection of Robotics and Wireless Sensor Networks (WSN), which we refer to as Robotic Wireless Sensor Networks (RWSN). We define an RWSN as an autonomous networked multi-robot system that aims to achieve certain sensing goals while meeting and maintaining certain communication performance requirements through cooperative control, learning, and adaptation. While both of the component areas, i.e., Robotics and WSN, are well known and well explored, there exists a whole set of new opportunities and research directions at the intersection of these two fields that are relatively or even completely unexplored. One such example is the use of a set of robotic routers to set up a temporary communication path between a sender and a receiver, exploiting controlled mobility to the advantage of packet routing; a sketch of this idea follows the abstract. We find that only a limited number of articles can be directly categorized as RWSN-related work, whereas a range of articles in the robotics and WSN literature are also relevant to this new field of research. To connect the dots, we first identify the core problems and research trends related to RWSN, such as connectivity, localization, routing, and robust flow of information. Next, we classify the existing research on RWSN, as well as the relevant state of the art from the robotics and WSN communities, according to the problems and trends identified in the first step. Lastly, we analyze what is missing in the existing literature and identify topics that require more research attention in the future.
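    To make the robotic-router example above concrete, the sketch below places the minimum number of relay robots evenly along the segment between a sender and a receiver so that no hop exceeds an assumed radio range. The function name, geometry, and range value are hypothetical, not drawn from the surveyed literature.

```python
# Hypothetical illustration of the robotic-router idea: place relay robots evenly
# on the sender-receiver segment so every hop stays within an assumed radio range.
import math

def place_routers(sender, receiver, radio_range):
    """Return waypoint positions for the minimum number of relay robots needed."""
    dx, dy = receiver[0] - sender[0], receiver[1] - sender[1]
    dist = math.hypot(dx, dy)
    hops = max(1, math.ceil(dist / radio_range))   # hops needed to cover the distance
    # One router at each interior division point; the endpoints are sender and receiver.
    return [(sender[0] + dx * i / hops, sender[1] + dy * i / hops) for i in range(1, hops)]

print(place_routers((0.0, 0.0), (95.0, 0.0), radio_range=30.0))  # 3 relays, 23.75 m spacing
```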

    Graph Optimization Approach to Range-based Localization

    In this paper, we propose a general graph-optimization-based framework for localization, which can accommodate different types of measurements with varying measurement time intervals. Special emphasis is placed on range-based localization. Range and trajectory-smoothness constraints are constructed in a position graph, and the robot trajectory over a sliding window is then estimated by a graph-based optimization algorithm. Moreover, a convergence analysis of the algorithm is provided, and the effects of the number of iterations and of the window size on the localization accuracy are analyzed. Extensive experiments on a quadcopter under a variety of scenarios verify the effectiveness of the proposed algorithm and demonstrate much higher localization accuracy than existing range-based localization methods, especially in the altitude direction.
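    A minimal sketch of this kind of position-graph formulation is shown below, assuming known anchor positions, one range measurement to each anchor at every step in the window, and a simple smoothness residual between consecutive positions. The weights, window handling, and solver choice are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sliding-window position graph: range residuals to known anchors plus
# a trajectory-smoothness residual between consecutive positions, solved by nonlinear
# least squares. Anchors, weights, and window size are assumptions for the sketch.
import numpy as np
from scipy.optimize import least_squares

ANCHORS = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0], [10.0, 10.0, 5.0]])
SMOOTH_W = 2.0          # weight on the smoothness constraint

def residuals(x, ranges):
    """x: flattened window of 3-D positions; ranges: (window, n_anchors) measured distances."""
    pos = x.reshape(-1, 3)
    res = []
    for p, r in zip(pos, ranges):                         # range constraints
        res.extend(np.linalg.norm(ANCHORS - p, axis=1) - r)
    for p0, p1 in zip(pos[:-1], pos[1:]):                 # smoothness constraints
        res.extend(SMOOTH_W * (p1 - p0))
    return np.array(res)

def estimate_window(ranges, init):
    """Estimate the window of positions given range measurements and an initial guess."""
    return least_squares(residuals, init.ravel(), args=(ranges,)).x.reshape(-1, 3)
```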

    A mosaic of eyes

    Autonomous navigation is a traditional research topic in intelligent robotics and vehicles; it requires a robot to perceive its environment through onboard sensors such as cameras or laser scanners in order to drive to its goal. Most research to date has focused on developing a large and smart brain to give robots autonomous capability. There are three fundamental questions to be answered by an autonomous mobile robot: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer these basic questions, a robot requires a massive spatial memory and considerable computational resources to accomplish perception, localization, path planning, and control. It is not yet possible to deliver the centralized intelligence required for real-life applications, such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then making decisions accordingly. They may encounter the following difficulties.

    A practical multirobot localization system

    We present fast and precise vision-based software intended for multiple-robot localization. The core component of the software is a novel and efficient algorithm for black-and-white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and has a computational complexity independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. In addition, we present the method's mathematical model, which allows one to estimate the expected localization precision, area of coverage, and processing speed from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also make the source code publicly available at http://purl.org/robotics/whycon so that it can be used as an enabling technology for various mobile robotics problems.
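    A minimal sketch of the style of model described, assuming a plain pinhole camera: with a focal length in pixels, a circular pattern of known physical diameter, and roughly one pixel of detection uncertainty, the distance to the pattern and the expected precision can be estimated. The formulas are standard pinhole geometry and the numbers are hypothetical; this is not the paper's exact model.

```python
# Illustrative pinhole-geometry estimate of the distance to a circular pattern and
# the expected precision for ~1 px detection error. Standard geometry, not the
# paper's exact model; the numbers below are hypothetical.
def pattern_distance(focal_px, pattern_diameter_m, apparent_diameter_px):
    """Distance at which a pattern of known size projects to the observed pixel diameter."""
    return focal_px * pattern_diameter_m / apparent_diameter_px

def expected_precision(focal_px, pattern_diameter_m, apparent_diameter_px, pixel_error=1.0):
    """Change in estimated distance caused by a `pixel_error` change in apparent diameter."""
    z0 = pattern_distance(focal_px, pattern_diameter_m, apparent_diameter_px)
    z1 = pattern_distance(focal_px, pattern_diameter_m, apparent_diameter_px - pixel_error)
    return abs(z1 - z0)

z = pattern_distance(700.0, 0.12, 40.0)              # ~2.1 m for a 12 cm pattern at 40 px
print(f"distance = {z:.2f} m, precision = {expected_precision(700.0, 0.12, 40.0)*100:.1f} cm")
```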