
    Operation of an Intelligent Space Using Heterogeneous Sensors

    Thesis (Ph.D.) -- Graduate School of Seoul National University, Department of Electrical and Computer Engineering, August 2014. Advisor: Beom-Hee Lee. A new approach to multi-sensor operation in an intelligent space is presented, based on multiple heterogeneous vision sensors and robots equipped with an infrared (IR) sensor. The intelligent space is a system that exists in the task space of robots, supports the robots' missions, and can take control of the robots in particular situations. The conventional intelligent space consists solely of static cameras. However, extending the capability of the intelligent space requires the adoption of multiple heterogeneous sensors and an operation technique for those sensors. First, this dissertation presents the sub-systems for each sensor group in the proposed intelligent space. The vision sensors consist of two groups: static (fixed) cameras and dynamic (pan-tilt) cameras. Each sub-system can detect and track the robots. The sub-system using static cameras localizes the robot with a high degree of accuracy. In this system, a handoff method based on the world-to-pixel transformation is proposed so that the multiple static cameras can interwork. The sub-system using dynamic cameras is designed to provide various views without losing the robot from view. In this system, a handoff method is proposed that uses the predicted positions of the robot, the relationships among the cameras, and the relationship between the robot and the camera, so that the multiple dynamic cameras can interwork. The robot system localizes itself using an IR sensor and IR tags. The IR sensor can localize the robot even when the illumination of the environment is low. For robust tracking, a sensor selection method is proposed that exploits the advantages of these sensors under environmental changes in the task space. For the selection method, we define an interface protocol among the sub-systems, sensor priorities, and selection criteria. The proposed method is suitable for a real-time system because it has a lower computational cost than sensor fusion methods. The performance of each sensor group is verified through various experiments.
    In addition, multi-sensor operation using the proposed sensor selection method is experimentally verified in environments with occlusion and low illumination.
    Table of contents:
    Chapter 1 Introduction: 1.1 Background and Motivation; 1.2 Related Work; 1.3 Contributions; 1.4 Organization
    Chapter 2 Overview of Intelligent Space: 2.1 Original Concept of Intelligent Space; 2.2 Related Research; 2.3 Problem Statement and Objective
    Chapter 3 Architecture of a Proposed Intelligent Space: 3.1 Hardware Architecture (3.2.1 Metallic Structure; 3.2.2 Static Cameras; 3.2.3 Dynamic Cameras; 3.2.4 Infrared (IR) Camera and Passive IR Tags; 3.2.5 Mobile Robots); 3.2 Software Architecture
    Chapter 4 Localization and Tracking of Mobile Robots in a Proposed Intelligent Space: 4.1 Localization and Tracking with an IR Sensor Mounted on Robots (4.1.1 Deployment of IR Tags; 4.1.2 Localization and Tracking Using an IR Sensor); 4.2 Localization and Tracking with Multiple Dynamic Cameras (4.2.1 Localization and Tracking Based on the Geometry between a Robot and a Single Dynamic Camera; 4.2.2 Proposed Predictive Handoff among Dynamic Cameras); 4.3 Localization and Tracking with Multiple Static Cameras (4.3.1 Preprocess for Static Cameras; 4.3.2 Marker-based Localization and Tracking of Multiple Robots; 4.3.3 Proposed Reprojection-based Handoff among Static Cameras)
    Chapter 5 Operation with Heterogeneous Sensor Groups: 5.1 Interface Protocol among Sensor Groups; 5.2 Sensor Selection for an Operation Using Heterogeneous Sensors; 5.3 Proposed Operation with Static Cameras and Dynamic Cameras; 5.4 Proposed Operation with the iSpace and Robots
    Chapter 6 Experimental Results: 6.1 Experimental Setup; 6.2 Experimental Results of Localization (6.2.1 Results using Static Cameras and Dynamic Cameras; 6.2.2 Results using the IR Sensor); 6.3 Experimental Results of Tracking (6.3.1 Results using Static and Dynamic Cameras; 6.3.2 Results using the IR Sensor); 6.4 Experimental Results using Heterogeneous Sensors (6.4.1 Results in Environment with Occlusion; 6.4.2 Results in Low-illumination Environment); 6.5 Discussion
    Chapter 7 Conclusions
    Bibliography
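    The sensor selection method is described above only at a high level (an interface protocol, sensor priorities, and selection criteria), so the following Python sketch shows one way such a priority-based selection could look. The SensorReport fields, the priority ordering (static cameras over dynamic cameras over IR), and the usability rules are hypothetical assumptions for illustration, not the dissertation's actual protocol.

# Minimal sketch of priority-based selection among heterogeneous sensor
# groups (static cameras, dynamic cameras, on-robot IR sensor).
# The priorities and usability criteria below are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReport:
    group: str             # "static_cam", "dynamic_cam", or "ir"
    robot_visible: bool    # robot detected by this sensor group
    illumination_ok: bool  # scene bright enough for the vision-based groups
    position: Optional[tuple] = None  # (x, y) estimate in the world frame, if any

# Assumed priority: static cameras (most accurate) > dynamic cameras > IR.
PRIORITY = {"static_cam": 0, "dynamic_cam": 1, "ir": 2}

def select_sensor(reports: list[SensorReport]) -> Optional[SensorReport]:
    """Pick the highest-priority sensor group whose report is currently usable."""
    def usable(r: SensorReport) -> bool:
        if r.group == "ir":
            # IR localization still works in low illumination.
            return r.robot_visible
        return r.robot_visible and r.illumination_ok

    candidates = [r for r in reports if usable(r)]
    if not candidates:
        return None  # no sensor group can track the robot right now
    return min(candidates, key=lambda r: PRIORITY[r.group])

if __name__ == "__main__":
    reports = [
        SensorReport("static_cam", robot_visible=False, illumination_ok=True),  # occluded
        SensorReport("dynamic_cam", robot_visible=True, illumination_ok=True),
        SensorReport("ir", robot_visible=True, illumination_ok=False, position=(1.2, 3.4)),
    ]
    chosen = select_sensor(reports)
    print("tracking with:", chosen.group if chosen else "none")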

    Characterization of a Multi-User Indoor Positioning System Based on Low Cost Depth Vision (Kinect) for Monitoring Human Activity in a Smart Home

    An increasing number of systems use indoor positioning for scenarios such as asset tracking, health care, games, manufacturing, logistics, shopping, and security. Many technologies are available, and the use of depth cameras is becoming more and more attractive as this kind of device becomes affordable and easy to handle. This paper contributes to the effort of creating an indoor positioning system based on low-cost depth cameras (Kinect). A method is proposed to optimize the calibration of the depth cameras, to describe the multi-camera data fusion, and to specify a global positioning projection that maintains compatibility with outdoor positioning systems. The monitoring of people's trajectories at home is intended for the early detection of shifts in daily activities that indicate disability and loss of autonomy. This system is meant to improve homecare health management for a better end of life at a sustainable cost to the community.
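    The global positioning projection is only summarized above, so here is a minimal sketch, under stated assumptions, of how an indoor (x, y) position in metres could be mapped to latitude/longitude to stay compatible with outdoor positioning. The anchor point, heading parameter, and the simple equirectangular approximation are illustrative choices, not the authors' exact method.

# Sketch: project a local indoor position (metres) into global latitude/longitude.
# The anchor, heading, and equirectangular approximation are assumptions.
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS84 equatorial radius

def local_to_global(x_m: float, y_m: float,
                    anchor_lat: float, anchor_lon: float,
                    heading_deg: float = 0.0) -> tuple[float, float]:
    """Map a point (x east, y north, in metres) of the room frame to lat/lon.

    heading_deg rotates the room frame so its y axis aligns with true north.
    Valid only over the small extents typical of a home (tens of metres).
    """
    theta = math.radians(heading_deg)
    east = x_m * math.cos(theta) - y_m * math.sin(theta)
    north = x_m * math.sin(theta) + y_m * math.cos(theta)

    dlat = north / EARTH_RADIUS_M
    dlon = east / (EARTH_RADIUS_M * math.cos(math.radians(anchor_lat)))
    return (anchor_lat + math.degrees(dlat),
            anchor_lon + math.degrees(dlon))

if __name__ == "__main__":
    # Hypothetical anchor: the room origin surveyed at a known lat/lon.
    print(local_to_global(3.0, 4.5, anchor_lat=45.188, anchor_lon=5.724))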

    Damage Identification in Structural Health Monitoring: A Brief Review from Its Implementation to the Use of Data-Driven Applications

    The damage identification process provides relevant information about the current state of a structure under inspection, and it can be approached from two different points of view. The first approach uses data-driven algorithms, which are usually associated with the collection of data using sensors; the data are subsequently processed and analyzed. The second approach uses models to analyze information about the structure. In the latter case, the overall performance of the approach depends on the accuracy of the model and the information used to define it. Although both approaches are widely used, data-driven algorithms are preferred in most cases because they can analyze data acquired from sensors and provide a real-time solution for decision making; however, these approaches require high-performance processors due to their high computational cost. As a contribution to researchers working with data-driven algorithms and applications, this work presents a brief review of data-driven algorithms for damage identification in structural health-monitoring applications. This review covers damage detection, localization, classification, extension, and prognosis, as well as the development of smart structures. The literature is systematically reviewed according to the natural steps of a structural health-monitoring system. This review also includes information on the types of sensors used as well as on the development of data-driven algorithms for damage identification.
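    As a concrete illustration of the data-driven approach the review describes (sensor data collected from the structure, then processed and analyzed), here is a minimal sketch of one common baseline scheme: fit a PCA model to measurements from the healthy structure and flag new measurements whose reconstruction error is abnormal. The dimensions, threshold rule, and simulated data are assumptions for illustration, not taken from the review.

# Sketch of a data-driven damage-detection step: PCA baseline on healthy
# sensor data plus a reconstruction-error threshold. Parameters are illustrative.
import numpy as np

def fit_baseline(healthy: np.ndarray, n_components: int = 3):
    """healthy: (samples, sensors) matrix recorded on the undamaged structure."""
    mean = healthy.mean(axis=0)
    centered = healthy - mean
    # Principal directions via SVD of the centered baseline data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    residuals = centered - centered @ components.T @ components
    errors = np.sum(residuals**2, axis=1)
    # Simple threshold: mean + 3 std of the baseline reconstruction error.
    threshold = errors.mean() + 3.0 * errors.std()
    return mean, components, threshold

def is_damaged(sample: np.ndarray, mean, components, threshold) -> bool:
    """Flag a new (sensors,) measurement vector as anomalous."""
    centered = sample - mean
    residual = centered - centered @ components.T @ components
    return float(residual @ residual) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    healthy = rng.normal(size=(200, 8))          # simulated healthy responses
    model = fit_baseline(healthy)
    print(is_damaged(rng.normal(size=8), *model))        # likely False
    print(is_damaged(rng.normal(size=8) + 5.0, *model))  # likely True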

    Towards Odor-Sensitive Mobile Robots

    J. Monroy, J. Gonzalez-Jimenez, "Towards Odor-Sensitive Mobile Robots", Electronic Nose Technologies and Advances in Machine Olfaction, IGI Global, pp. 244-263, 2018, doi:10.4018/978-1-5225-3862-2.ch012. Preprint version, with the publisher's permission. Out of all the components of a mobile robot, its sensorial system is undoubtedly among the most critical ones when operating in real environments. Until now, these sensorial systems have mostly relied on range sensors (laser scanners, sonar, active triangulation) and cameras. While electronic noses have barely been employed, they can provide complementary sensory information that is vital for some applications, as it is for humans. This chapter analyzes the motivation for providing a robot with gas-sensing capabilities and reviews some of the hurdles that are preventing smell from achieving the importance of other sensing modalities in robotics. The achievements made so far are reviewed to illustrate the current status of the three main fields within robotic olfaction: the classification of volatile substances, the spatial estimation of gas dispersion from sparse measurements, and the localization of the gas source within a known environment.
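    One of the three problems listed above, spatial estimation of gas dispersion from sparse measurements, can be illustrated with a small sketch: a Gaussian-kernel weighted average of sparse concentration readings over a grid, in the spirit of kernel-based gas-distribution mapping. The kernel width, grid, and simulated plume are illustrative assumptions rather than the chapter's specific algorithms.

# Sketch: estimate a gas-distribution map on a grid from sparse point
# measurements using a Gaussian-kernel weighted average. Parameters are illustrative.
import numpy as np

def gas_map(positions: np.ndarray, readings: np.ndarray,
            grid_x: np.ndarray, grid_y: np.ndarray,
            sigma: float = 0.5) -> np.ndarray:
    """Estimate concentration on a grid from sparse samples.

    positions: (n, 2) measurement locations; readings: (n,) sensor values.
    Returns an array of shape (len(grid_y), len(grid_x)).
    """
    gx, gy = np.meshgrid(grid_x, grid_y)                # grid cell coordinates
    cells = np.stack([gx.ravel(), gy.ravel()], axis=1)  # (cells, 2)
    d2 = ((cells[:, None, :] - positions[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma**2))                  # (cells, n) kernel weights
    est = (w @ readings) / np.maximum(w.sum(axis=1), 1e-9)
    return est.reshape(gy.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pos = rng.uniform(0, 5, size=(30, 2))                     # sparse sample points
    vals = np.exp(-np.linalg.norm(pos - [2.5, 2.5], axis=1))  # simulated plume
    grid = np.linspace(0, 5, 50)
    m = gas_map(pos, vals, grid, grid)
    print("estimated peak near grid index:", np.unravel_index(m.argmax(), m.shape))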