395 research outputs found

    SEMANTIC ANALYSIS AND UNDERSTANDING OF HUMAN BEHAVIOUR IN VIDEO STREAMING

    This thesis investigates the semantic analysis of human behaviour captured by video streaming, from both the theoretical and the technological points of view. Video analysis based on semantic content is still an open issue for the computer vision research community, especially where real-time analysis of complex scenes is concerned. Automated video analysis can be described and performed at different abstraction levels, from pixel analysis up to human behaviour understanding. Similarly, the organisation of computer vision systems is often hierarchical, with low-level image processing techniques feeding into tracking algorithms and then into higher-level scene analysis and/or behaviour analysis modules. Each level of this hierarchy has its open issues, the main ones being:
    - motion and object detection: dynamic background modelling, ghosts, sudden changes in illumination conditions;
    - object tracking: modelling and estimating the dynamics of moving objects, presence of occlusions;
    - human behaviour identification: human behaviour patterns are characterized by ambiguity, inconsistency and time-variance.
    Researchers have proposed various approaches that partially address some aspects of the above issues from the perspective of semantic analysis and understanding of video streams. Much progress has been achieved, but usually not in a comprehensive way and often without reference to actual operating conditions. A popular class of approaches enhances the quality of the semantic analysis by exploiting background knowledge about the scene and/or the human behaviour, narrowing the huge variety of possible behavioural patterns by focusing on a specific narrow domain.
In general, the main drawback of existing approaches to semantic analysis of human behaviour, even in narrow domains, is inefficiency, due to the high computational complexity of the models representing the dynamics of moving objects and the patterns of human behaviour. From this perspective, this thesis explores an innovative, original approach to human behaviour analysis and understanding based on the syntactical symbolic analysis of images and video streams described by means of strings of symbols. A symbol is associated with each area of the analysed scene. When a moving object enters an area, the corresponding symbol is appended to the string describing the motion. This approach characterizes the motion of a moving object as a word composed of symbols. By studying and classifying these words we can categorize and understand the various behaviours. The main advantage of this approach lies in the simplicity of the scene and motion descriptions, so that the behaviour analysis has limited computational complexity thanks to the intrinsic nature of both the representations and the operations used to manipulate them. Moreover, the structure of the representations is well suited to parallel processing, allowing the analysis to be sped up when appropriate hardware architectures are used. The theoretical background, the original theoretical results underlying this approach, the human behaviour analysis methodology, the possible implementations, and the related performance are presented and discussed in the thesis. To show the effectiveness of the proposed approach, a demonstrative system has been implemented and applied to a real indoor environment, with valuable results. Furthermore, this thesis proposes an innovative method to improve the overall performance of the object tracking algorithm.
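As an illustrative sketch of the string-of-symbols idea (the area names, symbols and behaviour patterns below are invented for this toy example, not taken from the thesis), a trajectory becomes a word and behaviours are recognised by matching words against symbolic patterns:

```python
import re

# Hypothetical scene layout: each named area of the scene gets one symbol.
AREA_SYMBOLS = {"entrance": "E", "corridor": "C", "desk": "D", "exit": "X"}

def trajectory_to_word(areas):
    """Append one symbol per area the object enters, collapsing repeats."""
    word = ""
    for area in areas:
        symbol = AREA_SYMBOLS[area]
        if not word or word[-1] != symbol:
            word += symbol
    return word

# Hypothetical behaviour classes expressed as patterns over the words.
BEHAVIOURS = {
    "walk_through": re.compile(r"ECX"),
    "loitering": re.compile(r"E(CD)+C?X?"),
}

def classify(word):
    """Return the names of all behaviour patterns the word matches."""
    return [name for name, pattern in BEHAVIOURS.items()
            if pattern.fullmatch(word)]

word = trajectory_to_word(["entrance", "corridor", "corridor", "exit"])
print(word, classify(word))  # ECX ['walk_through']
```

Because the descriptions are plain strings, classification reduces to pattern matching, which is where the low computational cost emphasised above comes from; different words (or segments of long words) can also be checked independently, hinting at the parallelism the thesis mentions.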
This method is based on using two cameras to record the same scene from different points of view, without introducing any constraint on the cameras' positions. The image fusion task is performed by solving the correspondence problem only for a few relevant points. This approach reduces the problem of partial occlusions in crowded scenes. Since this method works at a level lower than that of the semantic analysis, it can also be applied in other systems for human behaviour analysis, and it can be seen as an optional enhancement to the semantic analysis, because it reduces the problem of partial occlusions.

    Irish Machine Vision and Image Processing Conference Proceedings 2017


    Activity Report 2021 : Automatic Control, Lund University


    Real-time performance-focused on localisation techniques for autonomous vehicle: a review


    Camera-Based Distance Measurement and Localization for Autonomous Driving (자율주행을 위한 카메라 기반 거리 측정 및 측위)

    Thesis (Ph.D.) -- Department of Electrical and Computer Engineering, College of Engineering, Graduate School, Seoul National University, August 2017. Seung-Woo Seo. Automated driving vehicles and advanced driver assistance systems (ADAS) have continued to be an important research topic in the transportation area. They promise to reduce road accidents and eliminate traffic congestion. Automated driving vehicles are composed of two parts: on-board sensors are used to observe the environment, and the captured sensor data are then processed to interpret the environment and to make appropriate driving decisions. Some sensors are already widely used in existing driver-assistance systems: camera systems are used in lane-keeping systems to recognize lanes on roads; radars (Radio Detection And Ranging) are used in adaptive cruise systems to measure the distance to the vehicle ahead so that a safe distance can be guaranteed; LIDAR (Light Detection And Ranging) sensors are used in autonomous emergency braking systems to detect other vehicles or pedestrians in the vehicle's path to avoid collisions; accelerometers are used to measure vehicle speed changes, which are especially useful for air-bags; wheel encoder sensors are used to measure wheel rotations in a vehicle's anti-lock brake system; and GPS sensors are embedded in vehicles to provide the global position of the vehicle for path navigation. In this dissertation, we cover three important applications for automated driving vehicles using camera sensors in vehicular environments. Firstly, precise and robust distance measurement is one of the most important requirements for driving assistance systems and automated driving systems. We propose a new method for providing accurate distance measurements through a frequency-domain analysis based on a stereo camera, exploiting key information obtained from the analysis of captured images. Secondly, precise and robust localization is another important requirement for safe automated driving.
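The first contribution above estimates object distance with a stereo camera; whatever method refines the disparity, the final conversion to metric distance in a rectified stereo rig follows the standard relation Z = f·B/d. A minimal sketch of that last step (the focal length and baseline below are illustrative, KITTI-like values, not parameters from the dissertation):

```python
def disparity_to_distance(disparity_px, focal_px, baseline_m):
    """Rectified-stereo relation Z = f * B / d.

    disparity_px: pixel disparity between left and right views
    focal_px:     focal length in pixels
    baseline_m:   distance between the camera centres in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_m / disparity_px

# Illustrative KITTI-like rig: f ~ 721 px, baseline ~ 0.54 m.
# A 30 px disparity then corresponds to roughly 13 m.
print(round(disparity_to_distance(30.0, 721.0, 0.54), 2))  # 12.98
```

Distance error grows quadratically with range (a one-pixel disparity error matters far more at small disparities), which is why sub-pixel disparity refinement, such as the frequency-domain analysis proposed here, pays off for distant objects.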
We propose a method for robust localization in diverse driving situations that measures the vehicle's position using a camera, with respect to a given map, for vision-based navigation. The proposed method includes a technique for removing dynamic objects while preserving features in vehicular environments, using a background model accumulated from previous frames, and we improve image quality using the illuminant-invariance characteristics of log-chromaticity. We also propose a vehicle localization method using the structure tensor and mutual information theory. Finally, we propose a novel algorithm for estimating the drivable collision-free space for autonomous navigation of on-road vehicles. In contrast to previous approaches that use stereo cameras or LIDAR alone, we solve this problem using a sensor fusion of cameras and LIDAR.
    Contents:
    1 Introduction
      1.1 Background and Motivations
      1.2 Contributions and Outline of the Dissertation
        1.2.1 Accurate Object Distance Estimation based on Frequency-Domain Analysis with a Stereo Camera
        1.2.2 Visual Map Matching based on Structural Tensor and Mutual Information using 3D High Resolution Digital Map
        1.2.3 Free Space Computation using a Sensor Fusion of LIDAR and RGB Camera in Vehicular Environments
    2 Accurate Object Distance Estimation based on Frequency-Domain Analysis with a Stereo Camera
      2.1 Introduction
      2.2 Related Works
      2.3 Algorithm Description
        2.3.1 Overall Procedure
        2.3.2 Preliminaries
        2.3.3 Pre-processing
      2.4 Frequency-domain Analysis
        2.4.1 Procedure
        2.4.2 Contour-based Cost Computation
      2.5 Cost Optimization and Distance Estimation
        2.5.1 Disparity Optimization
        2.5.2 Post-processing and Distance Estimation
      2.6 Experimental Results
        2.6.1 Test Environment
        2.6.2 Experiment on KITTI Dataset
        2.6.3 Performance Evaluation and Analysis
      2.7 Conclusion
    3 Visual Map Matching Based on Structural Tensor and Mutual Information using 3D High Resolution Digital Map
      3.1 Introduction
      3.2 Related Work
      3.3 Methodology
        3.3.1 Sensor Calibration
        3.3.2 Digital Map Generation and Synthetic View Conversion
        3.3.3 Dynamic Object Removal
        3.3.4 Illuminant Invariance
        3.3.5 Visual Map Matching using Structure Tensor and Mutual Information
      3.4 Experiments and Result
        3.4.1 Methodology
        3.4.2 Quantitative Results
      3.5 Conclusions and Future Works
    4 Free Space Computation using a Sensor Fusion of LIDAR and RGB Camera in Vehicular Environments
      4.1 Introduction
      4.2 Methodology
        4.2.1 Dense Depth Map Generation
        4.2.2 Color Distribution Entropy
        4.2.3 Edge Extraction
        4.2.4 Temporal Smoothness
        4.2.5 Spatial Smoothness
      4.3 Experiment and Evaluation
        4.3.1 Evaluated Methods
        4.3.2 Experiment on KITTI Dataset
      4.4 Conclusion
    5 Conclusion
    Abstract (In Korean)
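The visual map matching of Chapter 3 scores how well a camera view aligns with a synthetic view of the map using mutual information. A stdlib-only illustration of the measure itself on toy intensity sequences (not the dissertation's implementation, which couples it with the structure tensor):

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8):
    """Shannon mutual information (in bits) between two equal-length
    sequences of intensities in [0, 256), quantised into `bins` bins."""
    assert len(img_a) == len(img_b)
    n = len(img_a)
    qa = [v * bins // 256 for v in img_a]  # quantise to bin indices
    qb = [v * bins // 256 for v in img_b]
    ca, cb = Counter(qa), Counter(qb)      # marginal counts
    cab = Counter(zip(qa, qb))             # joint counts
    mi = 0.0
    for (a, b), c in cab.items():
        p_ab = c / n                       # joint probability
        mi += p_ab * math.log2(p_ab * n * n / (ca[a] * cb[b]))
    return mi

# An image shares maximal information with itself (3 bits for 8
# equally likely bins) and none with a constant image.
a = [0, 32, 64, 96, 128, 160, 192, 224] * 4
print(mutual_information(a, a))  # 3.0
```

Map matching then amounts to searching for the camera pose whose synthetic map view maximises this score against the live image, which is robust to illumination changes because only the statistical dependence of intensities matters, not their absolute values.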

    EG-ICE 2021 Workshop on Intelligent Computing in Engineering

    The 28th EG-ICE International Workshop 2021 brings together international experts working at the interface between advanced computing and modern engineering challenges. Many engineering tasks require open-world resolutions: supporting multi-actor collaboration, coping with approximate models, providing effective engineer-computer interaction, searching multi-dimensional solution spaces, accommodating uncertainty, including specialist domain knowledge, interpreting sensor data and dealing with incomplete knowledge. While results from computer science provide much initial support for resolution, adaptation is unavoidable and, most importantly, feedback from addressing engineering challenges drives fundamental computer-science research. Competence and knowledge transfer go both ways.

    SInCom 2015

    2nd Baden-Württemberg Center of Applied Research Symposium on Information and Communication Systems (SInCom 2015), 13 November 2015, Konstanz.
