
    Object Tracking in Distributed Video Networks Using Multi-Dimentional Signatures

    From being an expensive toy in the hands of governmental agencies, computers have evolved a long way from huge vacuum tube-based machines to today's small but more than a thousand times more powerful personal computers. Computers have long been investigated as the foundation for an artificial vision system. The computer vision discipline has seen rapid development over the past few decades, from rudimentary motion detection systems to complex model-based object motion analysis algorithms. Our work is one such improvement over previous algorithms developed for object motion analysis in video feeds. It is based on the principle of multi-dimensional object signatures. Object signatures are constructed from individual attributes extracted through video processing. While past work has proceeded on similar lines, the lack of a comprehensive object definition model severely restricts the application of such algorithms to controlled situations. In conditions with varying external factors, such algorithms perform less efficiently due to inherent assumptions of constancy of attribute values. Our approach assumes a variable environment in which the attribute values recorded for an object are deemed prone to variability. Variations in the accuracy of object attribute values are addressed by incorporating weights for each attribute that vary according to local conditions at a sensor location. This ensures that attribute values with higher accuracy can be accorded more credibility in the object matching process. Variations in attribute values (such as the surface color of the object) were also addressed by applying error corrections such as shadow elimination to the detected object profile. Experiments were conducted to verify our hypothesis. The results established the validity of our approach, as higher matching accuracy was obtained with our multi-dimensional approach than with a single-attribute comparison.
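A minimal sketch of the weighted multi-attribute matching idea described above; the attribute names, value ranges, and weights are illustrative assumptions, not taken from the paper:

```python
# Sketch of weighted multi-attribute object matching (illustrative only).
# Each object signature maps attribute names to normalized values in [0, 1];
# per-sensor weights encode local confidence in each attribute.

def match_score(sig_a, sig_b, weights):
    """Weighted similarity between two object signatures."""
    total_w = sum(weights[k] for k in sig_a if k in sig_b)
    if total_w == 0:
        return 0.0
    score = 0.0
    for k in sig_a:
        if k in sig_b:
            score += weights[k] * (1.0 - abs(sig_a[k] - sig_b[k]))
    return score / total_w

# Hypothetical signatures of the same object seen from two camera nodes
cam1 = {"hue": 0.62, "height": 0.40, "speed": 0.55}
cam2 = {"hue": 0.60, "height": 0.45, "speed": 0.30}
# This sensor trusts color more than speed (e.g. poor motion estimates locally)
w = {"hue": 0.6, "height": 0.3, "speed": 0.1}

print(round(match_score(cam1, cam2, w), 3))  # → 0.948
```

Down-weighting the noisy `speed` attribute keeps the overall score high even though the speed readings disagree, which is the credibility-weighting behaviour the abstract describes.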

    Occlusion handling in multiple people tracking

    Object tracking with occlusion handling is a challenging problem in automated video surveillance. Occlusion handling and tracking have traditionally been treated as separate modules. We propose an automated video surveillance system that automatically detects occlusions and performs occlusion handling while the tracker continues to track the resulting separated objects. A new approach based on sub-blobbing is presented for tracking objects accurately and steadily when the target encounters occlusion in video sequences. We use a feature-based framework for tracking, which involves feature extraction and feature matching.
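The occlusion-detection step can be pictured with a minimal sketch: declare an occlusion when two tracked bounding boxes begin to merge. The box format and overlap threshold here are illustrative assumptions, not the paper's actual criterion:

```python
# Minimal sketch: flag an occlusion when two tracked boxes overlap
# substantially (boxes given as (x, y, w, h); threshold is illustrative).

def overlap(a, b):
    """Intersection area of two axis-aligned boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return ix * iy

def occluding(track_a, track_b, thresh=0.2):
    """Occlusion is declared when the intersection exceeds `thresh`
    of the smaller box's area."""
    inter = overlap(track_a, track_b)
    smaller = min(track_a[2] * track_a[3], track_b[2] * track_b[3])
    return inter > thresh * smaller

print(occluding((6, 0, 10, 10), (0, 0, 10, 10)))   # → True (boxes merging)
print(occluding((0, 0, 10, 10), (20, 0, 5, 5)))    # → False (well separated)
```

Once this condition fires, a sub-blobbing tracker would switch from whole-blob matching to matching the individual sub-blobs of the merged region.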

    Comprehensive Survey and Analysis of Techniques, Advancements, and Challenges in Video-Based Traffic Surveillance Systems

    The challenges inherent in video surveillance are compounded by several factors, such as dynamic lighting conditions, the coordination of object matching, diverse environmental scenarios, the tracking of heterogeneous objects, and coping with fluctuations in object poses, occlusions, and motion blur. This research undertakes a rigorous and in-depth analysis of deep learning-oriented models used for object identification and tracking. Emphasizing the development of effective model design methodologies, this study intends to furnish an exhaustive analysis of object tracking and identification models within the specific domain of video surveillance.

    ์‹ค์‹œ๊ฐ„ ์ž์œจ์ฃผํ–‰ ์ธ์ง€ ์‹œ์Šคํ…œ์„ ์œ„ํ•œ ์‹ ๊ฒฝ ๋„คํŠธ์›Œํฌ์™€ ๊ตฐ์ง‘ํ™” ๊ธฐ๋ฐ˜ ๋ฏธํ•™์Šต ๋ฌผ์ฒด ๊ฐ์ง€๊ธฐ ํ†ตํ•ฉ

    Thesis (Ph.D.) -- Seoul National University, College of Engineering, Department of Mechanical and Aerospace Engineering, 2020. 8. Advisor: Kyongsu Yi. In recent years, interest in research on autonomous driving systems has grown due to advances in sensing technologies and computer science. In the development of an autonomous driving system, knowledge of the subject vehicle's surroundings is the most essential function for safe and reliable driving. When it comes to making decisions and planning driving scenarios, knowing the location and movements of surrounding objects, and distinguishing whether an object is a car or a pedestrian, provides valuable information to the autonomous driving system.
In the autonomous driving system, various sensors are used to understand the surrounding environment. Since LiDAR gives the distance information of surrounding objects, it has been one of the most commonly used sensors in the development of perception systems. Despite the achievements of the deep neural network research field, applications and research trends in 3D object detection using LiDAR point clouds tend to pursue higher accuracy without considering practical application. A deep-neural-network-based perception module heavily depends on its training dataset, but it is impossible to cover all possibilities and corner cases. To apply the perception module in actual driving, it needs to detect the unknown and unlearned objects that may be encountered on the road. To cope with these problems, this dissertation proposes a perception module using LiDAR point clouds, and its performance is validated via real vehicle tests. The whole framework is composed of three stages: stage 1 estimates the ground and produces a mask for filtering out points considered non-ground; stage 2 performs feature extraction and object detection; and stage 3 performs object tracking. In the first stage, to cope with the methodological limit of supervised learning, which only finds learned objects, we divide the point cloud into equally spaced 3D voxels, extract the non-ground points, and cluster them to detect unknown objects. In the second stage, voxelization is used to learn the characteristics of point clouds organized in vertical columns. The trained network distinguishes objects through the features extracted from the point clouds. In the non-maximum suppression process, we sort the predictions according to the IoU between each prediction and its bounding polygon to select the prediction closest to the actual heading angle of the object. The last stage presents a 3D multiple object tracking solution.
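Stage 1's ground removal can be sketched with a simple per-column minimum-height heuristic; the cell size, margin, and sample points are illustrative assumptions, not the dissertation's actual parameters:

```python
# Sketch of voxel-grid ground removal (all parameters illustrative).
# Points are (x, y, z); the lowest return in each vertical grid column
# approximates the local ground height, and points sufficiently above
# it are kept as potential obstacles.
from collections import defaultdict

def non_ground(points, cell=0.5, margin=0.3):
    ground = defaultdict(lambda: float("inf"))
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        ground[key] = min(ground[key], z)
    return [(x, y, z) for x, y, z in points
            if z - ground[(int(x // cell), int(y // cell))] > margin]

pts = [(0.1, 0.1, 0.0), (0.2, 0.1, 0.05),    # ground returns
       (0.15, 0.2, 1.2), (0.18, 0.12, 1.5)]  # obstacle returns
print(len(non_ground(pts)))  # → 2
```

The surviving non-ground points would then be clustered, so that even an object class the network never saw still surfaces as an "unknown object" cluster.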
Through a Kalman filter, the next movements of the learned and unlearned objects are predicted, and these predictions are updated by detection measurements. Through this process, the proposed object detector complements the supervised-learning-based detector by detecting unlearned objects as unknown objects through non-ground point extraction. Recent research on object detection for autonomous driving has been actively conducted, but recent works tend to focus on recognizing objects in every single frame and developing more accurate systems. To obtain real-time performance, this dissertation focuses on more practical aspects by proposing a performance index that considers detection priority and detection continuity. The performance of the proposed algorithm has been investigated via real-time vehicle tests.

    Classifying tracked objects in far-field video surveillance

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 67-70). Automated visual perception of the real world by computers requires classification of observed physical objects into semantically meaningful categories (such as 'car' or 'person'). We propose a partially-supervised learning framework for classification of moving objects (mostly vehicles and pedestrians) that are detected and tracked in a variety of far-field video sequences, captured by a static, uncalibrated camera. We introduce the use of scene-specific context features (such as the image position of objects) to improve classification performance in any given scene. At the same time, we design a scene-invariant object classifier, along with an algorithm to adapt this classifier to a new scene. Scene-specific context information is extracted through passive observation of unlabelled data. Experimental results are demonstrated in the context of outdoor visual surveillance of a wide variety of scenes. by Biswajit Bose. S.M.
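One way to picture how a scene-specific context feature (image position) can be combined with an appearance cue. All features, priors, and weights below are illustrative assumptions, not the thesis's actual model:

```python
# Sketch: fusing an appearance cue with a scene-specific position prior
# for vehicle/pedestrian classification (all values illustrative).

def classify(aspect_ratio, y_pos, road_band=(0.6, 0.9)):
    """aspect_ratio = width/height of the tracked blob; y_pos = normalized
    vertical image position. `road_band` is this scene's learned region
    where vehicles tend to appear."""
    appearance_vehicle = 1.0 if aspect_ratio > 1.2 else 0.0  # cars look wide
    in_road = road_band[0] <= y_pos <= road_band[1]
    context_vehicle = 0.8 if in_road else 0.2  # scene-specific prior
    score = 0.6 * appearance_vehicle + 0.4 * context_vehicle
    return "vehicle" if score >= 0.5 else "person"

print(classify(1.8, 0.7))  # wide blob on the road band → vehicle
print(classify(0.4, 0.3))  # tall blob off the road band → person
```

The `road_band` parameter is the part that would be re-estimated from unlabelled observations when the classifier is adapted to a new scene.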

    Applications of ISES for vegetation and land use

    Remote sensing relative to applications involving vegetation cover and land use is reviewed to consider the potential benefits to the Earth Observing System (Eos) of a proposed Information Sciences Experiment System (ISES). The ISES concept has been proposed as an onboard experiment and computational resource to support advanced experiments and demonstrations in the information and earth sciences. Embedded in the concept is potential for relieving the data glut problem, enhancing capabilities to meet real-time needs of data users and in-situ researchers, and introducing emerging technology to Eos as the technology matures. These potential benefits are examined in the context of state-of-the-art research activities in image/data processing and management

    Alignment control using visual servoing and mobilenet single-shot multi-box detection (SSD): a review

    Visual feedback is highly critical for robotic technologies; without it, robot systems tend to be unresponsive because they rely on pre-programmed trajectories and paths and cannot react to a change in the environment or the absence of an object. This review paper aims to provide comprehensive studies of recent applications of visual servoing and deep neural networks (DNNs). Position-based visual servoing (PBVS) and MobileNet-SSD were the algorithms chosen for alignment control of the film-handler mechanism of a portable x-ray system. The paper also discusses the theoretical framework: feature extraction and description, visual servoing, and MobileNet-SSD. Likewise, the latest applications of visual servoing and DNNs are summarized, including a comparison of MobileNet-SSD with other sophisticated models. As previous studies have shown, visual servoing and MobileNet-SSD provide reliable tools and models for manipulating robotic systems, including where occlusion is present. Furthermore, effective alignment control relies significantly on the reliability of the visual servoing and the deep neural network, shaped by parameters such as the type of visual servoing, the feature extraction and description, and the DNNs used to construct a robust state estimator. Therefore, visual servoing and MobileNet-SSD are parameterized concepts that require careful optimization to achieve a specific purpose with distinct tools.
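The PBVS idea, commanding motion proportional to the pose error estimated from vision, can be sketched as follows; the gain, pose representation, and target values are illustrative assumptions:

```python
# Minimal position-based visual servoing (PBVS) sketch: each control step
# moves the current pose a fraction of the way toward the vision-estimated
# target pose (proportional control; gain and poses illustrative).

def pbvs_step(pose, target, gain=0.5):
    """One proportional control step toward the target pose."""
    return [p + gain * (t - p) for p, t in zip(pose, target)]

pose = [0.0, 0.0, 0.0]    # x, y, yaw of the end-effector (hypothetical)
target = [1.0, 0.5, 0.2]  # desired alignment pose from the detector
for _ in range(20):
    pose = pbvs_step(pose, target)
print([round(p, 3) for p in pose])  # → [1.0, 0.5, 0.2]
```

In a real system the target pose would be re-estimated every frame from the MobileNet-SSD detection, so the loop also absorbs object motion and detection noise.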

    Development of bent-up triangular tab shear transfer (BTTST) enhancement in cold-formed steel (CFS)-concrete composite beams

    Cold-formed steel (CFS) sections have been recognised as an important contributor to environmentally responsible and sustainable structures in developed countries, and CFS framing is considered a sustainable 'green' construction material for low-rise residential and commercial buildings. However, there is still a lack of data and information on the behaviour and performance of CFS beams in composite construction. The use of CFS has been limited to structural roof trusses and a host of non-structural applications. One of the limiting features of CFS is the thinness of its section (usually between 1.2 and 3.2 mm thick), which makes it susceptible to torsional, distortional, lateral-torsional, lateral-distortional and local buckling. Hence, a reasonable solution is resorting to a composite construction of structural CFS sections and a reinforced concrete deck slab, which minimises the distance from the neutral axis to the top of the deck and reduces the compressive bending stress in the CFS sections. Also, arranging two CFS channel sections back-to-back restores symmetry and suppresses lateral-torsional and, to a lesser extent, lateral-distortional buckling. The two-fold advantages promised by the system promote the use of CFS sections in a wider range of structural applications. An efficient and innovative floor system of built-up CFS sections acting compositely with a concrete deck slab was developed to provide an alternative composite system for floors and roofs in buildings. The system, called the Precast Cold-Formed Steel-Concrete Composite System, is designed to rely on composite action between the CFS sections and a reinforced concrete deck, where shear forces between them are effectively transmitted via an innovative shear transfer enhancement mechanism called the bent-up triangular tab shear transfer (BTTST). The study mainly comprises two major components, i.e. experimental and theoretical work.
Experimental work involved small-scale and large-scale laboratory testing. Sixty-eight push-out test specimens and fifteen large-scale CFS-concrete composite beam specimens were tested in this program. In the small-scale tests, a push-out test was carried out to determine the strength and behaviour of the shear transfer enhancement between the CFS and the concrete. Four major parameters were studied: the compressive strength of the concrete, the CFS strength, the dimensions (size and angle) of the BTTST, and the CFS thickness. The results from the push-out tests were used to develop an expression to predict the shear capacity of the innovative shear transfer enhancement mechanism, BTTST, in CFS-concrete composite beams. The value of the shear capacity was used to calculate the theoretical moment capacity of CFS-concrete composite beams, and the theoretical moment capacities were used to validate the large-scale test results. The large-scale test specimens were tested using a four-point-load bending test. The push-out test results show that specimens employing BTTST achieved higher shear capacities than those relying only on the natural bond between cold-formed steel and concrete, and than specimens with the Lakkavalli and Liu bent-up tab (LYLB). Load capacities for push-out test specimens with BTTST are relatively higher than the equivalent control specimens, by 91% to 135%; compared to the LYLB specimens, the increment is 12% to 16%. In addition, the shear capacity of BTTST also increases with increasing dimensions (size and angle) of the BTTST, thickness of the CFS, and concrete compressive strength. An equation was developed to determine the shear capacity of BTTST, and its predictions are in good agreement with the observed test values. The average absolute difference between the test values and the predicted values was found to be 8.07%. The arithmetic mean of the test/predicted ratio for this equation is 0.9954.
The standard deviation (σ) and the coefficient of variation (CV) for the proposed equation were 0.09682 and 9.7%, respectively. The proposed equation is recommended for the design of BTTST in CFS-concrete composite beams. In the large-scale testing, specimens employing BTTST showed increased strength capacities and reduced deflections. The experimental moment capacities, Mu,exp, for all specimens are above the theoretical values, Mu,theory, and show good agreement with the calculated ratio (>1.00). It was also found that the strength capacities of CFS-concrete composite beams increase with increasing dimensions (size and angle) of the BTTST, thickness of the CFS, and concrete compressive strength, and that a CFS-concrete composite beam can be practically designed with partial shear connection for equal moment capacity by reducing the number of BTTSTs. It is concluded that the proposed BTTST shear transfer enhancement in CFS-concrete composite beams has sufficient strength and is also feasible. Finally, a standard table of the characteristic resistance, P_tab, of BTTST in normal-weight concrete was also developed to simplify the design calculation of CFS-concrete composite beams.
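As a quick consistency check on the reported statistics, the coefficient of variation follows directly from the reported mean ratio and standard deviation:

```python
# Consistency check of the reported statistics: the coefficient of
# variation is the standard deviation of the test/predicted ratios
# divided by their mean (values taken from the text above).
mean_ratio = 0.9954
sigma = 0.09682
cv = sigma / mean_ratio * 100  # per cent
print(round(cv, 1))  # → 9.7
```

The computed value matches the 9.7% quoted for the proposed equation, confirming the three figures are mutually consistent.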

    Scalable and adaptable tracking of humans in multiple camera systems

    The aim of this thesis is to track objects on a network of cameras, both within (intra) and across (inter) cameras. The algorithms must be adaptable to change and are learnt in a scalable approach. Uncalibrated, spatially separated cameras are used, and therefore tracking must be able to cope with object occlusions, illumination changes, and gaps between cameras. EThOS - Electronic Theses Online Service, United Kingdom.

    Videogames: the new GIS?

    Videogames and GIS have more in common than might be expected. Indeed, it is suggested that videogame technology may not only be considered as a kind of GIS, but that in several important respects its world modelling capabilities out-perform those of most GIS. This chapter examines some of the key differences between videogames and GIS, explores a number of perhaps-surprising similarities between their technologies, and considers which ideas might profitably be borrowed from videogames to improve GIS functionality and usability