
    IoT-Driven Automated Object Detection Algorithm for Urban Surveillance Systems in Smart Cities

    Automated object detection is an important research challenge in intelligent urban surveillance systems for Internet of Things (IoT) and smart city applications. In particular, smart vehicle license plate recognition and vehicle detection are recognized as core research issues in these IoT-driven intelligent urban surveillance systems. They are key techniques in most traffic-related IoT applications, such as real-time road traffic monitoring, security control of restricted areas, automatic parking access control, and searching for stolen vehicles. In this paper, we propose a novel unified method of automated object detection for urban surveillance systems. We use this method to locate and extract the highest-energy frequency areas of the images from digital camera imaging sensors, that is, to pick out either the vehicle license plates or the vehicles themselves. Our proposed method not only helps detect vehicles rapidly and accurately, but also reduces the volume of big data that must be stored in urban surveillance systems.
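    The abstract does not spell out the authors' exact formulation, so the sketch below only illustrates the general idea of ranking image blocks by high-frequency (gradient) energy, which tends to single out text-dense regions such as license plates. The function name, block size, and Sobel-based scoring are all assumptions, not the paper's method.

```python
# Illustrative sketch: rank fixed-size image blocks by gradient energy.
# License-plate text is rich in vertical edges, so high x-gradient energy
# is a rough proxy for "highest energy frequency areas".
import cv2
import numpy as np

def high_energy_regions(image_bgr, block=32, keep=5):
    """Return the `keep` (x, y, w, h) blocks with the highest gradient energy."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    grad = cv2.Sobel(gray, cv2.CV_32F, dx=1, dy=0, ksize=3)
    energy = grad ** 2
    h, w = gray.shape
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            score = energy[y:y + block, x:x + block].sum()
            scores.append((score, (x, y, block, block)))
    scores.sort(key=lambda s: s[0], reverse=True)
    return [box for _, box in scores[:keep]]
```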

    Effective image enhancement and fast object detection for improved UAV applications

    As an emerging field, unmanned aerial vehicles (UAVs) draw on interdisciplinary techniques across science, engineering and industrial sectors. Their applications span remote sensing, precision agriculture, marine inspection, coast guarding, environmental monitoring, natural resource monitoring (e.g. forest, land and river) and disaster assessment, as well as smart cities, intelligent transportation, and logistics and delivery. With fast-growing demands from a wide range of application sectors, improving the efficiency and efficacy of UAVs in operation remains a bottleneck. Smart decisions often need to be made from captured footage in real time, yet this is severely hindered by poor image quality, ineffective object detection and recognition models, and a lack of robust, lightweight models that support edge computing and real deployment. In this thesis, several innovative works are developed to tackle these issues. First, considering the quality requirements of UAV images, various approaches and models have been proposed, yet they focus on different aspects and produce inconsistent results. The work in this thesis is therefore categorised into denoising-focused and dehazing-focused methods, followed by comprehensive qualitative and quantitative evaluation, providing valuable insights and guidance for end users and the research community. For fast and effective object detection and recognition, deep learning based models, especially the YOLO series, are widely used. However, taking YOLOv7 as the baseline, performance is strongly affected by factors such as the low quality of UAV images and high resource demands, leading to unsatisfactory accuracy and processing speed. Three major improvements, namely a transformer module, the CIoU loss and the GhostBottleneck module, are therefore introduced in this work to improve feature extraction, decision making in detection and recognition, and running efficiency. Comprehensive experiments on both publicly available and self-collected datasets validate the efficiency and efficacy of the proposed algorithm. In addition, to facilitate real deployment in scenarios such as edge computing, an embedded implementation of the key algorithm modules is introduced on the Xavier NX platform and compared with standard workstation settings using NVIDIA GPUs. This demonstrates promising results, with reduced CPU/GPU resource consumption and an enhanced frame rate for real-time processing, benefiting real-time deployment without compromising edge computing. Through these investigations and developments, a better understanding is established of the key challenges in UAV and Simultaneous Localisation and Mapping (SLAM) based applications, and possible solutions are presented. Keywords: Unmanned aerial vehicles (UAV); Simultaneous Localisation and Mapping (SLAM); denoising; dehazing; object detection; object recognition; deep learning; YOLOv7; transformer; GhostBottleneck; scene matching; embedded implementation; Xavier NX; edge computing.
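    Of the three improvements the thesis names, the CIoU loss is a published, well-defined formulation (Zheng et al., 2020): it augments 1 - IoU with a normalised centre-distance term and an aspect-ratio consistency term. The sketch below is a standard PyTorch implementation of that formula, not the thesis code.

```python
# Standard CIoU loss for axis-aligned boxes given as (x1, y1, x2, y2),
# shape (N, 4). This follows the published formula, not the thesis code.
import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    # Intersection and plain IoU.
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Squared centre distance, normalised by the squared diagonal of the
    # smallest enclosing box.
    cx_p, cy_p = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx_t, cy_t = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    rho2 = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term.
    w_p, h_p = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w_t, h_t = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    v = (4 / math.pi ** 2) * (
        torch.atan(w_t / (h_t + eps)) - torch.atan(w_p / (h_p + eps))
    ) ** 2
    alpha = v / (1 - iou + v + eps)

    return (1 - iou + rho2 / c2 + alpha * v).mean()
```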

    Autonomous real-time surveillance system with distributed IP cameras

    An autonomous Internet Protocol (IP) camera based object tracking and behaviour identification system, capable of running in real time on an embedded system with limited memory and processing power, is presented in this paper. The main contribution of this work is the integration of processor-intensive image processing algorithms on an embedded platform capable of running in real time to monitor the behaviour of pedestrians. The Algorithm Based Object Recognition and Tracking (ABORAT) system architecture presented here was developed on an Intel PXA270-based development board clocked at 520 MHz. The platform was connected to a commercial stationary IP-based camera in a remote monitoring station for intelligent image processing. The system is capable of detecting moving objects and their shadows in a complex environment with varying lighting intensity and moving foliage. Objects moving close to each other are also detected, and their trajectories are extracted and fed into an unsupervised neural network for autonomous classification. The novel intelligent video system presented is also capable of performing simple analytic functions such as tracking and generating alerts when objects enter or leave regions or cross tripwires superimposed on the live video by the operator.
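    The paper's own detection algorithms are not reproduced here; as a rough illustration of detecting moving objects together with their shadows, the sketch below uses OpenCV's MOG2 background subtractor, whose output mask marks foreground pixels as 255 and detected shadow pixels as 127. The stream URL and area threshold are placeholders, not values from the paper.

```python
# Illustrative sketch: moving-object and shadow detection with OpenCV MOG2.
import cv2

cap = cv2.VideoCapture("rtsp://camera.example/stream")  # hypothetical IP-camera URL
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)            # 255 = foreground, 127 = shadow
    objects = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(objects, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:          # ignore foliage-sized noise
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```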

    Real-time Motion Planning For Autonomous Car in Multiple Situations Under Simulated Urban Environment

    Advanced autonomous cars have revolutionary significance for the automobile industry. While more and more companies have started to build their own autonomous cars, no one has yet brought a practical autonomous car to market. One key problem with these cars is the lack of a reliable, active real-time motion planning system for the urban environment. A real-time motion planning system enables cars to drive safely and stably in an urban environment. The final goal of this project is to design and implement a reliable real-time motion planning system that reduces accident rates in autonomous cars relative to human drivers. The real-time motion planning system includes lane-keeping, obstacle avoidance, moving-car avoidance, adaptive cruise control and accident avoidance functions. In this research, EGO vehicles are built and equipped with an image processing unit, a LIDAR and two ultrasonic sensors to sense the environment. These environment data make it possible to implement a full control program in the real-time motion planning system. The control program is implemented and tested on a scaled-down EGO vehicle in a scaled-down urban environment. The project is divided into three phases: building the EGO vehicles, implementing the control program of the real-time motion planning system, and improving the control program through testing in the scaled-down urban environment. In the first phase, each EGO vehicle is built from a chassis kit, a Raspberry Pi, a LIDAR, two ultrasonic sensors, a battery and a power board. In the second phase, the control program of the real-time motion planning system is implemented on top of the lane-keeping program on the Raspberry Pi, with Python as the programming language; the lane-keeping, obstacle avoidance, moving-car avoidance and adaptive cruise control functions are built into this control program (a minimal sketch follows this abstract). In the last phase, testing and improvement work is completed: reliability tests are designed and carried out, and the more data gathered from tests, the more stable the real-time motion planning system can be made. Finally, a reliable motion planning system is built, which can be used in full-scale EGO vehicles to significantly reduce accident rates in the urban environment.
    Academic Major: Electrical and Computer Engineering
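    The abstract states that the control program is written in Python but shows no code, so the following is a hypothetical sketch of a minimal lane-keeping loop with ultrasonic accident avoidance. The sensor and actuator interfaces (camera.capture, read_ultrasonic, drive), the gain and the thresholds are illustrative stand-ins, not the project's actual code.

```python
# Hypothetical control-loop sketch for a scaled-down EGO vehicle.
import time
import numpy as np

KP = 0.6             # proportional steering gain (assumed value)
SAFE_DIST_CM = 30    # stop if an obstacle is closer than this (assumed)

def lane_offset(gray_frame):
    """Offset of bright lane pixels from the image centre, in [-1, 1]."""
    cols = np.where(gray_frame > 200)[1]      # columns of bright pixels
    if cols.size == 0:
        return 0.0                            # no lane found: go straight
    centre = gray_frame.shape[1] / 2
    return float((cols.mean() - centre) / centre)

def control_loop(camera, read_ultrasonic, drive):
    """camera, read_ultrasonic and drive are stand-in hardware interfaces."""
    while True:
        left_cm, right_cm = read_ultrasonic()
        if min(left_cm, right_cm) < SAFE_DIST_CM:
            drive(speed=0.0, steering=0.0)    # accident avoidance: stop
        else:
            offset = lane_offset(camera.capture())
            drive(speed=0.4, steering=-KP * offset)  # lane keeping
        time.sleep(0.05)                      # ~20 Hz control rate
```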