
    Video analytics system for surveillance videos

    Developing an intelligent inspection system that can enhance public safety is challenging. An efficient video analytics system can help monitor unusual events and mitigate possible damage or loss. This thesis aims to analyze surveillance video data, report abnormal activities, and retrieve the corresponding video clips. The surveillance video dataset used in this thesis is derived from the ALERT Dataset, a collection of surveillance videos recorded at airport security checkpoints. The video analytics system in this thesis can be thought of as a pipelined process: the system takes surveillance video as input and passes it through a series of processing stages such as object detection, multi-object tracking, person-bin association, and re-identification. In the end, we obtain trajectories of passengers and baggage in the surveillance videos. Abnormal events, such as taking away others' belongings, are detected and trigger an alarm automatically. The system can also retrieve the corresponding video clips based on a user-defined query.
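
    The thesis code is not reproduced here, but the pipelined design it describes can be sketched as below. Every name in this sketch (the detector, tracker, and associator callables, the ownership map, theft_rule) is a hypothetical placeholder used for illustration, not the thesis implementation.

        def run_pipeline(frames, detector, tracker, associator, alarm_rule):
            """Push each frame through the stages and collect raised alarms."""
            alarms = []
            for t, frame in enumerate(frames):
                detections = detector(frame)            # object detection
                tracks = tracker(t, detections)         # multi-object tracking
                ownership = associator(tracks)          # person-bin association
                alarms.extend(alarm_rule(t, ownership))
            return alarms

        def theft_rule(t, ownership):
            """Illustrative alarm rule: flag a bin whose current holder is not
            its associated owner ("taking away others' belongings")."""
            return [(t, bin_id)
                    for bin_id, (owner, holder) in ownership.items()
                    if holder is not None and holder != owner]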

    Teleoperated visual inspection and surveillance with unmanned ground and aerial vehicles

    This paper introduces our robotic system named UGAV (Unmanned Ground-Air Vehicle), consisting of two semi-autonomous robot platforms: an Unmanned Ground Vehicle (UGV) and an Unmanned Aerial Vehicle (UAV). The paper focuses on three topics of inspection with the combined UGV and UAV: (A) teleoperated control by means of cell or smart phones, with a new concept of automatic configuration of the smart phone based on an RKI-XML description of the vehicle's control capabilities; (B) the camera and vision system, with a focus on real-time feature extraction, e.g. for tracking of the UAV; and (C) the architecture and hardware of the UAV.
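
    A minimal sketch of the automatic-configuration idea: the phone parses an XML description of the vehicle's control capabilities and derives its control mapping from it. The element and attribute names below are invented for illustration; the actual RKI-XML schema is defined by the paper's system.

        import xml.etree.ElementTree as ET

        # Hypothetical capability description (not the real RKI-XML schema).
        SAMPLE = """
        <vehicle name="UGV">
          <control id="drive"  type="axis"   min="-1.0" max="1.0"/>
          <control id="steer"  type="axis"   min="-1.0" max="1.0"/>
          <control id="lights" type="toggle" min="0"    max="1"/>
        </vehicle>
        """

        def load_controls(xml_text):
            """Build the phone-side control mapping from the description."""
            root = ET.fromstring(xml_text)
            return {c.get("id"): {"type": c.get("type"),
                                  "min": float(c.get("min")),
                                  "max": float(c.get("max"))}
                    for c in root.findall("control")}

        print(load_controls(SAMPLE))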

    CMOS Vision Sensors: Embedding Computer Vision at Imaging Front-Ends

    CMOS Image Sensors (CIS) are key for imaging technologies. These chips are conceived for capturing optical scenes focused on their surface and for delivering electrical images, commonly in digital format. CISs may incorporate intelligence; however, their smartness basically concerns calibration, error correction and other similar tasks. The term CVIS (CMOS VIsion Sensors) defines another class of sensor front-ends which are aimed at performing vision tasks right at the focal plane. They have been running under names such as computational image sensors, vision sensors and silicon retinas, among others. CVISs and CISs are similar regarding physical implementation. However, while the inputs of both CISs and CVISs are images captured by photo-sensors placed at the focal plane, the primary outputs of CVISs may not be images but either image features or even decisions based on the spatial-temporal analysis of the scenes. We may hence state that CVISs are more "intelligent" than CISs, as they focus on information instead of on raw data. Actually, CVIS architectures capable of extracting and interpreting the information contained in images, and prompting reaction commands thereof, have been explored for years in academia, and industrial applications are recently ramping up. One of the challenges for CVIS architects is incorporating computer vision concepts into the design flow. The endeavor is ambitious because the imaging and computer vision communities are rather disjoint groups talking different languages. The Cellular Nonlinear Network Universal Machine (CNNUM) paradigm, proposed by Profs. Chua and Roska, defined an adequate framework for such conciliation, as it is particularly well suited for hardware-software co-design [1]-[4]. This paper overviews CVIS chips that were conceived and prototyped at the IMSE Vision Lab over the past twenty years. Some of them fit the CNNUM paradigm while others are tangential to it. All of them employ per-pixel mixed-signal processing circuitry to achieve sensor-processing concurrency in the quest for fast operation with a reduced energy budget. Junta de Andalucía TIC 2012-2338; Ministerio de Economía y Competitividad TEC 2015-66878-C3-1-R and TEC 2015-66878-C3-3-
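
    As background, the CNNUM paradigm models each pixel as a cell whose state evolves under a feedback template A and a control template B, x' = -x + A*f(x) + B*u + z. The toy simulation below illustrates those dynamics in software; the template values are illustrative choices, not taken from the IMSE chips, which realize such processing in per-pixel mixed-signal circuitry rather than code.

        import numpy as np
        from scipy.signal import convolve2d

        def f(x):
            # Standard CNN output nonlinearity: piecewise-linear saturation.
            return np.clip(x, -1.0, 1.0)

        def cnn_step(x, u, A, B, z, dt=0.1):
            # Euler step of x' = -x + A*f(x) + B*u + z over the pixel grid.
            dx = (-x + convolve2d(f(x), A, mode="same")
                      + convolve2d(u, B, mode="same") + z)
            return x + dt * dx

        A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)          # feedback
        B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)  # control
        z = -0.5

        u = np.random.rand(64, 64) * 2.0 - 1.0   # input image scaled to [-1, 1]
        x = np.zeros_like(u)
        for _ in range(50):
            x = cnn_step(x, u, A, B, z)
        edges = f(x)   # settled output: an edge-like feature map of u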

    Key technologies for safe and autonomous drones

    Drones/UAVs are able to perform air operations that are very difficult for manned aircraft. In addition, drone usage brings significant economic savings and environmental benefits, while reducing risks to human life. In this paper, we present key technologies that enable the development of drone systems. The technologies are identified based on the usages of drones (driven by the COMP4DRONES project use cases) and grouped into four categories: U-space capabilities, system functions, payloads, and tools. We also present the contributions of the COMP4DRONES project to improving existing technologies. These contributions aim to ease drones’ customization and enable their safe operation. This project has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No 826610. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and Spain, Austria, Belgium, Czech Republic, France, Italy, Latvia, and the Netherlands. The total project budget is 28,590,748.75 EUR (excluding ESIF partners), while the requested grant is 7,983,731.61 EUR from the ECSEL JU plus 8,874,523.84 EUR of national and ESIF funding. The project started on 1 October 2019.

    Smart home Management System with Face Recognition Based on ArcFace Model in Deep Convolutional Neural Network

    In recent years, artificial intelligence has proved its potential in many fields, especially computer vision. Facial recognition is one of the most essential tasks in computer vision, with various prospective applications ranging from academic research to intelligence services. In this paper, we propose an efficient deep learning approach to facial recognition. Our approach utilizes the architecture of the ArcFace model with a MobileNet V2 backbone in a deep convolutional neural network (DCNN), together with assistive techniques to increase highly distinguishing features in facial recognition. With the support of facial authentication combined with hand gesture recognition, users are able to monitor and control their home through a mobile phone/tablet/PC. Moreover, they can communicate with data and connect to smart devices easily through IoT technology. The overall proposed model achieves 97% accuracy and a processing speed of 25 FPS. The interface of the smart home demonstrates the successful functioning of real-time operations.
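
    For reference, the core of ArcFace is an additive angular margin applied to the true-class angle before the softmax. The numpy sketch below illustrates that computation; the scale s=64 and margin m=0.5 are common defaults, and the embeddings and weights are random stand-ins rather than the paper's trained MobileNet V2 model.

        import numpy as np

        def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
            """Cosine logits with an additive angular margin on the true class."""
            e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
            w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
            cos = e @ w.T                                # (batch, n_classes)
            theta = np.arccos(np.clip(cos, -1.0, 1.0))   # angles to class centers
            margin = np.zeros_like(theta)
            margin[np.arange(len(labels)), labels] = m   # margin on true class only
            return s * np.cos(theta + margin)

        emb = np.random.randn(4, 128)    # stand-in embeddings from a backbone
        w = np.random.randn(10, 128)     # stand-in per-identity weight vectors
        labels = np.array([0, 3, 7, 1])
        logits = arcface_logits(emb, w, labels)  # feed into softmax cross-entropy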

    Design, Development and Characterization of a Thermal Sensor Brick System for Modular Robotics

    This thesis presents work on a thermal imaging sensor brick (TISB) system for modular robotics, demonstrating the design, development, and characterization of the TISB system. The TISB system is based on the design philosophy of sensor bricks for modular robotics. In under-vehicle surveillance for threat detection, which is a target application of this work, we have demonstrated the advantages of the TISB system over purely vision-based systems, highlighting it as an illumination-invariant system for detecting hidden threat objects in the undercarriage of a car. We have compared the TISB system to the vision sensor brick system and the mirror on a stick. We have also illustrated the operational capability of the system on the SafeBot under-vehicle robot to acquire and transmit data wirelessly. The early designs of the TISB system, the evolution of the designs, and the uniformity achieved while maintaining modularity in building the different sensor bricks (the visual, the thermal, and the range sensor brick) are presented as part of this work. Each of these sensor brick systems, designed and implemented at the Imaging Robotics and Intelligent Systems (IRIS) laboratory, consists of four major blocks: a Sensing and Image Acquisition Block, a Pre-Processing and Fusion Block, a Communication Block, and a Power Block. The Sensing and Image Acquisition Block captures images or acquires data. The Pre-Processing and Fusion Block works on the acquired images or data. The Communication Block transfers data between the sensor brick and the remote host computer. The Power Block maintains the power supply to the entire brick. The modular sensor bricks are self-sufficient plug-and-play systems. The SafeBot under-vehicle robot designed and implemented at the IRIS laboratory has two tracked platforms, one on each side, with a payload bay area in the middle. Each of these tracked platforms is a mobility brick based on the same design philosophy as the modular sensor bricks. The robot can carry one brick at a time or even multiple bricks at the same time. The contributions of this thesis are: (1) designing and developing the hardware implementation of the TISB system, (2) designing and developing the software for the TISB system, and (3) characterizing the TISB system; this characterization is the major contribution of the thesis. The analysis of the thermal sensor brick system provides the user and future designers with sufficient information on the parameters to consider in order to make the right choices for future modifications, the kinds of applications the TISB could handle, and the load that the different blocks of the TISB system could manage. Under-vehicle surveillance for threat detection, perimeter/area surveillance, scouting, and improvised explosive device (IED) detection using a car-mounted system are some of the applications that have been identified for this system.
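
    The four-block decomposition can be mirrored in a small software sketch, shown below. All class and method names are invented for illustration; they are not the IRIS laboratory's code, and the Power Block appears only as a comment since it has no software interface.

        class SensingBlock:
            def acquire(self):
                """Capture an image or data sample from the brick's sensor."""
                raise NotImplementedError

        class PreProcessingBlock:
            def process(self, data):
                """Operate on the acquired images or data (e.g. fusion)."""
                return data

        class CommunicationBlock:
            def send(self, data, host):
                """Transfer data from the brick to the remote host computer."""
                raise NotImplementedError

        class SensorBrick:
            """One self-sufficient plug-and-play brick. The Power Block is the
            physical supply and is not modeled in this sketch."""
            def __init__(self, sensing, pre, comm):
                self.sensing, self.pre, self.comm = sensing, pre, comm

            def run_once(self, host):
                self.comm.send(self.pre.process(self.sensing.acquire()), host)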

    Software Porting of a 3D Reconstruction Algorithm to Razorcam Embedded System on Chip

    A method is presented to calculate depth information for a UAV navigation system from keypoints in two consecutive image frames, using a monocular camera sensor as input and the OpenCV library. This method was first implemented in software and run on a general-purpose Intel CPU, then ported to the RazorCam Embedded Smart-Camera System and run on an ARM CPU onboard the Xilinx Zynq-7000. The results of performance and accuracy testing of the software implementation are then shown and analyzed, demonstrating a successful port of the software to the RazorCam embedded system on chip that could potentially be used onboard a UAV with tight constraints on size, weight, and power. The potential impacts will be seen through the continuation of this research in the Smart ES lab at the University of Arkansas.
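
    The paper's exact code is not given here, but two-frame monocular depth recovery with OpenCV typically follows the steps sketched below (keypoint detection, matching, essential-matrix estimation, pose recovery, triangulation). The intrinsic matrix K and the ORB/RANSAC parameters are assumptions for illustration, and monocular depth is recovered only up to scale.

        import cv2
        import numpy as np

        def depths_from_frames(img1, img2, K):
            """Up-to-scale depths of matched keypoints from two monocular frames."""
            orb = cv2.ORB_create(2000)
            kp1, des1 = orb.detectAndCompute(img1, None)
            kp2, des2 = orb.detectAndCompute(img2, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(des1, des2)
            p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
            p2 = np.float32([kp2[m.trainIdx].pt for m in matches])
            E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
            _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)
            P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at origin
            P2 = K @ np.hstack([R, t])                         # recovered relative pose
            pts4 = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
            pts3 = (pts4[:3] / pts4[3]).T
            return pts3[:, 2]   # z-coordinates: depths in the first camera's frame

        K = np.array([[700., 0., 320.],   # assumed pinhole intrinsics; the
                      [0., 700., 240.],   # paper's calibration is not given here
                      [0., 0., 1.]])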