
    Activity monitoring of people in buildings using distributed smart cameras

    Systems for monitoring the activity of people inside buildings (e.g., how many people are present, where they are, and what they are doing) have numerous potential applications, including domotics (control of lighting, heating, etc.), elderly care (gathering statistics on daily life), and video teleconferencing. We discuss the key challenges and present the preliminary results of our ongoing research on the use of distributed smart cameras for activity monitoring of people in buildings. The emphasis of our research is on:
    - the use of smart cameras (embedded devices): video is processed locally (distributed algorithms), and only metadata is sent over the network (minimal data exchange);
    - camera collaboration: cameras with overlapping views work together in a network in order to increase the overall system performance;
    - robustness: the system should work in real conditions (e.g., be robust to lighting changes).
    Our research setup consists of cameras connected to PCs (to simulate smart cameras), each connected to one central PC. The system builds an occupancy map of a room in real time (indicating the positions of the people in the room) by fusing the information from the different cameras in a Dempster-Shafer framework.
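    The Dempster-Shafer fusion step mentioned in this abstract can be sketched as follows. This is a minimal illustration, not code from the paper: it assumes each camera reports, per occupancy-map cell, a mass function over the frame {occupied, empty}, with `theta` denoting total ignorance (the whole frame).

```python
def combine(m1, m2):
    """Dempster's rule of combination over the frame {occupied, empty}.

    Each mass function is a dict over the hypotheses 'occ', 'emp' and
    'theta' (total ignorance); its values sum to 1."""
    fused = {"occ": 0.0, "emp": 0.0, "theta": 0.0}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            if a == b:
                fused[a] += ma * mb
            elif "theta" in (a, b):
                # theta intersected with any hypothesis X is X
                fused[b if a == "theta" else a] += ma * mb
            else:
                conflict += ma * mb  # occ and emp are disjoint
    k = 1.0 - conflict  # normalise away the conflicting mass
    return {h: m / k for h, m in fused.items()}

# Two cameras give independent, partially uncertain evidence for one cell:
cam1 = {"occ": 0.6, "emp": 0.1, "theta": 0.3}
cam2 = {"occ": 0.5, "emp": 0.2, "theta": 0.3}
cell = combine(cam1, cam2)  # agreement strengthens belief in 'occ'
```

    Agreement between cameras concentrates mass on "occupied", while the normalisation by 1 - conflict discards contradictory evidence, which is why multi-camera fusion is more robust than any single view.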

    Using smart phones for deformations measurements of structures

    The present work tests the suitability of smartphone digital cameras for close-range photogrammetry applications. For this purpose, the cameras of two smartphones, a Lumia 535 and a Lumia 950 XL, were used; their resolutions are 5 and 20 Mpixels respectively. The tests consist of (a) self-calibration of the two cameras, (b) the measurement of vertical deflections using close-range photogrammetry with the two smartphone cameras, theodolite intersection with the least squares technique (LST), and linear variable displacement transducers (LVDTs), and (c) the accuracy of photogrammetric determination of object-space coordinates. The results obtained with the Lumia 950 XL are much better than those with the Lumia 535, and are better than or comparable to the results of theodolite intersection with LST. Finally, it can be stated that the digital cameras of smartphones are suitable for close-range photogrammetry applications in terms of accuracy, cost and flexibility.
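    As a rough illustration of the least-squares intersection idea behind the theodolite comparison (a hypothetical sketch, not the authors' implementation): each theodolite observation defines a ray in space, and the object point can be estimated as the point minimising the sum of squared perpendicular distances to all rays.

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares intersection of 3D rays (origin + t * direction):
    the point minimising the summed squared distance to every ray."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane normal to d
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)

# Two stations sighting the same target point (1, 2, 3):
point = intersect_rays([(0, 0, 0), (2, 0, 0)], [(1, 2, 3), (-1, 2, 3)])
```

    With noisy directions the rays no longer meet exactly, and the same solve returns the best-fit point, which is the essence of the least-squares technique.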

    CamSim: a distributed smart camera network simulator

    Smart cameras allow video data to be pre-processed on the camera instead of being sent to a remote server for analysis. A network of smart cameras allows various vision tasks to be processed in a distributed fashion. While cameras may have different tasks, we concentrate on distributed tracking in smart camera networks. This application raises several highly interesting problems. First, how can conflicting goals be satisfied, such as tracking objects while keeping communication overhead low? Second, how can cameras in the network self-adapt in response to the behaviour of objects and changes in scenarios, to ensure continued efficient performance? Third, how can cameras organise themselves to improve the overall network's performance and efficiency? This paper presents a simulation environment, called CamSim, that allows distributed self-adaptation and self-organisation algorithms to be tested without setting up a physical smart camera network. The simulation tool is written in Java and is therefore highly portable across operating systems. By abstracting away many of the difficulties of computer vision and network communication, it enables a focus on implementing and testing new self-adaptation and self-organisation algorithms for cameras.

    Vulnerabilities that Allow to Make Botnets from IoT Devices

    Nowadays, people want each of their devices to be smart and connected to the network; this idea is called the Internet of Things (IoT). Modern devices that support IoT include smartphones, watches, appliances, cameras, cars and more. IoT allows a user to control their house with just one smartphone.

    Design of an FPGA-based smart camera and its application towards object tracking: a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Electronics and Computer Engineering at Massey University, Manawatu, New Zealand

    Smart cameras and hardware image processing are not new concepts, yet despite the fact that both have existed for several decades, little literature has been presented on the design and development process of hardware-based smart cameras. This thesis examines and demonstrates the principles needed to develop a smart camera in hardware, based on the experience of developing an FPGA-based smart camera. The smart camera is implemented on a Terasic DE0 FPGA development board, using Terasic's 5-megapixel GPIO camera. The algorithm operates at 120 frames per second at a resolution of 640x480 by utilising a modular streaming approach. Two case studies are explored to demonstrate the development techniques established in this thesis. The first case study develops the global vision system for a robot soccer implementation. The algorithm identifies and calculates the positions and orientations of each robot and the ball. As in many robot soccer implementations, each robot carries colour patches on top that identify it and aid in finding its orientation. The ball is a single solid colour that is completely distinct from the colour patches. Because of uneven light levels, a YUV-like colour space labelled YC1C2 is used to make the colour values more invariant to lighting. The colours are classified, and a connected components algorithm segments the colour patches. The shapes of the classified patches are then used to identify the individual robots, and a CORDIC function is used to calculate the orientation. The second case study investigates an improved colour segmentation design. A new HSY colour space is developed by remapping the Cartesian YC1C2 coordinates to a polar coordinate system. This provides improved colour segmentation by allowing for variations in colour value caused by uneven light patterns and changing light levels.
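    The Cartesian-to-polar remapping behind the HSY colour space can be sketched in a few lines. This is an illustrative model only (the thesis implements it in FPGA hardware, e.g. via CORDIC): hue is the angle and saturation the radius in the (C1, C2) chroma plane, while luminance Y passes through unchanged.

```python
import math

def yc1c2_to_hsy(y, c1, c2):
    """Remap Cartesian chroma (C1, C2) to polar (hue, saturation);
    luminance Y is kept as-is."""
    hue = math.degrees(math.atan2(c2, c1)) % 360.0  # angle in [0, 360)
    saturation = math.hypot(c1, c2)                 # radius of chroma vector
    return hue, saturation, y

# A pure chroma vector along the C2 axis maps to a 90-degree hue:
h, s, y = yc1c2_to_hsy(0.5, 0.0, 1.0)
```

    Classifying on hue rather than raw (C1, C2) makes the segmentation tolerant to brightness changes, since scaling the chroma vector changes saturation but not the angle.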

    Demo: real-time indoors people tracking in scalable camera networks

    In this demo we present a people tracker for indoor environments. The tracker executes in a network of smart cameras with overlapping views. Special attention is given to real-time processing by distributing tasks between the cameras and the fusion server. Each camera processes its images and tracks people in the image plane. Instead of camera images, only metadata (a bounding box per person) is sent from each camera to the fusion server. The metadata is used on the server side to estimate the position of each person in real-world coordinates. Although the tracker is designed to suit any indoor environment, in this demo its performance is presented in a meeting scenario, where occlusions of people by other people and/or furniture are significant and occur frequently. Multiple cameras ensure views from multiple angles, which keeps tracking accurate even under severe occlusions in some of the views.
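    The abstract does not specify how the server maps bounding boxes to real-world coordinates; one common approach, sketched here as an assumption, is to project the bottom-centre of each box (the person's foot point, assumed to lie on the ground plane) through a per-camera ground-plane homography.

```python
import numpy as np

def foot_to_world(bbox, H):
    """Map the bottom-centre of an image-plane bounding box (x, y, w, h)
    to ground-plane world coordinates via a 3x3 homography H."""
    x, y, w, h = bbox
    foot = np.array([x + w / 2.0, y + h, 1.0])  # homogeneous pixel coords
    p = H @ foot
    return p[:2] / p[2]  # back to inhomogeneous world coordinates

# With the identity homography the foot point maps to itself:
pos = foot_to_world((10.0, 20.0, 4.0, 6.0), np.eye(3))
```

    The server can then fuse the per-camera world positions, e.g. by averaging or filtering estimates of the same person from overlapping views.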

    Enabling Image Recognition on Constrained Devices Using Neural Network Pruning and a CycleGAN

    Smart cameras are increasingly used in surveillance solutions in public spaces. Contemporary computer vision applications can be used to recognize events that require intervention by emergency services. Smart cameras can be mounted in locations where citizens feel particularly unsafe, e.g., pathways and underpasses with a history of incidents. One promising approach for smart cameras is edge AI, i.e., deploying AI technology on IoT devices. However, implementing resource-demanding technology such as image recognition using deep neural networks (DNNs) on constrained devices is a substantial challenge. In this paper, we explore two approaches to reducing the compute needed for contemporary image recognition in an underpass. First, we showcase successful neural network pruning, i.e., we retain comparable classification accuracy with only 1.1% of the neurons remaining from the state-of-the-art DNN architecture. Second, we demonstrate how a CycleGAN can be used to transform out-of-distribution images into the operational design domain. We posit that both pruning and CycleGANs are promising enablers for efficient edge AI in smart cameras.
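    The pruning idea can be illustrated with simple unstructured magnitude pruning; the paper does not state which criterion it uses, so this is a generic sketch: keep only the largest-magnitude weights of a layer and zero out the rest.

```python
import numpy as np

def magnitude_prune(weights, keep_fraction):
    """Unstructured magnitude pruning: zero all weights except the
    top keep_fraction by absolute value (ties may keep a few extra)."""
    flat = np.abs(weights).ravel()
    k = max(1, int(round(flat.size * keep_fraction)))
    threshold = np.partition(flat, -k)[-k]  # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Keep the 50% largest-magnitude weights of a tiny example layer:
w = np.array([0.1, -0.9, 0.05, 0.7])
pruned, mask = magnitude_prune(w, 0.5)
```

    In practice pruning is interleaved with fine-tuning so the remaining weights compensate for those removed, which is how accuracy can stay comparable even at very high sparsity.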