
    CamSim: a distributed smart camera network simulator

    Smart cameras allow pre-processing of video data on the camera instead of sending it to a remote server for further analysis. A network of smart cameras allows various vision tasks to be processed in a distributed fashion. While cameras may have different tasks, we concentrate on distributed tracking in smart camera networks. This application introduces several highly interesting problems. Firstly, how can conflicting goals be satisfied, such as cameras in the network tracking objects while also keeping communication overhead low? Secondly, how can cameras in the network self-adapt in response to the behaviour of objects and changes in scenarios, to ensure continued efficient performance? Thirdly, how can cameras organise themselves to improve the overall network's performance and efficiency? This paper presents a simulation environment, called CamSim, that allows distributed self-adaptation and self-organisation algorithms to be tested without setting up a physical smart camera network. The simulation tool is written in Java and hence is highly portable between different operating systems. Abstracting away various computer vision and network communication problems enables a focus on implementing and testing new self-adaptation and self-organisation algorithms for cameras to use.
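
    As a flavour of the trade-off the first question raises, the sketch below weighs a camera's expected tracking utility against communication cost before it bids to take over an object. The utility function and all names are our own illustration, not CamSim's actual API.

```java
// Hypothetical sketch (not CamSim's API): a camera bids to take over an
// object only when its expected gain outweighs the communication penalty.
public class HandoverDecision {

    /** Expected tracking utility: detection confidence scaled by how long
     *  the object is expected to remain visible in this camera's view. */
    static double trackingUtility(double confidence, double visibility) {
        return confidence * visibility;
    }

    /** Bid only if our utility beats the current owner's plus message cost. */
    static boolean shouldBid(double ownUtility, double ownerUtility,
                             double messageCost) {
        return ownUtility > ownerUtility + messageCost;
    }

    public static void main(String[] args) {
        double ownUtility = trackingUtility(0.8, 0.9);   // this camera
        double ownerUtility = trackingUtility(0.6, 0.4); // current tracker
        double messageCost = 0.1;                        // handover penalty
        System.out.println("bid = "
                + shouldBid(ownUtility, ownerUtility, messageCost)); // true
    }
}
```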

    Enabling Runtime Self-Coordination of Reconfigurable Embedded Smart Cameras in Distributed Networks

    Smart camera networks are real-time distributed embedded systems able to perform computer vision using multiple cameras. This new approach is a confluence of four major disciplines (computer vision, image sensors, embedded computing and sensor networks) and has been the subject of intensive work over the past decades. Recent advances in computer vision and network communication, and the rapid growth of high-performance computing, especially using reconfigurable devices, have enabled the design of more robust smart camera systems. Despite these advancements, the effectiveness of current networked vision systems (compared to their operating costs) is still disappointing; the main reasons are the poor coordination among camera entities at runtime and the lack of a clear formalism to dynamically capture and address the self-organization problem without relying on human intervention. In this dissertation, we investigate the use of a declarative modeling approach for capturing runtime self-coordination. We combine modeling approaches borrowed from logic programming, computer vision techniques, and high-performance computing for the design of an autonomous and cooperative smart camera. We propose a compact modeling approach based on Answer Set Programming for architecture synthesis of a system-on-reconfigurable-chip camera that is able to support runtime cooperative work and collaboration with other camera nodes in a distributed network setup. Additionally, we propose a declarative approach for modeling runtime camera self-coordination for distributed object tracking, in which moving targets are handed over in a distributed manner and recovered in case of node failure.
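
    The dissertation models handover and failure recovery declaratively in Answer Set Programming; purely as an illustration, the sketch below expresses the same recovery policy imperatively, using an invented heartbeat scheme: neighbours re-acquire any target whose owning camera has gone silent. The names and the 2-second timeout are assumptions.

```java
// Illustrative sketch only: node-failure recovery via heartbeat timeout.
// The dissertation's actual formalism is declarative (ASP), not this code.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FailureRecovery {
    static final long TIMEOUT_MS = 2000; // assumed heartbeat timeout

    // last heartbeat received per tracked object, keyed by object id
    static final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();

    static void onHeartbeat(String objectId) {
        lastHeartbeat.put(objectId, System.currentTimeMillis());
    }

    /** A neighbour re-acquires any target whose owner has gone silent. */
    static void checkForFailedOwners() {
        long now = System.currentTimeMillis();
        lastHeartbeat.forEach((objectId, ts) -> {
            if (now - ts > TIMEOUT_MS) {
                System.out.println("owner of " + objectId
                        + " failed; re-acquiring target");
                lastHeartbeat.remove(objectId);
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        onHeartbeat("person-7");
        Thread.sleep(2100);      // simulate a crashed owner node
        checkForFailedOwners();  // -> re-acquire person-7
    }
}
```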

    Embedded middleware for smart camera networks and sensor fusion

    Smart cameras are an interesting research field that has evolved over the last decade. In this chapter we focus on the integration of multiple, potentially heterogeneous smart cameras into a distributed system for computer vision and sensor fusion. An important aspect of every distributed system is the system-level software, also called middleware. Hence, we discuss the requirements on middleware for distributed smart cameras and the services such a middleware has to provide. In our opinion, a middleware following the agent-oriented paradigm allows building flexible and self-organizing applications and encourages a modular design.
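
    To make the agent-oriented paradigm concrete, here is a minimal sketch of what such an agent abstraction might look like: a vision task packaged so the middleware can relocate it between camera nodes. The interface and its method names are hypothetical, not the chapter's actual middleware API.

```java
// Hypothetical agent abstraction for an agent-oriented camera middleware.
public class AgentDemo {

    /** A vision task packaged as an agent the middleware can relocate. */
    interface CameraAgent {
        void onFrame(byte[] frame);   // process local sensor data
        byte[] saveState();           // serialize state before migration
        void restoreState(byte[] s);  // resume on the destination camera
    }

    public static void main(String[] args) {
        CameraAgent tracker = new CameraAgent() {
            private byte[] state = new byte[0];
            public void onFrame(byte[] frame) { /* e.g., update a track */ }
            public byte[] saveState() { return state; }
            public void restoreState(byte[] s) { state = s; }
        };
        // The middleware, not the agent, decides when to move it between
        // nodes; migration is save-state, transfer, restore-state.
        tracker.restoreState(tracker.saveState());
    }
}
```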

    Data Aggregation through Web Service Composition in Smart Camera Networks

    Distributed Smart Camera (DSC) networks are power-constrained, real-time distributed embedded systems that perform computer vision using multiple cameras. Providing data aggregation techniques, which are critical for running complex image processing algorithms on DSCs, is a challenging task due to the complexity of video and image data. Providing highly desirable SQL APIs for sophisticated query processing in DSC networks is also challenging for similar reasons. Research on DSCs to date has not addressed these two problems. In this thesis, we develop a novel SOA-based middleware framework on a DSC network that uses Distributed OSGi to expose DSC network services as web services. We also develop a novel web service composition scheme that aids data aggregation, as well as an SQL query interface for DSC networks that allows sophisticated query processing. We validate our service orchestration concept for data aggregation by providing a query primitive for face detection in a smart camera network.
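
    As a rough illustration of the SQL-over-services idea, the sketch below composes per-camera face-detection results behind a single aggregate query. The service interface, names, and query are invented for illustration; the thesis's actual implementation exposes these as Distributed OSGi web services.

```java
// Hypothetical sketch: aggregating per-camera face detection behind an
// SQL-style query, in the spirit of the thesis's service composition.
import java.util.List;
import java.util.stream.Collectors;

public class FaceCountQuery {

    /** Stand-in for one camera's face-detection web service. */
    interface FaceDetectionService {
        String cameraId();
        int countFaces();
    }

    /** Aggregates per-camera results, as a composed service would.
     *  Conceptually: SELECT camera_id, COUNT(face) GROUP BY camera_id. */
    static String run(List<FaceDetectionService> cameras) {
        return cameras.stream()
                .map(c -> c.cameraId() + ": " + c.countFaces())
                .collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        FaceDetectionService cam1 = new FaceDetectionService() {
            public String cameraId() { return "cam1"; }
            public int countFaces() { return 3; }
        };
        System.out.println(run(List.of(cam1))); // cam1: 3
    }
}
```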

    Distributed multi-class road user tracking in multi-camera network for smart traffic applications

    Reliable tracking of road users is one of the important tasks in smart traffic applications. In these applications, a network of cameras is often used to extend the coverage. However, efficient usage of information from cameras which observe the same road user from different viewpoints is seldom explored. In this paper, we present a distributed multi-camera tracker which efficiently uses information from all cameras with overlapping views to accurately track various classes of road users. Our method is designed for deployment on smart camera networks, so that most computer vision tasks are executed locally on the smart cameras and only concise high-level information is sent to a fusion node for global joint tracking. We evaluate the performance of our tracker on a challenging real-world traffic dataset in the context of a Turn Movement Count (TMC) application, achieving high accuracies of 93% and 83% on vehicles and cyclists, respectively. Moreover, performance testing in anomaly detection shows that the proposed method provides reliable detection of abnormal vehicle and pedestrian trajectories.
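
    The division of labour described above (concise detections in, joint tracks out) can be sketched as follows. The greedy same-class nearest-neighbour association and the 1.5 m gate are our simplification for illustration, not the paper's actual fusion algorithm.

```java
// Illustrative sketch: smart cameras send only compact detections (class
// label + ground-plane position); the fusion node merges detections of the
// same road user seen from overlapping views.
import java.util.ArrayList;
import java.util.List;

public class FusionNode {
    record Detection(String cameraId, String label, double x, double y) {}

    static final double GATE = 1.5; // metres; assumed association threshold

    /** Greedy merge: same-class detections within GATE are one object. */
    static List<List<Detection>> associate(List<Detection> detections) {
        List<List<Detection>> objects = new ArrayList<>();
        for (Detection d : detections) {
            List<Detection> match = objects.stream()
                    .filter(o -> o.get(0).label().equals(d.label())
                            && Math.hypot(o.get(0).x() - d.x(),
                                          o.get(0).y() - d.y()) < GATE)
                    .findFirst().orElse(null);
            if (match != null) {
                match.add(d);
            } else {
                List<Detection> fresh = new ArrayList<>();
                fresh.add(d);
                objects.add(fresh);
            }
        }
        return objects;
    }

    public static void main(String[] args) {
        List<Detection> in = List.of(
                new Detection("cam1", "vehicle", 10.0, 5.0),
                new Detection("cam2", "vehicle", 10.4, 5.3), // same car
                new Detection("cam1", "cyclist", 2.0, 1.0));
        System.out.println(associate(in).size() + " distinct road users"); // 2
    }
}
```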

    Autonomous Multicamera Tracking on Embedded Smart Cameras

    There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required, resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons on our campus.
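
    A minimal sketch of a handover trigger, assuming a border-margin heuristic: when the tracked target nears the image edge, the tracking agent would ask the mobile agent system to migrate it to the adjacent camera. The resolution, margin, and names below are illustrative assumptions; the paper's trackers run CamShift on real embedded hardware.

```java
// Illustrative handover trigger: migrate the tracking agent when the
// target's bounding-box centre enters the image border margin.
public class HandoverTrigger {
    static final int WIDTH = 640, HEIGHT = 480, MARGIN = 40; // assumed

    /** True if the target centre (cx, cy) is within MARGIN of any edge. */
    static boolean shouldMigrate(int cx, int cy) {
        return cx < MARGIN || cx > WIDTH - MARGIN
            || cy < MARGIN || cy > HEIGHT - MARGIN;
    }

    public static void main(String[] args) {
        System.out.println(shouldMigrate(620, 240)); // near right edge: true
    }
}
```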

    A Visual Sensor Network for Parking Lot Occupancy Detection in Smart Cities

    Technology is quickly revolutionizing our everyday lives, helping us to perform complex tasks. The Internet of Things (IoT) paradigm is getting more and more popular and is key to the development of Smart Cities. Among all the applications of IoT in the context of Smart Cities, real-time parking lot occupancy detection has recently gained a lot of attention. Solutions based on computer vision yield good performance in terms of accuracy and are deployable on top of visual sensor networks. Since the problem of detecting vacant parking lots is usually distributed over multiple cameras, ad hoc algorithms for content acquisition and transmission have to be devised. A traditional paradigm consists in acquiring and encoding images or videos and transmitting them to a central controller, which is responsible for analyzing such content. A novel paradigm, which moves part of the analysis to the sensing devices, is quickly becoming popular. We propose a system for distributed parking lot occupancy detection based on the latter paradigm, showing that onboard analysis and transmission of simple features yield better overall rate-energy-accuracy performance than the traditional paradigm.
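
    The rate saving behind the onboard-analysis paradigm can be illustrated as follows: instead of a full encoded image, each camera transmits a short feature vector per parking space. The 16-bin grey-level histogram is our stand-in feature for illustration, not the paper's actual descriptor.

```java
// Illustrative sketch: onboard feature extraction so only compact features
// (not images) are transmitted to the central controller.
public class ParkingFeatures {

    /** 16-bin grey-level histogram of one parking-space image patch. */
    static float[] histogram(int[] greyPixels) {
        float[] h = new float[16];
        for (int p : greyPixels) h[(p & 0xFF) / 16]++;
        for (int i = 0; i < 16; i++) h[i] /= greyPixels.length;
        return h;
    }

    public static void main(String[] args) {
        int[] patch = new int[64 * 64];  // one parking space, 64x64 crop
        float[] feature = histogram(patch);
        // 16 floats (64 bytes) per space vs. thousands of bytes for a
        // JPEG crop of the same region.
        System.out.println(feature.length * Float.BYTES + " bytes sent");
    }
}
```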

    WiseNET: smart camera network combined with ontological reasoning for smart building management

    Visual sensor networks (VSN) have become a part of our daily life [1] [2]. Based on our experience, we have identified two main problems in VSNs. Firstly, the problem of selecting relevant information from the huge amount of data produced by the network. Secondly, the problem of integrating the information coming from the different nodes of the network, i.e., linking the different pieces of information together in order to make a decision. These problems can be overcome by including smart cameras in charge of extracting the significant information from the scene and by adding contextual semantic information, i.e., semantic information about what the camera observes, building information, and events that may occur. Semantic information coming from different nodes can be easily integrated in an ontology. Our approach differs from standard computer vision, which deals with algorithm improvement [3] [4] and signal processing problems [5], by dealing with a meaning problem in computer vision, where we try to improve and understand what the camera "sees" by adding contextual semantic information. We developed an innovative distributed system that combines smart cameras with semantic web technology. The proposed system is context sensitive and provides knowledge and logic rules in order to optimize the usage of a smart camera network. The main application of our system is smart building management, where we specifically focus on improving the services offered to the building's users. The WiseNET (Wise Network) system consists of a smart camera network connected to an ontological model. The communication between the smart camera network and the ontological model is bidirectional, i.e., the cameras can send information either when the model asks for it or whenever new data becomes available. The ontological model is a semantic one that allows us to express information in our system and to make decisions according to combinations of the different pieces of information [6]. The semantic model is articulated into three sections: sensor, environment and application. All sections are bilaterally connected by properties and relations. The sensor section consists of a semantic web vocabulary concerning the smart camera, the image processing algorithms and their results [7]. The sensor section is in charge of giving a semantic meaning to what the smart cameras observe, a problem known as the semantic gap [8]. The environment section is composed of a semantic web vocabulary regarding the building information model (BIM) [9]. Finally, the application section comprises a set of rules defining events that may be important for security applications and the different decisions to be taken when these events occur [10].
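
    A highly simplified sketch of the ontology idea: camera observations (sensor section) and building facts (environment section) are asserted as subject-predicate-object statements, and a rule (application section) over the combined facts derives an event. The vocabulary and rule below are invented for illustration; WiseNET uses a full semantic web ontology rather than this toy triple store.

```java
// Toy sketch of ontology-style reasoning over camera + building facts.
import java.util.List;

public class OntologySketch {
    record Triple(String s, String p, String o) {}

    /** Rule: a person detected in a room flagged as restricted is an alert. */
    static boolean restrictedAreaAlert(List<Triple> kb) {
        return kb.stream().anyMatch(t ->
                t.p().equals("detectedIn")
                && kb.contains(new Triple(t.o(), "hasAccess", "restricted")));
    }

    public static void main(String[] args) {
        List<Triple> kb = List.of(
                new Triple("person1", "detectedIn", "room42"),    // camera fact
                new Triple("room42", "hasAccess", "restricted")); // BIM fact
        System.out.println("alert = " + restrictedAreaAlert(kb)); // true
    }
}
```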

    Autonomous real-time surveillance system with distributed IP cameras

    An autonomous Internet Protocol (IP) camera-based object tracking and behaviour identification system, capable of running in real time on an embedded system with limited memory and processing power, is presented in this paper. The main contribution of this work is the integration of processor-intensive image processing algorithms on an embedded platform capable of running in real time for monitoring the behaviour of pedestrians. The Algorithm Based Object Recognition and Tracking (ABORAT) system architecture presented here was developed on an Intel PXA270-based development board clocked at 520 MHz. The platform was connected to a commercial stationary IP-based camera in a remote monitoring station for intelligent image processing. The system is capable of detecting moving objects and their shadows in a complex environment with varying lighting intensity and moving foliage. Objects moving close to each other are also detected, and their trajectories are extracted and then fed into an unsupervised neural network for autonomous classification. The novel intelligent video system presented is also capable of performing simple analytic functions such as tracking and generating alerts when objects enter/leave regions or cross tripwires superimposed on the live video by the operator.
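
    The tripwire analytic mentioned at the end amounts to a standard 2-D segment-intersection test between an object's last two tracked positions and the operator-drawn line, as sketched below. The pixel coordinates and the strict-crossing variant of the test are our illustrative choices.

```java
// Sketch of a tripwire alert: fires when the segment between an object's
// last two positions strictly crosses an operator-drawn line segment.
public class Tripwire {
    static double cross(double ax, double ay, double bx, double by) {
        return ax * by - ay * bx; // 2-D cross product (z component)
    }

    /** True if segment p1-p2 strictly crosses segment q1-q2. */
    static boolean crosses(double[] p1, double[] p2, double[] q1, double[] q2) {
        double d1 = cross(q2[0]-q1[0], q2[1]-q1[1], p1[0]-q1[0], p1[1]-q1[1]);
        double d2 = cross(q2[0]-q1[0], q2[1]-q1[1], p2[0]-q1[0], p2[1]-q1[1]);
        double d3 = cross(p2[0]-p1[0], p2[1]-p1[1], q1[0]-p1[0], q1[1]-p1[1]);
        double d4 = cross(p2[0]-p1[0], p2[1]-p1[1], q2[0]-p1[0], q2[1]-p1[1]);
        return d1 * d2 < 0 && d3 * d4 < 0; // endpoints on opposite sides
    }

    public static void main(String[] args) {
        double[] prev = {100, 200}, curr = {140, 200}; // object movement
        double[] w1 = {120, 100}, w2 = {120, 300};     // vertical tripwire
        if (crosses(prev, curr, w1, w2)) System.out.println("tripwire alert");
    }
}
```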