
    Efficient Camera Selection for Maximized Target Coverage in Underwater Acoustic Sensor Networks

    In Underwater Acoustic Sensor Networks (UWASNs), cameras have recently been deployed for enhanced monitoring, but their use faces several obstacles. Because video capture and processing consume significant battery power, cameras are kept in sleep mode and activated only when ultrasonic sensors detect a target. The present study proposes a camera relocation scheme for UWASNs that maximizes coverage of detected targets with the least possible vertical camera movement. The approach determines each acoustic sensor's coverage in advance by identifying the cameras whose 3-D orientation and viewing frustum best cover that sensor's region. Whenever a target is detected, this precomputed information is used and shared with the other sensors that detected the same target. Experimental results indicate that, compared to a flooding-based approach, the proposed solution captures detected targets quickly with the least camera movement.
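The abstract's core idea, covering a detected target with the camera that needs the least vertical movement, can be sketched as a greedy selection over precomputed frustum geometry. This is an illustrative sketch under a simplifying assumption (each camera looks straight down with a conical frustum); the class, field names, and coverage test are hypothetical, not the paper's actual model:

```python
import math
from dataclasses import dataclass


@dataclass
class Camera:
    x: float
    y: float
    z: float
    half_fov: float   # half-angle of the downward-looking conical frustum (radians)
    max_range: float

    def vertical_move_to_cover(self, tx: float, ty: float, tz: float) -> float:
        """Smallest |dz| the camera must travel so the target lies inside its frustum."""
        horiz = math.hypot(tx - self.x, ty - self.y)
        # Depth below the camera at which the cone is wide enough to reach `horiz`.
        needed_depth = horiz / math.tan(self.half_fov)
        depth = self.z - tz
        if depth >= needed_depth and math.hypot(horiz, depth) <= self.max_range:
            return 0.0  # target already covered, no movement needed
        return abs(self.z - (tz + needed_depth))  # move to the minimal covering altitude


def select_camera(cameras: list, target: tuple) -> "Camera":
    """Greedy rule: pick the camera requiring the least vertical movement."""
    return min(cameras, key=lambda c: c.vertical_move_to_cover(*target))
```

Precomputing `vertical_move_to_cover` for each sensor/camera pair corresponds to the abstract's idea of determining coverage in advance, so the lookup at detection time is cheap.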

    The Coverage Problem in Video-Based Wireless Sensor Networks: A Survey

    Wireless sensor networks typically consist of a large number of tiny, low-cost electronic devices with limited sensing and computing capabilities that communicate cooperatively to collect information from an area of interest. When the wireless nodes of such networks are equipped with a low-power camera, visual data can be retrieved, enabling a set of novel applications. The nature of video-based wireless sensor networks demands new algorithms and solutions, since traditional wireless sensor network approaches are not feasible, or not efficient, for that specialized communication scenario. The coverage problem is a crucial issue in wireless sensor networks and requires specific solutions when video-based sensors are employed. This paper surveys the state of the art on this particular issue, covering strategies, algorithms, and general computational solutions. Open research areas are also discussed, pointing to promising directions for investigating coverage in video-based wireless sensor networks.

    Data fusion in ubiquitous networked robot systems for urban services

    There is a clear trend toward using robots to provide services that help humans. In this paper, robots acting in urban environments are considered for the task of person guiding. Nowadays it is common to have ubiquitous sensors integrated within buildings, such as camera networks, and wireless communications such as 3G or WiFi. Such infrastructure can be used directly by robotic platforms. The paper shows how combining information from the robots and the sensors overcomes tracking failures, being more robust under occlusion, clutter, and lighting changes. The paper describes the algorithms for tracking with a set of fixed surveillance cameras and for position tracking using the signal strength received by a wireless sensor network (WSN). Moreover, an algorithm to estimate the positions of people from cameras on board the robots is described. The estimates from all these sources are then combined using a decentralized data fusion algorithm to improve performance. This scheme is scalable and can handle communication latencies and failures. We present results of the system operating in real time in a large outdoor environment, including 22 nonoverlapping cameras, a WSN, and several robots.
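Decentralized fusion of position estimates from independent sources (fixed cameras, the WSN, robot-mounted cameras) is commonly built on covariance intersection, which fuses two estimates whose cross-correlation is unknown. The abstract does not specify the authors' exact fusion rule, so the following is a generic sketch of that standard building block, not their algorithm:

```python
import numpy as np


def covariance_intersection(xa, Pa, xb, Pb, omega=0.5):
    """Fuse estimates (xa, Pa) and (xb, Pb) with unknown cross-correlation.

    omega in [0, 1] weights the two sources; the fused covariance is
    consistent for any true correlation between them.
    """
    Pa_inv = np.linalg.inv(Pa)
    Pb_inv = np.linalg.inv(Pb)
    # Convex combination in information (inverse-covariance) space.
    P = np.linalg.inv(omega * Pa_inv + (1.0 - omega) * Pb_inv)
    x = P @ (omega * Pa_inv @ xa + (1.0 - omega) * Pb_inv @ xb)
    return x, P
```

Because each node only exchanges its local estimate and covariance, this kind of rule matches the scalability and latency tolerance the abstract claims for the overall scheme.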

    Self-localizing Smart Cameras and Their Applications

    As the prices of cameras and computing elements continue to fall, it has become increasingly attractive to consider the deployment of smart camera networks. These networks would be composed of small, networked computers equipped with inexpensive image sensors. Such networks could be employed in a wide range of applications including surveillance, robotics and 3D scene reconstruction. One critical problem that must be addressed before such systems can be deployed effectively is the issue of localization. That is, in order to take full advantage of the images gathered from multiple vantage points it is helpful to know how the cameras in the scene are positioned and oriented with respect to each other. To address the localization problem we have proposed a novel approach to localizing networks of embedded cameras and sensors. In this scheme the cameras and the nodes are equipped with controllable light sources (either visible or infrared) which are used for signaling. Each camera node can then automatically determine the bearing to all the nodes that are visible from its vantage point. By fusing these measurements with the measurements obtained from onboard accelerometers, the camera nodes are able to determine the relative positions and orientations of other nodes in the network. This localization technology can serve as a basic capability on which higher level applications can be built. The method could be used to automatically survey the locations of sensors of interest, to implement distributed surveillance systems or to analyze the structure of a scene based on the images obtained from multiple registered vantage points. It also provides a mechanism for integrating the imagery obtained from the cameras with the measurements obtained from distributed sensors. 
We have successfully used our custom-made self-localizing smart camera networks to implement a novel decentralized target-tracking algorithm, create an ad hoc range finder, and localize the components of a self-assembling modular robot.
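As a minimal illustration of how pairwise bearing measurements yield relative positions, two nodes that each observe the bearing to a third node can triangulate it by intersecting the two rays. This is a 2-D least-squares sketch under assumed names; the paper's method additionally fuses onboard accelerometer data and operates in 3-D:

```python
import numpy as np


def triangulate(p1, bearing1, p2, bearing2):
    """Intersect rays p1 + t1*d1 and p2 + t2*d2 in 2-D (least squares)."""
    d1 = np.array([np.cos(bearing1), np.sin(bearing1)])  # unit ray from node 1
    d2 = np.array([np.cos(bearing2), np.sin(bearing2)])  # unit ray from node 2
    # Solve t1*d1 - t2*d2 = p2 - p1 for the ray parameters t1, t2.
    A = np.column_stack([d1, -d2])
    t, *_ = np.linalg.lstsq(A, np.asarray(p2, float) - np.asarray(p1, float), rcond=None)
    return np.asarray(p1, float) + t[0] * d1
```

With every camera node reporting such bearings to its visible neighbors, the pairwise intersections pin down the relative geometry of the whole network up to a global scale and pose.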