
    Full-View Coverage Problems in Camera Sensor Networks

    Camera Sensor Networks (CSNs) have emerged as an information-rich sensing modality with many potential applications and have received much research attention over the past few years. One of the major challenges in CSN research is that camera sensors differ from traditional scalar sensors: cameras at different positions form distinct views of the object in question. As a result, simply combining the sensing ranges of the cameras across the field does not necessarily yield effective camera coverage, since the face image (or the targeted aspect) of the object may be missed. Instead, the angle between the object's facing direction and the camera's viewing direction is used to measure the quality of sensing in CSNs. This distinction makes the coverage verification and deployment methodologies designed for conventional sensor networks unsuitable. A new coverage model called full-view coverage can precisely characterize the features of coverage in CSNs: an object is full-view covered if, no matter which direction it faces, there is always a camera covering it whose viewing direction is sufficiently close to the object's facing direction. In this dissertation, we consider three areas of research for CSNs: (1) an analytical theory for full-view coverage; (2) energy-efficiency issues in full-view coverage CSNs; and (3) multi-dimensional full-view coverage theory. For the first topic, we propose a novel analytical full-view coverage theory, where the set of full-view covered points is produced by a numerical methodology. Based on this theory, we solve the following problems. First, we address the full-view coverage hole detection problem and provide healing solutions. Second, we propose k-Full-View-Coverage algorithms in camera sensor networks. Finally, we address the camera sensor density minimization problem for triangular-lattice-based deployment in full-view covered camera sensor networks, where we argue that there is a flaw in the previous literature and present our corresponding solution. For the second topic, we discuss lifetime and full-view coverage guarantees through distributed algorithms in camera sensor networks; another energy issue we discuss is object tracking in full-view coverage camera sensor networks. For the third topic, we address the multi-dimensional full-view coverage problem, where we propose a novel 3D full-view coverage model, tackle the full-view coverage optimization problem in order to minimize the number of camera sensors, and demonstrate a valid solution. This research is important due to the numerous applications of CSNs; in particular, since some deployments can be in remote locations, it is critical to obtain accurate, meaningful data efficiently.
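
    The full-view coverage definition above lends itself to a direct numerical check. The sketch below is a simplified 2D illustration rather than the dissertation's method: it samples the object's possible facing directions and tests whether each is within an effective angle theta of the direction from the object toward some camera that actually sees it. The sector field-of-view model and all parameter names are assumptions made for the example.

        import math

        def angle_diff(a, b):
            """Signed difference between two angles, wrapped to (-pi, pi]."""
            d = (a - b) % (2 * math.pi)
            return d - 2 * math.pi if d > math.pi else d

        def full_view_covered(obj, cameras, theta, sensing_range, fov, samples=720):
            """Numerically test whether point `obj` is full-view covered.

            cameras: list of ((x, y), orientation) pairs; orientation is the
                     camera's viewing direction in radians.
            theta:   effective angle -- maximum allowed angle between the object's
                     facing direction and the direction from the object to a camera.
            """
            ox, oy = obj
            towards = []  # directions from the object toward cameras that cover it
            for (cx, cy), orient in cameras:
                dx, dy = ox - cx, oy - cy
                if math.hypot(dx, dy) > sensing_range:
                    continue                                   # out of sensing range
                if abs(angle_diff(math.atan2(dy, dx), orient)) > fov / 2:
                    continue                                   # outside the field of view
                towards.append(math.atan2(cy - oy, cx - ox))
            if not towards:
                return False
            # Every sampled facing direction must be within theta of some covering camera.
            return all(
                any(abs(angle_diff(f, t)) <= theta for t in towards)
                for f in (2 * math.pi * k / samples for k in range(samples))
            )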

    Belt-Barrier Construction Algorithm for WVSNs

    Previous research on barrier coverage did not consider the breadth of coverage in Wireless Visual Sensor Networks (WVSNs). In this paper, we consider breadth in order to increase the Quality of Monitor (QoM) of WVSNs. The proposed algorithm is called the Distributed β-Breadth Belt-Barrier construction algorithm (D-TriB). D-TriB constructs a belt-barrier of breadth β to offer a β level of QoM, which we call β-QoM. D-TriB not only reduces the number of camera sensors required to construct a barrier but also ensures that any barrier with β-QoM in the network can be identified. Finally, the success rate of the proposed algorithm is evaluated through simulations.

    On Barrier Coverage in Wireless Camera Sensor Networks

    This paper proposes a distributed algorithm, namely CoBRA (Cone-based Barrier coveRage Algorithm), to achieve barrier coverage in wireless camera sensor networks (WCSNs). To the best of our knowledge, CoBRA is the first algorithm to address the barrier coverage issue in WCSNs. The basic concept of CoBRA, based on several observations, is that each camera sensor can determine the locally possible barrier lines according to its geographical relations with its neighbors. A sink in a WCSN initiates Barrier Request (BREQ) messages to form the possible barrier lines; afterward, a barrier line is constructed by the Barrier Reply (BREP) message initiated by another sink. CoBRA comprises three phases: the Initial Phase, the Candidate Selection Phase, and the Decision Phase. In the Initial Phase, each camera sensor collects the local information of its neighbors and estimates the possible barrier lines. In the Candidate Selection Phase, a sink initiates BREQ packets and forwards them to camera sensors; camera sensors receiving a BREQ re-forward it to those neighbors capable of forming a barrier line, and all camera sensors receiving the BREQ continue forwarding it to their neighbors in the same manner. Finally, in the Decision Phase, after the BREQ message has been transmitted through the whole monitoring area, a BREP message is used by the sink to select a barrier line in the WCSN. Barrier coverage is achieved by finding such a barrier line in the monitoring area. Experimental results show that CoBRA can efficiently achieve barrier coverage in WCSNs; compared to the ideal results, CoBRA can accomplish barrier coverage with fewer nodes in random deployment scenarios.
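
    The three phases described above essentially flood BREQ messages from one sink across sensors whose coverage cones overlap until the opposite side of the monitored area is reached, after which a BREP traces the chosen barrier back. The snippet below is a minimal, centralized sketch of that idea, not the paper's distributed, message-based protocol; the predicates overlaps, touches_left, and touches_right are placeholders the caller must supply, since the geometric cone-overlap test is not reproduced here.

        from collections import deque

        def find_barrier_line(sensors, overlaps, touches_left, touches_right):
            """Centralized sketch of the BREQ/BREP idea behind CoBRA.

            sensors:        list of camera-sensor ids.
            overlaps(a, b): True if the coverage cones of a and b overlap.
            touches_left / touches_right: True if a sensor's cone reaches the
                            left / right boundary of the monitored strip.
            Returns one barrier line (a list of sensor ids) or None.
            """
            parent = {}
            queue = deque(s for s in sensors if touches_left(s))   # BREQ starts here
            seen = set(queue)
            while queue:
                cur = queue.popleft()
                if touches_right(cur):      # a BREP would trace back along `parent`
                    line = [cur]
                    while line[-1] in parent:
                        line.append(parent[line[-1]])
                    return list(reversed(line))
                for nxt in sensors:
                    if nxt not in seen and overlaps(cur, nxt):
                        seen.add(nxt)
                        parent[nxt] = cur
                        queue.append(nxt)
            return None                     # no barrier line crosses the area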

    Placement, visibility and coverage analysis of dynamic pan/tilt/zoom camera sensor networks

    Multi-camera vision systems have important applications in a number of fields, including robotics and security. One interesting problem related to multi-camera vision systems is to determine the effect of camera placement on the quality of service provided by a network of Pan/Tilt/Zoom (PTZ) cameras with respect to a specific image processing application. The goal of this work is to investigate how to place a team of PTZ cameras, potentially used for collaborative tasks such as surveillance, and to analyze the dynamic coverage they can provide. Computational Geometry approaches to various formulations of sensor placement problems have been shown to offer very elegant solutions; however, they often involve unrealistic assumptions about real-world sensors, such as infinite sensing range and infinite rotational speed. Other solutions to camera placement have attempted to account for the constraints of real-world computer vision applications, but offer solutions that are approximations over a discrete problem space. A contribution of this work is an algorithm for camera placement that leverages Computational Geometry principles over a continuous problem space, utilizing a model for dynamic camera coverage that is simple yet representative. This offers a balance between accounting for real-world application constraints and creating a problem that is tractable.
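
    To make concrete why finite rotational speed matters for dynamic coverage, the toy sketch below estimates how long a single PTZ camera would need to pan before a given point enters its fan-shaped footprint. It uses assumed parameters (fixed range, fixed angular field of view, constant pan speed, with tilt and zoom ignored) and is not the coverage model developed in this work.

        import math

        def time_to_cover(camera_xy, pan_deg, point_xy, fov_deg, range_m, speed_deg_s):
            """Seconds of panning needed before `point_xy` enters the camera's view.

            Returns None if the point lies beyond the sensing range.
            """
            dx = point_xy[0] - camera_xy[0]
            dy = point_xy[1] - camera_xy[1]
            if math.hypot(dx, dy) > range_m:
                return None                               # beyond the sensing range
            bearing = math.degrees(math.atan2(dy, dx))
            off = (bearing - pan_deg + 180) % 360 - 180   # signed offset from current pan
            needed = max(0.0, abs(off) - fov_deg / 2)     # rotate until point is in the fan
            return needed / speed_deg_s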

    Deployment, Coverage and Network Optimization in Wireless Video Sensor Networks for 3D Indoor Monitoring

    As a result of extensive research over the past decade or so, wireless sensor networks (WSNs) have evolved into a well-established technology for industrial, environmental, and medical applications. However, traditional WSNs employ sensors such as thermal resistors or photoresistors that are often modeled with simple omni-directional sensing ranges and focus only on scalar data within the sensing environment. In contrast, the sensing range of a wireless video sensor is directional and capable of providing more detailed video information about the sensing field. Additionally, with the introduction of modern features in non-fixed-focus cameras such as pan, tilt, and zoom (PTZ), the sensing range of a video sensor can further be regarded as a fan shape in 2D and a pyramid shape in 3D. This uniqueness of wireless video sensors, together with the challenges posed by the deployment restrictions of indoor monitoring, makes the traditional sensor coverage, deployment, and networking solutions developed for WSNs under 2D sensing models ineffective and inapplicable to wireless video sensor network (WVSN) issues in 3D indoor spaces, thus calling for novel solutions. In this dissertation, we propose optimization techniques and develop solutions that address the coverage, deployment, and network issues associated with wireless video sensor networks in a 3D indoor environment. We first model the general problem in a continuous 3D space so as to minimize the total number of video sensors required to monitor a given 3D indoor region. We then convert it into a discrete problem by incorporating 3D grids, which can achieve arbitrary approximation precision by adjusting the grid granularity. Due in part to the uniqueness of the visual sensor's directional sensing range, we propose to exploit this directional feature to determine the optimal angular coverage of each deployed visual sensor. Thus, we propose to deploy the visual sensors from divergent directional angles and further extend k-coverage to "k-angular-coverage", while ensuring connectivity within the network. We then propose a series of mechanisms to handle obstacles in the 3D environment. We develop efficient greedy heuristic solutions that integrate all of the aforementioned considerations one by one and can yield high-quality results. Based on this, we also propose enhanced depth-first search (DFS) algorithms that can not only further improve the solution quality but also return optimal results if given enough time. Our extensive simulations demonstrate the superiority of both our greedy heuristic and enhanced DFS solutions. Finally, this dissertation discusses some future research directions, such as in-network traffic routing and scheduling issues.
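
    The grid-based discrete formulation described here can be read as a minimum set-cover instance, which greedy heuristics approximate. The fragment below is a generic greedy selection sketch under that reading; the candidates dictionary (mapping each candidate position/orientation placement to the grid points its pyramid-shaped field of view would cover) is assumed to be computed elsewhere, and obstacle handling, k-angular-coverage, and connectivity are omitted.

        def greedy_camera_selection(candidates, targets):
            """Greedy sketch of the discrete placement step.

            candidates: dict mapping a candidate (position, orientation) placement
                        to the set of 3D grid points it would cover.
            targets:    set of grid points that must be covered.
            Returns a list of chosen placements (not necessarily optimal).
            """
            uncovered = set(targets)
            chosen = []
            while uncovered:
                # Pick the placement covering the most still-uncovered grid points.
                best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
                gain = candidates[best] & uncovered
                if not gain:
                    break      # remaining points cannot be covered by any candidate
                chosen.append(best)
                uncovered -= gain
            return chosen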

    The Coverage Problem in Video-Based Wireless Sensor Networks: A Survey

    Wireless sensor networks typically consist of a great number of tiny, low-cost electronic devices with limited sensing and computing capabilities that communicate cooperatively to collect some kind of information from an area of interest. When the wireless nodes of such networks are equipped with a low-power camera, visual data can be retrieved, facilitating a new set of novel applications. The nature of video-based wireless sensor networks demands new algorithms and solutions, since traditional wireless sensor network approaches are neither feasible nor efficient for that specialized communication scenario. The coverage problem is a crucial issue in wireless sensor networks, requiring specific solutions when video-based sensors are employed. In this paper, we survey the state of the art on this particular issue, covering strategies, algorithms, and general computational solutions. Open research areas are also discussed, pointing to promising directions for investigating coverage in video-based wireless sensor networks.