
    Hierarchical QOS-aware Routing in Multi-tier Multimedia Wireless Sensor Networks

    Wireless Multimedia Sensor Networks (WMSNs) are a particular class of Wireless Sensor Networks (WSNs): they have lower density and limited mobility, require more substantial resources, and need QoS control to transport multimedia streams. In this paper, starting from a reference WMSN architecture, we propose a first approach to hierarchical self-organizing routing that ensures a certain level of QoS.

    A Target Coverage Scheduling Scheme Based on Genetic Algorithms in Directional Sensor Networks

    As a promising tool for monitoring the physical world, directional sensor networks (DSNs) consisting of a large number of directional sensors are attracting increasing attention. Because directional sensors in DSNs have limited battery power and restricted sensing angles, maximizing the network lifetime while monitoring all the targets in a given area remains a challenge. A major technique to conserve the energy of directional sensors is a node wake-up scheduling protocol by which some sensors remain active to provide sensing services, while the others are inactive to conserve their energy. In this paper, we first address the Maximum Set Covers for DSNs (MSCD) problem, which is known to be NP-complete, and present a greedy algorithm-based target coverage scheduling scheme that solves this problem heuristically; this scheme is used as a baseline for comparison. We then propose a target coverage scheduling scheme based on a genetic algorithm that can find the optimal cover sets to extend the network lifetime while monitoring all targets through an evolutionary global search technique. To verify and evaluate these schemes, we conducted simulations and showed that the schemes can contribute to extending the network lifetime. Simulation results indicated that the genetic algorithm-based scheduling scheme had better performance than the greedy algorithm-based scheme in terms of maximizing network lifetime.
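    The greedy baseline is only named in the abstract, so the following is a minimal sketch of a generic greedy cover-set heuristic for this kind of target-coverage problem, not the paper's algorithm; the `coverage` mapping, the `energy` budgets, and the per-slot cost are all assumptions introduced here for illustration.

```python
# Assumption-laden sketch of a greedy cover-set heuristic (not the paper's scheme).
# coverage[(sensor, direction)] is the set of targets that sensor covers when
# pointed in that direction; energy[sensor] is its remaining budget per sensor.

def greedy_cover_sets(coverage, energy, targets, slot_cost=1):
    if not targets:
        return []          # nothing to monitor
    schedule = []          # one cover set (sensor -> direction) per time slot
    while True:
        remaining = set(targets)
        cover = {}
        # Greedily add the (sensor, direction) pair covering most uncovered targets.
        while remaining:
            best = max(
                ((s, d) for (s, d) in coverage
                 if s not in cover and energy[s] >= slot_cost),
                key=lambda sd: len(coverage[sd] & remaining),
                default=None,
            )
            if best is None or not coverage[best] & remaining:
                return schedule   # full coverage is no longer possible
            cover[best[0]] = best[1]
            remaining -= coverage[best]
        for s in cover:
            energy[s] -= slot_cost   # active sensors spend energy this slot
        schedule.append(cover)
```

    Under this toy model the achieved network lifetime is simply the number of cover sets returned times the slot duration; the paper's genetic-algorithm scheme instead searches the space of cover sets globally rather than building them one greedy pick at a time.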

    Noninteractive Localization of Wireless Camera Sensors with Mobile Beacon


    High-Resolution Images with Minimum Energy Dissipation and Maximum Field-of-View in Camera-Based Wireless Multimedia Sensor Networks

    High-resolution images with a wide field of view are important in many applications of wireless multimedia sensor networks. Previous works generally use a multi-tier topology and provide such images by increasing the capabilities of camera sensor nodes, which increases network cost. Moreover, the resulting energy consumption is a considerable issue that has not been seriously addressed in previous work. In this paper, high-resolution images with a wide field of view are generated without increasing the total network cost and with minimum energy dissipation. This is achieved by using image stitching in WMSNs, designing a two-tier network topology with a new structure, and proposing a camera selection algorithm. In the proposed two-tier structure, low-cost camera sensor nodes are used only in the lower tier and sensor nodes without cameras are used in the upper tier, which keeps the total network cost as low as possible. In addition, a simplified image stitching method and a new algorithm for selecting active nodes reduce energy dissipation in the network. Simulation results support these claims.
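    The camera selection step is described only at a high level, so here is a minimal sketch of one plausible selection rule under assumed inputs (each camera's field of view discretized into grid cells, plus an assumed per-capture energy cost): greedily activate the camera with the best coverage gain per unit energy until the requested region is covered. This is an illustration, not the paper's algorithm.

```python
# Sketch under assumed inputs: pick lower-tier cameras whose fields of view
# jointly cover the requested region while keeping activation energy low.
# fov_cells[c] is the set of grid cells camera c sees; energy_cost[c] is the
# assumed cost of capturing and transmitting one image from camera c.

def select_cameras(fov_cells, energy_cost, region_cells):
    uncovered = set(region_cells)
    active = []
    while uncovered:
        # Choose the idle camera with the best coverage gain per unit energy.
        best = max(
            (c for c in fov_cells if c not in active),
            key=lambda c: len(fov_cells[c] & uncovered) / energy_cost[c],
            default=None,
        )
        if best is None or not fov_cells[best] & uncovered:
            break   # the remaining region cannot be covered by idle cameras
        active.append(best)
        uncovered -= fov_cells[best]
    return active, uncovered   # selected nodes and any cells left uncovered
```

    The images from the selected cameras would then be stitched at an upper-tier node; the stitching itself is outside this sketch.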

    A Priority Rate-Based Routing Protocol for wireless multimedia sensor networks

    The development of affordable hardware has made it possible to transmit multimedia data over a wireless medium using sensor devices. Deployed sensors span large geographical areas, generating different kinds of traffic that need to be communicated to the sink in either real-time or non-real-time mode. The small size of sensor nodes makes them even more attractive in various environments, as they can be left unattended for long periods. Since sensor nodes are equipped with limited resources, new energy-efficient protocols and architectures are required to meet application requirements within these limited capabilities when dealing with multimedia data. Multimedia applications are characterized by strict quality-of-service requirements that distinguish them from other data types during transmission. However, the large volume of data produced by the sensor nodes can easily cause traffic congestion, making it difficult to meet these requirements. Congestion has negative impacts on the transmitted data as well as on the sensor network at large; failure to control it degrades the quality of multimedia data received at the sink and further shortens the system lifetime. Next-generation wireless sensor networks are expected to adopt a different model in which service is allocated to multimedia with congestion in mind. Applying traditional wireless sensor routing algorithms to wireless multimedia sensor networks may lead to high delay and poor visual quality for multimedia applications. In this research, a Priority Rate-Based Routing Protocol (PRRP) that assigns priorities to traffic according to their service requirements is proposed. PRRP detects congestion using adaptive random early detection (A-RED) and controls it with a priority rate-based adjustment technique. We study the performance of the proposed multi-path routing algorithm for real-time traffic mixed with three non-real-time traffic classes, each with a different priority: high, medium, or low. Simulation results show that the proposed algorithm performs better than two existing algorithms, PCCP and PBRC-SD, in terms of queueing delay, packet loss, and throughput.
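    The abstract names A-RED congestion detection and priority rate-based adjustment without details, so the sketch below shows generic versions of those two building blocks under assumed parameters; the class, the rate-scaling rule, and all constants are illustrative rather than taken from PRRP.

```python
# Illustrative sketch only: a RED queue with a simplified adaptive step and a
# priority-aware rate scaler.  Parameter values and the scaling rule are
# assumptions, not the PRRP specification.

class AdaptiveRED:
    def __init__(self, min_th=5, max_th=15, wq=0.002, max_p=0.1):
        self.min_th, self.max_th, self.wq, self.max_p = min_th, max_th, wq, max_p
        self.avg = 0.0

    def congestion_level(self, queue_len):
        # Exponentially weighted moving average of the instantaneous queue length.
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
        # Simplified adaptation of max_p (the original Adaptive RED adjusts it
        # periodically with additive increase / multiplicative decrease).
        target = (self.min_th + self.max_th) / 2
        if self.avg > target:
            self.max_p = min(0.5, self.max_p + 0.01)
        else:
            self.max_p = max(0.01, self.max_p * 0.9)
        # Probability used to mark packets / notify sources of congestion.
        if self.avg <= self.min_th:
            return 0.0
        if self.avg >= self.max_th:
            return 1.0
        return self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)


def adjust_rates(rates, priorities, congestion):
    """Scale each source's sending rate down by the congestion level, reducing
    low-priority traffic more than high-priority traffic (illustrative rule)."""
    total = sum(priorities.values())
    return {src: rate * (1 - congestion * (1 - priorities[src] / total))
            for src, rate in rates.items()}
```

    In a real protocol the marking probability would be carried upstream in a congestion notification, and sources would reapply the rate adjustment whenever such a notification arrives.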

    ENERGY-EFFICIENT LIGHTWEIGHT ALGORITHMS FOR EMBEDDED SMART CAMERAS: DESIGN, IMPLEMENTATION AND PERFORMANCE ANALYSIS

    An embedded smart camera is a stand-alone unit that not only captures images but also includes a processor, memory, and a communication interface. Battery-powered embedded smart cameras introduce many additional challenges since they have very limited resources, such as energy, processing power, and memory. When camera sensors are added to an embedded system, the problem of limited resources becomes even more pronounced. Hence, computer vision algorithms running on these camera boards should be lightweight and efficient. This thesis is about designing and developing computer vision algorithms that are aware of, and successfully overcome, the limitations of embedded platforms in terms of power consumption and memory usage. In particular, we are interested in object detection and tracking methodologies and their impact on the performance and battery life of the CITRIC camera, the embedded smart camera employed in this research. This thesis aims to prolong the lifetime of the embedded smart camera platform without affecting the reliability of the system during surveillance tasks. The reader is therefore walked through the whole design process, from development and simulation, through implementation and optimization, to testing and performance analysis. The work presented in this thesis carries out not only software optimization but also hardware-level operations during the stages of object detection and tracking. The performance of the algorithms introduced in this thesis is comparable to state-of-the-art object detection and tracking methods, such as Mixture of Gaussians, Eigen segmentation, and color and coordinate tracking. Unlike the traditional methods, the newly designed algorithms provide a notable reduction in memory requirements as well as in memory accesses per pixel. To accomplish the proposed goals, this work interconnects different levels of the embedded system architecture to make the platform more efficient in terms of energy and resource savings. Thus, the proposed algorithms are optimized at the API, middleware, and hardware levels to access the pixel information of the CMOS sensor directly; only the required pixels are acquired in order to reduce unnecessary communication overhead. Experimental results show that, when the architecture capabilities of an embedded platform are exploited, a 41.24% decrease in energy consumption and a 107.2% increase in battery life can be accomplished. Compared to traditional object detection and tracking methods, the proposed work provides an additional 8 hours of continuous processing on 4 AA batteries, increasing the lifetime of the camera to 15.5 hours.
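    The reported figures can be cross-checked against one another (assuming the 107.2% battery-life increase is measured relative to the traditional-method baseline on the same 4 AA batteries); a one-line calculation shows they are mutually consistent:

```python
# Back-of-the-envelope check of the figures quoted above (assumed baseline:
# traditional detection/tracking on the same 4 AA batteries).
new_lifetime_h = 15.5
baseline_h = new_lifetime_h / (1 + 1.072)   # ≈ 7.48 h for the traditional methods
extra_h = new_lifetime_h - baseline_h       # ≈ 8.0 h, matching the "additional 8 hours"
print(f"baseline ≈ {baseline_h:.2f} h, additional ≈ {extra_h:.2f} h")
```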

    The role of groups in smart camera networks

    Thesis (Ph.D.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006. By Jacky Mallett. Includes bibliographical references (p. 103-111).
    Recent research in sensor networks has made it possible to deploy networks of sensors with significant local processing. These sensor networks are revolutionising information collection and processing in many different environments. Often the amount of local data produced by these devices, and their sheer number, makes centralised data processing infeasible. Smart camera networks represent a particular challenge in this regard, partly because of the amount of data produced by each camera, but also because many high-level vision algorithms require data from more than one camera. Many distributed algorithms exist that work locally to produce results from a collection of nodes, but as this number grows the algorithm's performance is quickly crippled by the resulting exponential increase in communication overhead. This thesis examines the limits this places on peer-to-peer cooperation between nodes, and demonstrates how, for large networks, these limits can only be circumvented by locally formed organisations of nodes. A local group-forming protocol is described that provides a method for nodes to create a bottom-up organisation based purely on local conditions. This allows the formation of a dynamic information network of cooperating nodes, in which a distributed algorithm can organise the communications of its nodes using purely local knowledge to maintain its global network performance. Building on recent work using SIFT feature detection, this protocol is demonstrated in a network of smart cameras. Local groups with shared views are established, which allow each camera to locally determine its position relative to others in the network. The result partitions the network into groups of cameras with known visual relationships, which can then be used for further analysis.
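    The group-forming protocol itself is not detailed in this abstract; the sketch below only illustrates the underlying idea of grouping cameras by shared views, assuming a precomputed count of matched SIFT features between camera pairs and using plain connected components rather than the thesis's local, bottom-up protocol.

```python
# Illustration only (not the thesis's protocol): group cameras whose views
# overlap, given assumed pairwise counts of matched SIFT features.
from collections import defaultdict

def group_by_shared_views(cameras, match_counts, min_matches=20):
    """match_counts maps frozenset({cam_a, cam_b}) -> number of matched SIFT
    features.  Cameras joined by enough matches end up in the same group."""
    parent = {c: c for c in cameras}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]   # path halving
            c = parent[c]
        return c

    for pair, count in match_counts.items():
        if count >= min_matches:
            a, b = tuple(pair)
            parent[find(a)] = find(b)       # union the two cameras' groups

    groups = defaultdict(list)
    for c in cameras:
        groups[find(c)].append(c)
    return list(groups.values())
```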

    Efficient Support for Application-Specific Video Adaptation

    As video applications become more diverse, video must be adapted in different ways to meet the requirements of different applications when resources are insufficient. In this dissertation, we address two kinds of requirements that cannot be met by existing video adaptation technologies: (i) accommodating large variations in resolution and (ii) collecting video effectively in a multi-hop sensor network. In addition, we address the requirements for implementing video adaptation in a sensor network. Accommodating large variations in resolution is required by the existence of display devices with widely disparate screen sizes. Existing resolution adaptation technologies usually aim at adapting video between two resolutions. We examine the limitations that prevent these technologies from supporting a large number of resolutions efficiently. We propose several hybrid schemes and study their performance. Among these hybrid schemes, Bonneville, a framework that combines multiple encodings with limited scalability, can make good trade-offs when organizing compressed video to support a wide range of resolutions. Video collection in a sensor network requires adapting video in a multi-hop store-and-forward network with multiple video sources. This task cannot be supported effectively by existing adaptation technologies, which are designed for real-time streaming applications from a single source over IP-style end-to-end connections. We propose to adapt video in the network instead of at the network edge, and we propose a framework, Steens, to compose adaptation mechanisms on multiple nodes. We design two signaling protocols in Steens to coordinate multiple nodes. Our simulations show that in-network adaptation can use buffer space on intermediate nodes for adaptation and achieve better video quality than conventional network-edge adaptation. They also show that explicit collaboration among multiple nodes through signaling can improve video quality, waste less bandwidth, and maintain bandwidth-sharing fairness. Implementing video adaptation in a sensor network requires system support for programmability, retaskability, and high performance. We propose Cascades, a component-based framework, to provide the required support. A prototype implementation of Steens in this framework shows that the performance overhead is less than 5% compared to a hard-coded C implementation.
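    Steens' own adaptation mechanisms and signaling messages are not specified in this abstract, so the following is only a generic illustration of what in-network adaptation at an intermediate store-and-forward node might look like: when the forwarding buffer overflows, drop the least important frame types first. The frame types, sizes, and priority order are assumptions made here for illustration.

```python
# Generic sketch (not the Steens mechanism itself): an intermediate node trims
# buffered video to fit its forwarding buffer, dropping B-frames first, then
# P-frames, and keeping I-frames as long as possible.

FRAME_PRIORITY = {"I": 0, "P": 1, "B": 2}   # lower value = more important

def adapt_buffer(frames, capacity):
    """Trim a list of (frame_type, size_bytes) tuples to fit within capacity."""
    total = sum(size for _, size in frames)
    for level in sorted(FRAME_PRIORITY.values(), reverse=True):
        if total <= capacity:
            break
        kept = []
        for ftype, size in frames:
            if FRAME_PRIORITY[ftype] == level and total > capacity:
                total -= size               # drop this frame
            else:
                kept.append((ftype, size))
        frames = kept
    return frames

frames = [("I", 8000), ("B", 2000), ("B", 2000), ("P", 4000), ("B", 2000)]
print(adapt_buffer(frames, capacity=12000))  # B-frames are dropped first
```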

    Piecing together the magic mirror : a software framework to support distributed, interactive applications

    Thesis (S.M.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006. By Diane E. Hirsh. Includes bibliographical references (p. 183-185).
    Developing applications for distributed platforms can be very difficult and complex. We have developed a software framework to support distributed, interactive, collaborative applications that run on collections of self-organizing, autonomous computational units. We have included modules to aid application programmers with the exchange of messages, the development of fault tolerance, and the aggregation of sensor data from multiple sources. We assume a mesh-network style of computing, where there is no shared clock, no shared memory, and no central point of control. We have built a distributed user input system and a distributed simulation application based on the framework, and we have demonstrated the viability of our application by testing it with users.