
    Deployment, Coverage and Network Optimization in Wireless Video Sensor Networks for 3D Indoor Monitoring

    As a result of extensive research over the past decade or so, wireless sensor networks (WSNs) have evolved into a well-established technology for industrial, environmental and medical applications. However, traditional WSNs employ sensors such as thermal sensors or photoresistors that are often modeled with simple omnidirectional sensing ranges and capture only scalar data from the sensing environment. In contrast, the sensing range of a wireless video sensor is directional and capable of providing more detailed video information about the sensing field. Additionally, with the introduction of modern features in non-fixed-focus cameras such as pan, tilt and zoom (PTZ), the sensing range of a video sensor can be further regarded as a fan shape in 2D and a pyramid shape in 3D. This uniqueness of wireless video sensors, together with the deployment restrictions of indoor monitoring, makes the traditional sensor coverage, deployment and networking solutions designed for 2D sensing models ineffective and inapplicable to the wireless video sensor network (WVSN) problem in 3D indoor space, thus calling for novel solutions. In this dissertation, we propose optimization techniques and develop solutions that address the coverage, deployment and network issues associated with wireless video sensor networks in a 3D indoor environment. We first model the general problem in a continuous 3D space to minimize the total number of video sensors required to monitor a given 3D indoor region. We then convert it into a discrete version by incorporating 3D grids, which can achieve arbitrary approximation precision by adjusting the grid granularity. Due in part to the unique directional sensing range of visual sensors, we propose to exploit this directional feature to determine the optimal angular coverage of each deployed visual sensor. Thus, we propose to deploy visual sensors from divergent directional angles and further extend k-coverage to "k-angular-coverage", while ensuring connectivity within the network. We then propose a series of mechanisms to handle obstacles in the 3D environment. We develop efficient greedy heuristic solutions that integrate all of these considerations and yield high-quality results. Based on this, we also propose enhanced depth-first search (DFS) algorithms that not only further improve solution quality but also return optimal results if given enough time. Our extensive simulations demonstrate the superiority of both our greedy heuristic and enhanced DFS solutions. Finally, this dissertation discusses future research directions such as in-network traffic routing and scheduling issues.
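    The abstract does not spell out the greedy heuristic, but its core pattern is presumably the classic greedy set-cover selection over discretized grid points. Below is a minimal Python sketch under that assumption; `covers` is a hypothetical predicate testing whether a grid point falls inside a candidate pose's viewing pyramid, not the dissertation's actual routine.

    ```python
    def greedy_camera_cover(grid_points, candidate_poses, covers):
        """Greedy set-cover heuristic: repeatedly pick the candidate pose
        that covers the most still-uncovered grid points.

        grid_points:     iterable of 3D grid cells that must be monitored
        candidate_poses: list of candidate (position, pan, tilt) placements
        covers(pose, p): hypothetical predicate - True if grid point p lies
                         inside the viewing pyramid of a sensor at `pose`
        """
        uncovered = set(grid_points)
        chosen = []
        while uncovered:
            best = max(candidate_poses,
                       key=lambda pose: sum(covers(pose, p) for p in uncovered))
            newly = {p for p in uncovered if covers(best, p)}
            if not newly:  # remaining points cannot be covered by any pose
                break
            chosen.append(best)
            uncovered -= newly
        return chosen, uncovered
    ```

    Extending this pattern to k-angular-coverage would amount to tracking, per grid point, how many distinct angular sectors have already been covered rather than a single boolean.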

    On realistic target coverage by autonomous drones

    Low-cost mini-drones with advanced sensing and maneuverability enable a new class of intelligent sensing systems. To achieve the full potential of such drones, it is necessary to develop new, enhanced formulations of both common and emerging sensing scenarios. Namely, several fundamental challenges in visual sensing are yet to be solved, including (1) fitting sizable targets in camera frames; (2) positioning cameras at effective viewpoints matching target poses; and (3) accounting for occlusion by elements in the environment, including other targets. In this article, we introduce Argus, an autonomous system that utilizes drones to collect target information incrementally through a two-tier architecture. To tackle the stated challenges, Argus employs a novel geometric model that captures both target shapes and coverage constraints. Recognizing drones as the scarcest resource, Argus aims to minimize the number of drones required to cover a set of targets. We prove this problem is NP-hard, and even hard to approximate, before deriving a best-possible approximation algorithm along with a competitive sampling heuristic that runs up to 100× faster according to large-scale simulations. To test Argus in action, we demonstrate and analyze its performance on a prototype implementation. Finally, we present a number of extensions to accommodate more application requirements and highlight some open problems.
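    As a rough illustration of the viewpoint-matching constraint (challenge 2 above), the sketch below tests whether a camera position is both within range and sufficiently in front of a posed target. The function, its name, and its thresholds are illustrative assumptions, not Argus's actual geometric model.

    ```python
    import numpy as np

    def views_target(cam_pos, target_pos, target_normal,
                     max_range=30.0, max_view_angle=np.radians(60)):
        """Viewpoint check: the camera must be close enough and roughly
        in front of the target's face.

        cam_pos, target_pos: 3D numpy points; target_normal: unit vector
        the target is facing. Thresholds are illustrative placeholders.
        """
        to_cam = cam_pos - target_pos
        dist = np.linalg.norm(to_cam)
        if dist == 0 or dist > max_range:
            return False
        # Angle between the target's facing direction and the camera direction
        cos_angle = np.dot(to_cam / dist, target_normal)
        return cos_angle >= np.cos(max_view_angle)
    ```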

    Obstacle-Aware Wireless Video Sensor Network Deployment for 3D Indoor Space Monitoring

    In recent years, wireless video sensor networks (WVSNs) have emerged as a leading technology for monitoring 3D indoor spaces in campus, industrial and medical areas, as well as other types of environments. In contrast to traditional sensors such as heat or light sensors, which are often modeled with an omnidirectional sensing range, the sensing range of a video sensor is directional and can be modeled as a pyramid shape in 3D. Moreover, an indoor environment often contains obstacles such as lamp stands or furniture, which introduce additional challenges and further render deployment solutions designed for traditional sensors and 2D sensing fields inapplicable or incapable of solving the WVSN deployment problem for 3D indoor space monitoring. In this thesis, we make a first attempt to address this by modeling the general problem in a continuous space, striving to minimize the number of video sensors required to cover the given 3D regions. We then convert it into a discrete version by incorporating 3D grids into our discrete model, which can achieve arbitrary approximation precision by adjusting the grid granularity. We also create two strategies for dealing with stationary obstacles in the 3D indoor space, namely the Divide and Conquer Detection strategy and the Accurate Detection strategy. We propose a greedy heuristic and an enhanced depth-first search (DFS) algorithm to solve the discrete version of the problem, the latter of which, given enough time, returns the optimal solution. We evaluate our solutions with a customized simulator that emulates actual WVSN deployment and 3D indoor space coverage. The evaluation results demonstrate that our greedy heuristic reduces the required video sensors by up to 47% over a baseline algorithm, and our enhanced DFS achieves an additional reduction of up to 25%.
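    The two obstacle-detection strategies are named but not detailed in the abstract. One standard building block for this kind of obstacle-aware visibility check is a segment-versus-box intersection test (the slab method); the Python sketch below assumes obstacles are modeled as axis-aligned boxes, which may differ from the thesis's actual obstacle model.

    ```python
    import numpy as np

    def segment_hits_box(p0, p1, box_min, box_max):
        """Slab test: does the line segment p0->p1 pass through an
        axis-aligned box (a simple stand-in for furniture obstacles)?"""
        d = p1 - p0
        t_near, t_far = 0.0, 1.0
        for axis in range(3):
            if abs(d[axis]) < 1e-12:
                # Segment parallel to this slab: reject if outside it
                if p0[axis] < box_min[axis] or p0[axis] > box_max[axis]:
                    return False
            else:
                t1 = (box_min[axis] - p0[axis]) / d[axis]
                t2 = (box_max[axis] - p0[axis]) / d[axis]
                t_near = max(t_near, min(t1, t2))
                t_far = min(t_far, max(t1, t2))
                if t_near > t_far:
                    return False
        return True

    def visible(cam, point, obstacles):
        """A grid point counts as covered only if no obstacle blocks
        the sight line from the camera."""
        return not any(segment_hits_box(cam, point, lo, hi)
                       for lo, hi in obstacles)
    ```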

    A wireless sensor network-based approach to large-scale dimensional metrology

    In many branches of industry, dimensional measurements have become an important part of the production cycle, in order to check product compliance with specifications. This task is not trivial, especially when dealing with large-scale dimensional measurements: the bigger the measured dimensions, the harder it is to achieve high accuracy. Nowadays, the problem can be handled using many metrological systems based on different technologies (e.g., optical, mechanical, electromagnetic). Each of these systems is more or less adequate depending upon the measuring conditions, the user's experience and skill, or other factors such as time, cost, accuracy and portability. This article focuses on a new possible approach to large-scale dimensional metrology based on wireless sensor networks. Advantages and drawbacks of such an approach are analysed and discussed in depth. The article then briefly presents a recent prototype system - the Mobile Spatial Coordinate-Measuring System (MScMS-II) - which has been developed at the Industrial Metrology and Quality Laboratory of DISPEA - Politecnico di Torino. The system seems to be suitable for performing dimensional measurements of large-sized objects (sizes on the order of several meters). Owing to its distributed nature, the system - based on a wireless network of optical devices - is portable, fully scalable with respect to dimensions and shapes, and easily adaptable to different working environments. Preliminary results of experimental tests, aimed at evaluating system performance, as well as research perspectives for further improvements, are discussed.
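    Distance-based systems like MScMS-II ultimately turn device-to-anchor ranges into 3D coordinates. A common way to do this, sketched below in Python, is to linearize the range equations and solve them by least squares; this is a generic multilateration sketch, not necessarily the exact method the prototype uses.

    ```python
    import numpy as np

    def multilaterate(anchors, distances):
        """Least-squares 3D position estimate from distances to known anchors.

        Subtracting the first range equation from the others removes the
        quadratic term in ||x - a_i||^2 = d_i^2, leaving the linear system
            2 (a_i - a_0) . x = d_0^2 - d_i^2 + ||a_i||^2 - ||a_0||^2.
        Needs at least 4 non-coplanar anchors for a well-conditioned fix.
        """
        a = np.asarray(anchors, float)
        d = np.asarray(distances, float)
        A = 2 * (a[1:] - a[0])
        b = d[0]**2 - d[1:]**2 + np.sum(a[1:]**2, axis=1) - np.sum(a[0]**2)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x
    ```

    With noisy ranges, the least-squares residual also gives a cheap self-diagnostic of measurement quality, which matters when network geometry degrades near the edges of the working volume.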

    PHALANX: Expendable Projectile Sensor Networks for Planetary Exploration

    Technologies enabling long-term, wide-ranging measurement in hard-to-reach areas are a critical need for planetary science inquiry. Phenomena of interest include flows or variations in volatiles, gas composition or concentration, particulate density, or even simply temperature. Improved measurement of these processes enables understanding of exotic geologies and of distributions or correlated indicators of trapped water or biological activity. However, such data are often needed in unsafe areas such as caves, lava tubes, or steep ravines not easily reached by current spacecraft and planetary robots. To address this capability gap, we have developed miniaturized, expendable sensors which can be ballistically lobbed from a robotic rover or static lander - or even dropped during a flyover. These projectiles can perform sensing during flight and after anchoring to terrain features. By augmenting exploration systems with these sensors, we can extend situational awareness, perform long-duration monitoring, and reduce utilization of primary mobility resources, all of which are crucial in surface missions. We call the integrated payload that includes a cold gas launcher, smart projectiles, planning software, network discovery, and science sensing PHALANX. In this paper, we introduce the mission architecture for PHALANX and describe an exploration concept that pairs projectile sensors with a rover mothership. Science use cases explored include reconnaissance using ballistic cameras, volatiles detection, and building time-lapse maps of temperature and illumination conditions. Strategies to autonomously coordinate constellations of deployed sensors to self-discover and localize with peer ranging (i.e., a local GPS) are summarized, thus providing communications infrastructure beyond line of sight (BLOS) of the rover. Capabilities were demonstrated through both simulation and physical testing with a terrestrial prototype. The approach to developing a terrestrial prototype is discussed, including design of the launching mechanism, projectile optimization, micro-electronics fabrication, and sensor selection. Results from early testing and characterization of commercial off-the-shelf (COTS) components are reported. Nodes passed burn-in tests over 48 hours at full logging duty cycle. Integrated field tests were conducted in the Roverscape, a half-acre planetary analog environment at NASA Ames, where we tested up to 10 sensor nodes simultaneously coordinating with an exploration rover. Ranging accuracy has been demonstrated to be within +/-10 cm over 20 m using commodity radios when compared to high-resolution laser-scanner ground truth. Evolution of the design is described, including progressive miniaturization of the electronics and iterated modifications of the enclosure housing for streamlining and optimized radio performance. Finally, lessons learned to date, gaps toward eventual flight mission implementation, and continuing development plans are discussed.
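    For intuition about the ballistic deployment envelope, a drag-free projectile model gives a first-order estimate of how far a lobbed sensor can reach. The sketch below is a deliberate simplification for illustration only; an actual launcher design would need drag, terrain, and gravity-environment corrections.

    ```python
    import math

    def ballistic_range(v0, elevation_deg, g=9.81):
        """Ideal (drag-free, flat-terrain) range of a lobbed projectile.
        A first-order planning aid only; on Mars use g ~= 3.71 m/s^2."""
        theta = math.radians(elevation_deg)
        return v0**2 * math.sin(2 * theta) / g

    # e.g. a 20 m/s launch at 45 degrees carries roughly 40.8 m on Earth
    print(ballistic_range(20.0, 45.0))
    ```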

    Discovering user mobility and activity in smart lighting environments

    "Smart lighting" environments seek to improve energy efficiency, human productivity and health by combining sensors, controls, and Internet-enabled lights with emerging “Internet-of-Things” technology. Interesting and potentially impactful applications involve adaptive lighting that responds to individual occupants' location, mobility and activity. In this dissertation, we focus on the recognition of user mobility and activity using sensing modalities and analytical techniques. This dissertation encompasses prior work using body-worn inertial sensors in one study, followed by smart-lighting inspired infrastructure sensors deployed with lights. The first approach employs wearable inertial sensors and body area networks that monitor human activities with a user's smart devices. Real-time algorithms are developed to (1) estimate angles of excess forward lean to prevent risk of falls, (2) identify functional activities, including postures, locomotion, and transitions, and (3) capture gait parameters. Two human activity datasets are collected from 10 healthy young adults and 297 elder subjects, respectively, for laboratory validation and real-world evaluation. Results show that these algorithms can identify all functional activities accurately with a sensitivity of 98.96% on the 10-subject dataset, and can detect walking activities and gait parameters consistently with high test-retest reliability (p-value < 0.001) on the 297-subject dataset. The second approach leverages pervasive "smart lighting" infrastructure to track human location and predict activities. A use case oriented design methodology is considered to guide the design of sensor operation parameters for localization performance metrics from a system perspective. Integrating a network of low-resolution time-of-flight sensors in ceiling fixtures, a recursive 3D location estimation formulation is established that links a physical indoor space to an analytical simulation framework. Based on indoor location information, a label-free clustering-based method is developed to learn user behaviors and activity patterns. Location datasets are collected when users are performing unconstrained and uninstructed activities in the smart lighting testbed under different layout configurations. Results show that the activity recognition performance measured in terms of CCR ranges from approximately 90% to 100% throughout a wide range of spatio-temporal resolutions on these location datasets, insensitive to the reconfiguration of environment layout and the presence of multiple users.2017-02-17T00:00:00

    Eyes in the Sky: Decentralized Control for the Deployment of Robotic Camera Networks

    Get PDF
    This paper presents a decentralized control strategy for positioning and orienting multiple robotic cameras to collectively monitor an environment. The cameras may have various degrees of mobility, from six degrees of freedom down to one. The control strategy is proven to locally minimize a novel metric representing information loss over the environment. It can accommodate groups of cameras with heterogeneous degrees of mobility (e.g., some that only translate and some that only rotate), and is adaptive to robotic cameras being added to or removed from the group and to changing environmental conditions. The robotic cameras share information for their controllers over a wireless network using a specially designed multi-hop networking algorithm. The control strategy is demonstrated in repeated experiments with three flying quadrotor robots indoors, and with five flying quadrotor robots outdoors. Simulation results for more complex scenarios are also presented.
    Funding: United States Army Research Office, Multidisciplinary University Research Initiative Scalable (Grant W911NF-05-1-0219); United States Office of Naval Research, Multidisciplinary University Research Initiative Smarts (Grant N000140911051); National Science Foundation (Grant EFRI-0735953); Lincoln Laboratory; Boeing Company; United States Department of the Air Force (Contract FA8721-05-C-0002).
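    The controller's details are beyond the abstract, but the general idea of each camera locally descending a shared coverage cost can be sketched as follows; the generic `cost` function and numerical gradient are stand-ins for the paper's information-loss metric and analytical gradient, assumed here for illustration.

    ```python
    import numpy as np

    def decentralized_step(poses, cost, i, step=0.05, eps=1e-3):
        """One gradient-descent step for camera i on a shared coverage cost.

        poses: list of per-camera pose vectors (float numpy arrays);
        cost:  callable mapping the full pose list to a scalar loss.
        Each robot nudges only its own pose using a numerical gradient,
        with neighbors' poses assumed to arrive over the wireless network.
        """
        grad = np.zeros_like(poses[i])
        base = cost(poses)
        for k in range(len(poses[i])):
            bumped = [p.copy() for p in poses]
            bumped[i][k] += eps
            grad[k] = (cost(bumped) - base) / eps
        poses[i] = poses[i] - step * grad  # descend only camera i's pose
        return poses[i]
    ```

    Running this step on every camera in parallel, each using only locally available information, is what makes the scheme decentralized: no single node ever needs the global optimum, only a direction that locally reduces information loss.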