2,010 research outputs found

    Self-localizing Smart Cameras and Their Applications

    As the prices of cameras and computing elements continue to fall, it has become increasingly attractive to consider the deployment of smart camera networks. These networks would be composed of small, networked computers equipped with inexpensive image sensors. Such networks could be employed in a wide range of applications, including surveillance, robotics, and 3D scene reconstruction. One critical problem that must be addressed before such systems can be deployed effectively is localization: to take full advantage of the images gathered from multiple vantage points, it is helpful to know how the cameras in the scene are positioned and oriented with respect to each other. To address the localization problem, we have proposed a novel approach to localizing networks of embedded cameras and sensors. In this scheme the cameras and the nodes are equipped with controllable light sources (either visible or infrared) which are used for signaling. Each camera node can then automatically determine the bearing to all the nodes that are visible from its vantage point. By fusing these measurements with the measurements obtained from onboard accelerometers, the camera nodes are able to determine the relative positions and orientations of the other nodes in the network. This localization technology can serve as a basic capability on which higher-level applications can be built. The method could be used to automatically survey the locations of sensors of interest, to implement distributed surveillance systems, or to analyze the structure of a scene based on the images obtained from multiple registered vantage points. It also provides a mechanism for integrating the imagery obtained from the cameras with the measurements obtained from distributed sensors. We have successfully used our custom-made self-localizing smart camera networks to implement a novel decentralized target tracking algorithm, create an ad hoc range finder, and localize the components of a self-assembling modular robot.
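
    The geometric core of this scheme is recovering relative node positions from pairwise bearing measurements. Below is a minimal sketch of that step, assuming the accelerometer fusion has already expressed every bearing in a shared gravity-aligned frame; the function name, least-squares formulation, and gauge-fixing choices are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def localize_from_bearings(n_nodes, bearings):
        """Recover node positions (up to translation, scale, and sign)
        from pairwise unit-bearing measurements expressed in a shared
        gravity-aligned frame.

        bearings: list of (i, j, b) where b is the unit vector from
        node i toward node j.  Node 0 is anchored at the origin.
        """
        rows = []
        for i, j, b in bearings:
            b = np.asarray(b, dtype=float)
            b /= np.linalg.norm(b)
            # (p_j - p_i) must be parallel to b, so its projection onto
            # the plane orthogonal to b must vanish.
            P = np.eye(3) - np.outer(b, b)
            row = np.zeros((3, 3 * n_nodes))
            row[:, 3 * j:3 * j + 3] = P
            row[:, 3 * i:3 * i + 3] = -P
            rows.append(row)
        A = np.vstack(rows)
        A = A[:, 3:]                   # fix p_0 = (0, 0, 0) to remove translation
        _, _, vt = np.linalg.svd(A)
        shape = vt[-1].reshape(-1, 3)  # null vector = network shape up to scale
        pts = np.vstack([np.zeros(3), shape])
        return pts / np.sqrt((pts ** 2).sum(axis=1).mean())  # normalize scale

    # Example: three nodes in a right triangle with mutual bearings known.
    pts = localize_from_bearings(3, [(0, 1, [1, 0, 0]),
                                     (0, 2, [0, 1, 0]),
                                     (1, 2, [-0.7071, 0.7071, 0])])
    ```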

    Vision Based Calibration and Localization Technique for Video Sensor Networks

    The recent evolution of embedded systems has now made video sensor networks a reality. A video sensor network consists of a large number of low-cost camera sensors deployed in a random manner. It pervades both the civilian and military fields, with a huge number of applications in areas such as health care, environmental monitoring, surveillance, and tracking. As most applications demand knowledge of the sensor locations and the network topology before proceeding with their tasks, especially those based on detecting and reporting events, the problem of localization and calibration assumes a significance far greater than most others in video sensor networks. The literature is replete with localization and calibration algorithms that rely on a priori chosen nodes, called seeds, with known coordinates to help determine the network topology. Some of these algorithms require additional hardware, such as antenna arrays, while others require regularly reacquiring synchronization among the seeds in order to calculate the time difference of the received signals. Very few of these localization algorithms use vision-based techniques. In this work, a vision-based technique is proposed for localizing and configuring the camera nodes in video wireless sensor networks. The camera network is assumed to be randomly deployed. One a priori selected node acts as the core of the network and starts by locating two other reference nodes. These three nodes, in turn, participate in locating the entire network using a trilateration method together with an appropriate vision characteristic. The vision characteristic used in this work is the relationship between the height of an object in the image plane and the real distance between the sensor node and the camera. Many simulation experiments demonstrate the feasibility of the proposed technique. In addition, experiments are carried out to locate new objects in the video sensor network. The experimental results showcase the accuracy of building up a one-plane network topology in a relative coordinate system and the robustness of the technique against accumulated error in configuring the whole network.
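
    The distance cue described here is the standard pinhole relation: an object's image height shrinks inversely with its range, so a known physical height yields a range estimate, and ranges to three reference nodes fix a planar position by trilateration. A minimal sketch under assumed calibration constants follows; the names and values are illustrative, not taken from the paper.

    ```python
    import numpy as np

    # Assumed calibration values; a real deployment would measure these.
    FOCAL_PX = 800.0        # focal length in pixels (assumption)
    TARGET_HEIGHT_M = 0.20  # known physical height of the target (assumption)

    def distance_from_image_height(height_px, f_px=FOCAL_PX,
                                   real_height_m=TARGET_HEIGHT_M):
        """Pinhole relation: d = f * H_real / h_image."""
        return f_px * real_height_m / height_px

    def trilaterate(anchors, dists):
        """Solve for a 2-D point from three anchors and ranges by
        subtracting the first circle equation from the other two,
        which leaves a linear system in (x, y)."""
        (x0, y0), (x1, y1), (x2, y2) = anchors
        d0, d1, d2 = dists
        A = np.array([[2 * (x1 - x0), 2 * (y1 - y0)],
                      [2 * (x2 - x0), 2 * (y2 - y0)]])
        b = np.array([d0**2 - d1**2 - x0**2 + x1**2 - y0**2 + y1**2,
                      d0**2 - d2**2 - x0**2 + x2**2 - y0**2 + y2**2])
        return np.linalg.solve(A, b)

    # A 0.2 m target spanning 32 px is 800 * 0.2 / 32 = 5.0 m away.
    d = distance_from_image_height(32.0)
    p = trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 7.0, 8.0])
    ```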

    Using Smart Cameras to Localize Self-Assembling Modular Robots

    In order to realize the goal of self-assembling or self-reconfiguring modular robots, the constituent modules in the system need to be able to gauge their position and orientation with respect to each other. This paper describes an approach to solving this localization problem by equipping each of the modules in the ensemble with a smart camera system. The paper describes one implementation of this scheme on a modular robotic system and discusses the results of a self-assembly experiment.

    Indoor Localization Based on Wireless Sensor Networks

    Indoor localization techniques based on wireless sensor networks (WSNs) have been increasingly used in applications such as factory automation, intelligent buildings, facility management, security, and health care. However, existing localization techniques cannot meet the accuracy requirements of many applications. Meanwhile, some localization algorithms are affected by environmental conditions and cannot be used directly in an indoor environment. Cost is another limitation of existing localization algorithms. This thesis addresses these issues of indoor localization through a new Sensing Displacement (SD) approach. It consists of four major parts: platform design, SD algorithm development, SD algorithm improvement, and evaluation. Platform design includes hardware design and software design. Hardware design is the foundation of the system and covers the motion sensors embedded on mobile nodes and the WSN design. Motion sensors are used to collect motion information from the objects being localized. The WSN is designed according to the characteristics of an indoor scenario. A cloud-computing-based system architecture is developed to support the software design of the proposed system. To address the particular issues of an indoor environment, a new Sensing Displacement algorithm is developed, which estimates the displacement of a node from the motion information reported by the sensors embedded on it. The sensor assembly consists of acceleration sensors and gyroscope sensors, which separately sense the acceleration and angular velocity of the object being localized. The first SD algorithm is designed for use in a 2-D localization demonstration to validate the proposal. A detailed analysis of the results of the 2-D SD algorithm reveals two critical issues affecting the measurements: sensor noise and cumulative error. A low-pass filter and a modified Kalman filter are therefore introduced to address sensor noise, and an inertia tensor factor is introduced to address cumulative error in a 3-D SD algorithm. Finally, the proposed SD algorithm is evaluated against the commercial AeroScout (WiFi-RFID) system and a ZigBee-based fingerprint algorithm.
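
    The basic dead-reckoning step behind such a Sensing Displacement approach can be sketched as follows: integrate the gyroscope for heading, low-pass filter the body-frame accelerations, rotate them into the world frame, and integrate twice for displacement. The thesis's full algorithm also applies a modified Kalman filter and an inertia tensor factor, which are omitted here; all names and parameters below are assumptions.

    ```python
    import numpy as np

    def estimate_displacement(acc, gyro, dt, alpha=0.2):
        """Dead-reckon a 2-D displacement from body-frame accelerations
        (gravity already removed, m/s^2) and yaw rates (rad/s) sampled
        every dt seconds; alpha sets the low-pass filter strength."""
        pos, vel = np.zeros(2), np.zeros(2)
        yaw, a_filt = 0.0, np.zeros(2)
        for a_body, w in zip(acc, gyro):
            a_filt = alpha * a_body + (1 - alpha) * a_filt  # suppress sensor noise
            yaw += w * dt                                   # integrate gyro for heading
            c, s = np.cos(yaw), np.sin(yaw)
            a_world = np.array([c * a_filt[0] - s * a_filt[1],
                                s * a_filt[0] + c * a_filt[1]])
            vel += a_world * dt                             # first integration: velocity
            pos += vel * dt                                 # second integration: position
        return pos
    ```

    Double integration is exactly why cumulative error dominates here: any residual accelerometer bias grows quadratically with time, which motivates the thesis's filtering and correction stages.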

    Technologies and solutions for location-based services in smart cities: past, present, and future

    Location-based services (LBS) in smart cities have drastically altered the way cities operate, giving a new dimension to the lives of citizens. LBS rely on the location of a device, with proximity estimation at their core. The applications of LBS range from social networking and marketing to vehicle-to-everything communications. In many of these applications, there is an increasing need to learn the physical distance between nearby devices. This paper elaborates upon the current needs of proximity estimation in LBS and compares them against the available Localization and Proximity (LP) finding technologies (LP technologies for short). These technologies are compared for their accuracy and performance on various parameters, including latency, energy consumption, security, complexity, and throughput. A classification of these technologies, based on various smart city applications, is then presented. Finally, we discuss some emerging LP technologies that enable proximity estimation in LBS and present some future research areas.
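
    Since proximity estimation sits at the core of LBS, the simplest baseline worth illustrating is the log-distance path-loss model, which inverts a received signal strength (RSSI) reading into a distance estimate. This is a generic sketch with assumed calibration constants, not a method the paper attributes to any particular LP technology.

    ```python
    # Assumed calibration constants; real deployments measure both.
    RSSI_AT_1M = -45.0   # received power at the 1 m reference distance (dBm)
    PATH_LOSS_EXP = 2.7  # environment-dependent path-loss exponent

    def rssi_to_distance(rssi_dbm, rssi_1m=RSSI_AT_1M, n=PATH_LOSS_EXP):
        """Log-distance model: RSSI(d) = RSSI(1 m) - 10 * n * log10(d),
        inverted to estimate the distance between two nearby devices."""
        return 10 ** ((rssi_1m - rssi_dbm) / (10 * n))

    print(rssi_to_distance(-67.0))  # ~6.5 m under these assumptions
    ```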