    A mosaic of eyes

    Autonomous navigation is a long-standing research topic in intelligent robotics and vehicles: a robot must perceive its environment through onboard sensors, such as cameras or laser scanners, in order to drive to its goal. Most research to date has focused on developing a large, smart brain to give robots autonomous capability. An autonomous mobile robot must answer three fundamental questions: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer them, the robot requires a massive spatial memory and considerable computational resources for perception, localization, path planning, and control. Delivering the centralized intelligence required by real-life applications, such as autonomous ground vehicles and wheelchairs in care centers, is not yet possible. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then making decisions accordingly. They may encounter the following difficulties
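
    A minimal sketch of the classical pipeline this abstract describes, mapping the three questions onto perception, localization, path planning, and control. Every name below (Robot, perception.sense, and so on) is a hypothetical placeholder, not an interface from the paper.

```python
# Skeleton of the perceive -> localize -> plan -> control loop an autonomous
# mobile robot runs to answer the three fundamental questions. All classes
# and method names are illustrative placeholders.

class Robot:
    def __init__(self, perception, localizer, planner, controller, goal):
        self.perception = perception   # onboard sensors: cameras, laser scanners
        self.localizer = localizer     # answers "Where am I?"
        self.planner = planner         # answers "How do I get there?"
        self.controller = controller   # turns the planned path into motion
        self.goal = goal               # answers "Where am I going?"

    def step(self):
        observation = self.perception.sense()       # perceive the environment
        pose = self.localizer.update(observation)   # estimate the current pose
        path = self.planner.plan(pose, self.goal)   # plan a route to the goal
        return self.controller.follow(path)         # issue motor commands
```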

    Localisation of mobile nodes in wireless networks with correlated in time measurement noise.

    Wireless sensor networks are an inherent part of decision-making, object-tracking, and location-awareness systems. This work focuses on simultaneous localisation of mobile nodes based on received signal strength indicators (RSSIs) with measurement noises that are correlated in time. Two approaches to dealing with the correlated measurement noises are proposed within the framework of auxiliary particle filtering: the first augments the state vector with the noise, and the second performs noise decorrelation. The performance of the two proposed multiple-model auxiliary particle filters (MM AUX-PFs) is validated on simulated and real RSSIs, and high localisation accuracy is demonstrated
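
    A minimal sketch of the noise-augmentation idea, assuming a log-distance RSS model and first-order autoregressive (AR(1)) measurement noise, and using a plain bootstrap particle filter rather than the paper's multiple-model auxiliary variant. Every constant (path-loss parameters, AR coefficient, anchor layout, motion model) is a made-up assumption, not a value from the paper.

```python
# Sketch: RSS-based localisation with time-correlated measurement noise,
# handled by augmenting the particle state with the noise realisation.
import numpy as np

rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0]])  # fixed nodes (m)
P0, n_pl = -40.0, 2.5     # log-distance model: RSS = P0 - 10*n_pl*log10(d)
rho, sigma_w = 0.8, 1.0   # AR(1) noise: v_k = rho*v_{k-1} + w_k, w_k white
N = 2000                  # number of particles

def rss_mean(pos):
    """Noise-free RSS (dBm) from each anchor for an (N, 2) position array."""
    d = np.linalg.norm(pos[:, None, :] - anchors[None, :, :], axis=2)
    return P0 - 10.0 * n_pl * np.log10(d.clip(min=0.1))

# Particle state: [x, y, v1, v2, v3] -- planar position plus one augmented
# correlated-noise component per anchor link.
particles = np.hstack([rng.uniform(0.0, 20.0, (N, 2)), np.zeros((N, 3))])

def step(z):
    """One filter update for a measured RSS vector z; returns a position estimate."""
    global particles
    particles[:, :2] += rng.normal(0.0, 0.5, (N, 2))  # random-walk motion model
    h = rss_mean(particles[:, :2])
    innovation = z - (h + rho * particles[:, 2:])     # residual is white w_k
    weights = np.exp(-0.5 * np.sum(innovation**2, axis=1) / sigma_w**2)
    weights += 1e-300                                 # guard against underflow
    weights /= weights.sum()
    estimate = weights @ particles[:, :2]             # posterior-mean position
    particles[:, 2:] = z - h                          # implied noise state v_k
    idx = np.searchsorted(np.cumsum(weights), (rng.random() + np.arange(N)) / N)
    particles = particles[idx]                        # systematic resampling
    return estimate
```

    Augmenting the state makes the remaining measurement residual white, which is what lets the standard particle-filter likelihood be applied despite the coloured noise; the paper's second approach (decorrelation) reaches the same end by differencing consecutive measurements instead.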

    Vision Based Calibration and Localization Technique for Video Sensor Networks

    Recent advances in embedded systems have made video sensor networks a reality. A video sensor network consists of a large number of low-cost camera sensors deployed in a random manner. It pervades both civilian and military fields, with a huge number of applications in areas such as health care, environmental monitoring, surveillance, and tracking. Since most applications, especially those based on detecting and reporting events, require knowledge of the sensor locations and the network topology before proceeding with their tasks, localization and calibration are among the most significant problems in video sensor networks. The literature is replete with localization and calibration algorithms that rely on a-priori chosen nodes, called seeds, with known coordinates to help determine the network topology. Some of these algorithms require additional hardware, such as antenna arrays, while others must regularly reacquire synchronization among the seeds in order to calculate the time differences of the received signals. Very few of these localization algorithms use vision-based techniques. In this work, a vision-based technique is proposed for localizing and configuring the camera nodes in video wireless sensor networks. The camera network is assumed to be randomly deployed. One a-priori selected node acts as the core of the network and starts by locating two other reference nodes. These three nodes, in turn, participate in locating the entire network using a trilateration method with appropriate vision characteristics. The vision characteristic used in this work is the relationship between the height of an object's image in the image plane and the real distance between the sensor node and the camera. Many simulated experiments demonstrate the feasibility of the proposed technique. In addition, experiments are carried out to locate new objects in the video sensor network. The experimental results showcase the accuracy of building up a one-plane network topology in a relative coordinate system, as well as the robustness of the technique against accumulated error in configuring the whole network
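
    A brief sketch of the two ingredients the abstract combines: estimating camera-to-node distance from apparent image height via a pinhole model, then planar trilateration from three reference nodes. The focal length, object height, and node coordinates below are illustrative assumptions, not values from the paper.

```python
# Sketch: distance from image height (pinhole model) plus planar trilateration.
import numpy as np

def distance_from_image_height(h_pixels, H_real_m=0.10, f_pixels=800.0):
    """Pinhole relation: h_image = f * H_real / d  =>  d = f * H_real / h_image."""
    return f_pixels * H_real_m / h_pixels

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Planar trilateration: subtracting the first circle equation from the
    other two leaves a 2x2 linear system in the unknown (x, y)."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    A = 2.0 * np.array([p2 - p1, p3 - p1])
    b = np.array([
        d1**2 - d2**2 + p2 @ p2 - p1 @ p1,
        d1**2 - d3**2 + p3 @ p3 - p1 @ p1,
    ])
    return np.linalg.solve(A, b)

# Example: core node at the origin, two reference nodes on the axes (assumed),
# and image heights (in pixels) observed for a target of known real height.
refs = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [distance_from_image_height(h) for h in (12.0, 16.0, 9.0)]
print(trilaterate(*refs, *dists))  # estimated (x, y) in the relative frame
```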

    Breathfinding: A Wireless Network that Monitors and Locates Breathing in a Home

    This paper explores using RSS measurements on many links in a wireless network to estimate the breathing rate of a person in a home, and the location where the breathing is occurring, while the person is sitting, lying down, standing, or sleeping. The main challenge in breathing rate estimation is that "motion interference", i.e., movements other than a person's breathing, generally causes larger changes in RSS than inhalation and exhalation. We develop a method to estimate breathing rate despite motion interference, and demonstrate its performance during multiple short (3-7 minute) tests and during a longer 66-minute test. Further, for the same experiments, we show that the location of the breathing person can be estimated to within about 2 m average error in a 56-square-meter apartment. Being able to locate a breathing person who is not otherwise moving, without calibration, is important for applications in search and rescue, health care, and security
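
    A minimal sketch of the underlying signal idea, assuming RSS sampled at a few Hz on many links: breathing induces a small periodic RSS component, so averaging per-link power spectra and picking the peak inside a plausible breathing band gives a rate estimate. The paper's actual method additionally rejects motion interference, which this sketch omits; the sampling rate and band edges are assumptions.

```python
# Sketch: spectral breathing-rate estimation from multi-link RSS traces.
import numpy as np

def breathing_rate_bpm(rss, fs=4.0, band=(0.1, 0.5)):
    """rss: (num_samples, num_links) RSS matrix sampled at fs Hz.
    Returns the estimated breathing rate in breaths per minute."""
    x = rss - rss.mean(axis=0)                  # remove per-link DC offset
    spec = np.abs(np.fft.rfft(x, axis=0)) ** 2  # per-link power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mean_spec = spec.mean(axis=1)               # average over links
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    f_hat = freqs[in_band][np.argmax(mean_spec[in_band])]
    return 60.0 * f_hat                         # Hz -> breaths per minute

# Synthetic check: a 0.25 Hz (15 bpm) component buried in noise on 30 links.
fs, T, links = 4.0, 300, 30
t = np.arange(int(T * fs)) / fs
rss = 0.2 * np.sin(2 * np.pi * 0.25 * t)[:, None] + np.random.randn(len(t), links)
print(breathing_rate_bpm(rss, fs))  # ~15.0
```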