278 research outputs found

    Hybridisation of GNSS with other wireless/sensors technologies onboard smartphones to offer seamless outdoors-indoors positioning for LBS applications

    Location-based services (LBS) are becoming an important feature of today’s smartphones (SPs) and tablets. SPs likewise include many wireless and sensor technologies, such as global navigation satellite systems (GNSS), cellular, wireless fidelity (WiFi), Bluetooth (BT) and inertial sensors, which have increased the breadth and complexity of such services. One of the main demands of LBS users is an always-available, seamless positioning service. However, no single onboard SP technology can seamlessly provide location information from outdoors into indoors, and the required location accuracy varies across LBS applications. This is mainly because each of these onboard wireless/sensor technologies has its own capabilities and limitations. For example, when outdoors, GNSS receivers on SPs can locate the user to within a few meters and supply time accurate to within a few nanoseconds (e.g. ±6 ns); once an SP moves indoors, this capability is lost. Conversely, the other onboard wireless/sensor technologies can achieve better SP positioning accuracy, but only given pre-defined knowledge and pre-installed infrastructure. To overcome these limitations, hybridising the measurements of these wireless/sensor technologies in one positioning system is a possible way to offer a seamless localisation service and to improve location accuracy. This thesis aims to investigate, design and implement solutions that offer seamless, accurate SP positioning at lower cost than current solutions. It proposes three novel SP localisation schemes: a wireless access point (WAP) synchronisation/localisation scheme, SILS and UNILS. The schemes are based on hybridising GNSS with WiFi, BT and inertial-sensor measurements using combined localisation techniques, including time-of-arrival (TOA) and dead reckoning (DR).
The first scheme synchronises WAPs and determines their locations from the fixed location/time information of outdoor SPs, in order to support indoor localisation. SILS locates any SP seamlessly as it moves from outdoors to indoors, using measurements from GNSS, the synchronised/located WAPs, and BT connectivity signals between groups of cooperating SPs in the vicinity. UNILS integrates onboard inertial-sensor readings into SILS to provide seamless SP positioning even deep indoors, i.e. when the signals of WAPs or BT anchors are considered unusable. Results obtained from OPNET simulations for various SP network sizes and indoor/outdoor scenario combinations show that the schemes can provide seamless positioning: indoor SPs can be located to within 1 meter near indoors, to about 2 meters indoors (using SILS), and to around 3 meters in various deep-indoor situations without any constraint (using UNILS). The thesis closes by identifying possible future work to implement the proposed schemes on SPs and to achieve more accurate indoor SP locations.
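As a sketch of the two localisation techniques the schemes combine, the snippet below implements basic 2-D TOA trilateration (least squares over linearised range equations) and a single dead-reckoning step. This is a minimal illustration under simplified assumptions, not the thesis's implementation; all function and parameter names are the editor's own.

```python
import math

def trilaterate(anchors, ranges):
    """Least-squares 2-D TOA trilateration from three or more anchors.

    Linearises the range equations by subtracting the first one, then
    solves the 2-unknown normal equations for (x, y).
    """
    (x0, y0), d0 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], ranges[1:]):
        # 2x(xi-x0) + 2y(yi-y0) = d0^2 - di^2 + xi^2 - x0^2 + yi^2 - y0^2
        A.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    s_aa = sum(a[0] * a[0] for a in A)
    s_ab = sum(a[0] * a[1] for a in A)
    s_bb = sum(a[1] * a[1] for a in A)
    s_a = sum(a[0] * bi for a, bi in zip(A, b))
    s_b = sum(a[1] * bi for a, bi in zip(A, b))
    det = s_aa * s_bb - s_ab * s_ab
    return ((s_bb * s_a - s_ab * s_b) / det,
            (s_aa * s_b - s_ab * s_a) / det)

def dead_reckon(pos, heading_rad, step_m):
    """One dead-reckoning update from a step length and heading."""
    return (pos[0] + step_m * math.cos(heading_rad),
            pos[1] + step_m * math.sin(heading_rad))
```

In a hybrid scheme of this kind, TOA fixes from synchronised anchors would periodically correct the drift that accumulates over repeated dead-reckoning steps.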

    Localisation and tracking of people using distributed UWB sensors

    Indoor localisation and tracking of people in a non-cooperative manner is required in many surveillance and rescue applications. Ultra-wideband (UWB) radar technology is promising for through-wall detection of objects at short to medium distances due to its high temporal resolution and penetration capability. This thesis tackles the problem of localising people in indoor scenarios using UWB sensors. It follows the process from measurement acquisition, through multiple-target detection and range estimation, to multiple-target localisation and tracking. Because people reflect only weakly compared with the rest of the environment, a background subtraction method is initially used for the detection of people. Subsequently, a constant false alarm rate method is applied for the detection and range estimation of multiple persons.
For multiple-target localisation using a single UWB sensor, an association method is developed to assign target range estimates to the correct targets. In the presence of multiple targets, a target closer to the sensor may shadow parts of the environment and hinder the detection of other targets. A concept for a distributed UWB sensor network is presented that extends the field of view of the system by using several sensors with different fields of view. A real-time operational prototype has been developed, taking into consideration sensor cooperation and synchronisation aspects as well as the fusion of the information provided by all sensors. Sensor data may be erroneous due to sensor bias and time offset, and incorrect measurements and measurement noise influence the accuracy of the estimation results. Additional insight into the target states can be gained by exploiting temporal information. A multiple-person tracking framework is developed based on the probability hypothesis density filter, and the differences in system performance are highlighted with respect to the information provided by the sensors, i.e. location information fusion versus range information fusion. The information that a target should have been detected, but was not because of shadowing induced by other targets, is described as a dynamic occlusion probability. Incorporating the dynamic occlusion probability into the tracking framework allows fewer sensors to be used while improving tracker performance in the scenario. The method selection and development took real-time application requirements for unknown scenarios into consideration at every step. Each investigated aspect of multiple-person localisation within the scope of this thesis has been verified using simulations and measurements in a realistic environment with M-sequence UWB sensors.
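The two front-end steps described above, background subtraction followed by constant-false-alarm-rate detection, can be sketched as follows. This is a simplified illustration (exponential-average background removal and cell-averaging CFAR on a 1-D range profile); the function names, window sizes and threshold factor are assumptions, not the thesis's parameters.

```python
def subtract_background(frames, alpha=0.05):
    """Exponential-average background subtraction over radar frames.

    Each frame is a list of range-bin magnitudes; the slowly adapting
    background estimate suppresses static clutter such as walls.
    """
    bg = list(frames[0])
    out = []
    for f in frames:
        out.append([abs(v - b) for v, b in zip(f, bg)])
        bg = [(1 - alpha) * b + alpha * v for v, b in zip(bg, f)]
    return out

def ca_cfar(profile, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR: flag range bins whose magnitude exceeds
    `scale` times the mean of the surrounding training cells."""
    n, hits = len(profile), []
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        # Training cells exclude the cell under test and its guard cells.
        cells = [profile[j] for j in range(lo, hi) if abs(j - i) > guard]
        noise = sum(cells) / len(cells)
        if profile[i] > scale * noise:
            hits.append(i)
    return hits
```

The indices returned by `ca_cfar` correspond to range estimates, which a subsequent association step would assign to individual targets.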

    Visual / acoustic detection and localisation in embedded systems

    © Cranfield University. The continuous miniaturisation of sensing and processing technologies is increasingly offering a variety of embedded platforms, enabling a broad range of tasks to be accomplished with such systems. Motivated by these advances, this thesis investigates embedded detection and localisation solutions using vision and acoustic sensors, with a particular focus on surveillance applications using sensor networks. Existing vision-based detection solutions for embedded systems suffer from sensitivity to environmental conditions; in the literature, no algorithm seems able to simultaneously tackle all the challenges inherent to real-world videos. Regarding the acoustic modality, many research works have investigated acoustic source localisation in distributed sensor networks. Nevertheless, it remains challenging to develop an efficient algorithm that deals with the experimental issues, approaches the performance required by these systems, and performs the data processing in a distributed and robust manner. The movement of scene objects is generally accompanied by sound emissions whose features vary from one environment to another. Therefore, combining the visual and acoustic modalities offers a significant opportunity for improving detection and/or localisation on the described platforms. In light of this framework, the first part of the thesis investigates a cost-effective vision-based method that deals robustly with motion detection in static, dynamic and moving background conditions. For motion detection in static and dynamic backgrounds, we present the development and performance analysis of a spatio-temporal form of the Gaussian mixture model. The problem of motion detection in moving backgrounds is addressed by accounting for registration errors in the captured images.
By adopting a robust optimisation technique that takes into account the uncertainty in the visual measurements, we show that high detection accuracy can be achieved. In the second part of the thesis, we investigate solutions to the problem of acoustic source localisation using a trust-region-based optimisation technique. The proposed method shows higher overall accuracy and improved convergence compared to a linear-search-based method. More importantly, we show that by characterising the errors in the measurements, a common problem for such platforms, higher localisation accuracy can be attained. The last part of this work studies the different possibilities of combining visual and acoustic information in a distributed sensor network. In this context, we first propose including the acoustic information in the visual model; the resulting augmented model provides promising improvements in the detection and localisation processes. The second investigated solution is the fusion of the measurements coming from the different sensors. An evaluation of the localisation and tracking accuracy using centralised and decentralised architectures is conducted in various scenarios and experimental conditions. The results show that this fusion approach can yield higher accuracy in localising and tracking an active acoustic source than using a single type of data.
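The per-pixel background modelling underlying the Gaussian-mixture approach can be illustrated with a single-Gaussian simplification: each pixel keeps a running mean and variance, classifies new values against them, and adapts only when the pixel looks like background. This is a deliberately reduced sketch (one mode instead of a mixture, and no spatio-temporal term); the names and constants are assumptions.

```python
def update_pixel(mean, var, value, alpha=0.05, k=2.5):
    """Single-Gaussian per-pixel background update.

    A pixel is foreground if it deviates from the background mean by
    more than k standard deviations; otherwise the model adapts with
    learning rate alpha.
    """
    is_fg = abs(value - mean) > k * var ** 0.5
    if not is_fg:
        mean = (1 - alpha) * mean + alpha * value
        var = (1 - alpha) * var + alpha * (value - mean) ** 2
    return is_fg, mean, max(var, 1e-6)
```

A full mixture model would keep several (weight, mean, variance) modes per pixel so that dynamic backgrounds (e.g. swaying foliage) are absorbed by secondary modes rather than flagged as motion.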

    Indoor Localisation of Scooters from Ubiquitous Cost-Effective Sensors: Combining Wi-Fi, Smartphone and Wheel Encoders

    Indoor localisation of people and objects has been a focus of research for several decades because of its great benefit to many applications. Accuracy has always been a challenge because of the uncertainty of the employed sensors; several technologies have been proposed and researched, yet accuracy still represents an issue. Today, several sensor technologies can be found in indoor environments, some of which are economical and powerful, such as Wi-Fi. Meanwhile, smartphones are typically present indoors, carried by people moving about within rooms and buildings. Furthermore, vehicles such as mobility scooters, which support people with mobility impairments, can also be present indoors and may be equipped with low-cost sensors such as wheel encoders. This thesis investigates the localisation of mobility scooters operating indoors, a specific topic given that most of today's indoor localisation systems are designed for pedestrians; accurate indoor localisation of these scooters is challenging because of their type of motion and specific behaviour. The thesis focuses on improving localisation accuracy for mobility scooters using already available indoor sensors. It proposes the combined use of Wi-Fi, smartphone IMU and wheel encoders, a cost-effective and energy-efficient solution. A method has been devised, a system developed, and experiments run in different environment settings. The outcomes of the experiments are presented and carefully analysed in the thesis. The outcomes of several trials demonstrate the potential of the proposed solution to reduce positional errors significantly compared to the state of the art in the same area. The proposed combination demonstrated an error range of 0.35 m - 1.35 m, which can be acceptable in several applications, such as some related to assisted living.
As the proposed system capitalises on the use of ubiquitous technologies, it opens up the potential of a quick take-up by the market, and is therefore of great benefit to the target audience.
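The wheel-encoder component of such a fusion can be sketched as standard differential-drive dead reckoning: per-wheel tick counts are converted to travelled distance and heading change, from which the pose is propagated. The geometry constants (ticks per revolution, wheel radius, track width) are illustrative assumptions, not the scooter's actual parameters.

```python
import math

def odometry_step(x, y, theta, ticks_l, ticks_r,
                  ticks_per_rev=512, wheel_radius=0.15, track=0.6):
    """One differential-drive dead-reckoning update from encoder ticks.

    ticks_l / ticks_r are the tick counts accumulated on the left and
    right wheels since the previous update.
    """
    per_tick = 2 * math.pi * wheel_radius / ticks_per_rev
    dl, dr = ticks_l * per_tick, ticks_r * per_tick
    d = (dl + dr) / 2.0           # forward distance of the midpoint
    dtheta = (dr - dl) / track    # heading change from wheel difference
    # Integrate along the mid-arc heading for better accuracy on turns.
    x += d * math.cos(theta + dtheta / 2)
    y += d * math.sin(theta + dtheta / 2)
    return x, y, theta + dtheta
```

In the combined system described above, Wi-Fi and smartphone IMU observations would then be used to bound the drift that pure encoder integration accumulates over time.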

    Localisation in wireless sensor networks for disaster recovery and rescuing in built environments

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Progress in micro-electromechanical systems (MEMS) and radio frequency (RF) technology has fostered the development of wireless sensor networks (WSNs). Unlike traditional networks, WSNs are data-centric, self-configuring and self-healing. Although WSNs have been successfully applied in built environments (e.g. security and services in smart homes), their applications and benefits have not been fully explored in areas such as disaster recovery and rescue. There are issues related to self-localisation as well as practical constraints to be taken into account. The current state-of-the-art communication technologies used in disaster scenarios face various limitations (e.g. the uncertainty of RSS). Localisation in WSNs (location sensing) is a challenging problem, especially in disaster environments, and technological developments are needed to cater for disaster conditions. This research seeks to design and develop novel localisation algorithms for WSNs that overcome the limitations of existing techniques. A novel probabilistic fuzzy-logic-based range-free localisation algorithm (PFRL) is devised to solve localisation problems in WSNs. Simulation results show that the proposed algorithm outperforms other range-free localisation algorithms (namely DV-Hop, Centroid and Amorphous localisation) in localisation accuracy by 15-30% across various numbers of anchors and degrees of radio propagation irregularity. In disaster scenarios, for example, if WSNs are applied to sense fire hazards in a building, wireless sensor nodes will be deployed on different floors. To this end, PFRL has been extended to solve sensor localisation problems in 3D space.
Computational results show that the 3D localisation algorithm provides better localisation accuracy when varying the system parameters under different communication/deployment models. PFRL is further developed by applying dynamic distance-measurement updates among the moving sensors in a disaster environment. Simulation results indicate that the new method scales very well.
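For reference, the simplest of the range-free baselines compared against above, Centroid localisation, estimates an unlocalised node's position as the mean of the positions of the anchors within its radio range. A minimal 3-D sketch (the names are illustrative):

```python
def centroid_localise(anchors_in_range):
    """Range-free Centroid localisation in 3-D.

    anchors_in_range: list of (x, y, z) positions of the anchor nodes
    whose beacons the unlocalised node can hear. The estimate is simply
    their arithmetic mean; no range measurements are used.
    """
    n = len(anchors_in_range)
    return (sum(a[0] for a in anchors_in_range) / n,
            sum(a[1] for a in anchors_in_range) / n,
            sum(a[2] for a in anchors_in_range) / n)
```

Range-free schemes such as this trade accuracy for robustness to RSS uncertainty, which is why the thesis's probabilistic fuzzy-logic refinement targets exactly this class of algorithm.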

    Autonomous, Collaborative, Unmanned Aerial Vehicles for Search and Rescue

    Search and rescue is a vitally important subject, and one which can be improved through the use of modern technology. This work presents a number of advances aimed at the creation of a swarm of autonomous, collaborative, unmanned aerial vehicles for land-based search and rescue. The main advances are the development of a diffusion-based search strategy for route planning, research into GPS (including the Durham Tracker Project and statistical research into altitude errors), and the creation of a relative positioning system (including discussion of the errors caused by fast-moving units). Overviews are also given of the current state of research into both UAVs and search and rescue.

    Robust dense visual SLAM using sensor fusion and motion segmentation

    Visual simultaneous localisation and mapping (SLAM) is an important technique for enabling mobile robots to navigate autonomously within their environments. Using cameras, robots reconstruct a representation of their environment and simultaneously localise themselves within it. A dense visual SLAM system produces a high-resolution, detailed reconstruction of the environment that can be used for obstacle avoidance or semantic reasoning. State-of-the-art dense visual SLAM systems demonstrate robust performance and impressive accuracy in ideal conditions. However, these techniques rely on assumptions that limit the extent to which they can be deployed in real applications: fundamentally, they require constant scene illumination, smooth camera motion and the absence of moving objects in the scene. Overcoming these requirements is not trivial, and significant effort is needed to make dense visual SLAM approaches more robust to real-world conditions. The objective of this thesis is to develop dense visual SLAM systems that are more robust to visually challenging real-world conditions. To do so, we leverage sensor fusion and motion segmentation for situations where camera data alone is unsuitable. The first contribution is a visual SLAM system for the NASA Valkyrie humanoid robot which is robust to the conditions of the robot’s operation. It is based on a sensor fusion approach that combines visual SLAM and leg odometry, and demonstrates increased robustness to illumination changes and fast camera motion. Second, we research methods for robust visual odometry in the presence of moving objects. We propose a formulation for joint visual odometry and motion segmentation that demonstrates increased robustness in scenes with moving objects compared to state-of-the-art approaches. We then extend this method using inertial information from a gyroscope to compare the contributions of motion segmentation and motion-prior integration for robustness to scene dynamics.
As part of this study we provide a dataset recorded in scenes with different numbers of moving objects. In conclusion, we find that both motion segmentation and motion-prior integration are necessary for achieving significantly better results in real-world conditions: while motion priors increase robustness, motion segmentation increases the accuracy of the reconstruction results through the filtering of moving objects. Edinburgh Centre for Robotics; Engineering and Physical Sciences Research Council (EPSRC).
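The leg-odometry fusion idea can be illustrated at its simplest as a complementary blend of per-frame pose increments, falling back to odometry when visual tracking is unreliable. This is a toy sketch, not the thesis's estimator; the weight and all names are assumptions.

```python
def fuse_pose_delta(visual, odom, visual_ok, w=0.8):
    """Complementary fusion of two (dx, dy, dtheta) pose increments.

    visual:    increment from visual odometry / SLAM tracking
    odom:      increment from a second source (e.g. leg odometry)
    visual_ok: whether visual tracking succeeded this frame
    """
    if not visual_ok:
        return odom           # fall back entirely to odometry
    return tuple(w * v + (1 - w) * o for v, o in zip(visual, odom))
```

A production system would instead fuse full uncertainty estimates (e.g. in a factor graph or Kalman filter), but the fallback behaviour, trusting odometry when vision fails under illumination changes or fast motion, is the same principle the abstract describes.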