
    Estimating Movement from Mobile Telephony Data

    Mobile-enabled devices are ubiquitous in modern society, and the information gathered by their normal service operations has become one of the primary data sources used to understand human mobility, social connection and information transfer. This thesis investigates techniques that can extract useful information from anonymised call detail records (CDRs). A CDR contains subscriber data held by the network operator: the nature of the communication activity (voice, SMS, data, etc.), its starting time and duration, and the identification numbers of the serving cells of both the sender and the receiver when available. The main contributions of the research are a methodology for distance measurement that enables the identification of mobile subscriber travel paths, and a methodology for population density estimation based on significant subscriber regions of interest. In addition, insights are given into how a mobile network operator may use geographically located subscriber data to create new revenue streams and improve network performance. A range of novel algorithms and techniques underpin the development of these methodologies, including techniques for CDR feature extraction, data visualisation and CDR data cleansing. The primary data source used in this work was the CDRs of Meteor, a mobile network operator in the Republic of Ireland. The Meteor network under investigation has just over 1 million customers, approximately a quarter of the country's 4.6 million inhabitants, and operates using both 2G and 3G cellular telephony technologies. Results show that steady-state vector analysis of modified Markov chain mobility models can return population density estimates comparable to those obtained through a census. Evaluated on a test dataset, the developed distance measurements classified the routes taken by CDR journey trajectories more accurately than traditional trajectory distance measurements. Results from subscriber segmentation indicate that subscribers with perceived similar relationships to geographical features can be grouped based on weighted steady-state mobility vectors. Overall, this thesis proposes novel algorithms and techniques for the estimation of movement from mobile telephony data, addressing practical issues related to sampling, privacy and spatial uncertainty.
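
The population-density methodology hinges on the steady-state vector of a Markov chain whose states are cell areas (regions of interest). The following is a minimal Python sketch of that computation using an invented 3-cell transition matrix; the thesis estimates its transition probabilities from CDR-derived movements and modifies the chain further, none of which is reproduced here.

```python
import numpy as np

# Invented transition matrix between three cell areas; each row sums to 1.
# The thesis estimates these probabilities from observed CDR movements.
P = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.75, 0.15],
    [0.05, 0.25, 0.70],
])

# The steady-state vector pi satisfies pi P = pi with sum(pi) = 1,
# i.e. it is the left eigenvector of P associated with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()

print(pi)                # long-run share of subscribers in each cell area
print(1_000_000 * pi)    # scaled by the subscriber base -> population estimate
```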

    Data Analysis and Memory Methods for RSS Bluetooth Low Energy Indoor Positioning

    This thesis aims at finding a feasible solution to Bluetooth Low Energy indoor positioning (BLE-IP), including a comprehensive analysis of the received signal strength indication (RSSI) values. The analysis of RSSI values was carried out to understand the different factors influencing them, to gain a better understanding of the data-generating process, and to improve the data model. The positioning task is accomplished using a methodology called fingerprinting, which involves two phases: a calibration phase and a localization phase. The localization phase utilises memory methods for positioning. In this thesis, we use Gaussian processes to generate the radio maps, and for localization we focus on two memory methods: particle filters and unscented Kalman filters. The Gaussian process radio map serves as the measurement model in the Bayesian filtering context. The optimal fingerprinting-phase parameters were determined, and the filtering methods were evaluated in terms of root mean square error.
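
As a rough illustration of how a Gaussian process radio map can serve as the measurement model of a memory method, the sketch below fits a GP to synthetic RSSI calibration data for a single beacon and uses it to weight the particles of a very simple particle filter (one measurement update, no motion model). The kernel, the synthetic data and the filter settings are assumptions for illustration only, not the values used in the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# --- Calibration phase: fit a GP radio map for a single BLE beacon ---------
# X: 2-D calibration positions (metres); y: RSSI observed there (dBm),
# generated here from a log-distance model plus noise (synthetic data).
X = rng.uniform(0.0, 10.0, size=(60, 2))
dist = np.linalg.norm(X - np.array([5.0, 5.0]), axis=1) + 0.1
y = -40.0 - 20.0 * np.log10(dist) + rng.normal(0.0, 2.0, size=60)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(2.0))
gp.fit(X, y)

# --- Localization phase: one particle-filter measurement update ------------
particles = rng.uniform(0.0, 10.0, size=(1000, 2))     # position hypotheses
weights = np.full(len(particles), 1.0 / len(particles))

rssi_observed = -55.0                                   # new RSSI reading
mu, sigma = gp.predict(particles, return_std=True)      # GP as measurement model
likelihood = np.exp(-0.5 * ((rssi_observed - mu) / sigma) ** 2) / sigma
weights *= likelihood
weights /= weights.sum()

print(weights @ particles)                              # weighted position estimate
```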

    Cloud-based Indoor Positioning Platform for Context-adaptivity in GNSS-denied Scenarios

    The demand for positioning, localisation and navigation services is on the rise, largely owing to the fact that such services form an integral part of applications in areas such as human activity recognition, robotics, and eHealth. Depending on the field of application, these services must deliver high accuracy, massive device connectivity, real-time response, flexibility, and integrability. Although many current solutions have succeeded in fulfilling these requirements, numerous challenges remain in providing robust and reliable indoor positioning solutions. This dissertation focuses on improving computing efficiency, data pre-processing, and software architecture for Indoor Positioning Systems (IPSs) without sacrificing position and location accuracy. Fingerprinting is the main positioning technique used in this dissertation, as it is one of the approaches used most frequently in indoor positioning solutions. The dissertation begins by presenting a systematic review of current cloud-based indoor positioning solutions for Global Navigation Satellite System (GNSS) denied scenarios. This first contribution identifies the challenges and trends in indoor positioning applications over the last seven years (January 2015 to May 2022). Secondly, we focus on data optimisation techniques such as data cleansing and data augmentation. This second contribution is devoted to reducing the number of outlier fingerprints in radio maps and, therefore, the error in position estimation. The data cleansing algorithm relies on the correlation between fingerprints, taking into account the maximum Received Signal Strength (RSS) values, whereas a Generative Adversarial Network (GAN) is used for data augmentation to generate synthetic fingerprints that are barely distinguishable from real ones. As a result, the positioning error is reduced by more than 3.5% after applying the data cleansing, and it is reduced in 8 of the 11 datasets after generating new synthetic fingerprints. The third contribution proposes two algorithms that group similar fingerprints into clusters. To that end, a new post-processing algorithm for Density-based Spatial Clustering of Applications with Noise (DBSCAN) is developed to redistribute noisy fingerprints to the formed clusters, improving the mean positioning accuracy by more than 20% in comparison with plain DBSCAN. A new lightweight clustering algorithm is also introduced, which joins similar fingerprints based on the maximum RSS values and Access Point (AP) identifiers; it reduces the time required to form the clusters by more than 60% compared with two traditional clustering algorithms. The fourth contribution explores the use of Machine Learning (ML) models to enhance the accuracy of position estimation. These models are based on Deep Neural Networks (DNNs) and the Extreme Learning Machine (ELM). The first combines a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) to learn the complex patterns in fingerprinting radio maps and improve position accuracy. The second model uses a CNN and an ELM to provide a fast and accurate solution for classifying fingerprints into buildings and floors. Both models offer better floor hit rates than the baseline (more than 8% higher on average) and also outperform some machine learning models from the literature.
Finally, this dissertation summarises the key findings of the previous chapters in an open-source cloud platform for indoor positioning. The software developed in this dissertation follows the guidelines provided by current standards in positioning, mapping, and software architecture to provide a reliable and scalable system.
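
As an illustration of the DBSCAN post-processing idea described above, the sketch below clusters synthetic RSS fingerprints and then reassigns the fingerprints DBSCAN labels as noise to the nearest cluster centroid. The synthetic radio map, the DBSCAN parameters and the nearest-centroid rule are assumptions for illustration; the dissertation's redistribution algorithm and its evaluation are more involved.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

# Synthetic radio map: RSS fingerprints (dBm) from 5 APs around two positions,
# plus a few corrupted fingerprints that are likely to be labelled as noise.
blob_a = rng.normal([-50, -70, -80, -90, -60], 3.0, size=(100, 5))
blob_b = rng.normal([-85, -55, -65, -75, -90], 3.0, size=(100, 5))
outliers = rng.uniform(-100, -40, size=(6, 5))
fingerprints = np.vstack([blob_a, blob_b, outliers])

labels = DBSCAN(eps=8.0, min_samples=5).fit_predict(fingerprints)

# Post-processing sketch: instead of discarding the fingerprints DBSCAN marks
# as noise (label -1), reassign each one to the nearest cluster centroid.
centroids = {k: fingerprints[labels == k].mean(axis=0)
             for k in set(labels) if k != -1}
for i in np.where(labels == -1)[0]:
    labels[i] = min(centroids,
                    key=lambda k: np.linalg.norm(fingerprints[i] - centroids[k]))

print(np.bincount(labels))       # every fingerprint now belongs to a cluster
```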

    Investigation of Vehicle-to-Everything (V2X) Communication for Autonomous Control of Connected Vehicles

    Autonomous Driving Vehicles (ADVs) have received considerable attention in recent years from academia and industry, bringing about a paradigm shift in Intelligent Transportation Systems (ITS) in which vehicles operate in close proximity through wireless communication. They are envisioned as a promising technology for realising efficient and intelligent transportation systems, with potential civilian and military applications. Vehicular network management for ADVs is challenging, as it demands mobility, location awareness, high reliability, and low-latency data traffic. This research aims to develop and implement vehicular communication in conjunction with a driving algorithm for the ADV feedback control system, with a specific focus on the safe displacement of a vehicle platoon while sensing the surrounding environment (e.g., detecting road signs) and communicating with other road users such as pedestrians, motorbikes, non-motorised vehicles, and infrastructure. Doing so requires investigating crucial aspects of the available technology, such as driving behaviour, low-latency communication requirements, communication standards, and the reliability of such mechanisms to significantly decrease the number of traffic accidents and casualties. To understand the behaviour of wireless communication relative to theoretical data rates, throughput, and roaming behaviour in a congested indoor line-of-sight heterogeneous environment, we first carried out an experimental study of the IEEE 802.11a, 802.11n and 802.11ac standards in the 5 GHz frequency spectrum. We validated the results against an analytical path loss model, as it is essential to understand how a client device roams, or decides to roam, from one Access Point to another and vice versa. We observed seamless roaming between the tested protocols irrespective of their operational environment (indoor or outdoor); their throughput efficiency and data rate also improved by 8-12% when configured with a Short Guard Interval (SGI) of 400 ns compared to the theoretical specification of the tested protocols. We also investigated Software-Defined Networking (SDN) for vehicular communication and compared it with the traditional network, which is generally organised vertically with the control and data planes bundled together. SDN provided greater flexibility to support multiple core networks for vehicular communication and to tackle the potential network scalability challenges for vehicular applications raised by ADVs. In particular, we demonstrate that SDN improves throughput efficiency by 4% compared to the traditional network while ensuring efficient bandwidth and resource management. Finally, we proposed a novel data-driven coordination model that combines Vehicle-to-Everything (V2X) communication with the Intelligent Driver Model (IDM), together called the V2X Enabled Intelligent Driver Model (VX-IDM). Our model incorporates a Car-Following Model (CFM), namely the IDM, to model a vehicle platoon in urban and highway traffic scenarios while ensuring the platoon's safety through the integration of an IEEE 802.11p Vehicle-to-Infrastructure (V2I) communication scheme. The model integrates the 802.11p V2I communication channel with the IDM in MATLAB using ode45 and utilises the 802.11p simulation toolbox for configuring vehicular channels. To demonstrate the model's functionality in urban and highway traffic environments, we developed six case studies.
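
To make the car-following part concrete, here is a minimal Python sketch of the standard IDM update for a two-vehicle platoon, integrated with a plain Euler step. The parameter values and the Euler integration are illustrative assumptions; the thesis calibrates the model, uses MATLAB's ode45 and couples it with the 802.11p V2I channel, none of which is reproduced here.

```python
import numpy as np

# Illustrative IDM parameters: desired speed v0 (m/s), time headway T (s),
# max acceleration a (m/s^2), comfortable braking b (m/s^2), minimum gap s0 (m).
V0, T, A, B, S0, DELTA = 33.0, 1.5, 1.0, 2.0, 2.0, 4.0

def idm_acceleration(v, gap, dv):
    """IDM acceleration for a follower with speed v, gap to the leader,
    and speed difference dv = v_follower - v_leader."""
    s_star = S0 + v * T + v * dv / (2.0 * np.sqrt(A * B))
    return A * (1.0 - (v / V0) ** DELTA - (s_star / gap) ** 2)

# Two-vehicle platoon, leader at constant speed; plain Euler integration
# stands in for the ode45 solver used in the thesis.
dt, x_lead, v_lead = 0.1, 100.0, 25.0
x, v = 50.0, 20.0
for _ in range(600):                                   # simulate 60 s
    v = max(v + idm_acceleration(v, x_lead - x, v - v_lead) * dt, 0.0)
    x += v * dt
    x_lead += v_lead * dt
print("gap after 60 s:", round(x_lead - x, 1), "m")    # settles near a safe gap
```
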
We also addressed the heterogeneity of wireless networks to improve overall network reliability and efficiency by estimating Signal-to-Noise Ratio (SNR) parameters for the platoon vehicles' displacement and location on the road from Road-Side Units (RSUs). The simulation results showed that the inter-vehicle spacing could be steadily maintained at a minimum safe value at all times. Moreover, the model has a fault-tolerant mechanism that works even when communication with the infrastructure is interrupted or unavailable, making the VX-IDM model collision-free.
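
The SNR-versus-distance relationship can be illustrated with a generic free-space path-loss estimate; this is a stand-in assumption for illustration, not the 802.11p channel model used in the thesis.

```python
import numpy as np

def snr_db(distance_m, tx_power_dbm=23.0, freq_hz=5.9e9, noise_dbm=-95.0):
    """Rough SNR estimate at a given distance from an RSU using free-space
    path loss; illustrative only, not the thesis's 802.11p channel model."""
    c = 3.0e8
    fspl_db = (20.0 * np.log10(distance_m) + 20.0 * np.log10(freq_hz)
               + 20.0 * np.log10(4.0 * np.pi / c))
    return (tx_power_dbm - fspl_db) - noise_dbm

for d in (50.0, 150.0, 300.0):        # metres between the RSU and the vehicle
    print(f"{d:5.0f} m -> {snr_db(d):5.1f} dB SNR")
```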

    Efficient Modelling and Simulation Methodology for the Design of Heterogeneous Mixed-Signal Systems on Chip

    Systems on Chip (SoCs) and Systems in Package (SiPs) are key parts of a continuously broadening range of products, from chip cards and mobile phones to cars. Besides an increasing amount of digital hardware and software for data processing and storage, they integrate more and more analogue/RF circuits, sensors, and actuators to interact with their (analogue) environment. This trend towards more complex and heterogeneous systems with more intertwined functionalities is made possible by continuous advances in manufacturing technologies and is pushed by market demand for new products and product variants. Therefore, the reuse and retargeting of existing component designs becomes more and more important. However, all these factors make the design process increasingly complex and multidisciplinary. Nowadays, the design of the individual components is usually well understood and optimised through the use of a variety of CAD/EDA tools, design languages, and data formats. These are based on applying specific modelling/abstraction concepts, description formalisms (also called Models of Computation (MoCs)), and analysis/simulation methods. The designer has to bridge the gaps between tools and methodologies through manual conversion of models and proprietary tool couplings/integrations, which is error-prone and time-consuming. A common design methodology and platform to manage, exchange, and collaboratively develop models of different formats and different levels of abstraction is missing. The verification of the overall system is a major problem, as it requires compatible models of each component at the right level of abstraction to achieve satisfactory results with respect to system functionality and test coverage, while at the same time offering acceptable simulation performance in terms of accuracy and speed. Thus, the big challenge is the parallel integration of these very different part design processes. The designers therefore need a common design and simulation platform to create and refine an executable specification of the overall system (a virtual prototype) at a high level of abstraction, which supports different MoCs. This makes possible the exploration of different architecture options, estimation of performance, validation of reused parts, verification of the interfaces between heterogeneous components and of interoperability with other systems, as well as the assessment of the impact of the future working environment and of the manufacturing technologies used to realise the system. For embedded Analogue and Mixed-Signal (AMS) systems, the C++-based SystemC with its AMS extensions, to whose recent standardisation the author contributed, is currently establishing itself as such a platform. This thesis describes the author's contributions to solving the modelling and simulation challenges mentioned above in three thematic phases. In the first phase, the prototype of a web-based platform, called ModelLib, was developed to collect models from different domains and levels of abstraction together with their associated structural and semantic meta-information. This work included the implementation of a hierarchical access control mechanism able to protect the Intellectual Property (IP) constituted by a model at different levels of detail.
The use cases developed for this tool show how it can support the AMS SoC design process by fostering the reuse and collaborative development of models for tasks like architecture exploration, system validation, and the creation of increasingly elaborate models of the system. The experience from the ModelLib development delivered insight into which aspects need to be especially addressed throughout the development of models to make them reusable: mainly flexibility, documentation, and validation. This was the starting point for the second phase, the development of an efficient modelling methodology for the top-down design and bottom-up verification of RF systems based on the systematic use of behavioural models. One outcome is a library of well-documented, parametrisable, and pin-accurate VHDL-AMS models of typical analogue/digital/RF components of a transceiver. The models offer the designer two sets of parameters: one based on the performance specifications and one based on device parameters back-annotated from the transistor-level implementation. The abstraction level used to describe the behaviour of the respective analogue/digital/RF components has been chosen to achieve a good trade-off between accuracy, fidelity, and simulation performance. The pin-accurate model interfaces facilitate the integration of transistor-level models for the validation of the behavioural models or the verification of a component implementation in the system context. These properties make the models suitable for different design tasks such as architecture exploration or overall system validation, as demonstrated on a model of a binary Frequency-Shift Keying (FSK) transmitter parameterised to meet very different target specifications. This project also showed the limits, in terms of abstraction and simulation performance, of the "classical" AMS Hardware Description Languages (HDLs). Therefore, the third and last phase was dedicated to further raising the abstraction level for the description of complex and heterogeneous AMS SoCs and thus to enabling their efficient simulation using different synchronised MoCs. This work uses the C++-based simulation framework SystemC with its AMS extensions. New modelling capabilities going beyond the standardised SystemC AMS extensions have been introduced to describe energy-conserving multi-domain systems in a formal and consistent way at a high level of abstraction. To this end, all constants, variables, and parameters of the system model that represent a physical quantity can now declare their dimension and associated system of units as an intrinsic part of their data type. Assignments to them need to include, besides the value, the correct measurement unit. This allows a much more precise but still compact definition of the models' interfaces and equations, and the C++ compiler can check the correct assembly of the components and the coherency of the equations by means of dimensional analysis. The implementation is based on the Boost.Units library, which employs template metaprogramming techniques. A dedicated filter for the measurement-unit data types has been implemented to simplify the compiler messages and thus facilitate the localisation of unit errors. To ensure the reusability of models despite precisely defined interfaces, their interfaces and behaviours need to be parametrisable in a well-defined manner.
The enabling implementation techniques for this have been demonstrated with the developed library of generic block diagram component models for the Timed Data Flow (TDF) MoC of the SystemC AMS extensions. These techniques are also the key to integrating a new MoC, based on the bond graph formalism, into the SystemC AMS extensions. Bond graphs facilitate the unified description of the energy-conserving parts of heterogeneous systems with the help of a small set of modelling primitives that can be parametrised to the physical domain. The resulting models have a simulation performance comparable to that of an equivalent signal flow model.
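
The thesis realises the unit checking at compile time in C++ on top of Boost.Units; purely to illustrate the dimensional-analysis idea in a few lines, here is a hypothetical run-time analogue in Python (the Quantity class is invented for this sketch and is not part of SystemC, its AMS extensions, or Boost.Units).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """Unit-aware value: dims holds the exponents of (metre, second, kilogram).
    Boost.Units does the same bookkeeping at compile time via templates."""
    value: float
    dims: tuple

    def __add__(self, other):
        # Addition is only defined for quantities of identical dimension.
        if self.dims != other.dims:
            raise TypeError(f"incompatible dimensions: {self.dims} vs {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __truediv__(self, other):
        # Division subtracts dimension exponents (e.g. length / time -> velocity).
        return Quantity(self.value / other.value,
                        tuple(a - b for a, b in zip(self.dims, other.dims)))

distance = Quantity(3.0, (1, 0, 0))     # 3 m
time     = Quantity(2.0, (0, 1, 0))     # 2 s
print(distance / time)                  # dims (1, -1, 0), i.e. a velocity in m/s

try:
    distance + time                     # dimensionally inconsistent expression
except TypeError as err:
    print("rejected:", err)
```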