
    Analyses of location-based services in Africa and investigating methods of improving its accuracy

    The subject area of this thesis is the provision of location-based services (LBS) in Africa and methods of improving their positional accuracy. The motivation behind this work is that mobile technology is the only modern form of information and communication technology available to most people in Africa; therefore, all services that can be offered over the mobile network should be harnessed, and LBS are one of these services. This research is novel and is the first critical analysis of LBS in Africa; it was therefore carried out in phases. A study was first carried out to analyse the provision of LBS in Africa. It was found that Africa lags much of the world in the provision of LBS to its mobile subscribers; only a few LBS are available, and these are not adapted to the needs of the African people. An empirical field investigation was carried out in South Africa to evaluate the performance of the LBS provided. The data collected indicated that the LBS provided are not dependable, owing to the inaccuracy introduced by two major factors: the positioning method and the data content provided. Analysing methods to improve the positional accuracy proved challenging because, Africa being one of the poorest continents, most mobile subscribers use basic mobile phones. Consequently, LBS in Africa often cannot be provided based on the capability of the mobile phones but rather on the capability of the mobile operator's infrastructure. However, providing LBS using network-based positioning technologies poses the challenge of dynamically varying error sources that affect accuracy. The effects of some error sources on network-based positioning technologies were analysed, and a model was developed to investigate the feasibility of making RSS-based geometric positioning technologies error-aware. Major consideration is given to the geometry of the base stations (BSs) whose measurements are used for position estimation. The results indicated that it is feasible to improve location information in Africa not just by improving the positioning algorithms but also by using improved prediction algorithms, incorporating up-to-date geographical information and hybrid technologies. It was also confirmed that, although errors are introduced by location estimation methods, it is impossible to model the error in a way that applies to all algorithms and all location estimates, because the errors vary dynamically and unpredictably from measurement to measurement.
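
    A rough, hypothetical illustration of the two ingredients mentioned above: converting received signal strength (RSS) into range estimates with an assumed log-distance path-loss model, and computing a geometric position fix by least squares over several base stations (BSs), whose geometry governs the achievable accuracy. The model and all parameter values are illustrative assumptions, not the thesis's own.

```python
import numpy as np

def rss_to_distance(rss_dbm, p0_dbm=-40.0, n=3.0):
    """Invert an assumed log-distance path-loss model:
    rss = p0 - 10*n*log10(d)  ->  d = 10**((p0 - rss) / (10*n))."""
    return 10.0 ** ((p0_dbm - rss_dbm) / (10.0 * n))

def least_squares_position(bs_xy, distances, iters=20):
    """Gauss-Newton solution of the range-based positioning problem.
    bs_xy: (k, 2) base-station coordinates; distances: (k,) range estimates."""
    x = bs_xy.mean(axis=0)              # start at the centroid of the BSs
    for _ in range(iters):
        diff = x - bs_xy                # (k, 2)
        ranges = np.maximum(np.linalg.norm(diff, axis=1), 1e-9)
        J = diff / ranges[:, None]      # Jacobian of the range function
        r = ranges - distances          # residuals
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
    return x

# Three base stations forming a reasonable triangle; a nearly collinear
# geometry would inflate the position error for the same RSS noise.
bs = np.array([[0.0, 0.0], [1000.0, 0.0], [500.0, 900.0]])
true_pos = np.array([400.0, 300.0])
true_d = np.linalg.norm(bs - true_pos, axis=1)
rss = -40.0 - 10.0 * 3.0 * np.log10(true_d) + np.random.normal(0, 2, 3)  # noisy RSS
est = least_squares_position(bs, rss_to_distance(rss))
print("estimated position:", est)
```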

    Véhicules connectés contributions à la communication véhicule-réseau mobile et la localisation coopérative

    Connected Vehicles are a new intelligent transportation systems (ITS) paradigm that uses wireless communications to improve traffic safety and efficiency, and it has received a great deal of attention in recent years across many communities. Connected-vehicle (V2X) communications encompass vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V) and vehicle-to-device (V2D) communications. While DSRC/WAVE is widely recognized as the de facto standard for V2V, with large-scale V2V deployment expected around 2020, other wireless technologies are required for the large-scale deployment of V2I communications: although deploying DSRC roadside units (RSUs) is critical for several ITS applications, there is still no plan for their large-scale roll-out, essentially because of the considerable public investment required. Thanks to its high data rates and large-scale deployment, LTE-A enhanced by small-cell densification is positioned as one of the major candidate technologies for V2I communications. With the advances of Releases 10/11/12, the 4G LTE-Advanced (LTE-A) mobile network promises reduced connection setup time, lower latency (10 ms) and higher data rates (100 Mb/s to 1 Gb/s) through an evolved network architecture and new elements and functions such as network densification using small cells (SCs), fixed and mobile relays, Dual Connectivity (DC), Carrier Aggregation (CA) and Device-to-Device (D2D) communication; these developments pave the way to 5G by 2020, with the promise of even higher data rates (more than 10 Gb/s) and much lower latency (1 ms), which reinforces the trend towards future integration of VANETs and mobile networks for V2I communications.
    Although the macrocell will remain the main radio access network (RAN) element for wide-area coverage and high-mobility users, it is no longer sufficient to meet user demand in many high-density areas: with the proliferation of mobile devices and applications, mobile data demand continues to grow exponentially. Small cells, which include microcells, picocells and femtocells, are widely recognized as a key solution for enhancing RAN capacity and coverage, and are increasingly used by mobile operators in so-called heterogeneous networks (HetNets) to offload traffic from their macrocells. A HetNet is typically composed of several layers (macrocells, small cells) and, in some cases, different access technologies (e.g., LTE-A, UMTS, WiFi). SC densification deploys more small-coverage base stations in high-demand areas to obtain higher spectral efficiency per coverage area. Nevertheless, SC deployment raises mobility-management problems: the limited coverage of SCs causes frequent handovers, which lead to high signaling overhead towards the core network. In addition, since small cells are generally connected to the EPC via an Internet connection, that connection becomes the bottleneck for handovers and data forwarding; hence the importance of completing as many handovers as possible locally.
    This thesis therefore proposes an architecture for connected vehicles based on cooperative localization, V2I communication and handover management, for a better integration of VANETs and mobile networks. Its main contributions are as follows. The first contribution is a set of cooperative localization algorithms, based on a set-membership approach, that improve location accuracy. The first algorithm, CLES, is a generic set-membership cooperative localization algorithm; the second, CLEF, applies CLES to fingerprinting localization. Their accuracy is characterized by evaluating the reduction of the maximum diameter and of the area of the polygon as a function of parameters such as the number of polygons, the geometric configuration, the node's proximity to the boundary of its polygon, and the uncertainty of the distance measurements. The second contribution concerns the selection of mobile gateways (GWs) to connect vehicles efficiently to the small cells of the mobile network. While each vehicle may use its LTE-A interface directly for V2I communications, selecting a limited number of GWs can effectively reduce the mobility signaling overhead; a network-based mobile gateway selection scheme with one-hop clustering is therefore proposed to relay traffic from neighbouring vehicles towards the serving SC. The selection problem is formulated as a multi-objective binary linear program (MO-BIP), and a linear programming solver shows that, for a realistic number of vehicles per small cell and GW connectivity degree, the execution time is relatively short. The third contribution addresses handover management in LTE-A small cells so as to support connected-vehicle communications efficiently and reduce the signaling overhead towards the core network. A scheme based on local traffic forwarding over X2 links and anchor nodes is proposed, with three procedures: intra-domain, inter-domain and k-hops inter-domain. Its effectiveness in reducing the signaling load generated towards the core network is evaluated with an analytical model.
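
    The set-membership idea behind CLES and CLEF can be pictured with a much-simplified sketch (an assumption for illustration, not the thesis's actual algorithm): each bounded-error range measurement from a cooperating neighbour confines the vehicle to an annulus, over-approximated here by an axis-aligned box, and intersecting the boxes shrinks the feasible set, whose maximum diameter bounds the localization error.

```python
import numpy as np

def range_box(anchor, d, eps):
    """Axis-aligned box that over-approximates the annulus
    {x : d - eps <= ||x - anchor|| <= d + eps}."""
    return anchor - (d + eps), anchor + (d + eps)

def intersect(boxes):
    """Intersect axis-aligned boxes; returns (lo, hi) or None if empty."""
    lo = np.max([b[0] for b in boxes], axis=0)
    hi = np.min([b[1] for b in boxes], axis=0)
    return (lo, hi) if np.all(lo <= hi) else None

# Cooperating neighbours with (assumed) known positions and bounded-error ranges.
anchors = np.array([[0.0, 0.0], [80.0, 10.0], [30.0, 70.0]])
true_pos = np.array([40.0, 30.0])
eps = 5.0                                           # assumed measurement error bound
dists = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free for the sketch

feasible = intersect([range_box(a, d, eps) for a, d in zip(anchors, dists)])
if feasible is not None:
    lo, hi = feasible
    print("feasible box:", lo, hi, "max diameter:", np.linalg.norm(hi - lo))
```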

    Algorithms for Positioning with Nonlinear Measurement Models and Heavy-tailed and Asymmetric Distributed Additive Noise

    Determining the unknown position of a user equipment from measurements obtained from transmitters with known locations generally results in a nonlinear measurement function. The measurement errors can have a heavy-tailed and/or skewed distribution, and the likelihood function can be multimodal. A positioning problem with a nonlinear measurement function is often solved by a nonlinear least squares (NLS) method or, when filtering is desired, by an extended Kalman filter (EKF). However, these methods cannot capture multiple peaks of the likelihood function and do not address heavy-tailedness or skewness. Approximating the likelihood by a Gaussian mixture (GM) and using a GM filter (GMF) solves this problem, but a precise approximation requires a large number of components, which makes it unsuitable for real-time positioning on small mobile devices. This thesis studies a generalised version of Gaussian mixtures, called the GGM, to capture multiple peaks. It relaxes the GM's restriction to non-negative component weights. The analysis shows that, compared with the GM, the GGM allows a significant reduction of the number of Gaussian components required to approximate the measurement likelihood of a transmitter with an isotropic antenna; the GGM therefore facilitates real-time positioning on small mobile devices. In tests on a cellular telephone network and on an ultra-wideband network, the GGM and its filter provide significantly better positioning accuracy than the NLS and the EKF. For positioning with nonlinear measurement models and heavy-tailed, skewed measurement errors, an Expectation Maximisation (EM) algorithm is studied. The EM algorithm is compared with a standard NLS algorithm in simulations and in tests with realistic emulated data from a Long Term Evolution network. The EM algorithm is more robust to measurement outliers. If the errors in the training and positioning data are similarly distributed, the EM algorithm yields significantly better position estimates than the NLS method. The improvement in accuracy and precision comes at the cost of moderately higher computational demand and higher vulnerability to changing patterns in the error distribution of the training and positioning data. This vulnerability arises because the skew-t distribution (used in EM) has four parameters while the normal distribution (used in NLS) has only two, so the skew-t fits the pattern in the training data more closely. On the downside, if the patterns in the training and positioning data differ, the skew-t fit is not necessarily better than the normal fit, which weakens the EM algorithm's positioning accuracy and precision; this reduced generalisability due to overfitting is a basic principle of machine learning. This thesis additionally shows how the parameters of heavy-tailed and skewed error distributions can be fitted to training data. It furthermore gives an overview of other parametric methods for solving the positioning problem, of how training data is handled and summarised for them, of how they perform positioning, and of how they compare with nonparametric methods. These methods are analysed in extensive tests in a wireless area network, which show the strengths and weaknesses of each method.
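
    The component-reduction argument for the GGM can be illustrated with a toy sketch: the ring-shaped range likelihood of a transmitter with an isotropic antenna is approximated by only two isotropic Gaussians centred on the transmitter, with weights fitted by least squares and allowed to be negative. The range, spread and grid values below are invented for illustration and are not those used in the thesis.

```python
import numpy as np

# Target: a ring-shaped range likelihood (Gaussian in the measured range).
r_meas, sigma_r = 70.0, 10.0            # assumed measured range and its std
sigmas = np.array([60.0, 35.0])         # spreads of the two GGM components (assumed)

d = np.linspace(0.0, 150.0, 301)        # radial grid (the likelihood is isotropic)
ring = np.exp(-0.5 * (d - r_meas) ** 2 / sigma_r**2)
basis = np.stack(
    [np.exp(-0.5 * d**2 / s**2) / (2 * np.pi * s**2) for s in sigmas], axis=1)

# Least-squares fit of the component weights; one of them typically turns out
# negative, which an ordinary Gaussian mixture would not allow. Two components
# then suffice for a rough ring shape that a GM would need many components for.
weights, *_ = np.linalg.lstsq(basis, ring, rcond=None)
approx = basis @ weights
print("fitted weights:", weights)
print("max absolute approximation error:", np.max(np.abs(approx - ring)))
```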

    Pedestrian Mobility Mining with Movement Patterns

    In street-based mobility mining, pedestrian volume estimation receives increasing attention, as it enables important applications such as billboard evaluation, attraction ranking and emergency support systems. In practice, empirical measurements are sparse due to budget limitations and constrained mounting options, so pedestrian quantities must be estimated in order to perform mobility analysis at unobserved locations. Accurate pedestrian mobility analysis is difficult because individual pedestrians do not select paths at random (their movement is motivated), which causes pedestrian volumes to be distributed non-uniformly over the traffic network. Existing approaches (pedestrian simulations and data mining methods) are hard to adjust to sensor measurements or require more expensive input data (e.g. high-fidelity floor plans or the total number of pedestrians on the site) and are therefore infeasible. To obtain a mobility model that encodes pedestrian volumes accurately, we propose two methods under the regression framework that overcome the limitations of existing methods: both incorporate not just topological information and episodic sensor readings but also prior knowledge on movement preferences and movement patterns. The first is based on Least Squares Regression (LSR); its advantages are the easy inclusion of route-choice heuristics and robustness to contradicting measurements. The second is Gaussian Process Regression (GPR); its advantages are the possibility of including expert knowledge on pedestrian movement and of estimating the uncertainty when predicting the unknown frequencies, and the kernel matrix of the pedestrian frequencies returned by the method supports sensor-placement decisions. Major benefits of the regression approach are (1) seamless integration of expert data, (2) simple reproduction of sensor measurements, (3) invariance of the results under traffic-network homeomorphism, and (4) a computational complexity that depends not on the number of modelled pedestrians but on the complexity of the traffic network. We compare our approaches to a state-of-the-art pedestrian simulation (the Generalized Centrifugal Force Model), to an existing data mining method for traffic volume estimation (Spatial k-Nearest Neighbour) and to commonly used graph kernels for Gaussian Process Regression (Squared Exponential, Regularized Laplacian and Diffusion kernels) in terms of prediction performance, measured with the mean absolute error; our methods showed significantly lower error rates. Since pattern knowledge is not easy to obtain, we present algorithms for pattern acquisition and analysis from episodic movement data, involving spatio-temporal aggregation of visits and flows, cluster analyses and dependency models. For pedestrian mobility data collection we further developed and successfully applied the recently evolved Bluetooth tracking technology. The introduced methods are combined into a system for pedestrian mobility analysis comprising three layers. The Sensor Layer (1) monitors geo-coded sensor recordings of people's presence and passes this episodic movement data as input to the next layer; by using standardized Open Geospatial Consortium (OGC) compliant interfaces for data collection, we support seamless integration of various sensor technologies depending on the application requirements. The Query Layer (2) interacts with the user, who can request analyses within a given region and time interval; results are returned in OGC-conformant Geography Markup Language (GML) format. The user query triggers the Analysis Layer (3), which uses the mobility model for pedestrian volume estimation. The proposed approach is promising for location performance evaluation and attractor identification, and was successfully applied in several industrial settings: Zurich central train station, the zoo of Duisburg (Germany) and a football stadium (Stade des Costières, Nîmes, France).
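
    A much-simplified sketch of the GPR variant described above, assuming a regularized-Laplacian graph kernel over a tiny hypothetical street graph: counts measured at a few sensor locations are propagated to unobserved locations, and the posterior variance indicates where an additional sensor would be most informative. The graph, counts and hyperparameters are invented for illustration.

```python
import numpy as np

# Hypothetical street graph: 6 measurement locations, edges = adjacent segments.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A                     # graph Laplacian

# Regularized-Laplacian graph kernel over the locations; sigma2 is an assumed
# smoothness parameter, not a value from the paper.
sigma2 = 1.0
K = np.linalg.inv(np.eye(n) + sigma2 * L)

obs = [0, 2, 5]                                    # locations with counting sensors
y = np.array([120.0, 80.0, 40.0])                  # measured pedestrian volumes
unobs = [1, 3, 4]
noise = 1.0                                        # assumed sensor-noise variance

# Standard GP posterior (with a mean-centred target) at the unobserved locations.
y0 = y - y.mean()
Koo = K[np.ix_(obs, obs)] + noise * np.eye(len(obs))
Kuo = K[np.ix_(unobs, obs)]
mean = Kuo @ np.linalg.solve(Koo, y0) + y.mean()
cov = K[np.ix_(unobs, unobs)] - Kuo @ np.linalg.solve(Koo, Kuo.T)
for loc, m, v in zip(unobs, mean, np.diag(cov)):
    print(f"location {loc}: estimated volume {m:.1f} (predictive variance {v:.3f})")
```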

    Erfassung und Behandlung von Positionsfehlern in standortbasierter Autorisierung

    The increasing technical capabilities of mobile devices enable a broad range of new applications: employees can work on the move, and industrial production processes can be controlled remotely from mobile devices. For reasons of information security and operational safety, as well as to implement functional requirements, the availability of the corresponding access rights often needs to be restricted to users within an authorized zone. Thus, access to sensitive data can be bound to users within particular offices, or the remote control of industrial machines can be restricted to safe regions within a factory building. For that purpose, the position of the user needs to be determined. Unfortunately, positioning errors of the same magnitude as the authorized zones can arise during operation. Up to now, there have been no approaches that take these positioning errors into account when deriving access rights so as to minimize the negative consequences of possibly false authorization decisions. Furthermore, there are no methods for analyzing the quality of such location constraints ahead of their deployment with a specific positioning system, so it remains unclear whether its positioning errors are acceptable in the given scenario. To solve these problems, this thesis presents approaches for capturing and handling positioning errors in the field of location-based access control. First, an error estimator for pattern-based positioning systems is introduced that uses the characteristics of the conducted position measurements to derive a probability density function (pdf) for the user's real position. This pdf can be used to derive the probability that the user is within the authorized zone; an algorithm based on precomputations is presented that derives this probability with greatly improved performance compared with direct computation. For the first time, a detailed comparison of existing strategies for location-based access control is presented on the basis of decision theory, and a risk-based strategy is introduced as a novel method that is optimal from the decision-theoretic point of view. Several approaches are presented for attaching location constraints to access control policies; when enforced, these constraints take into account the risk stemming from uncertain position measurements and the possible damage of false authorization decisions. Feature models are introduced as a generalization of polygons for specifying location constraints: for each geographic point, they describe the probability that a required feature can be observed there. Furthermore, a method is presented that reduces the impact of measurement outliers on authorization decisions. Finally, methods are presented that allow a qualitative and quantitative rating of positioning systems for a given scenario. The quantitative rating is based on the novel concept of authorization models, which describe, for each geographic point, the probability that a user at this point obtains a position estimate that leads to an authorization. The qualitative rating provides a binary criterion for judging the suitability of a positioning system in a given scenario. The applicability of this method is demonstrated in a case study, which also shows that such an analysis is necessary before location-based access control is deployed. It is shown that, for typical positioning systems, the damage caused by false authorization decisions can be greatly reduced by using the developed risk-based strategy, which improves the applicability of location-based access control when positioning errors are non-negligible.
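
    A minimal sketch of the risk-based, decision-theoretic strategy described above, under assumed costs: the probability that the user really is inside the authorized zone is obtained by integrating a position pdf over the zone (here a Gaussian integrated by Monte Carlo, as a stand-in for the thesis's pattern-based estimator and precomputation scheme), and access is granted exactly when the expected damage of granting is lower than that of denying. All numbers are illustrative.

```python
import numpy as np

def prob_inside_rect(mu, cov, lo, hi, n=100_000):
    """Monte-Carlo estimate of the probability that the true position lies in
    the rectangular zone [lo, hi], given a Gaussian position estimate."""
    samples = np.random.multivariate_normal(mu, cov, size=n)
    inside = np.all((samples >= lo) & (samples <= hi), axis=1)
    return inside.mean()

def risk_based_decision(p_inside, cost_false_grant, cost_false_deny):
    """Grant access iff the expected damage of granting is lower than the
    expected damage of denying (the decision-theoretic criterion)."""
    return (1.0 - p_inside) * cost_false_grant < p_inside * cost_false_deny

# Position estimate with ~3 m standard deviation; zone is a 10 m x 6 m office.
p = prob_inside_rect(mu=np.array([2.0, 1.0]), cov=9.0 * np.eye(2),
                     lo=np.array([-5.0, -3.0]), hi=np.array([5.0, 3.0]))
# With these assumed costs, access is granted only if p exceeds 10/11 ~ 0.91.
print("P(inside zone) =", round(p, 3),
      "-> grant:", risk_based_decision(p, cost_false_grant=10.0, cost_false_deny=1.0))
```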

    Sensor Fusion for Location Estimation Technologies

    Location estimation performance is not always satisfactory, and improving it can be expensive. Performance can be increased by refining existing location estimation technologies, but a better way is to use multiple technologies and combine the data they provide in order to obtain better results. Maintaining one's location privacy while using location estimation technology is a further challenge: how can this problem be solved? To make it easier to perform sensor fusion on the available data and to speed up development, a flexible framework centred on a component-based architecture was designed. To test the location estimation performance achievable with the proposed sensor fusion framework, the framework and all the necessary components were implemented and tested. To address the location privacy issues, a comprehensive design is proposed that considers all aspects of the problem, from the physical aspects of using radio transmissions to communicating and using location data. The experimental results show that sensor fusion always increases the availability of location estimation and increases its accuracy on average. The experiments also allow the framework's time and energy consumption to be profiled. On average, time consumption splits into results overhead (0.32%), engine overhead (17.06%), component communication time (5.05%) and component execution time (77.58%). The more measurements the data-gathering components collect, the larger the share of component execution time becomes, because it is the only part that grows while the others remain constant.
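
    The kind of combination such a framework performs can be illustrated with a common, simple fusion rule (an assumption for illustration, not necessarily the one implemented in the thesis): position estimates from several technologies are merged by inverse-variance weighting, so a fused estimate is available whenever at least one source reports, and its variance is never larger than that of the best single source.

```python
import numpy as np

def fuse(estimates):
    """Inverse-variance-weighted fusion of 2-D position estimates.

    estimates: list of (position, variance) pairs from different technologies;
    sources that currently have no fix simply do not contribute an entry.
    """
    if not estimates:
        return None, None                      # no technology available
    positions = np.array([p for p, _ in estimates], dtype=float)
    weights = np.array([1.0 / v for _, v in estimates])
    fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()
    fused_var = 1.0 / weights.sum()            # no larger than any single variance
    return fused, fused_var

# Example: a coarse cell-based fix and a finer Wi-Fi fix (values are made up).
cell_fix = (np.array([105.0, 210.0]), 900.0)   # ~30 m std
wifi_fix = (np.array([98.0, 202.0]), 100.0)    # ~10 m std
pos, var = fuse([cell_fix, wifi_fix])
print("fused position:", pos, "variance:", var)
```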

    Abstracts on Radio Direction Finding (1899 - 1995)

    The files on this record represent the various databases that originally composed the CD-ROM issue of the "Abstracts on Radio Direction Finding" database, which is now part of the Dudley Knox Library's Abstracts and Selected Full Text Documents on Radio Direction Finding (1899 - 1995) Collection. (See Calhoun record https://calhoun.nps.edu/handle/10945/57364 for further information on this collection and the bibliography.) Because technological obsolescence prevents current and future audiences from accessing the bibliography, DKL exported the databases contained on the CD-ROM and converted them into the three files on this record. The contents of these files are: 1) RDFA_CompleteBibliography_xls.zip [RDFA_CompleteBibliography.xls: metadata for the complete bibliography, in Excel 97-2003 Workbook format; RDFA_Glossary.xls: glossary of terms, in Excel 97-2003 Workbook format; RDFA_Biographies.xls: biographies of leading figures, in Excel 97-2003 Workbook format]; 2) RDFA_CompleteBibliography_csv.zip [RDFA_CompleteBibliography.TXT: metadata for the complete bibliography, in CSV format; RDFA_Glossary.TXT: glossary of terms, in CSV format; RDFA_Biographies.TXT: biographies of leading figures, in CSV format]; 3) RDFA_CompleteBibliography.pdf: a human-readable display of the bibliographic data, as a means of double-checking any possible deviations due to conversion.