
    DSI: A Fully Distributed Spatial Index for Wireless Data Broadcast


    Communication for Teams of Networked Robots

    There is a large class of problems, from search and rescue to environmental monitoring, that can benefit from teams of mobile robots in environments where there is no existing infrastructure for inter-agent communication. We seek to address the problems that must be solved for a team of small, low-power, low-cost robots to deploy in such a way that they can dynamically provide their own multi-hop communication network. To do so, we formulate a situational awareness problem statement that specifies both the physical task and the end-to-end communication rates that must be maintained. In pursuit of a solution to this problem, we address topics ranging from the modeling of point-to-point wireless communication to mobility control for connectivity maintenance. Since our focus is on developing solutions to these problems that can be experimentally verified, we also detail the design and implementation of a decentralized testbed for multi-robot research. Experiments on this testbed allow us to determine data-driven models for point-to-point wireless channel prediction, test relative signal-strength-based localization methods, and verify that our algorithms for mobility control maintain the desired instantaneous rates when routing through the wireless network. The tools we develop are integral to the fielding of teams of robots with robust wireless network capabilities.
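As a rough illustration of the kind of data-driven point-to-point channel prediction mentioned above, the sketch below fits a log-distance path-loss model to synthetic RSSI measurements and uses it to predict signal strength over a new link. The model form, parameter names, and data are assumptions for illustration, not the testbed's actual models or code.

```python
import numpy as np

def fit_path_loss(distances_m, rssi_dbm, d0=1.0):
    """Least-squares fit of the log-distance model RSSI = p0 - 10*n*log10(d/d0)."""
    x = -10.0 * np.log10(np.asarray(distances_m, dtype=float) / d0)
    A = np.column_stack([np.ones_like(x), x])        # columns: [1, -10*log10(d/d0)]
    p0_dbm, path_loss_exp = np.linalg.lstsq(A, np.asarray(rssi_dbm, dtype=float), rcond=None)[0]
    return p0_dbm, path_loss_exp

def predict_rssi(d_m, p0_dbm, path_loss_exp, d0=1.0):
    """Predicted mean RSSI (dBm) at distance d_m under the fitted model."""
    return p0_dbm - 10.0 * path_loss_exp * np.log10(d_m / d0)

# Synthetic training data: 40 links, true exponent 2.2, 3 dB log-normal shadowing.
rng = np.random.default_rng(0)
d = rng.uniform(1.0, 50.0, size=40)
rssi = -40.0 - 10.0 * 2.2 * np.log10(d) + rng.normal(0.0, 3.0, size=40)

p0, n = fit_path_loss(d, rssi)
print(f"fitted p0 = {p0:.1f} dBm, path-loss exponent = {n:.2f}")
print(f"predicted RSSI over a 30 m link: {predict_rssi(30.0, p0, n):.1f} dBm")
```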

    Design and implementation of Multi-user MIMO precoding algorithms

    The demand for high-speed communications required by cutting-edge applications has put a strain on the already saturated wireless spectrum. The incorporation of antenna arrays at both ends of the communication link has provided improved spectral efficiency and link reliability to the inherently complex wireless environment, thus allowing high data-rate applications to thrive without the cost of extra bandwidth consumption. As a consequence, multiple-input multiple-output (MIMO) systems have become the key technology for wideband communication standards in both single-user and multi-user setups. The main difficulty in single-user MIMO systems stems from the signal detection stage at the receiver, whereas multi-user downlink systems struggle with the challenge of enabling non-cooperative signal acquisition at the user terminals. In this respect, precoding techniques perform a pre-equalization stage at the base station so that the signal at each receiver can be interpreted independently and without knowledge of the overall channel state. Vector precoding (VP) has recently been proposed for non-cooperative signal acquisition in the multi-user broadcast channel. The performance advantage with respect to the more straightforward linear precoding algorithms is the result of an added perturbation vector which enhances the properties of the precoded signal. Nevertheless, the computation of the perturbation signal entails a search for the closest point in an infinite lattice, which is known to be in the class of non-deterministic polynomial-time hard (NP-hard) problems. This thesis addresses the difficulties that stem from the perturbation process in VP systems from both theoretical and practical perspectives. On one hand, the asymptotic performance of VP is analyzed assuming optimal decoding. Since the perturbation process hinders the analytical assessment of the VP performance, lower and upper bounds on the expected data rate are reviewed and proposed. Based on these bounds, VP is compared to linear precoding with respect to the performance after a weighted sum rate optimization, the power resulting from a quality of service (QoS) formulation, and the performance when balancing the user rates. On the other hand, the intricacies of performing an efficient computation of the perturbation vector are analyzed. This study is focused on tree-search techniques that, by means of a strategic node-pruning policy, reduce the complexity derived from an exhaustive search and yield close-to-optimum performance. To that end, three tree-search algorithms are proposed. The fixed-sphere encoder (FSE) features a constant data path and a non-iterative architecture that enable the parallel processing of the set of vector hypotheses and thus allow for high data-processing rates. The sequential best-node expansion (SBE) algorithm applies a distance control policy to reduce the number of metric computations performed during the tree traversal. Finally, the low-complexity SBE (LC-SBE) aims at reducing the complexity and latency of the aforementioned algorithm by combining an approximate distance computation model with a novel approach based on variable run-time constraints. Furthermore, the hardware implementation of non-recursive tree-search algorithms for the precoding scenario is also addressed in this thesis. More specifically, the hardware architecture design and resource occupation of the FSE and K-Best fixed-complexity tree-search techniques are presented.
The determination of the ordered sequence of complex-valued nodes, also known as the Schnorr-Euchner enumeration, is required in order to select the nodes to be evaluated during the tree traversal. With the aim of minimizing the hardware resource demand of such a computationally expensive task, a novel non-sequential and low-complexity enumeration algorithm is presented, which enables the independent selection of the nodes within the ordered sequence. The incorporation of the proposed enumeration technique, along with a fully-pipelined architecture of the FSE and K-Best approaches, allows for data processing throughputs of up to 5 Gbps in a 4x4 antenna setup.
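As a rough, simplified illustration of the fixed-complexity tree searches discussed above, the sketch below implements a real-valued K-Best search with Schnorr-Euchner child ordering to approximate the closest lattice point, i.e. min over integer z of ||y - Rz|| with R upper triangular. It is a toy under simplifying assumptions (real-valued symbols, no hardware considerations), not the FSE, SBE, or LC-SBE designs themselves.

```python
import numpy as np

def se_children(center, count):
    """Integer candidates around `center`, in order of increasing distance
    (Schnorr-Euchner zig-zag): round(c), then alternating steps away from it."""
    z0 = int(round(center))
    d = 1 if center >= z0 else -1
    offsets = [0]
    for k in range(1, count):
        sign = d if k % 2 == 1 else -d
        offsets.append(sign * ((k + 1) // 2))
    return [z0 + o for o in offsets]

def k_best_search(R, y, K=4, children_per_node=3):
    """Breadth-first K-Best search for an integer z approximately minimizing
    ||y - R z||^2, processing levels (rows of the upper-triangular R) bottom-up."""
    n = R.shape[0]
    paths = [(0.0, [])]                      # (accumulated metric, decided symbols)
    for level in range(n - 1, -1, -1):
        expanded = []
        for metric, tail in paths:           # tail = [z_{level+1}, ..., z_{n-1}]
            interf = sum(R[level, j] * zj for j, zj in zip(range(level + 1, n), tail))
            center = (y[level] - interf) / R[level, level]
            for cand in se_children(center, children_per_node):
                inc = (y[level] - interf - R[level, level] * cand) ** 2
                expanded.append((metric + inc, [cand] + tail))
        paths = sorted(expanded, key=lambda p: p[0])[:K]   # keep the K best survivors
    best_metric, best_z = paths[0]
    return np.array(best_z), best_metric

# Toy usage: recover an integer vector through the QR-decomposed channel (noiseless).
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4))
Q, R = np.linalg.qr(H)
z_true = rng.integers(-3, 4, size=4)
y = Q.T @ (H @ z_true)                       # equals R @ z_true
z_hat, metric = k_best_search(R, y)
print("true z:", z_true, " estimated z:", z_hat, " metric:", round(metric, 6))
```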

    Improving a wireless localization system via machine learning techniques and security protocols

    The recent advancements made in Internet of Things (IoT) devices have brought forth new opportunities for technologies and systems to be integrated into our everyday life. In this work, we investigate how edge nodes can effectively utilize 802.11 wireless beacon frames broadcast from pre-existing access points in a building to achieve room-level localization. We explain the hardware and software needed for this system and demonstrate a proof of concept with experimental data analysis. Improvements to localization accuracy are shown via machine learning by implementing the random forest algorithm: historical data are used to train the model so that it can make more informed decisions when tracking other nodes in the future. We also include multiple security protocols that can be adopted to reduce the threat of both physical and digital attacks on the system. These threats include access point spoofing, side channel analysis, and packet sniffing, all of which are often overlooked in IoT devices that are rushed to market. Our research demonstrates the comprehensive combination of affordability, accuracy, and security possible in an IoT beacon frame-based localization system that has not been fully explored by the localization research community.
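A minimal sketch of the room-level localization approach described above: a random forest classifier trained on RSSI fingerprints built from beacon frames of a fixed set of access points. The rooms, feature means, and data below are synthetic placeholders, not the paper's dataset or code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
ROOMS = ["lab", "office", "hallway"]
# Synthetic fingerprints: RSSI (dBm) from 3 access points, one characteristic mean per room.
means = {"lab": [-45, -70, -80], "office": [-75, -50, -72], "hallway": [-65, -68, -55]}
X = np.vstack([rng.normal(means[r], 4.0, size=(100, 3)) for r in ROOMS])
y = np.repeat(ROOMS, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("room-level accuracy:", clf.score(X_te, y_te))
print("prediction for RSSI vector [-47, -69, -78] dBm:", clf.predict([[-47, -69, -78]])[0])
```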

    Performance Evaluation of Connectivity and Capacity of Dynamic Spectrum Access Networks

    Recent measurements on radio spectrum usage have revealed the abundance of under-utilized bands of spectrum that belong to licensed users. This necessitated the paradigm shift from static to dynamic spectrum access (DSA), where secondary networks utilize unused spectrum holes in the licensed bands without causing interference to the licensed user. However, wide-scale deployment of these networks has been hindered by a lack of knowledge of the expected performance in realistic environments and a lack of cost-effective solutions for implementing spectrum database systems. In this dissertation, we address some of the fundamental challenges of improving the performance of DSA networks in terms of connectivity and capacity. Apart from showing performance gains via simulation experiments, we designed, implemented, and deployed testbeds that achieve economies of scale. We start by introducing network connectivity models and show that the well-established disk model does not hold true for interference-limited networks. Thus, we characterize connectivity based on the signal to interference and noise ratio (SINR) and show that not all the deployed secondary nodes necessarily contribute towards the network's connectivity. We identify such nodes and show that even though a node might be communication-visible it can still be connectivity-invisible. The invisibility of such nodes is modeled using the concept of Poisson thinning. The connectivity-visible nodes are combined with the coverage shrinkage to develop the concept of effective density, which is used to characterize the connectivity. Further, we propose three techniques for connectivity maximization. We also show how traditional flooding techniques are not applicable under the SINR model and analyze the underlying causes. Moreover, we propose a modified version of probabilistic flooding that uses lower message overhead while accounting for node outreach and interference. Next, we analyze the connectivity of multi-channel distributed networks and show how the invisibility that arises among the secondary nodes results in thinning, which we characterize as channel abundance. We also capture the thinning that occurs due to the nodes' interference. We study the effects of interference and channel abundance using Poisson thinning on the formation of a communication link between two nodes and also on the overall connectivity of the secondary network. As for the capacity, we derive bounds on the maximum achievable capacity of a randomly deployed secondary network with a finite number of nodes in the presence of primary users, since finding the exact capacity involves solving an optimization problem that does not scale in either time or search-space dimensionality. We speed up the optimization by reducing the optimizer's search space. Next, we characterize the QoS that secondary users can expect. We do so by using vector quantization to partition the QoS space into a finite number of regions, each of which is represented by one QoS index. We argue that any operating condition of the system can be mapped to one of the pre-computed QoS indices using a simple look-up in O(log N) time, thus avoiding any cumbersome computation for QoS evaluation. We implement the QoS space on an 8-bit microcontroller and show how the mathematically intensive operations can be computed in a shorter time.
To demonstrate that there can be low-cost solutions that scale, we present and implement an architecture that enables dynamic spectrum access for any type of network, ranging from IoT to cellular. The three main components of this architecture are the RSSI sensing network, the DSA server, and the service engine. We use the concept of modular design in these components, which allows transparency between them, scalability, and ease of maintenance and upgrade in a plug-n-play manner, without requiring any changes to the other components. Moreover, we provide a blueprint on how to use off-the-shelf, commercially available, software-configurable RF chips to build low-cost spectrum sensors. Using testbed experiments, we demonstrate the efficiency of the proposed architecture by comparing its performance to that of a legacy system. We show the benefits in terms of resilience to jamming, channel relinquishment on primary arrival, and best channel determination and allocation. We also show the performance gains in terms of frame error rate and spectral efficiency.
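To make the disk-versus-SINR distinction above concrete, the sketch below shows how a node lying inside a nominal communication radius (disk model) can still fail an SINR threshold once interference from other transmitters is accounted for. The path-loss exponent, threshold, and distances are illustrative assumptions, not the dissertation's model or parameters.

```python
import numpy as np

def rx_power(p_tx, d, alpha=3.5):
    """Simple distance-based path loss: received power ~ p_tx * d^-alpha."""
    return p_tx / (d ** alpha)

def link_up_disk(d_link, radius=10.0):
    """Disk model: the link exists whenever the receiver is within `radius`."""
    return d_link <= radius

def link_up_sinr(d_link, interferer_dists, p_tx=1.0, noise=1e-9, beta=10.0, alpha=3.5):
    """SINR model: signal over (noise + aggregate interference) must exceed beta."""
    signal = rx_power(p_tx, d_link, alpha)
    interference = sum(rx_power(p_tx, d, alpha) for d in interferer_dists)
    return signal / (noise + interference) >= beta

d = 8.0                       # receiver sits inside the 10 m "disk"
interferers = [12.0, 15.0]    # two concurrent transmitters nearby
print("disk model link up: ", link_up_disk(d))              # True
print("SINR model link up: ", link_up_sinr(d, interferers)) # False here: interference
                                                            # drives SINR below beta
```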

    Reliable Multicast in Mobile Ad Hoc Wireless Networks

    A mobile wireless ad hoc network (MANET) consists of a group of mobile nodes communicating wirelessly with no fixed infrastructure. Each node acts as a source or receiver, and all play a role in path discovery and packet routing. MANETs are growing in popularity due to multiple usage models, ease of deployment, and recent advances in the hardware with which to implement them. MANETs are a natural environment for multicasting, or group communication, where one source transmits data packets through the network to multiple receivers. Proposed applications for MANET group communication range from personal network apps and impromptu small-scale business meetings to conference, academic, or sports-complex presentations for large crowds, reflecting the wide range of conditions such a protocol must handle. Other applications, such as covert military operations, search and rescue, disaster recovery, and emergency response operations, reflect the mission-critical nature of many ad hoc applications. Reliable data delivery is important for all of these categories, but vital for the last one; it is a feature that a MANET group communication protocol must provide. Routing protocols for MANETs are challenged with establishing and maintaining data routes through the network in the face of mobility, bandwidth constraints, and power limitations. Multicast communication presents additional challenges to these protocols. In this dissertation we study reliability in multicast MANET routing protocols. Several on-demand multicast protocols are discussed and their performance compared. Then a new reliability protocol, R-ODMRP, is presented that runs on top of ODMRP, a well-documented best-effort protocol with high reliability. This protocol is evaluated against ODMRP in a standard network simulator, ns-2. Next, reliable multicast MANET protocols are discussed and compared. We then present a second new protocol, Reyes, also a reliable on-demand multicast communication protocol. Reyes is implemented in the ns-2 simulator and compared against the current standards for reliability, flooding and ODMRP. R-ODMRP is used as a comparison point as well. Performance results are comprehensively described for latency, bandwidth, and reliable data delivery. The simulations show Reyes to greatly outperform the other protocols in terms of reliability, while also outperforming R-ODMRP in terms of latency and bandwidth overhead.
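As a generic illustration of one building block commonly used by reliable multicast schemes (sequence-number gap detection with NACK-based recovery), the sketch below shows a receiver that buffers out-of-order packets and reports missing sequence numbers. It is a hypothetical, simplified mechanism, not a description of R-ODMRP or Reyes, whose internals are not detailed in the abstract.

```python
class ReliableMulticastReceiver:
    """Toy receiver: delivers packets in order and NACKs detected gaps."""

    def __init__(self):
        self.expected = 0          # next sequence number expected in order
        self.delivered = []        # in-order payloads handed to the application
        self.buffer = {}           # out-of-order packets awaiting the gap to close

    def on_packet(self, seq, payload):
        """Process one multicast packet; return the sequence numbers to NACK."""
        if seq < self.expected or seq in self.buffer:
            return []              # duplicate, ignore
        self.buffer[seq] = payload
        missing = [s for s in range(self.expected, seq) if s not in self.buffer]
        # deliver any in-order prefix that is now complete
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return missing

rx = ReliableMulticastReceiver()
print(rx.on_packet(0, "a"))   # []  -> delivered: ["a"]
print(rx.on_packet(2, "c"))   # [1] -> NACK requesting retransmission of packet 1
print(rx.on_packet(1, "b"))   # []  -> gap closed, delivered: ["a", "b", "c"]
print(rx.delivered)
```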

    Malware detection based on dynamic analysis features

    The widespread usage of mobile devices and their seamless adaptation to each user's needs by means of useful applications (apps) make them a prime target for malware developers to get access to sensitive user data, such as banking details, or to hold data hostage and block user access. These apps are distributed in marketplaces that host millions of them and therefore have their own forms of automated malware detection in place in order to deter malware developers and keep their app store (and reputation) trustworthy, but there are still a number of apps that are able to bypass these detectors and remain available in the marketplace for any user to download. Current malware detection strategies rely mostly on using features extracted statically, dynamically, or a conjunction of both, and on making them suitable for machine learning applications, in order to scale detection to cover the number of apps that are submitted to the marketplace. In this article, the main focus is the study of the effectiveness of these automated malware detection methods and their ability to keep up with the proliferation of new malware and its ever-shifting trends. By analysing the performance of ML algorithms trained, with real-world data, on different time periods and time scales, with features extracted statically, dynamically, and from user feedback, we are able to identify the optimal setup to maximise malware detection.
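A minimal sketch of the time-split evaluation described above: a classifier is trained on apps from the oldest period and tested on later periods, so the drop in F1 exposes how performance degrades as malware trends shift. The features, class means, and period construction are synthetic assumptions, not the article's dataset or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(7)

def make_period(n_per_class, malware_mean):
    """Synthetic feature vectors (stand-ins for static/dynamic analysis features).
    As `malware_mean` drifts toward the benign mean, the classes get harder to separate."""
    X_benign = rng.normal(0.0, 1.0, size=(n_per_class, 10))
    X_malware = rng.normal(malware_mean, 1.0, size=(n_per_class, 10))
    X = np.vstack([X_benign, X_malware])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

# Four time periods in which malware gradually drifts toward benign-looking behaviour.
periods = [make_period(500, mean) for mean in (1.5, 1.1, 0.7, 0.3)]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
X0, y0 = periods[0]
clf.fit(X0, y0)                              # train only on the oldest period
for t, (Xt, yt) in enumerate(periods[1:], start=1):
    print(f"period {t}: malware F1 = {f1_score(yt, clf.predict(Xt)):.3f}")
```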