Doctor of Philosophy dissertation

The wireless radio channel is typically thought of as a means to move information from transmitter to receiver, but the radio channel can also be used to detect changes in the environment of the radio link. This dissertation focuses on the measurements we can make at the physical layer of wireless networks, and how we can use those measurements to obtain information about the locations of transceivers and people.
The first contribution of this work is the development and testing of an open-source 802.11b sounder and receiver, which is capable of decoding packets and using them to estimate the channel impulse response (CIR) of a radio link at a fraction of the cost of traditional channel sounders. This receiver improves on previous implementations by performing optimized matched filtering on the field-programmable gate array (FPGA) of the Universal Software Radio Peripheral (USRP), allowing it to operate at full bandwidth.
The second contribution is an extensive experimental evaluation of a technology called location distinction, i.e., the ability to identify changes in radio transceiver position, via CIR measurements. Previous location distinction work has focused on single-input single-output (SISO) radio links. We extend this work to the context of multiple-input multiple-output (MIMO) radio links, and study system design trade-offs which affect the performance of MIMO location distinction.
The third contribution introduces the "exploiting radio windows" (ERW) attack, in which an attacker outside a building surreptitiously uses the transmissions of an otherwise secure wireless network inside the building to infer location information about people inside. This is possible because of the relative transparency of external walls to radio transmissions.
The final contribution of this dissertation is a feasibility study for building a rapidly deployable radio tomographic imaging (RTI) system for special operations forces (SOF). We show that it is possible to obtain valuable tracking information using as few as 10 radios over a single floor of a typical suburban home, even without precise radio location measurements.
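The matched-filter CIR estimation at the heart of the 802.11b receiver can be sketched at chip rate. This is a simplified illustration, not the dissertation's FPGA implementation: the `estimate_cir` helper and the two-path test channel are assumptions for demonstration, and only the 11-chip Barker spreading sequence is taken from the 802.11b standard.

```python
import numpy as np

# 11-chip Barker sequence used by 802.11b DSSS at the 1 and 2 Mbps rates
BARKER = np.array([1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1], dtype=float)

def estimate_cir(received, n_taps=8):
    """Estimate the channel impulse response by matched filtering the
    received chip stream against the known Barker spreading sequence."""
    mf = np.correlate(received, BARKER, mode="full")   # matched-filter output
    peak = np.argmax(np.abs(mf))                       # align on strongest path
    # Taps after the peak approximate the CIR; normalize by the code energy
    return mf[peak:peak + n_taps] / np.sum(BARKER**2)

# Synthetic two-path channel: direct path plus an echo two chips later
channel = np.array([1.0, 0.0, 0.4])
rx = np.convolve(np.tile(BARKER, 4), channel)
cir = estimate_cir(rx)
```

Because the Barker sequence has near-ideal autocorrelation (peak 11, sidelobes of magnitude at most 1), the correlator output directly approximates the multipath tap weights.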
Computational and Analytical Tools for Resilient and Secure Power Grids
Enhancing power grids' performance and resilience has been one of the greatest challenges in engineering and science over the past decade. A recent report by the National Academies of Sciences, Engineering, and Medicine, along with other studies, emphasizes the necessity of deploying new ideas and mathematical tools to address the challenges facing power grids now and in the future. To meet this need, numerous grid modernization programs have been initiated in recent years. This thesis focuses on one of the most critical challenges facing power grids: their vulnerability to failures and attacks. Our approach bridges concepts in power engineering and computer science to improve power grids' resilience and security. We analyze the vulnerability of power grids to cyber and physical attacks and failures, design efficient monitoring schemes for robust state estimation, develop algorithms to control the grid under stress, and introduce methods to generate realistic power grid test cases. Our contributions can be divided into four major parts:
Power Grid State Prediction: Large scale power outages in Australia (2016), Ukraine (2015), Turkey (2015), India (2013), and the U.S. (2011, 2003) have demonstrated the vulnerability of power grids to cyber and physical attacks and failures. Power grid outages have devastating effects on almost every aspect of modern life as well as on interdependent systems. Despite their inevitability, the effects of failures on power grids' performance can be limited if the system operator can predict and understand the consequences of an initial failure and can immediately detect the problematic failures. To enable these capabilities, we study failures in power grids using computational and analytical tools based on the DC power flow model. We introduce new metrics to efficiently evaluate the severity of an initial failure and develop efficient algorithms to predict its consequences. We further identify power grids' vulnerabilities using these metrics and algorithms.
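The DC power flow model mentioned above linearizes the AC equations so that bus injections relate linearly to bus voltage angles through the susceptance matrix. A minimal solver can be sketched as follows; the `dc_power_flow` helper and the three-bus example are illustrative, not the thesis's tools.

```python
import numpy as np

def dc_power_flow(n, lines, injections, slack=0):
    """Solve the DC power flow: injections = B @ theta, with line flow
    f_ij = b_ij * (theta_i - theta_j). `lines` is [(i, j, susceptance)]."""
    B = np.zeros((n, n))                         # bus susceptance (Laplacian)
    for i, j, b in lines:
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    keep = [k for k in range(n) if k != slack]   # ground the slack bus angle
    theta = np.zeros(n)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)],
                                  np.asarray(injections, dtype=float)[keep])
    return [(i, j, b * (theta[i] - theta[j])) for i, j, b in lines]

# Three-bus ring: bus 1 injects 1 p.u., bus 2 withdraws 1 p.u.
flows = dc_power_flow(3, [(0, 1, 10.0), (1, 2, 10.0), (0, 2, 10.0)],
                      [0.0, 1.0, -1.0])
```

Removing a line and re-solving shows how an initial failure redistributes flows, which is the basic step behind cascade prediction.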
Power Grid State Estimation: In order to obtain an accurate prediction of the subsequent effects of an initial failure on the performance of the grid, the system operator needs to know exactly when and where the initial failure happened. However, due to a lack of measurement devices or a cyber attack on the grid, such information may not be directly available to the grid operator via measurements. To address this problem, we develop efficient methods to estimate the state of the grid and detect failures (if any) from the partial information available.
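A minimal sketch of the idea, assuming the standard linear DC measurement model z = H·theta + noise (the helper names and the largest-residual screen are illustrative, not the thesis's estimation method):

```python
import numpy as np

def estimate_state(H, z):
    """Least-squares state estimate: the angles theta minimizing
    ||z - H @ theta||, given measurement matrix H (flows/injections)."""
    theta, *_ = np.linalg.lstsq(H, z, rcond=None)
    return theta

def flag_suspect_measurement(H, z):
    """Return the index of the measurement with the largest residual --
    a simple screen for a failed sensor or corrupted reading."""
    theta = estimate_state(H, z)
    return int(np.argmax(np.abs(z - H @ theta)))

# Redundant measurements of two bus angles; corrupt one reading
H = np.array([[1., 0.], [0., 1.], [1., 1.], [1., -1.], [1., 0.], [0., 1.]])
z = H @ np.array([0.1, -0.2])
z[2] += 0.6
suspect = flag_suspect_measurement(H, z)
```

With enough measurement redundancy, the corrupted reading stands out in the residual vector; with too little redundancy the error smears across residuals, which is why sensor placement matters.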
Power Grid Control: Once an initial failure is detected, prediction methods can be used to predict the subsequent effects of that failure. If the initial failure is causing a cascade of failures in the grid, a control mechanism needs to be applied in order to mitigate its further effects. Power grid islanding is an effective method to mitigate cascading failures. The challenge is to partition the network into smaller connected components, called islands, so that each island can operate independently for a short period of time. This is to prevent the system from separating into unbalanced parts due to cascading failures. To address this problem, we introduce and study the Doubly Balanced Connected graph Partitioning (DBCP) problem and provide an efficient algorithm to partition the power grid into two operating islands.
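The islanding requirement can be made concrete with a brute-force sketch. This is not the DBCP algorithm from the thesis, only a feasibility check that a split yields two connected islands whose net power (generation minus load) is each near zero; all names and the tolerance are illustrative.

```python
from itertools import combinations

def is_connected(nodes, edges):
    """Depth-first connectivity check restricted to `nodes`."""
    nodes = set(nodes)
    if not nodes:
        return False
    adj = {v: set() for v in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v); adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == nodes

def two_islands(nodes, edges, power, tol=0.5):
    """Brute-force search for a split into two connected islands, each
    with net power (generation minus load) within `tol` of balance."""
    for k in range(1, len(nodes)):
        for side in combinations(nodes, k):
            other = [v for v in nodes if v not in side]
            if (is_connected(side, edges) and is_connected(other, edges)
                    and abs(sum(power[v] for v in side)) <= tol
                    and abs(sum(power[v] for v in other)) <= tol):
                return set(side), set(other)
    return None

# Four-bus path: two generators (+1) and two loads (-1)
islands = two_islands([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)],
                      {0: 1.0, 1: -1.0, 2: 1.0, 3: -1.0})
```

The exponential search here is exactly what makes the problem interesting: the point of DBCP is to achieve such balanced connected splits efficiently.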
Power Grid Test Cases for Evaluation: In order to evaluate algorithms developed for enhancing power grids' resilience, one needs to study their performance on real grid data. However, for security reasons, such data sets are not publicly available and are very hard to obtain. Therefore, we study the structural properties of the U.S. Western Interconnection (WI) grid, and based on the results we present the Network Imitating Method Based on LEarning (NIMBLE) for generating synthetic spatially embedded networks with properties similar to those of a given grid. We apply NIMBLE to the WI and show that the generated network has similar structural and spatial properties, as well as the same level of robustness to cascading failures.
Overall, the results provided in this thesis advance power grids' resilience and security by providing a better understanding of the system and by developing efficient algorithms to protect it at the time of failure.
Performance Metrics for Network Intrusion Systems
Intrusion systems have been the subject of considerable research during the past 33 years, since the original work of Anderson. Much has been published attempting to improve their performance using advanced data processing techniques, including neural nets, statistical pattern recognition and genetic algorithms. Whilst some significant improvements have been achieved, they are often the result of assumptions that are difficult to justify, and comparing performance between different research groups is difficult. This thesis develops a new approach to defining performance, focussed on comparing intrusion systems and technologies.
A new taxonomy is proposed in which the type of output and the data scale over which an intrusion system operates is used for classification. The inconsistencies and inadequacies of existing definitions of detection are examined and five new intrusion levels are proposed from analogy with other detection-based technologies. These levels are known as detection, recognition, identification, confirmation and prosecution, each representing an increase in the information output from, and functionality of, the intrusion system. These levels are contrasted over four physical data scales, from application/host through to enterprise networks, introducing and developing the concept of a footprint as a pictorial representation of the scope of an intrusion system. An intrusion is now defined as “an activity that leads to the violation of the security policy of a computer system”. Five different intrusion technologies are illustrated using the footprint with current challenges also shown to stimulate further research. Integrity in the presence of mixed trust data streams at the highest intrusion level is identified as particularly challenging.
Two metrics new to intrusion systems are defined to quantify performance and further aid comparison. Sensitivity is introduced to define the basic detectability of an attack in terms of a single parameter, rather than the usual four currently in use. Selectivity is used to describe the ability of an intrusion system to discriminate between attack types. These metrics are quantified experimentally for network intrusion using the DARPA 1999 dataset and SNORT. Only nine of the 58 attack types present were detected with sensitivities in excess of 12 dB, indicating that detection performance for the attack types present in this dataset remains a challenge. The measured selectivity was also poor, indicating that only three of the attack types could be confidently distinguished. The highest value of selectivity was 3.52, significantly lower than the theoretical limit of 5.83 for the evaluated system. Options for improving selectivity and sensitivity through additional measurements are examined.
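As an illustration only: one plausible way to collapse the usual four confusion-matrix counts (true/false positives and negatives) into a single detectability figure expressed in decibels. The thesis's actual definition of sensitivity may differ; this ratio of true-positive rate to false-positive rate is an assumption made for demonstration.

```python
import math

def sensitivity_db(tp, fn, fp, tn):
    """Collapse a detector's 2x2 confusion matrix into one figure in dB:
    here, the ratio of true-positive rate to false-positive rate.
    Illustrative only; not the thesis's exact definition."""
    tpr = tp / (tp + fn)   # fraction of attacks detected
    fpr = fp / (fp + tn)   # fraction of benign traffic flagged
    return 10.0 * math.log10(tpr / fpr)

# A detector catching 90% of attacks with a 5% false-alarm rate
value = sensitivity_db(tp=90, fn=10, fp=50, tn=950)
```

On this scale, a 12 dB threshold corresponds to a detector whose hit rate exceeds its false-alarm rate by a factor of about 16.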
Governance of Dual-Use Technologies: Theory and Practice
The term dual-use characterizes technologies that can have both military and civilian applications. What is the state of current efforts to control the spread of these powerful technologies, nuclear, biological, and cyber, that can simultaneously advance social and economic well-being and also be harnessed for hostile purposes? What have previous efforts to govern, for example, nuclear and biological weapons taught us about the potential for the control of these dual-use technologies? What are the implications for governance when the range of actors who could cause harm with these technologies includes not just national governments but also non-state actors like terrorists? These are some of the questions addressed by Governance of Dual-Use Technologies: Theory and Practice, the new publication released today by the Global Nuclear Future Initiative of the American Academy of Arts and Sciences. The publication's editor is Elisa D. Harris, Senior Research Scholar, Center for International Security Studies, University of Maryland School of Public Affairs. Governance of Dual-Use Technologies examines the similarities and differences between the strategies used for the control of nuclear technologies and those proposed for biotechnology and information technology. The publication makes clear the challenges concomitant with dual-use governance. For example, general agreement exists internationally on the need to restrict access to technologies enabling the development of nuclear weapons. However, no similar consensus exists in the bio and information technology domains. The publication also explores the limitations of military measures like deterrence, defense, and reprisal in preventing globally available biological and information technologies from being misused. Some of the other questions explored by the publication include: What types of governance measures for these dual-use technologies have already been adopted? What objectives have those measures sought to achieve?
How have the technical characteristics of the technology affected governance prospects? What have been the primary obstacles to effective governance, and what gaps exist in the current governance regime? Are further governance measures feasible? In addition to a preface from Global Nuclear Future Initiative Co-Director Robert Rosner (University of Chicago) and an introduction and conclusion from Elisa Harris, Governance of Dual-Use Technologies includes:
On the Regulation of Dual-Use Nuclear Technology by James M. Acton (Carnegie Endowment for International Peace)
Dual-Use Threats: The Case of Biotechnology by Elisa D. Harris (University of Maryland)
Governance of Information Technology and Cyber Weapons by Herbert Lin (Stanford University)
Data Mining
The availability of big data due to computerization and automation has generated an urgent need for new techniques to analyze and convert big data into useful information and knowledge. Data mining is a promising and leading-edge technology for mining large volumes of data, looking for hidden information, and aiding knowledge discovery. It can be used for characterization, classification, discrimination, anomaly detection, association, clustering, trend or evolution prediction, and much more in fields such as science, medicine, economics, engineering, computers, and even business analytics. This book presents basic concepts, ideas, and research in data mining.
Understanding the extreme vulnerability of image classifiers to adversarial examples
State-of-the-art deep networks for image classification are vulnerable to adversarial examples: misclassified images which are obtained by applying imperceptible non-random perturbations to correctly classified test images. This vulnerability is somewhat paradoxical: how can these models perform so well, if they are so sensitive to small perturbations of their inputs? Two early but influential explanations focused on the high non-linearity of deep networks, and on the high dimensionality of image space. We review these explanations and highlight their limitations, before introducing a new perspective according to which adversarial examples exist when the classification boundary lies close to the manifold of normal data. We present a detailed mathematical analysis of the new perspective in binary linear classification, where the adversarial vulnerability of a classifier can be reduced to the deviation angle between its weight vector and the weight vector of the nearest centroid classifier. This analysis leads us to identify two types of adversarial examples: those affecting optimal classifiers, which are limited by a fundamental robustness/accuracy trade-off, and those affecting sub-optimal classifiers, resulting from imperfect training procedures or overfitting. We then show that L2 regularization plays an important role in practice, by acting as a balancing mechanism between two objectives: the minimization of the error and the maximization of the adversarial distance over the training set. We finally generalize our considerations to deep neural networks, reinterpreting in particular weight decay and adversarial training as belonging to the same family of output regularizers. If designing models that are robust to small image perturbations remains challenging, we show in the last chapter of this thesis that state-of-the-art networks can easily be made more vulnerable.
Reversing the problem in this way exposes new attack scenarios and, crucially, helps improve our understanding of the adversarial example phenomenon by emphasizing the role played by low-variance directions.
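The deviation-angle reduction for binary linear classification can be sketched as follows, assuming labels of ±1. The helper names are illustrative; the quantities themselves (nearest centroid weight vector, angle to it, and L2 distance to the decision boundary as the minimal adversarial perturbation) follow the standard definitions the abstract refers to.

```python
import numpy as np

def deviation_angle(w, X, y):
    """Angle between a linear classifier's weight vector `w` and the
    weight vector of the nearest centroid classifier fit on (X, y)."""
    w_nc = X[y == 1].mean(axis=0) - X[y == -1].mean(axis=0)
    cos = w @ w_nc / (np.linalg.norm(w) * np.linalg.norm(w_nc))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def adversarial_distance(w, b, x):
    """L2 distance from x to the decision boundary w @ x + b = 0,
    i.e. the smallest perturbation that can flip the classification."""
    return abs(w @ x + b) / np.linalg.norm(w)

# Two points, one per class; the nearest centroid direction is (2, 0)
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1, -1])
angle = deviation_angle(np.array([1.0, 0.0]), X, y)
```

A classifier aligned with the centroid direction has deviation angle zero and maximal adversarial distance; as the angle grows, the boundary tilts toward the data manifold and tiny perturbations suffice.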
Sailbot 2017-2018
The goal of this MQP was to build and program a robot capable of competing in the 2018 International Robotic Sailing Competition (IRSC), also known as Sailbot. This project utilized existing research on control and design of autonomous sailboats, and built on lessons learned from the last two years of WPI's Sailbot entries. The final product of this MQP was a more reliable, easier-to-control, and more innovative design than last year's event-winning boat.