
    Doctor of Philosophy

    The wireless radio channel is typically thought of as a means to move information from transmitter to receiver, but the radio channel can also be used to detect changes in the environment of the radio link. This dissertation focuses on the measurements we can make at the physical layer of wireless networks, and how we can use those measurements to obtain information about the locations of transceivers and people. The first contribution of this work is the development and testing of an open-source 802.11b sounder and receiver, which is capable of decoding packets and using them to estimate the channel impulse response (CIR) of a radio link at a fraction of the cost of traditional channel sounders. This receiver improves on previous implementations by performing optimized matched filtering on the field-programmable gate array (FPGA) of the Universal Software Radio Peripheral (USRP), allowing it to operate at full bandwidth. The second contribution of this work is an extensive experimental evaluation of a technology called location distinction, i.e., the ability to identify changes in radio transceiver position via CIR measurements. Previous location distinction work has focused on single-input single-output (SISO) radio links. We extend this work to the context of multiple-input multiple-output (MIMO) radio links, and study system design trade-offs which affect the performance of MIMO location distinction. The third contribution of this work introduces the "exploiting radio windows" (ERW) attack, in which an attacker outside of a building surreptitiously uses the transmissions of an otherwise secure wireless network inside the building to infer location information about people inside the building. This is possible because of the relative transparency of external walls to radio transmissions. The final contribution of this dissertation is a feasibility study for building a rapidly deployable radio tomographic imaging (RTI) system for special operations forces (SOF). We show that it is possible to obtain valuable tracking information using as few as 10 radios over a single floor of a typical suburban home, even without precise radio location measurements.
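
    The abstract describes estimating the CIR by matched filtering decoded 802.11b packets (implemented on the USRP's FPGA in the dissertation). Purely as an illustrative, host-side sketch of the same idea, and not the dissertation's code, the Python below correlates received baseband samples against a known Barker-spread preamble and reads CIR taps off the correlation peak; the preamble format, tap count, and synthetic channel are assumptions made for the demo.

```python
import numpy as np

# 11-chip Barker sequence used to spread 802.11b DSSS symbols at 1 and 2 Mbps.
BARKER = np.array([1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1], dtype=float)

def estimate_cir(rx_samples, known_symbols, chips_per_symbol=11, n_taps=64):
    """Estimate a channel impulse response by matched filtering.

    rx_samples    : complex baseband samples containing the known preamble
    known_symbols : transmitted (known) preamble symbols, values +/-1
    Returns n_taps correlation values starting at the strongest path; their
    magnitudes approximate the CIR tap magnitudes (illustrative only).
    """
    # Rebuild the known transmitted chip sequence (spread each symbol).
    tx_chips = np.repeat(known_symbols, chips_per_symbol) * np.tile(BARKER, len(known_symbols))
    # Matched filter: cross-correlate received samples with the known chips.
    mf = np.correlate(rx_samples, tx_chips, mode="full")
    # Scale so a unit-gain path appears with magnitude close to 1.
    mf = mf / np.sum(np.abs(tx_chips) ** 2)
    peak = int(np.argmax(np.abs(mf)))
    return mf[peak : peak + n_taps]

# Self-contained demo on a synthetic two-path channel (assumed, not measured).
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=32)
chips = np.repeat(symbols, 11) * np.tile(BARKER, 32)
channel = np.zeros(8, dtype=complex)
channel[0], channel[3] = 1.0, 0.4j                 # direct path plus delayed echo
rx = np.convolve(chips, channel)
rx = rx + 0.01 * rng.standard_normal(rx.shape)     # additive noise
print(np.round(np.abs(estimate_cir(rx, symbols)[:8]), 2))
```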

    Performance Metrics for Network Intrusion Systems

    Intrusion systems have been the subject of considerable research during the past 33 years, since the original work of Anderson. Much has been published attempting to improve their performance using advanced data processing techniques including neural nets, statistical pattern recognition and genetic algorithms. Whilst some significant improvements have been achieved, they are often the result of assumptions that are difficult to justify, and comparing performance between different research groups is difficult. The thesis develops a new approach to defining performance, focussed on comparing intrusion systems and technologies. A new taxonomy is proposed in which the type of output and the data scale over which an intrusion system operates are used for classification. The inconsistencies and inadequacies of existing definitions of detection are examined and five new intrusion levels are proposed by analogy with other detection-based technologies. These levels are known as detection, recognition, identification, confirmation and prosecution, each representing an increase in the information output from, and functionality of, the intrusion system. These levels are contrasted over four physical data scales, from application/host through to enterprise networks, introducing and developing the concept of a footprint as a pictorial representation of the scope of an intrusion system. An intrusion is now defined as “an activity that leads to the violation of the security policy of a computer system”. Five different intrusion technologies are illustrated using the footprint, with current challenges also shown to stimulate further research. Integrity in the presence of mixed-trust data streams at the highest intrusion level is identified as particularly challenging. Two metrics new to intrusion systems are defined to quantify performance and further aid comparison. Sensitivity is introduced to define basic detectability of an attack in terms of a single parameter, rather than the usual four currently in use. Selectivity is used to describe the ability of an intrusion system to discriminate between attack types. These metrics are quantified experimentally for network intrusion using the DARPA 1999 dataset and SNORT. Only nine of the 58 attack types present were detected with sensitivities in excess of 12 dB, indicating that detection performance on the attack types in this dataset remains a challenge. The measured selectivity was also poor, indicating that only three of the attack types could be confidently distinguished. The highest value of selectivity was 3.52, significantly lower than the theoretical limit of 5.83 for the evaluated system. Options for improving selectivity and sensitivity through additional measurements are examined.
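
    Purely as an illustration of the kind of evaluation described above (tallying which attack types an intrusion system labels correctly and how well it distinguishes between types), the sketch below builds per-type recall and a mutual-information discrimination score from (true type, assigned type) pairs. The pair format and both scores are assumptions chosen for the example; they are not the thesis's definitions of sensitivity or selectivity, and no SNORT or DARPA 1999 parsing is shown.

```python
import math
from collections import Counter

def per_type_recall(pairs):
    """pairs: iterable of (true_attack_type, assigned_type_or_None)."""
    hits, totals = Counter(), Counter()
    for true, pred in pairs:
        totals[true] += 1
        if pred == true:
            hits[true] += 1
    return {t: hits[t] / totals[t] for t in totals}

def mutual_information_bits(pairs):
    """Mutual information I(true; assigned) in bits over the labelled pairs.
    Used here only as an illustrative discrimination score."""
    pairs = list(pairs)
    n = len(pairs)
    joint = Counter(pairs)
    p_true = Counter(t for t, _ in pairs)
    p_pred = Counter(p for _, p in pairs)
    mi = 0.0
    for (t, p), c in joint.items():
        mi += (c / n) * math.log2(c * n / (p_true[t] * p_pred[p]))
    return mi

# Toy example: three attack types, one of them never labelled at all.
pairs = [("probe", "probe"), ("probe", "probe"), ("dos", "dos"),
         ("dos", "probe"), ("u2r", None), ("u2r", None)]
print(per_type_recall(pairs))
print(round(mutual_information_bits(pairs), 3))
```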

    Governance of Dual-Use Technologies: Theory and Practice

    The term dual-use characterizes technologies that can have both military and civilian applications. What is the state of current efforts to control the spread of these powerful technologies—nuclear, biological, cyber—that can simultaneously advance social and economic well-being and also be harnessed for hostile purposes? What have previous efforts to govern, for example, nuclear and biological weapons taught us about the potential for the control of these dual-use technologies? What are the implications for governance when the range of actors who could cause harm with these technologies includes not just national governments but also non-state actors like terrorists? These are some of the questions addressed by Governance of Dual-Use Technologies: Theory and Practice, the new publication released today by the Global Nuclear Future Initiative of the American Academy of Arts and Sciences. The publication's editor is Elisa D. Harris, Senior Research Scholar, Center for International Security Studies, University of Maryland School of Public Affairs. Governance of Dual-Use Technologies examines the similarities and differences between the strategies used for the control of nuclear technologies and those proposed for biotechnology and information technology. The publication makes clear the challenges concomitant with dual-use governance. For example, general agreement exists internationally on the need to restrict access to technologies enabling the development of nuclear weapons. However, no similar consensus exists in the bio and information technology domains. The publication also explores the limitations of military measures like deterrence, defense, and reprisal in preventing globally available biological and information technologies from being misused. Some of the other questions explored by the publication include: What types of governance measures for these dual-use technologies have already been adopted? What objectives have those measures sought to achieve? How have the technical characteristics of the technology affected governance prospects? What have been the primary obstacles to effective governance, and what gaps exist in the current governance regime? Are further governance measures feasible? In addition to a preface from Global Nuclear Future Initiative Co-Director Robert Rosner (University of Chicago) and an introduction and conclusion from Elisa Harris, Governance of Dual-Use Technologies includes: "On the Regulation of Dual-Use Nuclear Technology" by James M. Acton (Carnegie Endowment for International Peace); "Dual-Use Threats: The Case of Biotechnology" by Elisa D. Harris (University of Maryland); and "Governance of Information Technology and Cyber Weapons" by Herbert Lin (Stanford University).

    Data Mining

    The availability of big data due to computerization and automation has generated an urgent need for new techniques to analyze and convert big data into useful information and knowledge. Data mining is a promising and leading-edge technology for mining large volumes of data, looking for hidden information, and aiding knowledge discovery. It can be used for characterization, classification, discrimination, anomaly detection, association, clustering, trend or evolution prediction, and much more in fields such as science, medicine, economics, engineering, computing, and even business analytics. This book presents basic concepts, ideas, and research in data mining.
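
    As a minimal, self-contained illustration of two of the tasks listed above (clustering and anomaly detection), the sketch below uses scikit-learn on synthetic data; it is not drawn from the book, and the dataset, algorithms, and parameters are assumptions chosen only for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Two synthetic clusters plus a couple of obvious outliers.
data = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(5.0, 5.0), scale=0.5, size=(100, 2)),
    np.array([[10.0, -3.0], [-4.0, 9.0]]),   # anomalies
])

# Clustering: partition the points into two groups.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
# Anomaly detection: flag points that do not fit the bulk of the data (-1 = anomaly).
anomaly_flags = IsolationForest(random_state=0).fit_predict(data)

print("cluster sizes:", np.bincount(clusters))
print("points flagged as anomalies:", int((anomaly_flags == -1).sum()))
```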

    Understanding the extreme vulnerability of image classifiers to adversarial examples

    State-of-the-art deep networks for image classification are vulnerable to adversarial examples—misclassified images obtained by applying imperceptible, non-random perturbations to correctly classified test images. This vulnerability is somewhat paradoxical: how can these models perform so well if they are so sensitive to small perturbations of their inputs? Two early but influential explanations focused on the high non-linearity of deep networks and on the high dimensionality of image space. We review these explanations and highlight their limitations, before introducing a new perspective according to which adversarial examples exist when the classification boundary lies close to the manifold of normal data. We present a detailed mathematical analysis of the new perspective in binary linear classification, where the adversarial vulnerability of a classifier can be reduced to the deviation angle between its weight vector and the weight vector of the nearest centroid classifier. This analysis leads us to identify two types of adversarial examples: those affecting optimal classifiers, which are limited by a fundamental robustness/accuracy trade-off, and those affecting sub-optimal classifiers, resulting from imperfect training procedures or overfitting. We then show that L2 regularization plays an important role in practice, by acting as a balancing mechanism between two objectives: the minimization of the error and the maximization of the adversarial distance over the training set. We finally generalize our considerations to deep neural networks, reinterpreting in particular weight decay and adversarial training as belonging to the same family of output regularizers. While designing models that are robust to small image perturbations remains challenging, we show in the last chapter of this thesis that state-of-the-art networks can easily be made more vulnerable. Reversing the problem in this way exposes new attack scenarios and, crucially, helps improve our understanding of the adversarial example phenomenon by emphasizing the role played by low-variance directions.
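
    To make the binary linear case described above concrete, the sketch below computes the L2 distance from a point to a linear decision boundary, the minimal perturbation that reaches that boundary, and the deviation angle between the classifier's weight vector and the nearest-centroid weight vector. The toy data and the tilted classifier are assumptions for illustration; the code is not taken from the thesis.

```python
import numpy as np

def adversarial_distance(w, b, x):
    """L2 distance from x to the decision boundary of sign(w.x + b)."""
    return abs(w @ x + b) / np.linalg.norm(w)

def minimal_perturbation(w, b, x):
    """Smallest L2 perturbation moving x exactly onto the boundary."""
    return -(w @ x + b) / (w @ w) * w

def deviation_angle(w, X_pos, X_neg):
    """Angle (radians) between w and the nearest-centroid weight vector
    w_ncc = mean(X_pos) - mean(X_neg)."""
    w_ncc = X_pos.mean(axis=0) - X_neg.mean(axis=0)
    cos = (w @ w_ncc) / (np.linalg.norm(w) * np.linalg.norm(w_ncc))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Toy data: two Gaussian classes in 50 dimensions, separated along axis 0.
rng = np.random.default_rng(1)
d = 50
mu = np.zeros(d)
mu[0] = 2.0
X_pos = rng.normal(mu, 1.0, size=(200, d))
X_neg = rng.normal(-mu, 1.0, size=(200, d))

# A deliberately tilted classifier: its weight vector deviates from the
# centroid direction, so the boundary can pass close to normal data points.
w = mu.copy()
w[1] = 5.0
b = 0.0
x = X_pos[0]
print("distance to boundary :", round(adversarial_distance(w, b, x), 3))
print("deviation angle (deg):", round(np.degrees(deviation_angle(w, X_pos, X_neg)), 1))
print("perturbed point flips:", np.sign(w @ (x + 1.001 * minimal_perturbation(w, b, x)) + b))
```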

    Sailbot 2017-2018

    The goal of this MQP was to build and program a robot capable of competing in the 2018 International Robotic Sailing Competition (IRSC), also known as Sailbot. This project utilized existing research on the control and design of autonomous sailboats, and built on lessons learned from the last two years of WPI's Sailbot entries. The final product of this MQP was a more reliable, easier-to-control, and more innovative design than last year's event-winning boat.