
    A sense of self for power side-channel signatures: instruction set disassembly and integrity monitoring of a microcontroller system

    Cyber-attacks are on the rise, costing billions of dollars in damages, response, and investment annually. Critical United States National Security and Department of Defense weapons systems are no exception; however, the stakes go well beyond financial. Dependence upon a global supply chain without sufficient insight or control poses a significant issue. Additionally, systems are often designed with a presumption of trust, despite their microelectronics and software foundations being inherently untrustworthy. Achieving cybersecurity requires coordinated and holistic action across disciplines commensurate with the specific systems, mission, and threat. This dissertation explores an existing gap in low-level cybersecurity while proposing a side-channel based security monitor to support attack detection and the establishment of trusted foundations for critical embedded systems. Background on side-channel origins, typical side-channel attacks, and microarchitectural exploits is provided. A survey of related side-channel efforts is organized through a set of side-channel organizing principles, which enable comparison of dissimilar works across the side-channel spectrum. We find that the maturity of existing side-channel security monitors is insufficient, as key transition-to-practice considerations are often not accounted for or resolved. We then document the development, maturation, and assessment of a power side-channel disassembler, the Time-series Side-channel Disassembler (TSD), and extend it for use as a security monitor, the TSD-Integrity Monitor (TSD-IM). We also introduce a prototype microcontroller power side-channel collection fixture, with benefits for experimentation and transition to practice. TSD-IM is finally applied to a notional Point of Sale (PoS) application for a proof-of-concept evaluation. We find that TSD and TSD-IM advance the state of the art for side-channel disassembly and security monitoring in the open literature. In addition to our TSD and TSD-IM research on microcontroller signals, we explore beneficial side-channel measurement abstractions as well as the characterization of the underlying microelectronic circuits through Impulse Signal Analysis (ISA). While some positive results were obtained, we find that further research in these areas is necessary. Although the need for a non-invasive, on-demand microelectronics-integrity capability is supported, other methods may provide suitable near-term alternatives to ISA.
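
    The abstract does not spell out TSD's internals, but the core idea of power side-channel disassembly can be sketched as template matching over instruction-length trace segments. Below is a minimal sketch assuming pre-collected, aligned traces; the names, averaging-based templates, and Euclidean distance metric are all illustrative, not the dissertation's actual method.

```python
# Illustrative template-matching disassembler; not the actual TSD algorithm.
import numpy as np

def build_templates(traces_by_opcode):
    """Average the aligned training traces of each opcode into a template."""
    return {op: np.mean(np.stack(traces), axis=0)
            for op, traces in traces_by_opcode.items()}

def disassemble(trace, templates):
    """Classify one instruction-length power trace by its nearest template
    (Euclidean distance over the time series)."""
    return min(templates, key=lambda op: np.linalg.norm(trace - templates[op]))

def integrity_check(trace_seq, templates, expected_ops):
    """TSD-IM-style monitoring: flag execution whose recovered opcode
    sequence deviates from the expected program."""
    recovered = [disassemble(t, templates) for t in trace_seq]
    return recovered == list(expected_ops)
```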

    ANCHOR: logically-centralized security for Software-Defined Networks

    While the centralization of SDN brought advantages such as a faster pace of innovation, it also disrupted some of the natural defenses of traditional architectures against different threats. The literature on SDN has mostly been concerned with the functional side, despite some specific works concerning non-functional properties like 'security' or 'dependability'. Though addressing the latter in an ad-hoc, piecemeal way may work, it will most likely lead to efficiency and effectiveness problems. We claim that the enforcement of non-functional properties as a pillar of SDN robustness calls for a systemic approach. As a general concept, we propose ANCHOR, a subsystem architecture that promotes the logical centralization of non-functional properties. To show the effectiveness of the concept, we focus on 'security' in this paper: we identify the current security gaps in SDNs and we populate the architecture middleware with the appropriate security mechanisms, in a global and consistent manner. Essential security mechanisms provided by ANCHOR include reliable entropy and resilient pseudo-random generators, and protocols for secure registration and association of SDN devices. We claim and justify in the paper that centralizing such mechanisms is key to their effectiveness, by allowing us to: define and enforce global policies for those properties; reduce the complexity of controllers and forwarding devices; ensure higher levels of robustness for critical services; foster interoperability of the non-functional property enforcement mechanisms; and promote the security and resilience of the architecture itself. We discuss design and implementation aspects, and we prove and evaluate our algorithms and mechanisms, including the formalisation of the main protocols and the verification of their core security properties using the Tamarin prover.
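
    As a rough illustration of what a logically centralized "reliable entropy" service could look like, the sketch below hash-combines several independent sources so the pool stays unpredictable as long as at least one source is sound. The source list, pool handling, and derivation scheme are assumptions for illustration, not ANCHOR's actual design.

```python
# Hypothetical resilient entropy combiner; illustrative only.
import hashlib
import os
import time

def gather_sources():
    """Collect raw inputs from independent sources; a real service would
    add further sources such as hardware jitter or network timings."""
    return [os.urandom(32),
            time.perf_counter_ns().to_bytes(8, "little")]

def reseed(pool: bytes) -> bytes:
    """Fold fresh source material into the pool; the hash keeps the new
    pool unpredictable if any single contribution was."""
    return hashlib.sha256(pool + b"".join(gather_sources())).digest()

def prg_blocks(pool: bytes, n: int):
    """Derive output blocks from the pool with a counter and a domain tag."""
    for i in range(n):
        yield hashlib.sha256(pool + i.to_bytes(4, "little") + b"out").digest()
```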

    5G RF Spectrum-based Cryptographic Pseudo Random Number Generation for IoT Security

    This thesis presents a novel approach for generating truly random numbers in 5G wireless communication systems using the radio frequency (RF) spectrum. The proposed method leverages variations in the RF spectrum to create entropy, which is then used to generate truly random numbers. This approach is based on channel state information (CSI) measured at the receiver in 5G systems and utilizes the variability of the CSI to extract entropy for random number generation. The proposed method has several advantages over traditional random number generators, including the use of a natural source of entropy in 5G wireless communication systems, minimal hardware and computational resource requirements, and a high level of security due to the use of physical characteristics of the wireless channel that are difficult for attackers to predict or manipulate. Simulation results demonstrate that the proposed method generates high-entropy random numbers, passes statistical randomness tests, and outperforms traditional random number generators regarding energy consumption and computational complexity. This approach has the potential to improve the security of cryptographic protocols in 5G networks.
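
    The exact extractor is not specified in the abstract, but a CSI-based generator of this kind typically quantizes the channel measurements, keeps the noisy low-order bits, debiases them, and conditions the result with a hash. The sketch below illustrates that pipeline; the quantization scale, bit depth, and choice of SHA-256 for conditioning are assumptions.

```python
# Illustrative CSI-to-randomness pipeline; parameters are assumptions.
import hashlib
import numpy as np

def csi_to_bits(csi, lsb=2):
    """Keep the `lsb` low-order bits of quantized CSI magnitudes,
    where most of the measurement noise (entropy) lives."""
    q = np.round(np.abs(csi) * 1024).astype(np.int64)
    mask = (1 << lsb) - 1
    return [int(b) for v in q for b in format(int(v) & mask, f"0{lsb}b")]

def von_neumann(bits):
    """Classic debiasing: 01 -> 0, 10 -> 1, discard 00/11 pairs."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

def random_block(csi) -> bytes:
    """Pack the debiased bits into bytes and condition with a hash."""
    bits = von_neumann(csi_to_bits(csi))
    packed = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                   for i in range(0, len(bits) - 7, 8))
    return hashlib.sha256(packed).digest()
```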

    GeantV: Results from the prototype of concurrent vector particle transport simulation in HEP

    Full detector simulation was among the largest CPU consumers in all CERN experiment software stacks for the first two runs of the Large Hadron Collider (LHC). In the early 2010s, the projections were that simulation demands would scale linearly with the luminosity increase, compensated only partially by an increase in computing resources. The extension of fast simulation approaches to more use cases, covering a larger fraction of the simulation budget, is only part of the solution, due to intrinsic precision limitations. The remainder corresponds to speeding up the simulation software by several factors, which is out of reach using simple optimizations on the current code base. In this context, the GeantV R&D project was launched, aiming to redesign the legacy particle transport codes in order to make them benefit from fine-grained parallelism features such as vectorization, but also from increased code and data locality. This paper presents in detail the results and achievements of this R&D, as well as the conclusions and lessons learnt from the beta prototype.
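
    A flavor of the fine-grained parallelism GeantV targeted can be conveyed with a basketized, structure-of-arrays propagation step: tracks are grouped into baskets and advanced together, so the processor's SIMD units operate on whole arrays rather than one particle at a time. The toy sketch below illustrates the data layout only; it is not GeantV's transport code.

```python
# Toy basketized propagation in structure-of-arrays layout; not GeantV code.
import numpy as np

def propagate_basket(pos, dirs, steps):
    """Advance a basket of N tracks at once; pos and dirs are (N, 3) arrays.
    A scalar transport loop would make N small calls; this layout lets the
    whole basket be processed with a few wide vector instructions."""
    return pos + steps[:, None] * dirs

# Usage: a basket of 1024 tracks advanced in one vectorized call.
rng = np.random.default_rng(0)
pos = rng.normal(size=(1024, 3))
dirs = rng.normal(size=(1024, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions
steps = rng.uniform(0.1, 1.0, size=1024)              # per-track step lengths
pos = propagate_basket(pos, dirs, steps)
```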

    Efficient Resource Management Mechanism for 802.16 Wireless Networks Based on Weighted Fair Queuing

    Wireless networking continues to be one of the most commonly used means of communication. The evolution of this technology has taken place through the design of various protocols. Some common wireless protocols are WLAN, 802.16 or WiMAX, and the emerging 802.20, which specializes in high-speed vehicular networks, taking the concept of 802.16 to higher levels of performance. As with any large network, congestion becomes an important issue, and it gains importance as more hosts join a wireless network. In most cases, congestion is caused by the lack of an efficient mechanism to deal with exponential increases in host devices, which can lead to severe bottlenecks and sluggish performance that eventually reduce the speed of the network. With continuous advancement being the trend in this technology, an efficient scheme for wireless resource allocation is an important solution to the problem of congestion. The primary area of focus is the emerging standard for wireless networks, 802.16 or “WiMAX”. This project proposes an effective resource management mechanism between subscriber stations and the corresponding base station.
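
    For context, the weighted fair queuing discipline the project builds on can be sketched as follows: each arriving packet is stamped with a virtual finish time of max(flow's last finish, current virtual time) + size/weight, and the scheduler always serves the smallest stamp. The sketch below is a simplified single-node model (full WFQ evolves virtual time with the set of active flows); the class and method names are illustrative.

```python
# Simplified weighted fair queuing (WFQ); names and details illustrative.
import heapq

class WFQScheduler:
    def __init__(self):
        self.virtual_time = 0.0
        self.last_finish = {}   # per-flow last virtual finish time
        self.queue = []         # heap of (finish_time, seq, flow, size)
        self.seq = 0            # tie-breaker for equal finish times

    def enqueue(self, flow, size, weight):
        """Stamp the packet: heavier-weighted flows finish sooner per byte."""
        start = max(self.last_finish.get(flow, 0.0), self.virtual_time)
        finish = start + size / weight
        self.last_finish[flow] = finish
        heapq.heappush(self.queue, (finish, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        """Serve the packet with the smallest virtual finish time."""
        finish, _, flow, size = heapq.heappop(self.queue)
        self.virtual_time = finish
        return flow, size
```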

    Toward Sensor-Based Random Number Generation for Mobile and IoT Devices

    The importance of random number generators (RNGs) to various computing applications is well understood. To ensure a quality level of output, high-entropy sources should be utilized as input. However, the algorithms used have not yet fully evolved to utilize newer technology. Even the Android pseudo RNG (APRNG) merely builds atop the Linux RNG to produce random numbers. This paper presents an exploratory study into methods of generating random numbers on sensor-equipped mobile and Internet of Things devices. We first perform a data collection study across 37 Android devices to determine two things: how much random data is consumed by modern devices, and which sensors are capable of producing sufficiently random data. We use the results of our analysis to create an experimental framework called SensoRNG, which serves as a prototype to test the efficacy of a sensor-based RNG. SensoRNG collects data from on-board sensors and combines it via a lightweight mixing algorithm to produce random numbers. We evaluate SensoRNG with the National Institute of Standards and Technology statistical testing suite and demonstrate that a sensor-based RNG can provide high-quality random numbers with only a small additional overhead.
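
    The paper's lightweight mixing algorithm is not detailed in the abstract; the sketch below shows one plausible shape for such a mixer, packing several raw sensor readings and conditioning them with a hash so that bias in any single sensor is diluted by the others. The sensor names and the choice of SHA-256 are assumptions.

```python
# Hypothetical sensor-mixing step; not SensoRNG's actual algorithm.
import hashlib
import struct

def mix(readings: dict) -> bytes:
    """Pack each sensor's float reading with its name and hash the result,
    so no single biased sensor dominates the output block."""
    raw = b"".join(name.encode() + struct.pack("<d", value)
                   for name, value in sorted(readings.items()))
    return hashlib.sha256(raw).digest()

# Usage with hypothetical samples from three on-board sensors:
block = mix({"accel_x": 0.0132, "gyro_z": -0.407, "mag_y": 23.9})
```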

    On-the-fly adaptivity for nonlinear twoscale simulations using artificial neural networks and reduced order modeling

    A multi-fidelity surrogate model for highly nonlinear multiscale problems is proposed. It is based on the introduction of two different surrogate models and an adaptive on-the-fly switching between them. The two concurrent surrogates are built incrementally, starting from a moderate set of evaluations of the full order model. First, a reduced order model (ROM) is generated. Using a hybrid ROM-preconditioned FE solver, additional effective stress-strain data is simulated while the number of samples is kept to a moderate level by means of a dedicated, physics-guided sampling technique. Machine learning (ML) is subsequently used to build the second surrogate by means of artificial neural networks (ANN). Different ANN architectures are explored, and the features used as inputs of the ANN are fine-tuned in order to improve the overall quality of the ML model. Additional ANN surrogates for the stress errors are generated, and conservative design guidelines for these error surrogates are presented by adapting the loss functions of the ANN training in pure regression or pure classification settings. The error surrogates can be used as quality indicators in order to adaptively select the appropriate, i.e., efficient yet accurate, surrogate. Two strategies for the on-the-fly switching are investigated, and a practicable and robust algorithm is proposed that eliminates relevant technical difficulties attributed to model switching. The provided algorithms and ANN design guidelines can easily be adopted for different problem settings and thereby enable generalization of the used machine learning techniques for a wide range of applications. The resulting hybrid surrogate is employed in challenging multilevel FE simulations of a three-phase composite with pseudo-plastic micro-constituents. Numerical examples highlight the performance of the proposed approach.
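
    The on-the-fly switching itself can be summarized as a short decision chain: consult the error surrogates and fall back from the cheap ANN to the ROM, and ultimately to the full order model, whenever the predicted stress error exceeds a tolerance. The sketch below is a schematic of that chain under assumed callables; it is not the paper's actual algorithm.

```python
# Schematic surrogate-switching logic; all callables are assumed interfaces.
def evaluate(strain, ann, ann_error, rom, rom_error, fom, tol=1e-3):
    """Return the cheapest model response whose predicted error is within
    tolerance, falling back to the full order model (FOM) as a last resort."""
    if ann_error(strain) <= tol:   # ANN surrogate trusted for this input
        return ann(strain)
    if rom_error(strain) <= tol:   # ROM: more expensive, more accurate
        return rom(strain)
    return fom(strain)             # full order model guarantees accuracy
```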