
    An update on Keccak performance on ARMv7-M

    This note provides an update on Keccak performance on ARMv7-M processors. Starting from the XKCP implementation, we have applied architecture-specific optimizations that have yielded a performance gain of up to 21% for the largest permutation instance.
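
The note does not spell out the individual optimizations, but a standard technique for running the 64-bit lanes of Keccak-f[1600] on 32-bit cores such as ARMv7-M is bit interleaving: each lane is split into its even- and odd-indexed bits, so that every 64-bit rotation becomes one or two native 32-bit rotations. The Python sketch below is a conceptual illustration of that trick, not the XKCP code itself:

```python
MASK32 = 0xFFFFFFFF

def rol32(x, n):
    """Rotate a 32-bit word left by n (0 <= n < 32)."""
    n %= 32
    return ((x << n) | (x >> (32 - n))) & MASK32

def interleave(lane):
    """Split a 64-bit lane into its even-indexed and odd-indexed bits."""
    even = odd = 0
    for i in range(32):
        even |= ((lane >> (2 * i)) & 1) << i
        odd  |= ((lane >> (2 * i + 1)) & 1) << i
    return even, odd

def deinterleave(even, odd):
    """Inverse of interleave: reassemble the 64-bit lane."""
    lane = 0
    for i in range(32):
        lane |= ((even >> i) & 1) << (2 * i)
        lane |= ((odd  >> i) & 1) << (2 * i + 1)
    return lane

def rol64_interleaved(even, odd, n):
    """64-bit rotation on an interleaved lane using only 32-bit rotations."""
    if n % 2 == 0:
        return rol32(even, n // 2), rol32(odd, n // 2)
    # An odd rotation amount swaps the halves: odd bits land on even positions.
    return rol32(odd, (n + 1) // 2), rol32(even, n // 2)
```

In a real implementation the (de)interleaving is done once when the state is loaded and stored, so the per-round cost reduces to the cheap 32-bit rotations.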

    Rigorous engineering for hardware security: Formal modelling and proof in the CHERI design and implementation process

    The root causes of many security vulnerabilities include a pernicious combination of two problems, often regarded as inescapable aspects of computing. First, the protection mechanisms provided by the mainstream processor architecture and C/C++ language abstractions, dating back to the 1970s and before, provide only coarse-grain virtual-memory-based protection. Second, mainstream system engineering relies almost exclusively on test-and-debug methods, with (at best) prose specifications. These methods have historically sufficed commercially for much of the computer industry, but they fail to prevent large numbers of exploitable bugs, and the security problems that this causes are becoming ever more acute. In this paper we show how more rigorous engineering methods can be applied to the development of a new security-enhanced processor architecture, with its accompanying hardware implementation and software stack. We use formal models of the complete instruction-set architecture (ISA) at the heart of the design and engineering process, both in lightweight ways that support and improve normal engineering practice -- as documentation, in emulators used as a test oracle for hardware and for running software, and for test generation -- and for formal verification. We formalise key intended security properties of the design, and establish that these hold with mechanised proof. This is for the same complete ISA models (complete enough to boot operating systems), without idealisation. We do this for CHERI, an architecture with hardware capabilities that supports fine-grained memory protection and scalable secure compartmentalisation, while offering a smooth adoption path for existing software. CHERI is a maturing research architecture, developed since 2010, with work now underway on an Arm industrial prototype to explore its possible adoption in mass-market commercial processors.
The rigorous engineering work described here has been an integral part of its development to date, enabling more rapid and confident experimentation, and boosting confidence in the design. This work was supported by EPSRC programme grant EP/K008528/1 (REMS: Rigorous Engineering for Mainstream Systems). This work was supported by a Gates studentship (Nienhuis). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 789108, ELVER). This work was supported by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL), under contracts FA8750-10-C-0237 (CTSRD), HR0011-18-C-0016 (ECATS), and FA8650-18-C-7809 (CIFV).
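
The hardware capabilities the abstract mentions can be made concrete with a toy model: a capability is an unforgeable pointer that carries bounds and permissions, every access is checked, and a derived capability can only shrink its rights (monotonicity). The Python sketch below is purely illustrative; the class, field names, and checks are invented here and do not reflect CHERI's actual compressed capability encoding:

```python
class CapabilityError(Exception):
    pass

class Capability:
    """Toy model of a CHERI-style capability: a pointer that carries
    bounds and permissions, checked on every access. Illustrative only."""
    def __init__(self, memory, base, length, perms=("load", "store")):
        self.memory, self.base, self.length = memory, base, length
        self.perms = frozenset(perms)

    def _check(self, offset, perm):
        if perm not in self.perms:
            raise CapabilityError(f"missing permission: {perm}")
        if not (0 <= offset < self.length):
            raise CapabilityError("out-of-bounds access")

    def load(self, offset):
        self._check(offset, "load")
        return self.memory[self.base + offset]

    def store(self, offset, value):
        self._check(offset, "store")
        self.memory[self.base + offset] = value

    def restrict(self, base_off, length, perms):
        """Derive a smaller capability; rights can only shrink, never grow."""
        if base_off + length > self.length or not set(perms) <= self.perms:
            raise CapabilityError("cannot grow rights")
        return Capability(self.memory, self.base + base_off, length, perms)
```

In CHERI proper these checks happen in hardware on every memory access, and the formal proofs in the paper establish that no sequence of instructions can forge or grow a capability.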

    An Industrial Data Analysis and Supervision Framework for Predictive Manufacturing Systems

    Due to the advancements in the Information and Communication Technologies field in the modern interconnected world, the manufacturing industry is becoming an increasingly data-rich environment, with large volumes of data being generated on a daily basis, thus presenting a new set of opportunities to be explored towards improving the efficiency and quality of production processes. This can be done through the development of so-called Predictive Manufacturing Systems. These systems aim to improve manufacturing processes through a combination of concepts such as Cyber-Physical Production Systems, Machine Learning and real-time Data Analytics in order to predict future states and events in production. This can be used in a wide array of applications, including predictive maintenance policies, improving quality control through the early detection of faults and defects, or optimizing energy consumption, to name a few. Therefore, the research efforts presented in this document focus on the design and development of a generic framework to guide the implementation of predictive manufacturing systems through a set of common requirements and components. This approach aims to enable manufacturers to extract, analyse, interpret and transform their data into actionable knowledge that can be leveraged into a business advantage. To this end, a list of goals, functional and non-functional requirements is defined for these systems based on a thorough literature review and empirical knowledge. Subsequently, the Intelligent Data Analysis and Real-Time Supervision (IDARTS) framework is proposed, along with a detailed description of each of its main components. Finally, a pilot implementation is presented for each of these components, followed by the demonstration of the proposed framework in three different scenarios including several use cases in varied real-world industrial areas.
In this way, the proposed work aims to provide a common foundation for the full realization of Predictive Manufacturing Systems.
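
The abstract describes IDARTS at the level of requirements and components rather than algorithms. As one illustration of the kind of real-time analysis building block such a framework might host (this detector is invented here and is not part of IDARTS), a minimal streaming anomaly detector for a sensor feed can be built from an exponentially weighted mean and variance:

```python
import math

class EWMADetector:
    """Flags sensor readings that deviate from a running estimate.
    Illustrative only -- the IDARTS framework defines components and
    requirements, not this particular detector."""
    def __init__(self, alpha=0.1, threshold=3.0):
        self.alpha, self.threshold = alpha, threshold
        self.mean, self.var, self.n = 0.0, 0.0, 0

    def update(self, x):
        """Return True if x is anomalous w.r.t. the statistics so far."""
        self.n += 1
        if self.n == 1:
            self.mean = x
            return False
        std = math.sqrt(self.var) or 1e-9
        # Compare against the pre-update statistics, after a warm-up phase.
        is_anomaly = self.n > 10 and abs(x - self.mean) > self.threshold * std
        # Exponentially weighted mean/variance update (West's recurrence).
        diff = x - self.mean
        incr = self.alpha * diff
        self.mean += incr
        self.var = (1 - self.alpha) * (self.var + diff * incr)
        return is_anomaly
```

A predictive-maintenance pipeline would feed each machine signal through such a detector and raise a supervision event when an anomaly is flagged.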

    Optimized Lattice Basis Reduction In Dimension 2, and Fast Schnorr and EdDSA Signature Verification

    We present an optimization of Lagrange's algorithm for lattice basis reduction in dimension 2. The optimized algorithm is proven to be correct and to always terminate with quadratic complexity; it uses more iterations on average than Lagrange's algorithm, but each iteration is much simpler to implement, and faster. The achieved speed is such that it makes application of the speed-up on ECDSA and EC Schnorr signatures described by Antipa et al. worthwhile, even for very fast curves such as Ed25519. We applied this technique to signature verification in Curve9767, and reduced verification time by 30 to 33% on both small (ARM Cortex M0+ and M4) and large (Intel Coffee Lake with AVX2) architectures.
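
For reference, the classical Lagrange (Gauss) reduction that the paper takes as its starting point fits in a few lines; the paper's contribution is an optimized variant with simpler, faster per-iteration steps, which this textbook sketch does not reflect:

```python
def lagrange_reduce(u, v):
    """Textbook Lagrange (Gauss) reduction of a 2D lattice basis, given as
    integer pairs (assumed linearly independent). Returns a reduced basis,
    shortest vector first. This is the classical algorithm the paper
    optimizes, shown for reference."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) < dot(v, v):
        u, v = v, u                    # maintain |u| >= |v|
    while True:
        # Size-reduce u by the nearest-integer multiple of v.
        q = round(dot(u, v) / dot(v, v))
        r = (u[0] - q * v[0], u[1] - q * v[1])
        if dot(r, r) >= dot(v, v):     # no further improvement: reduced
            return v, r
        u, v = v, r
```

On the basis ((3, 4), (1, 1)), which generates the full integer lattice, the reduction returns two vectors of squared norm 1, as expected.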

    Mustang Daily, September 30, 1985

    Student newspaper of California Polytechnic State University, San Luis Obispo, CA.

    Learning from videos with deep convolutional LSTM networks

    Many methods for learning from video sequences involve temporally processing 2D CNN features from the individual frames or directly utilizing 3D convolutions within high-performing 2D CNN architectures. The focus typically remains on how to incorporate the temporal processing within an already stable spatial architecture. This research explores the use of convolutional LSTMs to simultaneously learn spatial and temporal information in videos. A deep network of convolutional LSTMs allows the model to access the entire range of temporal information at all spatial scales of the data. This work first constructs an MNIST-based video dataset with parameters controlling relevant facets of common video-related tasks: classification, ordering, and speed estimation. Models trained on this dataset are shown to differ in key ways depending on the task and their use of 2D convolutions, 3D convolutions, or convolutional LSTMs. An empirical analysis indicates a complex, interdependent relationship between the spatial and temporal dimensions, with design choices having a large impact on a network's ability to learn the appropriate spatiotemporal features. In addition, experiments involving convolutional LSTMs for action recognition and lipreading demonstrate that the model is capable of selectively choosing which spatiotemporal scales are most relevant for a particular dataset. The proposed deep architecture also holds promise in other applications where spatiotemporal features play a vital role, without the network design having to be specifically catered to the particular spatiotemporal features present in the problem. Our model achieves performance comparable to the current state of the art, reaching 83.4% on the Lip Reading in the Wild (LRW) dataset. Additional experiments indicate convolutional LSTMs may be particularly data-hungry, considering the large performance increases when fine-tuning on LRW after pretraining on larger datasets like LRS2 (85.2%) and LRS3-TED (87.1%).
However, a sensitivity analysis providing insight on the relevant spatiotemporal features allows certain convolutional LSTM layers to be replaced with 2D convolutions, decreasing computational cost without performance degradation and indicating the usefulness of such analyses in accelerating the architecture design process when approaching new problems.
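
The core building block, a convolutional LSTM cell, replaces the matrix multiplications in a standard LSTM's gate computations with convolutions, so the hidden and cell states keep their spatial layout across time. The single-channel numpy sketch below illustrates the recurrence only; it is a conceptual toy, not the paper's architecture or training setup:

```python
import numpy as np

def conv2d_same(x, k):
    """'Same'-padded 2D cross-correlation ("conv" in the DL convention)
    for one channel, written with explicit loops for clarity."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Single-channel convolutional LSTM cell: the four gate
    pre-activations (i, f, o, g) are convolutions over the input frame
    and the hidden state, preserving spatial structure over time."""
    def __init__(self, ksize=3, seed=0):
        rng = np.random.default_rng(seed)
        # One input kernel and one recurrent kernel per gate.
        self.kx = rng.normal(0, 0.1, size=(4, ksize, ksize))
        self.kh = rng.normal(0, 0.1, size=(4, ksize, ksize))

    def step(self, x, h, c):
        pre = [conv2d_same(x, self.kx[g]) + conv2d_same(h, self.kh[g])
               for g in range(4)]
        i, f, o = sigmoid(pre[0]), sigmoid(pre[1]), sigmoid(pre[2])
        g = np.tanh(pre[3])
        c_new = f * c + i * g          # same cell-state update as an LSTM
        h_new = o * np.tanh(c_new)
        return h_new, c_new
```

Stacking such cells gives every layer access to temporal information at its own spatial scale, which is the property the dissertation exploits.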

    PROLEAD_SW - Probing-Based Software Leakage Detection for ARM Binaries

    A decisive contribution to the all-embracing protection of cryptographic software, especially on embedded devices, is protection against SCA attacks. Masking countermeasures can usually be integrated into the software during the design phase. In theory, this should provide reliable protection against such physical attacks. However, the correct application of masking is a non-trivial task that often causes even experts to make mistakes. In addition to human-caused errors, micro-architectural CPU effects can cause even a seemingly theoretically correct implementation to fail to satisfy the desired level of security in practice. This leakage originates from different components of the underlying CPU, which complicates tracing it back to a particular source and hence prevents making general and device-independent statements about security. PROLEAD was recently presented at CHES 2022 and was originally developed as a simulation-based tool to evaluate masked hardware designs. In this work, we adapt PROLEAD for the evaluation of masked software, transferring the known benefits of PROLEAD into the software world. These include (1) evaluation of larger designs compared to the state of the art, e.g. a fully masked AES implementation, and (2) formal verification under our new generic leakage model for CPUs. Concretely, we formalize leakages observed across different CPU architectures into a generic abstraction model that includes all these leakages and is therefore independent of a specific CPU design. Our resulting tool PROLEAD_SW allows us to make a formal statement about security based on the derived generic model. As a concrete result, using PROLEAD_SW we evaluated the security of several publicly available masked software implementations under our new generic leakage model and revealed multiple vulnerabilities.
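
The kind of micro-architectural effect such a tool must model can be illustrated with first-order Boolean masking. Observed in isolation, each share is statistically independent of the secret; but a transition effect, such as a register that held one share being overwritten with the other, leaks the Hamming distance between the shares, in which the mask cancels. The sketch below is an invented illustration of this effect, not PROLEAD_SW's actual leakage model:

```python
import random

def hw(x):
    """Hamming weight (number of set bits)."""
    return bin(x).count("1")

def value_leakage(secret):
    """Each share observed separately: under a fresh uniform mask, the
    Hamming weight of either share is independent of the secret."""
    mask = random.randrange(256)
    share0, share1 = mask, secret ^ mask
    return hw(share0), hw(share1)

def transition_leakage(secret):
    """Micro-architectural effect: overwriting a register holding share0
    with share1 leaks the Hamming distance between them -- which equals
    HW(secret), because the mask cancels out. Masking is defeated."""
    mask = random.randrange(256)
    share0, share1 = mask, secret ^ mask
    return hw(share0 ^ share1)   # = hw(secret), independent of the mask
```

A probing-based tool checks, under its leakage model, that no such observable function of the computation depends statistically on the secret.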

    Towards Secure and Trustworthy IoT Systems

    The boom of the Internet of Things (IoT) brings great convenience to society by connecting the physical world to the cyber world, but it also attracts mischievous hackers seeking benefits. Therefore, understanding potential attacks aimed at IoT systems and devising new protection mechanisms are of great significance for maintaining the security and privacy of the IoT ecosystem. In this dissertation, we first demonstrate potential threats against IoT networks and their severe consequences via analyzing a real-world air quality monitoring system. By exploiting the discovered flaws, we can impersonate any victim sensor device and pollute its data with fabricated data. It is a great challenge to fight against runtime software attacks targeting IoT devices based on microcontrollers (MCUs) due to the heterogeneity and constrained computational resources of MCUs. An emerging hardware-based solution is TrustZone-M, which isolates the trusted execution environment from the vulnerable rich execution environment. Though TrustZone-M provides the platform for implementing various protection mechanisms, programming TrustZone-M may introduce a new attack surface. We explore the feasibility of launching five exploits in the context of TrustZone-M and validate these attacks using SAM L11, a Microchip MCU with TrustZone-M enabled. We then propose a security framework for IoT devices using TrustZone-M enabled MCUs, in which device security is protected in five dimensions. The security framework is implemented and evaluated with a full-fledged secure and trustworthy air quality monitoring device using SAM L11 as its MCU. Based on TrustZone-M, a function-based ASLR (fASLR) scheme is designed for runtime software security of IoT devices. fASLR is capable of trapping and modifying control flow upon a function call and randomizing the callee function before its execution. Evaluation results show that fASLR achieves high entropy with low overheads.
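
The fASLR idea -- trap a function call, place the callee at a fresh random address, then execute it -- can be caricatured in a few lines. The following simulation is invented purely for illustration; the real scheme operates on machine code from the TrustZone-M secure world, not on Python callables:

```python
import random

class FunctionASLR:
    """Toy simulation of function-level ASLR: a function receives a fresh
    random (word-aligned) load address the first time it is called.
    Illustrative only -- names and structure are invented here."""
    def __init__(self, region_size=0x10000, seed=None):
        self.region_size = region_size
        self.rng = random.Random(seed)
        self.load_address = {}   # function name -> randomized address

    def call(self, name, func, *args):
        if name not in self.load_address:
            # "Relocate" the callee to a random word-aligned address
            # before its first execution, as the trap handler would.
            self.load_address[name] = self.rng.randrange(
                0, self.region_size, 4)
        return func(*args)
```

Because the address is chosen per function and per boot rather than once for the whole image, an attacker cannot reuse a leaked address of one function to locate another, which is the source of the scheme's entropy.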

    Composite Modeling based on Distributed Graph Transformation and the Eclipse Modeling Framework

    Model-driven development (MDD) has become a promising trend in software engineering for a number of reasons. Models as the key artifacts help developers to abstract from irrelevant details, focus on important aspects of the underlying domain, and thus master complexity. As software systems grow, models may grow as well and finally become too large to be developed and maintained in a comprehensible way. In traditional software development, the complexity of software systems is tackled by dividing the system into smaller cohesive parts, so-called components, and letting distributed teams work on each concurrently. The question arises how this strategy can be applied to model-driven development. The overall aim of this thesis is to develop a formalized modularization concept to enable the structured and largely independent development of interrelated models in larger teams. To this end, this thesis proposes component models with explicit export and import interfaces, where exports declare what is provided while imports declare what is needed. Component models can then be connected via their compatible export and import interfaces, yielding so-called composite models. Suited to composite models, a transformation approach is developed which allows changes to be described over the whole composition structure. From the practical point of view, this concept especially targets models based on the Eclipse Modeling Framework (EMF). In the modeling community, EMF has evolved into a very popular framework which provides modeling and code generation facilities for Java applications based on structured data models. Since graphs are a natural way to represent the underlying structure of visual models, the formalization is based on graph transformation. The distribution concepts incorporated here rely heavily on the distributed graph transformation introduced by Taentzer.
Typed graphs with inheritance and containment structures are well suited to describe the essentials of EMF models. However, they also induce a number of constraints, such as acyclic inheritance and containment, which have to be taken into account. The category-theoretical foundation in this thesis allows for the precise definition of consistent composite graph transformations satisfying all inheritance and containment conditions. The composite modeling approach is shown to be coherent with the development of tool support for composite EMF models and composite EMF model transformation.
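
The export/import matching at the heart of composite models can be illustrated with a small structural check: an import interface binds to an export interface only if every required element is provided under a compatible name and type. The sketch below is a hypothetical simplification; the thesis formalizes this with typed graphs and category theory rather than dictionaries:

```python
class Component:
    """Toy component model with explicit export/import interfaces.
    Invented for illustration; the thesis works with typed graphs."""
    def __init__(self, name, exports=None, imports=None):
        self.name = name
        self.exports = exports or {}   # element name -> type name
        self.imports = imports or {}   # element name -> type name

def can_connect(importer, exporter):
    """An import interface can bind to an export interface iff every
    required element is exported under the same name and type."""
    return all(exporter.exports.get(elem) == typ
               for elem, typ in importer.imports.items())
```

A composite model is then a set of components together with such bindings, and a composite transformation must keep every binding valid while rewriting the bodies.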