    FPGA-Augmented Secure Crash-Consistent Non-Volatile Memory

    Emerging byte-addressable Non-Volatile Memory (NVM) technology, although promising superior memory density and ultra-low energy consumption, poses unique challenges to achieving persistent data privacy and computing security, both of which are critically important to embedded and IoT applications. Specifically, to successfully restore NVMs to their working states after unexpected system crashes or power failures, maintaining and recovering all the necessary security-related metadata can severely increase memory traffic, degrade runtime performance, exacerbate the write endurance problem, and demand costly hardware changes to off-the-shelf processors. In this thesis, we summarize and expand upon two of our innovative works, ARES and HERMES, to design a new FPGA-assisted, processor-transparent security mechanism that efficiently and effectively achieves all three aspects of the security triad (confidentiality, integrity, and recoverability) in modern embedded computing. Given the growing prominence of CPU-FPGA heterogeneous computing architectures, ARES leverages the FPGA's hardware reconfigurability to offload performance-critical, security-related functions to the programmable hardware without the microprocessor's involvement. In particular, recognizing that the traditional Merkle tree caching scheme cannot fully exploit the FPGA's parallelism because of its sequential, recursive function calls, ARES proposes a new Merkle tree cache architecture and a novel Merkle tree scheme that flattens and reorganizes the computation of the traditional Merkle tree verification and update processes, fully exploiting the parallel cache ports and fully pipelining the time-consuming hashing operations. To further optimize the throughput of Bonsai Merkle Tree (BMT) operations, HERMES proposes an optimally efficient dataflow architecture that processes multiple outstanding counter requests simultaneously. Specifically, HERMES explores and addresses three technical challenges in exploiting the task-level parallelism of BMT operations and proposes a speculative execution approach with both low latency and high throughput.
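
    To make the flattening idea concrete, the following minimal Python sketch contrasts a recursive root-ward walk with a flattened form in which every (node, sibling) address on a verification path is computed up front, so each level's fetches and hashes could be issued to parallel cache ports and a pipelined hash engine. The function names (hash_pair, path_indices, verify), the SHA-256 stand-in, and the tree[level][index] layout are illustrative assumptions, not ARES's actual hardware design.

        # Illustrative sketch (not the authors' RTL): flatten a Merkle tree
        # verification path so that, instead of a sequential recursive walk,
        # every (node, sibling) address on the path is known up front and the
        # per-level fetches can be issued to parallel cache ports.
        import hashlib

        def hash_pair(left: bytes, right: bytes) -> bytes:
            # SHA-256 stands in for whatever hash/MAC engine the hardware uses.
            return hashlib.sha256(left + right).digest()

        def path_indices(leaf: int, levels: int):
            # Flatten the root-ward walk without recursion.
            pairs, node = [], leaf
            for _ in range(levels):
                pairs.append((node, node ^ 1))  # sibling differs in the low bit
                node //= 2                      # parent index on the next level
            return pairs

        def verify(tree, leaf: int, levels: int, root: bytes) -> bool:
            # tree[level][index] -> node hash; level 0 holds the leaves.
            digest = tree[0][leaf]
            for level, (node, sibling) in enumerate(path_indices(leaf, levels)):
                left, right = ((digest, tree[level][sibling]) if node % 2 == 0
                               else (tree[level][sibling], digest))
                digest = hash_pair(left, right)
            return digest == root

        # Tiny demo: a 4-leaf tree (level 0 = leaves, the root sits two levels up).
        leaves = [bytes([i]) * 32 for i in range(4)]
        level1 = [hash_pair(leaves[0], leaves[1]), hash_pair(leaves[2], leaves[3])]
        tree = {0: leaves, 1: level1}
        print(verify(tree, leaf=2, levels=2, root=hash_pair(level1[0], level1[1])))  # True

    Because every address is known before any hashing starts, a hardware implementation can overlap the per-level memory accesses and keep the hash pipeline full instead of serializing work behind recursive calls.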

    Information Leakage Attacks and Countermeasures

    The scientific community has been working consistently on the pervasive problem of information leakage, uncovering numerous attack vectors and proposing various countermeasures. Despite these efforts, leakage incidents remain prevalent as systems and protocols grow more complex and sophisticated modeling methods become more accessible to adversaries. This work studies how information leakage manifests in, and impacts, interconnected systems and their users. We first focus on online communications and investigate leakage in the Transport Layer Security (TLS) protocol. Using modern machine learning models, we show that an eavesdropping adversary can efficiently exploit meta-information (e.g., packet size) not protected by TLS encryption to launch fingerprinting attacks at an unprecedented scale, even under non-optimal conditions. We then turn our attention to ultrasonic communications and discuss their security shortcomings and how adversaries could exploit them to compromise users of anonymity networks (even though these networks aim to offer a greater level of privacy than TLS). Following up on these findings, we delve into physical-layer leakage that concerns a wide array of (networked) systems such as servers, embedded nodes, Tor relays, and hardware cryptocurrency wallets. We revisit location-based side-channel attacks and develop an exploitation neural network. Our model demonstrates the capabilities of a modern adversary but also provides an inexpensive tool for auditors to detect such leakage early in the development cycle. Subsequently, we investigate techniques that further minimize the impact of leakage found in production components. Our proposed system design distributes both the custody of secrets and the execution of cryptographic operations across several components, making the exploitation of leaks difficult.
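
    As an illustration of the attack class described above, a minimal Python sketch of a metadata-only fingerprinting classifier follows; the feature layout, the random-forest model, and the names features and fit_fingerprinter are assumptions chosen for brevity, not the thesis's actual pipeline.

        # Illustrative sketch: fingerprint TLS-encrypted connections from
        # metadata alone. The ciphertext is never touched; packet sizes and
        # directions, which TLS does not hide, carry the signal.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def features(packet_sizes, max_len=100):
            # Fixed-length vector from a connection's packet-size trace;
            # the sign encodes direction (+ outbound, - inbound).
            trace = np.zeros(max_len)
            trace[:min(len(packet_sizes), max_len)] = packet_sizes[:max_len]
            sizes = np.abs(trace)
            return np.concatenate([trace, [len(packet_sizes), sizes.sum(), sizes.mean()]])

        def fit_fingerprinter(traces, labels):
            # traces: list of packet-size sequences; labels: e.g. visited site.
            X = np.stack([features(t) for t in traces])
            return RandomForestClassifier(n_estimators=200).fit(X, labels)

    Even this simple pipeline conveys the core point: the eavesdropper needs no cryptographic break, only side information the protocol leaves in the clear.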

    Driving the Network-on-Chip Revolution to Remove the Interconnect Bottleneck in Nanoscale Multi-Processor Systems-on-Chip

    The sustained demand for faster, more powerful chips has been met by chip manufacturing processes that allow increasing numbers of computation units to be integrated onto a single die. The resulting outcome, especially in the embedded domain, has often been called a System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnect. With the number of on-chip blocks presently in the tens and quickly approaching the hundreds, the question of how best to provide on-chip communication resources is keenly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they provide a structured answer to present and future communication requirements. The point-to-point connection and packet-switching paradigms they employ also help minimize wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
    • The NoC architecture must strike the best tradeoff among performance, features, and the tight area and power constraints of the on-chip domain.
    • Simulation and verification infrastructure must be put in place to explore, validate, and optimize NoC performance.
    • NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
    • Given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
    This dissertation performs a design space exploration of network-on-chip architectures in order to point out the trade-offs associated with the design of each individual network building block and with the design of the network topology overall. The design space exploration is preceded by a comparative analysis of state-of-the-art interconnect fabrics, both against one another and against early network-on-chip prototypes. The ultimate objective is to highlight the key advantages that NoC realizations provide over state-of-the-art communication infrastructures and the challenges that must be overcome to make this new interconnect technology a reality. Among the latter, technology-related challenges are emerging that call for dedicated design techniques at all levels of the design hierarchy, in particular leakage power dissipation and the containment of process variations and their effects. The above objectives were achieved by means of a NoC simulation environment for cycle-accurate modelling and simulation and a back-end facility for studying the physical implementation effects of NoCs. All the results provided by this work have been validated on actual silicon layout.
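
    As one concrete point in the topology and routing design space discussed above, the sketch below models dimension-ordered (XY) routing on a 2D mesh, a common NoC baseline; the xy_route helper is an illustrative toy in Python, not the dissertation's cycle-accurate simulation environment.

        # Illustrative sketch: dimension-ordered (XY) routing on a 2D mesh.
        # Packets travel fully along X first, then along Y; the fixed
        # dimension order makes routes deterministic and deadlock-free.
        def xy_route(src, dst):
            (x, y), (dx, dy) = src, dst
            path = [(x, y)]
            while x != dx:                    # X dimension first
                x += 1 if dx > x else -1
                path.append((x, y))
            while y != dy:                    # then Y dimension
                y += 1 if dy > y else -1
                path.append((x, y))
            return path

        # Example: route between two tiles of a mesh (5 hops after the source).
        print(xy_route((0, 0), (3, 2)))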

    Epidemic-Style Information Dissemination in Large-Scale Wireless Networks

    Steen, M.R. van [Promotor]

    Service Abstractions for Scalable Deep Learning Inference at the Edge

    The deep-learning-driven intelligent edge has already become a reality, where millions of mobile, wearable, and IoT devices analyze real-time data and transform it into actionable insights on-device. Typical approaches to optimizing deep learning inference mostly focus on accelerating the execution of individual inference tasks, without considering the contextual correlation unique to edge environments or the statistical nature of learning-based computation. Specifically, they treat inference workloads as individual black boxes and apply canonical system optimization techniques, developed over the last few decades, to handle them as yet another type of computation-intensive application. As a result, deep learning inference on edge devices still faces the ever-increasing challenges of customization to edge device heterogeneity, fuzzy computation redundancy between inference tasks, and end-to-end deployment at scale. In this thesis, we propose the first framework that automates and scales the end-to-end process of deploying efficient deep learning inference from the cloud to heterogeneous edge devices. The framework consists of a series of service abstractions that handle DNN model tailoring, model indexing and query, and computation reuse for runtime inference, respectively. Together, these services bridge the gap between deep learning training and inference, eliminate computation redundancy during inference execution, and further lower the barrier for deep learning algorithm and system co-optimization. To build efficient and scalable services, we take a unique algorithmic approach, harnessing the semantic correlation between learning-based computations. Rather than viewing individual tasks as isolated black boxes, we optimize them collectively in a white-box approach, proposing primitives to formulate the semantics of deep learning workloads, algorithms to assess their hidden correlation (in terms of the input data, the neural network models, and the deployment trials), and methods to merge common processing steps to minimize redundancy.
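
    A minimal Python sketch of the computation-reuse idea follows, under the assumption that a deployed model splits into an expensive shared backbone and a cheap task-specific head; the class name ReuseCache and the quantized-key scheme are hypothetical, not the thesis's actual service abstractions.

        # Illustrative sketch: cache the expensive feature-extractor output,
        # keyed by a coarse hash of the input, so near-duplicate inputs on the
        # same edge node skip redundant computation.
        import numpy as np

        class ReuseCache:
            def __init__(self, backbone, head, grid=0.1):
                self.backbone = backbone   # expensive shared stage
                self.head = head           # cheap task-specific stage
                self.grid = grid           # coarseness of the reuse key
                self.cache = {}

            def _key(self, x: np.ndarray) -> bytes:
                # Quantize so perceptually similar inputs collide on purpose.
                return np.round(x / self.grid).astype(np.int32).tobytes()

            def infer(self, x: np.ndarray):
                k = self._key(x)
                if k not in self.cache:    # run the backbone only on a miss
                    self.cache[k] = self.backbone(x)
                return self.head(self.cache[k])

        # Usage with stand-in stages: the second call reuses cached features.
        cache = ReuseCache(backbone=lambda x: x * 2, head=lambda f: f.sum())
        x = np.ones(4)
        print(cache.infer(x), cache.infer(x + 0.01))

    A real service would bound the cache and tune the quantization per model, but the white-box principle is the same: exploit semantic overlap between tasks instead of treating each inference as an opaque unit.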

    Fifth Conference on Artificial Intelligence for Space Applications

    The Fifth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work to help those who employ AI methods in space applications identify common goals and address issues of general interest to the AI community. Topics include the following: automation for Space Station; intelligent control, testing, and fault diagnosis; robotics and vision; planning and scheduling; simulation, modeling, and tutoring; development tools and automatic programming; knowledge representation and acquisition; and knowledge base/database integration.

    Extending Two-Dimensional Knowledge Management System Theory with Organizational Activity Systems' Workflow Dynamics

    Between 2005 and 2010, across 48 countries including the United States, an increasingly positive correlation emerged between national intellectual capital and gross domestic product per capita. The problem remains that organizations operating with increasingly complex knowledge networks often lose intellectual capital as a result of ineffective knowledge management practices. The purpose of this study was to provide management with opportunities to reduce the loss of intellectual capital. The first research question addressed how an enhanced intelligent, complex, and adaptive system (ICAS) model could clarify management's understanding of organizational knowledge transfer. The second research question addressed how interdisciplinary theory could be infused more meaningfully to enhance management practices of the organization's knowledge ecosystem. The study was phenomenological in nature, seeking a deeper understanding of individual experiences related to knowledge flow phenomena. Data were collected from a single historical research dataset containing 11 subject interviews and analyzed using Moustakas' heuristic framework. The original interviews were collected in 2012 during research within a military unit and were included in this study based on theme alignment. Organizational, knowledge management, emergent systems, and cognition theories were synthesized to enhance understanding of emergent ICAS forces. Individuals create unique ICAS flow emergent force dynamics in relation to micro- and macro-meso sensemaking and sensegiving. Findings indicated that individual knowledge work significantly shapes emergent ICAS flow dynamics. Collectively enhancing knowledge stewardship over time could foster positive social change by improving national welfare.

    Harnessing Knowledge, Innovation and Competence in Engineering of Mission Critical Systems

    This book explores the critical role of the acquisition, application, enhancement, and management of knowledge and human competence in the context of the largely digital and data/information-dominated modern world. Whilst humanity owes much of its achievements to its distinct capability to learn from observation, analyse data, gain insights, and perceive beyond original realities, the systematic treatment of knowledge as a core capability and driver of success has largely remained the forte of pedagogy. In an increasingly intertwined global community faced with existential challenges and risks, the significance of knowledge creation, innovation, and the systematic understanding and treatment of human competence is likely to be humanity's greatest weapon against adversity. This book was conceived to inform decision makers and practitioners about best practice pertinent to many disciplines and sectors. The chapters fall into three broad categories, guiding readers from generic fundamentals to discipline-specific case studies and the latest practice in knowledge and competence management.
