    Application-specific Design and Optimization for Ultra-Low-Power Embedded Systems

    University of Minnesota Ph.D. dissertation. August 2019. Major: Electrical/Computer Engineering. Advisor: John Sartori. 1 computer file (PDF); xii, 101 pages.
    The last few decades have seen a tremendous amount of innovation in computer system design, to the point where electronic devices have become very inexpensive. This has brought us to the verge of a new paradigm in computing in which there will be hundreds of devices in a person’s environment, ranging from mobile phones to smart home devices to wearables to implantables, all interconnected. This paradigm, called the Internet of Things (IoT), brings new challenges in terms of power, cost, and security. For example, power and energy have become critical design constraints that affect not only the lifetime of an ultra-low-power (ULP) system but also its size and weight. While many conventional techniques exist that reduce energy or improve energy efficiency, they do so at the cost of performance. As such, their impact is limited where energy is highly constrained or where significant degradation of performance or functionality is unacceptable. Focusing on the opposing demands of increasing both energy efficiency and performance in a world where Moore’s law scaling is decelerating, one of the underlying themes of this work has been to identify novel insights that enable new pathways to energy efficiency in computing systems while avoiding the conventional tradeoff that simply sacrifices performance and functionality for energy efficiency. To this end, this work proposes a method to analyze the behavior of an application on the gate-level netlist of a processor for all possible inputs, using a novel symbolic hardware-software co-analysis methodology. Using this methodology, several techniques are proposed to optimize a given processor-application pair for power, area, and security.
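
    As a rough illustration of the symbolic co-analysis idea, the Python sketch below propagates three-valued logic (0, 1, and 'X' for "depends on unknown inputs") through a toy gate-level netlist. The netlist format, gate set, and helper names are invented for illustration and are not the dissertation's actual tooling; gates whose outputs stay constant for every possible input are the kind such an analysis can expose for optimization.

```python
# Toy symbolic propagation over a gate-level netlist: 0/1 are values the
# application fixes, 'X' stands for "depends on unknown inputs".
def sym_and(a, b):
    if a == 0 or b == 0:                 # a controlling 0 forces the output low
        return 0
    return 1 if (a == 1 and b == 1) else 'X'

def sym_not(a):
    return 'X' if a == 'X' else 1 - a

# netlist: gate name -> (op, input names), assumed in topological order
netlist = {
    'n1': ('AND', ('in0', 'in1')),
    'n2': ('NOT', ('n1',)),
}

def propagate(netlist, values):
    for gate, (op, ins) in netlist.items():
        args = [values[i] for i in ins]
        values[gate] = sym_and(*args) if op == 'AND' else sym_not(*args)
    return values

# With in0 fixed to 0 by the application binary, n1 and n2 are constant
# for every possible in1, so that logic is a candidate for optimization.
print(propagate(netlist, {'in0': 0, 'in1': 'X'}))
```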

    NFV Platforms: Taxonomy, Design Choices and Future Challenges

    Due to the intrinsically inefficient service provisioning in traditional networks, Network Function Virtualization (NFV) keeps gaining attention from both industry and academia. By replacing purpose-built, expensive, proprietary network equipment with software network functions consolidated on commodity hardware, NFV envisions a shift towards a more agile and open service provisioning paradigm. During the last few years, a large number of NFV platforms have been implemented in production environments, where they face critical challenges in the development, deployment, and management of Virtual Network Functions (VNFs). Like any complex system, such platforms consist of numerous software and hardware components and usually incorporate disparate design choices driven by distinct motivations or use cases. This broad collection of intertwined alternatives makes it difficult for network operators to make well-informed choices. Although numerous efforts have been devoted to investigating different aspects of NFV, none of them specifically focuses on NFV platforms or attempts to explore their design space. In this paper, we present a comprehensive survey of NFV platform design. Our study solely targets existing NFV platform implementations. We begin with a top-down architectural view of the standard reference NFV platform and present a taxonomy of existing NFV platforms based on the features they provide across a typical network function life cycle (a small encoding of this idea appears below). We then thoroughly explore the design space and elaborate on the implementation choices each platform opts for. We also envision future challenges for NFV platform design in the upcoming 5G era. We believe that our study gives a detailed guideline for network operators or service providers to choose the most appropriate NFV platform based on their respective requirements. Our work also provides guidelines for implementing new NFV platforms.
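
    To make a life-cycle-based taxonomy concrete, the sketch below encodes platforms as records of which life-cycle stages they cover. The stage names are a plausible reading of a typical VNF life cycle, not the survey's exact categories, and the platform names are placeholders.

```python
# Hedged sketch: comparing NFV platforms by life-cycle coverage.
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    DEVELOPMENT = auto()   # writing and packaging the VNF
    DEPLOYMENT = auto()    # placing and instantiating it
    EXECUTION = auto()     # running/accelerating packet processing
    MANAGEMENT = auto()    # monitoring, scaling, upgrading

@dataclass
class NFVPlatform:
    name: str
    supported: set = field(default_factory=set)

    def covers(self, stage: Stage) -> bool:
        return stage in self.supported

platforms = [
    NFVPlatform('PlatformA', {Stage.DEPLOYMENT, Stage.MANAGEMENT}),
    NFVPlatform('PlatformB', {Stage.DEVELOPMENT, Stage.EXECUTION}),
]
for p in platforms:
    print(p.name, sorted(s.name for s in p.supported))
```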

    Towards mobile cloud computing with single sign-on access

    This is a post-peer-review, pre-copyedit version of an article published in Journal of Grid Computing. The final authenticated version is available online at: http://dx.doi.org/10.1007/s10723-017-9413-3
    The low computing power of mobile devices impedes the development of mobile applications with a heavy computing load. Mobile Cloud Computing (MCC) has emerged as the solution to this by connecting mobile devices with the “infinite” computing power of the Cloud. As mobile devices typically communicate over untrusted networks, it becomes necessary to secure the communications to avoid privacy-sensitive data breaches. This paper presents work on implementing MCC applications with secure communications. For that purpose, we build on COMPSs-Mobile, a redesigned implementation of the COMP Superscalar (COMPSs) framework aimed at MCC platforms. COMPSs-Mobile automatically exploits the parallelism inherent in an application and orchestrates its execution on a loosely-coupled distributed environment. To avoid vendor lock-in, this extension leverages the Generic Security Services Application Program Interface (GSSAPI) (RFC 2743) as a generic way to access security services that provide communications with authentication, secrecy, and integrity. In addition, GSSAPI allows applications to take advantage of more advanced features, such as Federated Identity or Single Sign-On, which the underlying security framework may provide. To validate the practicality of the proposal, we use Kerberos as the security services provider to implement SSO; however, applications do not authenticate themselves and require users to obtain and place the credentials beforehand. To evaluate the performance, we ran an application on a smartphone that offloads tasks to a private cloud. Our results show that the overhead of securing the communications is acceptable.
    This work has been supported by the Spanish Government (contracts TIN2012-34557, TIN2015-65316-P and grants BES-2013-067167, EEBB-I-15-09808 of the Research Training Program and SEV-2011-00067 of the Severo Ochoa Program), by the Generalitat de Catalunya (contract 2014-SGR-1051), and by the European Commission (ASCETiC project, FP7-ICT-2013.1.2 contract 610874). The second author was partially supported by the European Commission's Horizon2020 programme under grant agreement 653965 (AARC).
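
    For readers unfamiliar with GSSAPI, the sketch below shows the generic initiator-side pattern using the python-gssapi bindings: establish a security context, then wrap messages for secrecy and integrity. It assumes Kerberos credentials were obtained beforehand (e.g., via kinit), matching the paper's requirement that users place credentials in advance; the service name is a placeholder, and this illustrates the API style rather than COMPSs-Mobile's actual code.

```python
# Initiator-side GSSAPI pattern (python-gssapi), assuming a Kerberos
# ticket already exists in the credential cache.
import gssapi

# Name the target service (hostbased form: service@host; placeholder name).
server = gssapi.Name('compss@worker.example.org',
                     name_type=gssapi.NameType.hostbased_service)

# Context establishment: tokens from step() must be carried to the
# acceptor over the (untrusted) transport until the context is complete.
ctx = gssapi.SecurityContext(name=server, usage='initiate')
token = ctx.step()            # send to server; feed its reply back into step()
# ... repeat until ctx.complete ...

# Once established, wrap() gives per-message confidentiality + integrity.
sealed = ctx.wrap(b'task payload', encrypt=True)
# receiver side: plain = ctx.unwrap(sealed.message).message
```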

    Composite Enclaves: Towards Disaggregated Trusted Execution

    The ever-rising computation demand is forcing a move from the CPU to heterogeneous specialized hardware, which is readily available across modern datacenters through disaggregated infrastructure. On the other hand, trusted execution environments (TEEs), one of the most promising recent developments in hardware security, can only protect code confined to the CPU, limiting the TEEs' potential and their applicability to a handful of applications. We observe that the TEEs' hardware trusted computing base (TCB) is fixed at design time, which in practice leads to using untrusted software to employ peripherals in TEEs. Based on this observation, we propose composite enclaves with a configurable hardware and software TCB, allowing enclaves access to multiple computing and IO resources. Finally, we present two case studies of composite enclaves: i) an FPGA platform based on RISC-V Keystone connected to emulated peripherals and sensors, and ii) a large-scale accelerator. These case studies showcase a flexible but small TCB (2.5 KLoC for IO peripherals and drivers) with low performance overhead (only around 220 additional cycles for a context switch), thus demonstrating the feasibility of our approach and showing that it can work with a wide range of specialized hardware.
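
    One way to picture a configurable TCB is as an attested composition: the enclave's measurement covers exactly the CPU-side code plus the peripherals and drivers it was assembled with. The toy Python below hashes such a composition; the field names and measurement scheme are illustrative assumptions, not Keystone's actual API.

```python
# Toy "composite enclave" measurement: one hash over every component the
# enclave is assembled from, so a verifier attests the exact HW+SW TCB.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str        # e.g. 'sensor-driver', 'fpga-accelerator' (placeholders)
    firmware: bytes  # code or bitstream included in the TCB

def measure(components):
    h = hashlib.sha256()
    for c in sorted(components, key=lambda c: c.name):  # order-independent
        h.update(c.name.encode())
        h.update(c.firmware)
    return h.hexdigest()

tcb = [Component('enclave-app', b'...app binary...'),
       Component('io-driver', b'...driver code...')]
print(measure(tcb))  # changes if any component in the composition changes
```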

    SecureStreams: A Reactive Middleware Framework for Secure Data Stream Processing

    The growing adoption of distributed data processing frameworks in a wide diversity of application domains challenges end-to-end integration of properties like security, in particular when considering deployments in the context of large-scale clusters or multi-tenant Cloud infrastructures. This paper therefore introduces SecureStreams, a reactive middleware framework to deploy and process secure streams at scale. Its design combines the high-level reactive dataflow programming paradigm with Intel's low-level Software Guard Extensions (SGX) in order to guarantee privacy and integrity of the processed data. The experimental results of SecureStreams are promising: while offering a fluent scripting language based on Lua, our middleware delivers high processing throughput, thus enabling developers to implement secure processing pipelines in just a few lines of code.
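
    The pipeline-composition style can be mocked in a few lines of Python. SecureStreams' real DSL is Lua and its stages run inside SGX enclaves; the enclave_stage decorator below is only a stand-in for that shielding, and the stage functions are invented examples.

```python
# Conceptual mock of a reactive secure-stream pipeline: each stage would
# run shielded inside an SGX enclave in the real system.
def enclave_stage(fn):
    """Stand-in for enclave execution: in SecureStreams the stage's code
    and data are shielded by SGX; here we just stream items through fn."""
    def wrapped(items):
        for item in items:
            yield fn(item)
    return wrapped

# Two shielded stages composed into a pipeline, 'few lines of code' style.
parse  = enclave_stage(lambda line: line.split(','))
redact = enclave_stage(lambda rec: [rec[0], '***'])  # mask sensitive field

source = ['alice,secret1', 'bob,secret2']
for rec in redact(parse(source)):
    print(rec)   # ['alice', '***'], ['bob', '***']
```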

    TaintHLS: High-Level Synthesis For Dynamic Information Flow Tracking

    Dynamic Information Flow Tracking (DIFT) is a technique to track potential security vulnerabilities in software and hardware systems at run time. Untrusted data are marked with tags (tainted), which are propagated through the system so that potentially unsafe uses can be detected and prevented. DIFT is not supported in heterogeneous systems, especially in hardware accelerators; currently, DIFT logic must be manually designed and integrated into each accelerator. This process is error-prone and can undermine the identification of security violations in heterogeneous systems. We present TaintHLS, an approach that automatically generates a microarchitecture to support baseline operations and a shadow microarchitecture for intrinsic DIFT support in hardware accelerators, while providing variable granularity of taint tags. TaintHLS offers a companion high-level synthesis (HLS) methodology to automatically generate such DIFT-enabled accelerators from a high-level specification. We extended a state-of-the-art HLS tool to generate DIFT-enhanced accelerators and demonstrated the approach on numerous benchmarks. The DIFT-enabled accelerators have negligible performance overhead and no more than 30% hardware overhead.
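
    The propagation rule that a shadow microarchitecture implements can be stated in a few lines: each shadow operation ORs the operands' taint tags into the result's tag. The Python below is a conceptual model of that rule, not the RTL TaintHLS generates.

```python
# Conceptual DIFT propagation: a baseline operation plus its shadow
# counterpart, carrying a 1-bit taint tag alongside each value.
def tainted_add(a, ta, b, tb):
    """Result value, and a tag that is set if either operand was tainted."""
    return a + b, ta | tb

# Untrusted input is tagged at the source...
x, tx = 41, 1                 # tainted (came from outside the system)
y, ty = 1, 0                  # trusted constant
z, tz = tainted_add(x, tx, y, ty)
assert tz == 1                # ...and the tag propagates to the output,
                              # where a checker can flag unsafe uses.
```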

    Quantifiable Assurance: From IPs to Platforms

    Hardware vulnerabilities are generally considered more difficult to fix than software ones because they persist after fabrication. Thus, it is crucial to assess security and fix vulnerabilities at earlier design phases, such as the Register Transfer Level (RTL) and gate level. The focus of existing security assessment techniques is mainly twofold. First, they check the security of Intellectual Property (IP) blocks separately. Second, they assess security against individual threats, assuming the threats are orthogonal. We argue that IP-level security assessment is not sufficient. Eventually, the IPs are placed in a platform, such as a system-on-chip (SoC), where each IP is surrounded by other IPs connected through glue logic and shared/private buses. Hence, we must develop a methodology that assesses platform-level security by considering both IP-level security and the impact of the additional parameters introduced during platform integration. Another important factor is that the threats are not always orthogonal: improving security against one threat may affect security against others. Hence, to build a secure platform, we must first answer the following questions: What additional parameters are introduced during platform integration? How do we define and characterize the impact of these parameters on security? How do the mitigation techniques for one threat impact others? This paper aims to answer these important questions and proposes techniques for quantifiable assurance by quantitatively estimating and measuring the security of a platform at the pre-silicon stages. We also touch upon security optimization and present challenges for future research directions.
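
    As a loose illustration of quantifiable assurance, the sketch below combines per-IP security estimates with integration penalties (standing in for glue logic and shared buses) into a single platform score. The weakest-link aggregation and all numbers are invented for illustration; the paper proposes its own estimation techniques.

```python
# Illustrative platform-level aggregation of IP-level security scores.
def platform_score(ip_scores, integration_penalty):
    """ip_scores in [0,1]; penalties model glue logic / shared buses that
    weaken an IP once it is integrated into the SoC."""
    adjusted = [max(0.0, s - integration_penalty.get(ip, 0.0))
                for ip, s in ip_scores.items()]
    return min(adjusted)   # platform is only as secure as its weakest IP

# The shared bus costs the DMA engine 0.25 of its standalone score.
print(platform_score({'crypto': 0.9, 'dma': 0.75}, {'dma': 0.25}))  # 0.5
```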