    Reconfigurable Security: Edge Computing-based Framework for IoT

    In many scenarios, achieving security between IoT devices is challenging because the devices may use different dedicated communication standards, face resource constraints, and serve various applications. In this article, we first survey the requirements and existing solutions for IoT security. We then introduce a new reconfigurable security framework based on edge computing, which uses a near-user edge device, a security agent, to simplify key management and offload the computational costs of security algorithms from IoT devices. The framework is designed to overcome the challenges of high computation cost, inflexible key management, and poor compatibility when deploying new security algorithms in IoT, especially when adopting advanced cryptographic primitives. We also present the design principles of the reconfigurable security framework, exemplary security protocols for anonymous authentication and secure data access control, and a performance analysis of its feasibility and usability. The reconfigurable security framework paves a new way to strengthen IoT security through edge computing.
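
    As a rough illustration of the offloading idea, the sketch below (hypothetical names, standard-library Python only; not the paper's protocol) has a constrained device authenticate its traffic with one cheap symmetric MAC per message to a pre-paired edge security agent, which then performs the heavyweight cryptography and key management on the device's behalf.

        # Illustrative sketch (not the paper's protocol): a constrained IoT device
        # shares a lightweight symmetric key with a nearby edge "security agent",
        # which performs the heavyweight public-key work on the device's behalf.
        import hmac, hashlib, os

        DEVICE_KEY = os.urandom(32)   # hypothetical pre-shared device<->agent key

        def device_send(payload: bytes) -> dict:
            """Device side: one cheap symmetric MAC per message."""
            tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
            return {"payload": payload, "tag": tag}

        def agent_forward(msg: dict) -> bytes:
            """Agent side: verify the MAC, then apply the expensive cryptography
            (signing, attribute-based encryption, ...) that the device offloads."""
            expected = hmac.new(DEVICE_KEY, msg["payload"], hashlib.sha256).digest()
            if not hmac.compare_digest(expected, msg["tag"]):
                raise ValueError("device authentication failed")
            # placeholder for the heavyweight operation performed at the edge
            return hashlib.sha256(b"agent-processed:" + msg["payload"]).digest()

        print(agent_forward(device_send(b"sensor reading: 21.5 C")).hex())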

    Identification of the Beagle 2 lander on Mars

    The 2003 Beagle 2 Mars lander has been identified in Isidis Planitia at 90.43° E, 11.53° N, close to the predicted target of 90.50° E, 11.53° N. Beagle 2 was an exobiology lander designed to look for isotopic and compositional signs of life on Mars, as part of the European Space Agency Mars Express (MEX) mission. The 2004 recalculation of the original landing ellipse, which reduced the 3-sigma major axis from 174 km to 57 km, and the acquisition of Mars Reconnaissance Orbiter High Resolution Imaging Science Experiment (HiRISE) imagery at 30 cm per pixel across the target region led to the initial identification of the lander in 2014. Following this, further HiRISE images, giving a total of 15 and including red and blue-green colours, were obtained over the area of interest and searched, which allowed sub-pixel imaging using super-resolution techniques. The size (approx. 1.5 m), distinctive multilobed shape, high reflectivity relative to the local terrain, specular reflections, and location close to the centre of the planned landing ellipse led to the identification of the Beagle 2 lander. The shape of the imaged lander, although to some extent masked by the specular reflections in the various images, is consistent with deployment of the lander lid and then some or all of the solar panels. Failure to fully deploy the panels, which may have been caused by damage during landing, would have prohibited communication between the lander and MEX and the commencement of science operations. This implies that the main part of the entry, descent and landing sequence (the ejection from MEX, atmospheric entry and parachute deployment, and landing) worked as planned, with perhaps only the final full panel deployment failing.
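
    The super-resolution step can be illustrated with a toy one-dimensional "shift-and-add" example (illustrative only, not the HiRISE pipeline): several coarse images taken at known sub-pixel offsets are re-registered on a finer grid and averaged, localizing structure smaller than one coarse pixel.

        # Toy 1-D illustration (not the HiRISE pipeline): images sampled at known
        # sub-pixel offsets are re-registered on a 4x finer grid and averaged.
        import numpy as np

        rng = np.random.default_rng(0)
        truth = np.zeros(40)
        truth[18:22] = 1.0                      # fine-grid scene (4x oversampled)
        offsets = [0, 1, 2, 3]                  # known shifts, in fine pixels

        def observe(shift):                     # shift, 4x downsample, add noise
            shifted = np.roll(truth, shift)
            return shifted.reshape(10, 4).mean(axis=1) + rng.normal(0, 0.02, 10)

        accum = np.zeros(40)
        for s in offsets:
            accum += np.roll(np.repeat(observe(s), 4), -s)   # back onto fine grid
        estimate = accum / len(offsets)
        print(np.round(estimate[14:26], 2))     # edges localized on the fine grid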

    Classification, testing and optimization of intrusion detection systems

    Modern network security products vary greatly in their underlying technology and architecture. Since the introduction of intrusion detection decades ago, intrusion detection technologies have continued to evolve rapidly. This rapid change has led to a wealth of security devices, technologies and algorithms that perform functions originally associated with intrusion detection systems. This thesis offers an analysis of intrusion detection technologies and proposes a new classification system for intrusion detection systems. Working closely with the development of a new intrusion detection product, the thesis introduces a method of testing related technologies in a production environment by outlining and executing a series of denial-of-service and scan-and-probe attacks. Based on the findings of these experiments, a series of enhancements to the core intrusion detection product is introduced to improve its capabilities and adapt it to the modern needs of security products.
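
    To give a flavour of the scan-and-probe tests, an IDS under test should flag a source that touches many distinct ports in a short window. The sketch below is a hypothetical threshold detector of that kind, not the product developed in the thesis.

        # Hypothetical threshold detector (not the thesis's product): flag a source
        # that probes many distinct destination ports within a short time window.
        from collections import defaultdict

        WINDOW = 10.0      # seconds of history to keep per source
        THRESHOLD = 20     # distinct ports before a source is flagged

        events = defaultdict(list)     # src_ip -> [(timestamp, dst_port), ...]

        def observe(src_ip: str, dst_port: int, now: float) -> bool:
            """Record a connection attempt; True if src_ip now looks like a scanner."""
            events[src_ip] = [(t, p) for (t, p) in events[src_ip] if now - t <= WINDOW]
            events[src_ip].append((now, dst_port))
            return len({p for (_, p) in events[src_ip]}) > THRESHOLD

        # Simulated scan: one source sweeping ports 1..30 inside the window
        for port in range(1, 31):
            if observe("198.51.100.7", port, now=port * 0.1):
                print(f"scan detected after port {port}")
                break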

    High-performance network traffic processing systems using commodity hardware

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-36784-7_1
    The Internet has opened new avenues for information accessing and sharing in a variety of media formats. Such popularity has resulted in an increase in the amount of resources consumed in backbone links, whose capacities have witnessed numerous upgrades to cope with the ever-increasing demand for bandwidth. Consequently, network traffic processing at today's data transmission rates is a very demanding task, which has traditionally been accomplished by means of specialized hardware tailored to specific tasks. However, such approaches lack either flexibility or extensibility, or both. As an alternative, the research community has pointed to the utilization of commodity hardware, which may provide flexible and extensible cost-aware solutions, entailing large reductions in operational and capital expenditure. In this chapter, we provide a survey-like introduction to high-performance network traffic processing using commodity hardware. We present the background required to understand the different solutions proposed in the literature for achieving high-speed lossless packet capture, which are reviewed and compared.
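
    As a baseline for what high-speed capture engines improve upon, the sketch below (Linux-specific, requires root; an illustration rather than the chapter's code) captures raw frames with a single AF_PACKET socket; faster designs replace this per-packet system-call path with memory-mapped buffers, batching and zero-copy delivery.

        # Minimal Linux capture loop (requires root; an illustration, not the
        # chapter's code). High-speed engines replace this per-packet syscall
        # path with memory-mapped buffers, batching and zero-copy delivery.
        import socket, struct

        ETH_P_ALL = 0x0003                    # all protocols, from linux/if_ether.h
        sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))

        for _ in range(5):                    # grab a handful of frames
            frame, _addr = sock.recvfrom(65535)
            dst, src = frame[0:6], frame[6:12]
            ethertype = struct.unpack("!H", frame[12:14])[0]
            print(f"{src.hex(':')} -> {dst.hex(':')} type=0x{ethertype:04x} len={len(frame)}")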

    A compiler level intermediate representation based binary analysis system and its applications

    Analyzing and optimizing programs from their executables has received a lot of attention recently in the research community. There has been a tremendous amount of activity in executable-level research targeting varied applications such as security vulnerability analysis, untrusted code analysis, malware analysis, program testing, and binary optimization. The vision of this dissertation is to advance the field of static analysis of executables and bridge the gap between source-level analysis and executable analysis. The main thesis of this work is scalable static binary rewriting and analysis using a compiler-level intermediate representation, without relying on the presence of metadata such as debug or symbolic information.

    Despite a significant overlap in the overall goals of source-code methods and executable-level techniques, several sophisticated transformations that are well understood and implemented in source-level infrastructures have yet to become available in executable frameworks. A standalone executable without any metadata is well known to be less amenable to analysis than source code. Nonetheless, we believe one of the prime reasons behind the limitations of existing executable frameworks is that they define their own intermediate representations (IRs), which are significantly more constrained than an IR used in a compiler. The IRs used in existing binary frameworks lack high-level features such as an abstract stack, variables, and symbols, and are even machine dependent in some cases. This severely limits the application of well-understood compiler transformations to executables and necessitates new research to make them applicable.

    In the first part of this dissertation, we present techniques to convert binaries to the same high-level intermediate representation that compilers use. We propose methods to segment the flat address space in an executable containing undifferentiated blocks of memory. We demonstrate the inadequacy of existing variable identification methods for their promotion to symbols and present our methods for symbol promotion. We also present methods to convert the physically addressed stack in an executable to an abstract stack. The proposed methods are practical since they do not employ symbolic, relocation, or debug information, which are usually absent in deployed executables. We have integrated our techniques with a prototype x86 binary framework called SecondWrite that uses LLVM as the IR. The robustness of the framework is demonstrated by handling executables totaling more than a million lines of source code, including several real-world programs.

    In the second part of this work, we demonstrate that several well-known source-level analysis frameworks, such as symbolic analysis, have limited effectiveness in the executable domain since executables typically lack higher-level semantics such as program variables. An IR must have a precise memory abstraction for an analysis to effectively reason about memory operations. Our work on recovering a compiler-level representation addresses this limitation by recovering higher-level semantic information from executables; here we propose methods for the scenarios where such semantics cannot be recovered. First, we propose a hybrid static-dynamic mechanism for recovering a precise and correct memory model in executables in the presence of executable-specific artifacts such as indirect control transfers. Next, the enhanced memory model is employed to define a novel symbolic analysis framework for executables that can perform the same types of program analysis as source-level tools. Existing frameworks fail to simultaneously maintain the properties of a correct representation and a precise memory model, and they ignore memory-allocated variables when defining symbolic analysis mechanisms. We show that our framework is robust and efficient, and that it significantly improves the performance of traditional analyses such as global value numbering, alias analysis and dependence analysis for executables.

    Finally, the underlying representation and analysis framework is employed in two separate applications. First, the framework is extended to define a novel static analysis framework, DemandFlow, for identifying information-flow security violations in program executables. Unlike existing static vulnerability detection methods for executables, DemandFlow analyzes memory locations in addition to symbols, improving the precision of the analysis. DemandFlow uses a novel demand-driven mechanism to identify and precisely analyze only those program locations and memory accesses that are relevant to a vulnerability, enhancing scalability. DemandFlow uncovered six previously undiscovered format string and directory traversal vulnerabilities in popular FTP and Internet relay chat clients. Second, the framework is extended to implement a platform-specific optimization for embedded processors. Several embedded systems provide the facility of locking one or more lines in the cache. We devise the first method in the literature that employs instruction cache locking to improve the average-case run time of general embedded applications, and we demonstrate that the optimal solution for instruction cache locking can be obtained in polynomial time. Since our scheme is implemented inside a binary framework, it addresses portability concerns by enabling cache locking at deployment time, when all details of the memory hierarchy are available.
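
    The demand-driven idea can be sketched on a toy IR (hypothetical code, not SecondWrite or DemandFlow itself): rather than analyzing the whole program, the analysis walks backwards only from the sink it has been asked about.

        # Hypothetical toy (not SecondWrite/DemandFlow): demand-driven backward
        # taint over a tiny three-address IR, querying only the format argument
        # of a printf-like sink instead of analyzing the whole program.
        IR = [
            ("recv",   "buf"),             # buf <- network input (taint source)
            ("assign", "tmp", "buf"),      # tmp := buf
            ("assign", "fmt", "tmp"),      # fmt := tmp
            ("call",   "printf", "fmt"),   # sink: format argument must be clean
        ]

        def tainted(var: str, upto: int) -> bool:
            """Walk backwards from the sink, visiting only needed definitions."""
            for i in range(upto - 1, -1, -1):
                op = IR[i]
                if op[0] == "assign" and op[1] == var:
                    return tainted(op[2], i)       # follow the defining copy
                if op[0] == "recv" and op[1] == var:
                    return True                    # reached an untrusted source
            return False

        for i, op in enumerate(IR):
            if op[0] == "call" and op[1] == "printf" and tainted(op[2], i):
                print(f"potential format-string vulnerability at instruction {i}")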

    Confidential remote computing

    Since their market launch in late 2015, trusted hardware enclaves have revolutionised the computing world with data-in-use protections. Their security features of confidentiality, integrity and attestation attract many application developers to move valuable assets, such as cryptographic keys, password managers, private data, secret algorithms and mission-critical operations, into them. The potential security issues have not yet been well explored, however, and the rush to integrate these widely available hardware technologies has created new problems. Today, system and application designers use enclave-based protections for critical assets, but gaps in hardware-software co-design cause these applications to fall short of the strong guarantees the hardware can offer. This research presents hands-on experiences, techniques and models for the correct utilisation of hardware enclaves in real-world systems. We begin by designing a generic template for scalable many-party applications that process private data with mutually agreed public code. Many-party applications range from smart-grid systems to electronic voting infrastructures, and from blockchain smart contracts to Internet-of-Things deployments. Next, our research extensively examines private algorithms executing inside trusted hardware enclaves. We present practical use cases for protecting intellectual property, valuable algorithms, and business or game logic in addition to private data. Our mechanisms allow querying private algorithms on rental services, querying private data with privacy filters such as differential privacy budgets, and integrity-protected computing power as a service. These experiences lead us to consolidate the disparate research into a unified Confidential Remote Computing (CRC) model. CRC consists of three main areas: the trusted hardware, software development and attestation domains. It resolves the ambiguity of trust in relevant fields and provides a systematic view of the field from past to future. Lastly, we examine questions and misconceptions about malicious software profiting from the security features offered by the hardware. The more popular idea of confidential computing focuses on servers managed by major technology vendors and cloud infrastructures; CRC, in contrast, focuses on practices in a more decentralised setting for end-users, system designers and developers.
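
    The attestation step at the heart of such systems can be sketched as follows (hypothetical names, with an HMAC standing in for the hardware signing key; real SGX/SEV quote formats differ): the verifier releases a secret only after checking a fresh, signed measurement of the enclave code.

        # Simplified attestation sketch (hypothetical names; an HMAC stands in
        # for the hardware signing key, and real SGX/SEV quote formats differ).
        import hashlib, hmac, os

        ATTESTATION_KEY = os.urandom(32)     # stand-in for the hardware's key
        EXPECTED = hashlib.sha256(b"agreed public enclave code").digest()

        def enclave_quote(code: bytes, nonce: bytes) -> dict:
            """Hardware side: measure the loaded code, sign (measurement, nonce)."""
            m = hashlib.sha256(code).digest()
            sig = hmac.new(ATTESTATION_KEY, m + nonce, hashlib.sha256).digest()
            return {"measurement": m, "sig": sig}

        def provision(quote: dict, nonce: bytes) -> bytes:
            """Verifier side: check signature and measurement, then release a secret."""
            sig = hmac.new(ATTESTATION_KEY, quote["measurement"] + nonce,
                           hashlib.sha256).digest()
            if not hmac.compare_digest(sig, quote["sig"]):
                raise ValueError("bad quote signature")
            if quote["measurement"] != EXPECTED:
                raise ValueError("unexpected enclave code")
            return os.urandom(32)            # secret the enclave may now receive

        nonce = os.urandom(16)               # freshness against replayed quotes
        secret = provision(enclave_quote(b"agreed public enclave code", nonce), nonce)
        print("provisioned a", len(secret), "byte secret")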

    Java for Cost Effective Embedded Real-Time Software


    Affordable techniques for dependable microprocessor design

    As high computing power is available at an affordable cost, we rely on microprocessor-based systems for a much greater variety of applications. This dependence indicates that a processor failure could have diverse impacts on our daily lives. Therefore, dependability is becoming an increasingly important quality measure of microprocessors.

    Temporary hardware malfunctions caused by unstable environmental conditions can lead the processor to an incorrect state; this is referred to as a transient error or soft error. Studies have shown that soft errors are the major source of system failures. This dissertation characterizes soft error behavior in microprocessors and presents new microarchitectural approaches that can realize high dependability with low overhead.

    Our fault injection studies using RISC processors have demonstrated that different functional blocks of the processor have distinct susceptibilities to soft errors. This error susceptibility information must be reflected in devising fault tolerance schemes for cost-sensitive applications. Considering the common use of on-chip caches in modern processors, we investigated area-efficient protection schemes for memory arrays. The idea of caching redundant information was exploited to optimize resource utilization for increased dependability. We also developed a mechanism to verify the integrity of data transfers from lower-level memories to the primary caches. The results of this study show that by exploiting bus idle cycles and information redundancy, an almost complete check of the initial memory data transfer is possible without incurring a performance penalty.

    For protecting the processor's control logic, which usually remains unprotected, we propose a low-cost reliability enhancement strategy. We classified control logic signals into static and dynamic control depending on their changeability, and applied various techniques including commit-time checking, signature caching, component-level duplication, and control flow monitoring. Our schemes can achieve more than 99% coverage with a very small hardware addition.

    Finally, a virtual duplex architecture for superscalar processors is presented. In this system-level approach, the processor pipeline is backed up by a partially replicated pipeline. The replication-based checker minimizes the design and verification overheads. For a large-scale superscalar processor, the proposed architecture can bring a 61.4% reduction in die area while sustaining maximum performance.
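
    The simplest instance of the redundancy idea is a parity bit per cache word, so that a single-bit transient fault is detected on the next read. The toy sketch below (illustrative only, not the dissertation's schemes) shows the mechanism.

        # Toy version of the general idea (not the dissertation's schemes): one
        # even-parity bit per cache word detects any single-bit flip on read.
        def parity(word: int) -> int:
            return bin(word).count("1") & 1

        cache = {}                                  # addr -> (word, parity bit)

        def write(addr: int, word: int) -> None:
            cache[addr] = (word, parity(word))

        def read(addr: int) -> int:
            word, p = cache[addr]
            if parity(word) != p:
                raise RuntimeError(f"soft error detected at {addr:#x}")
            return word

        write(0x40, 0xDEADBEEF)
        word, p = cache[0x40]
        cache[0x40] = (word ^ (1 << 7), p)          # inject a transient bit flip
        try:
            read(0x40)
        except RuntimeError as err:
            print(err)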

    A Coordination Model and Framework for Developing Distributed Mobile Applications

    How to coordinate multiple devices so that they work together as a single application is one of the most important challenges in building a distributed mobile application. Mobile devices play important roles in daily life, and resolving this challenge is vital. Many coordination models have been developed to support the implementation of parallel applications, of which LIME (Linda In a Mobile Environment) is the most popular. This thesis evaluates and analyzes the advantages and disadvantages of LIME and its predecessor, the Linda coordination model. It then proposes a new coordination model that focuses on overcoming the drawbacks of LIME and Linda. The new model leverages consistent hashing to obtain better coordination performance and uses a replica mechanism to guarantee data integrity. A cross-platform coordination framework based on the new model is presented in order to facilitate and simplify the development of distributed mobile applications. The framework aims to be robust and high-performance, supporting not only powerful devices such as smartphones but also constrained devices, including IoT sensors. It utilizes advanced concepts and technologies such as the CoAP protocol, P2P networking, Wi-Fi Direct, and Bluetooth Low Energy to achieve high performance and fault tolerance. Six experiments test the coordination model and framework from different aspects, including bandwidth, throughput, packets per second, hit rate, and data distribution. The results demonstrate that the proposed coordination model and framework meet the requirements of high performance and fault tolerance.
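
    The two ingredients of the proposed model can be sketched directly (hypothetical code, not the thesis's framework): a consistent-hash ring maps each tuple to a primary device, and the next distinct successors on the ring hold its replicas, so a departing device remaps only the tuples on its own arc of the key space.

        # Sketch of the model's two ingredients (hypothetical code, not the
        # thesis's framework): a consistent-hash ring with k-successor replicas.
        import bisect, hashlib

        def h(key: str) -> int:
            return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

        class Ring:
            def __init__(self, devices, vnodes=50, replicas=2):
                self.replicas = replicas
                self.points = sorted((h(f"{d}#{i}"), d)
                                     for d in devices for i in range(vnodes))
                self.keys = [p for p, _ in self.points]

            def owners(self, tuple_key: str):
                """Primary device plus distinct ring successors holding replicas."""
                idx, found = bisect.bisect(self.keys, h(tuple_key)), []
                for off in range(len(self.points)):
                    d = self.points[(idx + off) % len(self.points)][1]
                    if d not in found:
                        found.append(d)
                    if len(found) == self.replicas:
                        break
                return found

        ring = Ring(["phone-a", "phone-b", "sensor-c"])
        print(ring.owners("temperature:kitchen"))   # primary + one replica device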