308 research outputs found

    Towards a Trustworthy Thin Terminal for Securing Enterprise Networks

    Organizations have many employees who lack the technical knowledge to securely operate their machines. These users may open malicious email attachments or links, or install unverified software such as P2P programs. These actions introduce significant risk to an organization's network, since they allow attackers to exploit the trust and access given to a client machine. However, system administrators currently lack the control of client machines needed to prevent these security risks. A possible solution to this issue lies in attestation. In computer science, attestation is the ability of a machine to prove its current state. Client machines can use this capability to remotely attest to their state, which other machines in the network can then use when making trust decisions. Previous research in this area has focused on the use of a static root of trust (RoT), requiring a chain of trust over the entire software stack. We argue this approach is limited in feasibility because it requires an understanding and evaluation of all the previous states of a machine. With the use of late launch, a dynamic root of trust introduced in the Trusted Platform Module (TPM) v1.2 specification, the required chain of trust is drastically shortened, minimizing the previous states of a machine that must be evaluated. This reduced chain of trust may allow a dynamic RoT to address the limitations of a static RoT. We are implementing a client terminal service that utilizes late launch to attest to its execution. Further, the minimal functional requirements of the service facilitate strong software verification. The goal in designing this service is not to increase the security of the network, but rather to push the functionality, and therefore the security risks and responsibilities, of client machines to the network's servers. In doing so, we create a platform that can more easily be administered by those individuals best equipped to do so, with the expectation that this will lead to better security practices. Through the use of late launch and remote attestation in our terminal service, system administrators have a strong guarantee that the clients connecting to their system are secure, and can therefore focus their efforts on securing the server architecture. This effectively addresses our motivating problem, as it forces user actions to occur under the control of system administrators.

    Monitoring Of Remote Hydrocarbon Wells Using Azure Internet Of Things

    Remote monitoring of hydrocarbon wells is a tedious and meticulously thought-out task performed to create a cyber-physical bridge between the asset and the owner. Many systems and techniques on the market offer this capability, but due to their lack of interoperability and/or decentralized architecture, they begin to fall apart as remote assets become farther away from the client. This results in extreme latency and thus poor decision making. Microsoft's Azure IoT Edge was the focus of this work. Coupled with off-the-shelf hardware, Azure's IoT Edge services were integrated with an existing unit simulating a remote hydrocarbon well. This combination successfully established a semi-autonomous IIoT Edge device that can monitor, process, store, and transfer data locally on the remote device itself. These capabilities were delivered through an edge computing architecture that drastically reduced infrastructure and pushed intelligence and responsibility to the source of the data. This application of Azure IoT Edge laid a foundation from which a plethora of solutions can be built, enhancing the intelligence capability of this asset. This study demonstrates edge computing's ability to mitigate latency loops, reduce network stress, and handle intermittent connectivity. Further experimentation and analysis will have to be performed at a larger scale to determine whether the resources implemented will suffice for production-level operations.

    Developing Application in Anticipating DDoS Attacks on Server Computer Machines

    Server machines in companies, primarily web hosting servers, are highly exposed to threats, especially external security threats such as infiltration attempts, hacking, viruses, and other malicious attacks. A secure server is indispensable for working online, especially when business-related network transactions are involved. Keeping a server safe from threats requires protecting the server machine's security on both the hardware and software sides and paying attention to the network traffic that reaches the server machine. Generally, firewall applications on router devices have configuration limitations in securing the network, namely that they are not integrated applications. In other words, a complete, well-managed firewall configuration is necessary to anticipate Distributed Denial of Service (DDoS) attacks. Therefore, this study aims to integrate existing firewall applications for router devices into an integrated program to secure the network. The methodology used is the Network Development Life Cycle (NDLC). The research results show that the developed application program can overcome DDoS attacks without setting up a firewall on the router device and can automatically monitor DDoS attack activities from outside the server. Securing servers from DDoS attacks without setting up a firewall on the router device, and automating the monitoring of DDoS attack activity from outside the server, are the novelties of this study that have not been available in previous studies.

    Qduino: a cyber-physical programming platform for multicore Systems-on-Chip

    Emerging multicore Systems-on-Chip are enabling new cyber-physical applications such as autonomous drones, driverless cars, and smart manufacturing using web-connected 3D printers. Common to these applications is a communicating task pipeline, which acquires and processes sensor data and produces outputs that control actuators. As a result, these applications usually have timing requirements for both individual tasks and the task pipelines formed for sensor data processing and actuation. Current cyber-physical programming platforms, such as Arduino and embedded Linux with the POSIX interface, do not allow application developers to specify these timing requirements. Moreover, none of them provide a programming interface to schedule tasks and map them to processor cores, while managing I/O in a predictable manner, on multicore hardware platforms. Hence, this thesis presents the Qduino programming platform. Qduino adopts the simplicity of the Arduino API, with additional support for real-time multithreaded sketches on multicore architectures. Qduino allows application developers to specify timing properties of individual tasks as well as task pipelines at the design stage. To this end, we propose a mathematical framework to derive each task's budget and period from the specified end-to-end timing requirements. The second part of the thesis is motivated by the observation that at the center of these pipelines are tasks that typically require complex software support, such as sensor data fusion or image processing algorithms. These features usually represent many man-years of engineering effort and are thus commonly found on General-Purpose Operating Systems (GPOS). Therefore, in order to support modern, intelligent cyber-physical applications, we enhance the Qduino platform's extensibility by taking advantage of the Quest-V virtualized partitioning kernel. The platform's usability is demonstrated by building a novel web-connected 3D printer and a prototypical autonomous drone framework in Qduino.
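The idea of deriving per-task parameters from an end-to-end requirement can be illustrated with a simplified sketch. This is not Qduino's actual mathematical framework, only an assumed naive scheme: split the end-to-end deadline evenly into per-stage periods and size each stage's CPU budget from its worst-case execution time (WCET).

```python
# Illustrative only -- NOT Qduino's actual derivation. A pipeline of n
# tasks with an end-to-end deadline E is given equal per-stage periods
# E/n, and each task's budget is set to its WCET.

def derive_timing(end_to_end_ms, wcets_ms):
    """Return (period, budget) pairs, in ms, for each pipeline stage."""
    n = len(wcets_ms)
    period = end_to_end_ms / n          # each stage gets an equal slice
    for wcet in wcets_ms:
        if wcet > period:               # stage cannot finish in its slice
            raise ValueError("stage WCET exceeds its derived period")
    return [(period, wcet) for wcet in wcets_ms]

# A 3-stage sense -> process -> actuate pipeline with a 30 ms
# end-to-end deadline and hypothetical WCETs of 2, 6, and 1 ms:
params = derive_timing(30.0, [2.0, 6.0, 1.0])
```

A real framework would account for scheduling interference and pipeline latency across stage boundaries; the even split merely shows the shape of the derivation.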

    TrustZone based attestation in secure runtime verification for embedded systems

    Get PDF
    Dissertação de mestrado integrado em Engenharia InformáticaARM TrustZone é um “Ambiente de Execução Confiável” disponibilizado em processadores da ARM, que equipam grande parte dos sistemas embebidos. Este mecanismo permite assegurar que componentes críticos de uma aplicação executem num ambiente que garante a confidencialidade dos dados e integridade do código, mesmo que componentes maliciosos estejam instalados no mesmo dispositivo. Neste projecto pretende-se tirar partido do TrustZone no contexto de uma framework segura de monitorização em tempo real de sistemas embebidos. Especificamente, pretende-se recorrer a components como o ARM Trusted Firmware, responsável pelo processo de secure boot em sistemas ARM, para desenvolver um mecanismo de atestação que providencie garantias de computação segura a entidades remotas.ARM TrustZone is a security extension present on ARM processors that enables the development of hardware based Trusted Execution Environments (TEEs). This mechanism allows the critical components of an application to execute in an environment that guarantees data confidentiality and code integrity, even when a malicious agent is installed on the device. This projects aims to harness TrustZone in the context of a secure runtime verification framework for embedded devices. Specifically, it aims to harness existing components, namely ARM Trusted Firmware, responsible for the secure boot process of ARM devices, to implement an attestation mechanism that provides proof of secure computation to remote parties.This work has been partially supported by the Portuguese Foundation for Science and Technology (FCT), project REASSURE (PTDC/EEI-COM/28550/2017), co-financed by the European Regional Development Fund (FEDER), through the North Regional Operational Program (NORTE 2020)

    Design and analysis of digital communication within an SoC-based control system for trapped-ion quantum computing

    Electronic control systems used for quantum computing have become increasingly complex as multiple qubit technologies employ larger numbers of qubits with higher fidelity targets. Whereas the control systems for different technologies share some similarities, parameters like pulse duration, throughput, real-time feedback, and latency requirements vary widely depending on the qubit type. In this paper, we evaluate the performance of modern System-on-Chip (SoC) architectures in meeting the control demands associated with performing quantum gates on trapped-ion qubits, particularly focusing on communication within the SoC. A principal focus of this paper is the data transfer latency and throughput of several high-speed on-chip mechanisms on Xilinx multi-processor SoCs, including those that utilize direct memory access (DMA). They are measured and evaluated to determine an upper bound on the time required to reconfigure a gate parameter. Worst-case and average-case bandwidth requirements for a custom gate sequencer core are compared with the experimental results. The lowest-variability, highest-throughput data-transfer mechanism is DMA between the real-time processing unit (RPU) and the PL, where bandwidths up to 19.2 GB/s are possible. For context, this enables reconfiguration of qubit gates in less than 2 µs, comparable to the fastest gate time. Though this paper focuses on trapped-ion control systems, the gate abstraction scheme and measured communication rates are applicable to a broad range of quantum computing technologies.
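A back-of-the-envelope check shows how the measured 19.2 GB/s DMA throughput relates to the sub-2 µs reconfiguration figure. The payload size below is an assumed example, not a number from the paper, and setup latency is ignored.

```python
# Illustrative arithmetic: time to move a gate-parameter block over a
# DMA channel at a given sustained throughput. Payload size here is a
# hypothetical example; real reconfiguration also pays setup latency.

def transfer_time_us(payload_bytes, bandwidth_gb_s):
    """Pure transfer time in microseconds at the given throughput."""
    return payload_bytes / (bandwidth_gb_s * 1e9) * 1e6

# A hypothetical 16 KiB parameter block at the measured 19.2 GB/s
# takes well under 1 us of pure transfer time, leaving headroom
# inside a 2 us reconfiguration budget.
t = transfer_time_us(16 * 1024, 19.2)
```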

    MSRR: Leveraging dynamic measurement for establishing trust in remote attestation

    Measurers are critical to a remote attestation (RA) system for verifying the integrity of a remote untrusted host. Runtime measurers in a dynamic RA system sample the dynamic program state of the host to form evidence used by a remote appraisal system to establish trust. However, existing runtime measurers are tightly integrated with specific software. Such measurers need to be generated anew for each piece of software, a manual process that is both challenging and tedious. In this paper we present a novel approach to decouple application-specific measurement policies from the measurers tasked with performing the actual runtime measurement. We describe the MSRR (MeaSeReR) Measurement Suite, a system of tools designed with the primary goal of reducing the high degree of manual effort required to produce measurement solutions on a per-application basis. The MSRR suite prototypes a novel general-purpose measurement system, the MSRR Measurement System, that is agnostic of the target application. Furthermore, we describe a robust high-level measurement policy language, MSRR-PL, that can be used to write per-application policies for the MSRR Measurer. Finally, we provide a tool to automatically generate MSRR-PL policies for target applications by leveraging state-of-the-art static analysis tools. In this work, we show how the MSRR suite can be used to significantly reduce the time and effort spent on designing measurers anew for each application. We describe MSRR's robust querying language, which allows the appraisal system to accurately specify what, when, and how to measure. We describe the capabilities and the limitations of our measurement policy generation tool. We evaluate MSRR's overhead and demonstrate its functionality through real-world case studies. We show that MSRR has an acceptable overhead on a host of applications with various measurement workloads.

    Assessing performance overhead of Virtual Machine Introspection and its suitability for malware analysis

    Virtual Machine Introspection (VMI) is the process of introspecting a guest VM's memory and reconstructing the state of the guest operating system. Due to its isolation, stealth, and full visibility of the monitored target, VMI lends itself well to security monitoring and malware analysis. The topics covered in this thesis include operating system and hypervisor concepts, the semantic gap issue, VMI techniques and implementations, applying VMI for malware analysis, and analysis of the performance overhead. The behaviour and magnitude of the performance overhead associated with virtual machine introspection is analysed with five different empirical test cases. The intention of the tests is to estimate the costs of a single trapped event, determine the feasibility of various monitoring sensors from a usability and stealth perspective, and analyse the behaviour of the performance overhead. Various VMI-based tools were considered for the measurements, but DRAKVUF was chosen as it is the most advanced tool available. The test cases proceed as follows. The chosen load is first executed without any monitoring to determine the baseline execution time. Then a DRAKVUF monitoring plugin is turned on and the load is executed again. After both measurements have been made, the difference between the two execution times is the time spent executing monitoring code. The execution overhead is then determined by dividing this difference by the baseline execution time. The disk consumption and execution overhead of a sensor that captures removed files is small enough for it to be deployed as a monitoring solution. The performance overhead of the system call monitoring sensor is dependent on the number of issued system calls: loads that issue large numbers of system calls cause high performance overhead. The performance overhead of such loads can be limited by monitoring a subset of all system calls.
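The overhead calculation described above can be sketched directly. This is only the arithmetic harness; running the load and toggling the DRAKVUF plugin happen outside this snippet.

```python
# Sketch of the described measurement: the load runs once unmonitored
# (baseline) and once with a DRAKVUF plugin enabled (monitored). The
# difference is the time spent in monitoring code, and the relative
# execution overhead is that difference divided by the baseline.

def execution_overhead(baseline_s, monitored_s):
    """Relative execution overhead of monitoring, as a fraction."""
    monitoring_time_s = monitored_s - baseline_s
    return monitoring_time_s / baseline_s

# A load taking 40 s unmonitored and 50 s under system call
# monitoring has a 25% execution overhead:
overhead = execution_overhead(40.0, 50.0)   # -> 0.25
```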