The Performance Cost of Security
Historically, performance has been the most important consideration when optimizing computer hardware. Modern processors are so highly optimized that every cycle of computation time matters. However, this practice of optimizing for performance at all costs has been called into question by new microarchitectural attacks, e.g., Meltdown and Spectre. Microarchitectural attacks exploit the effects of microarchitectural components or optimizations in order to leak data to an attacker. These attacks have caused processor manufacturers to introduce performance-impacting mitigations in both software and silicon.
To investigate the performance impact of the various mitigations, a test suite of forty-seven different tests was created. This suite was run on a series of virtual machines running both Ubuntu 16 and Ubuntu 18. These tests investigated the performance change across version updates, as well as the relative impact of CPU core count versus the default microarchitectural mitigations. The testing showed that the performance impact of the microarchitectural mitigations is non-trivial: the percent difference in performance can be as high as 200%.
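The thesis suite itself is not reproduced here, but a minimal sketch of the kind of measurement involved is shown below, assuming a Linux guest: it reports the kernel's Meltdown mitigation status from sysfs and times a syscall-heavy loop, the class of workload that page-table-isolation mitigations slow down most. The sysfs path is standard on recent Linux kernels; the loop count is an illustrative choice, not the thesis's methodology.

/* Sketch: report mitigation status, then time user/kernel round trips. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    char line[256];
    FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/meltdown", "r");
    if (f) {
        if (fgets(line, sizeof line, f))
            printf("meltdown: %s", line);   /* e.g. "Mitigation: PTI" */
        fclose(f);
    }

    /* Each raw getpid syscall crosses the user/kernel boundary, so
     * KPTI-style mitigations show up directly in this timing. */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < 1000000L; i++)
        (void)syscall(SYS_getpid);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per syscall\n", ns / 1e6);
    return 0;
}

Comparing the per-syscall figure with mitigations enabled (the default) against a boot with mitigations=off gives a rough view of the overhead the thesis quantifies systematically.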
Adaptive On-the-Fly Changes in Distributed Processing Pipelines
Distributed data processing systems have become the standard means for big data analytics. These systems are based on processing pipelines in which operations on data are performed in a chain of consecutive steps. Normally, the operations performed by these pipelines are set at design time, and any changes to their functionality require the applications to be restarted. This is not always acceptable, for example, when we cannot afford downtime or when a long-running calculation would lose significant progress. The introduction of variation points to distributed processing pipelines allows for on-the-fly updating of individual analysis steps. In this paper, we extend such basic variation point functionality to provide fully automated reconfiguration of the processing steps within a running pipeline through an automated planner. We have enabled pipeline modeling through constraints. Based on these constraints, we not only ensure that configurations are type-compatible but also verify that the expected pipeline functionality is achieved. Furthermore, automating the reconfiguration process simplifies its use, in turn allowing users with less development experience to make changes. The system can automatically generate and validate pipeline configurations that achieve a specified goal, selecting from operation definitions available at planning time. It then automatically integrates these configurations into the running pipeline. We verify the system through the testing of a proof-of-concept implementation. The proof of concept also shows promising results when reconfiguration is performed frequently.
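As a minimal illustration of the variation-point idea (a sketch only, not the paper's implementation), the code below models a pipeline as a table of uniformly typed stages: replacing a table entry reconfigures the running pipeline without a restart, and the shared int-to-int signature stands in for the type-compatibility check a planner would perform. All names are hypothetical.

/* Sketch: a pipeline of swappable stages behind a common signature. */
#include <stdio.h>

typedef int (*stage_fn)(int);

static int scale(int x)  { return x * 2; }
static int shift(int x)  { return x + 1; }
static int square(int x) { return x * x; }

static stage_fn stages[] = { scale, shift };   /* the live pipeline */
static const int n_stages = 2;

static int run_pipeline(int x) {
    for (int i = 0; i < n_stages; i++)
        x = stages[i](x);
    return x;
}

int main(void) {
    printf("before: %d\n", run_pipeline(3));   /* (3*2)+1 = 7 */

    /* "On-the-fly" update: a planner that has validated compatibility
     * (here, the int -> int type) installs the new step between records. */
    stages[1] = square;
    printf("after:  %d\n", run_pipeline(3));   /* (3*2)^2 = 36 */
    return 0;
}

In the paper's setting the same swap happens inside a distributed, running job and is chosen by an automated planner rather than by hand.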
Real-Time Lossless Compression of SoC Trace Data
With the increasing complexity of System-on-Chip (SoC) designs, traditional debugging approaches are no longer sufficient for multi-core architectures, and hardware tracing becomes necessary for performance analysis. The problem is that the trace data collected through hardware-based tracing techniques is usually extremely large. Hence, on-chip trace compression performed in hardware is needed to reduce the amount of transferred or stored data. In this dissertation, the feasibility of implementing different types of lossless data compression algorithms in hardware is investigated. The lossless compression algorithm LZ77 is selected, analyzed, and optimized for Nexus trace data. To meet the hardware cost and compression performance requirements of real-time compression, an optimized LZ77 compression algorithm is proposed based on the characteristics of Nexus trace data. This thesis presents a hardware implementation of the LZ77 encoder described in the Very High Speed Integrated Circuit Hardware Description Language (VHDL). Test results demonstrate that the compression speed can reach 16 bits per clock cycle with an average compression ratio of 1.35 in the minimal-hardware-cost case, an effective trade-off between hardware cost and compression performance.
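For readers unfamiliar with LZ77, a toy software encoder is sketched below; the window size, match limits, and greedy search are illustrative defaults, not the parameters the thesis tunes for Nexus trace data, and the thesis's actual contribution is the VHDL hardware design rather than this software form.

/* Sketch: greedy byte-oriented LZ77 emitting (offset,length) or literals. */
#include <stdio.h>
#include <string.h>

#define WINDOW 255   /* how far back we search */
#define MAXLEN 15    /* longest match we emit  */

static void lz77_encode(const unsigned char *in, int n) {
    int i = 0;
    while (i < n) {
        int best_len = 0, best_off = 0;
        int start = i > WINDOW ? i - WINDOW : 0;
        for (int j = start; j < i; j++) {          /* search the window */
            int len = 0;
            while (len < MAXLEN && i + len < n && in[j + len] == in[i + len])
                len++;
            if (len > best_len) { best_len = len; best_off = i - j; }
        }
        if (best_len >= 3) {                       /* back-reference */
            printf("(%d,%d)", best_off, best_len);
            i += best_len;
        } else {                                   /* literal byte */
            printf("'%c'", in[i]);
            i++;
        }
    }
    putchar('\n');
}

int main(void) {
    const char *s = "abcabcabcabd";
    lz77_encode((const unsigned char *)s, (int)strlen(s));
    /* prints: 'a''b''c'(3,8)'d' */
    return 0;
}

A hardware encoder replaces the inner search loop with parallel comparators over the window, which is what makes a fixed throughput such as 16 bits per clock cycle achievable.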
Run-Time Analysis and Security of Multi-Language Systems
The contemporary software development landscape has witnessed widespread integration of diverse programming languages, leveraging the specific advantages of each, such as the efficiency of C and the programmability of Python. This trend finds notable applications in prominent domains, including the Android operating system and advanced machine learning frameworks like PyTorch. However, adopting this multi-language approach has ushered in a series of great challenges for developers, necessitating robust solutions to potential security vulnerabilities. Traditional techniques such as program analysis and fuzzing, initially designed for single-language software, face limitations in effectively uncovering vulnerabilities in multi-language systems. Program analysis grapples with the intricate control and data flows that cross language boundaries, often resulting in incomplete vulnerability detection. Conversely, greybox fuzzing encounters difficulties adapting to the nuances of various languages, leading to incomplete code coverage and complications in reproducing identified vulnerabilities. The intricacies within the runtime systems supporting multilingual software exacerbate the problem, as these systems are often themselves constructed using multiple languages. This complexity adds a further layer of difficulty for conventional security techniques, emphasizing the need for more adaptive and comprehensive approaches tailored to the unique challenges posed by the multifaceted nature of multi-language systems. Within the scope of my dissertation, I tackled the challenges posed by vulnerabilities in multi-language software through a comprehensive, multifaceted approach. This entailed conducting extensive empirical investigations into vulnerability susceptibility, facilitating the development of dynamic cross-language information flow analysis. Recognizing the pivotal significance of comprehensive test input coverage, I devised an integrated greybox fuzzing methodology that combines sensitivity analysis with whole-system coverage measurement, significantly enhancing the efficiency of the fuzzing process and of vulnerability identification. Furthermore, I focused on fortifying runtime security by proposing a novel two-level collaborative fuzzing framework tailored explicitly to the Python language runtime. This contribution reinforces the software system's foundational safeguards, ensuring a robust defense mechanism against potential security threats.
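The dissertation's frameworks are not reproduced here, but the core greybox loop they build on can be sketched in a few lines, assuming a toy target with a hand-instrumented coverage bitmap; real systems replace both with compiler instrumentation and, in the multi-language case, coverage maps that span language boundaries.

/* Sketch: mutate an input, keep it whenever it reaches new coverage. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static uint64_t coverage;                 /* one bit per branch (toy) */
#define COVER(id) (coverage |= 1ULL << (id))

static void target(const uint8_t *buf, size_t n) {
    COVER(0);
    if (n > 0 && buf[0] == 'F') { COVER(1);
        if (n > 1 && buf[1] == 'U') { COVER(2);
            if (n > 2 && buf[2] == 'Z') { COVER(3); /* "bug" site */ }
        }
    }
}

int main(void) {
    uint8_t cur[3] = "aaa";               /* seed input */
    uint64_t seen = 0;
    srand(1);
    for (int iter = 0; iter < 100000; iter++) {
        uint8_t buf[3];
        memcpy(buf, cur, sizeof buf);
        buf[rand() % 3] = (uint8_t)rand();      /* mutate one byte */
        coverage = 0;
        target(buf, sizeof buf);
        if (coverage & ~seen) {                 /* new coverage: keep it */
            seen |= coverage;
            memcpy(cur, buf, sizeof buf);
            printf("iter %d: coverage now %llx\n",
                   iter, (unsigned long long)seen);
        }
    }
    return 0;
}

The feedback loop is what makes the fuzzing "greybox": coverage that stops at a language boundary, as the abstract notes, blinds this loop to everything on the other side.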
Neurofly 2008 abstracts: the 12th European Drosophila neurobiology conference, 6-10 September 2008, Wuerzburg, Germany
This volume consists of a collection of conference abstracts.
Inline and Sideline Approaches for Low-cost Memory Safety in C
System languages such as C and C++ are widely used for their high performance; however, allowing arbitrary pointer arithmetic and type casts introduces a risk of memory corruption. These memory errors cause unexpected program termination or, even worse, give attackers a way to alter the behavior of programs or leak crucial data.
Despite advances in memory safety solutions, high and unpredictable overhead remains a major challenge. Accepting that it is extremely difficult to achieve complete memory safety at a performance level suitable for production deployment, researchers attempt to strike a balance between performance, detection coverage, interoperability, precision, and detection timing. Some properties are particularly desirable, e.g., interoperability with pre-compiled libraries. Less critical properties are sacrificed for performance, for example by tolerating longer detection delays, or by narrowing detection coverage through approximate or probabilistic checking or by detecting only certain classes of errors. Modern solutions compete on performance.
The performance of memory safety solutions is assessed by two major criteria: run-time overhead and memory overhead. Researchers trade off and balance these metrics depending on a solution's purpose or placement. Many solutions tolerate increased memory use in exchange for better speed, since memory safety enforcement is most desirable for troubleshooting or testing during development, where memory resources are not the main concern. Run-time overhead, considered the more critical of the two, is affected by cache misses, dynamic instruction counts, DRAM row activations, branch prediction, and other factors.
This research proposes, implements, and evaluates MIU: Memory Integrity Utilities, containing three solutions – MemPatrol, FRAMER, and spaceMiu. MIU introduces new techniques for the practical deployment of memory safety that exploit free resources, with the following focuses: (1) achieving memory safety with overhead below 1% by using concurrency and trading off prompt detection and coverage, while still providing eventual detection through a monitor-isolation design based on an in-register monitor process and the use of AES instructions; (2) achieving complete memory safety with near-zero false negatives, focusing on eliminating the overhead that hardware support cannot remove, by using a new tagged-pointer representation utilising the top unused bits of a pointer.
Research Foundation of Korea
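As an illustration of the tagged-pointer idea, the sketch below assumes an x86-64 layout where user-space addresses occupy the low 48 bits, leaving the top 16 bits free to carry metadata; the tag layout and helper names are hypothetical, not spaceMiu's actual encoding.

/* Sketch: pack a metadata tag into the unused top bits of a pointer. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG_SHIFT 48
#define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)

static void *tag_pointer(void *p, uint16_t tag) {
    return (void *)(((uintptr_t)p & ADDR_MASK) | ((uintptr_t)tag << TAG_SHIFT));
}

static uint16_t pointer_tag(void *p) {
    return (uint16_t)((uintptr_t)p >> TAG_SHIFT);
}

static void *strip_tag(void *p) {
    /* Low-half (user-space) addresses need no sign extension here. */
    return (void *)((uintptr_t)p & ADDR_MASK);
}

int main(void) {
    int *obj = malloc(sizeof *obj);
    int *tagged = tag_pointer(obj, 0x2A);   /* 0x2A: arbitrary demo tag */

    /* A real checker would compare the tag against per-object metadata
     * before every dereference; here we only show the round trip. */
    printf("tag = 0x%X\n", pointer_tag(tagged));
    *(int *)strip_tag(tagged) = 7;          /* strip before dereferencing */
    free(obj);
    return 0;
}

Because the tag travels inside the pointer itself, no shadow-memory lookup is needed to recover it, which is the source of the low overhead the abstract targets.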
Building the tree of life: Reconstructing the evolution of a recent and megadiverse branch (Calyptrates: Diptera)
Ph.D. (Doctor of Philosophy)