Memory-Based High-Level Synthesis Optimizations Security Exploration on the Power Side-Channel
High-level synthesis (HLS) allows hardware designers to think algorithmically and not worry about low-level, cycle-by-cycle details. This provides the ability to quickly explore the architectural design space and the tradeoffs between resource utilization and performance. Unfortunately, security evaluation is not a standard part of the HLS design flow. In this article, we aim to understand the effects of memory-based HLS optimizations on power side-channel leakage. We use Xilinx Vivado HLS to develop different cryptographic cores, implement them on a Spartan-6 FPGA, and collect power traces. We evaluate the designs with respect to resource utilization, performance, and information leakage through power consumption. We have two important observations and contributions. First, the choice of resource optimization directive results in different levels of side-channel vulnerability. Second, the partitioning optimization directive can greatly compromise the hardware cryptographic system through power side-channel leakage due to the deployment of memory control logic. We describe an evaluation procedure for power side-channel leakage and use it to make best-effort recommendations about how to design more secure architectures in the cryptographic domain.
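To make the contrast concrete, the sketch below shows a hypothetical table-lookup kernel of the kind an HLS-generated cipher core might contain, annotated with the two classes of Vivado HLS directives the abstract discusses. The names (sbox, sub_byte) and the specific pragma choices are illustrative assumptions, not taken from the paper.

```cpp
// Hedged sketch: a hypothetical S-box lookup under two Vivado HLS directive
// choices. Neither pragma is active as written; they mark the alternatives.
#include <cstdint>

static const uint8_t sbox[256] = {0x63, 0x7c /* ...remaining entries... */};

uint8_t sub_byte(uint8_t in) {
    // Resource directive: keep the table in a single block RAM.
    // #pragma HLS RESOURCE variable=sbox core=ROM_1P_BRAM

    // Partitioning directive: split the table into four banks. The extra
    // banking multiplexers and address decoding are the kind of memory
    // control logic the abstract links to increased side-channel leakage.
    // #pragma HLS ARRAY_PARTITION variable=sbox cyclic factor=4 dim=1

    return sbox[in];
}
```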
Maintenance of Automated Test Suites in Industry: An Empirical Study on Visual GUI Testing
Context: Verification and validation (V&V) activities make up 20 to 50 percent of the total development cost of a software system in practice. Test automation is proposed to lower these V&V costs, but the available research provides only limited empirical data from industrial practice about the maintenance costs of automated tests and the factors that affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing.
Objective: This paper addresses this lack of knowledge through an analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice.
Method: An empirical study at two companies, Siemens and Saab, is reported, where interviews about, and empirical work with, Visual GUI Testing were performed to acquire data about the technique's maintenance costs and feasibility.
Results: Thirteen factors that affect maintenance are observed, e.g., tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance, but also that frequent maintenance is less costly than infrequent, big-bang maintenance. In addition, a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing.
Conclusions: Test automation can lower the overall software development costs of a project while also having positive effects on software quality. However, maintenance costs can still be considerable, and the less time a company currently spends on manual testing, the more time is required before a positive economic ROI is reached after automation.
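As a rough illustration of the shape such a cost model can take (a generic break-even formulation, assumed here rather than taken from the paper), let $C_d$ be the cost of developing the automated scripts, $C_a$ the cost per test round of running and maintaining them, and $C_m$ the cost per manual test round:

```latex
% Generic break-even formulation (an assumption, not the paper's model):
% automation reaches positive ROI after n rounds once its cumulative cost
% drops below the cumulative manual cost.
C_d + n\,C_a < n\,C_m
\quad\Longrightarrow\quad
n_{\mathrm{ROI}} = \left\lceil \frac{C_d}{C_m - C_a} \right\rceil,
\qquad C_m > C_a.
```

In this form, a small $C_m$ (little current manual testing) shrinks the denominator and pushes the break-even point out, consistent with the abstract's conclusion.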
Designing Software Architectures As a Composition of Specializations of Knowledge Domains
This paper summarizes our experimental research and software development activities in designing robust, adaptable and reusable software architectures. Several years ago, based on our previous experiences in object-oriented software development, we made the following assumption: 'A software architecture should be a composition of specializations of knowledge domains'. To verify this assumption we carried out three pilot projects. In addition to applying some popular domain analysis techniques such as use cases, we identified the invariant compositional structures of the software architectures and the related knowledge domains. Knowledge domains define the boundaries of the adaptability and reusability capabilities of software systems. Next, knowledge domains were mapped to object-oriented concepts. We found that some aspects of knowledge could not be directly modeled in terms of object-oriented concepts. In this paper we describe our approach, the pilot projects, the problems we encountered, and the solutions we adopted for realizing the software architectures. We conclude the paper with the lessons that we learned from this experience.
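One way to picture the mapping from knowledge domains to object-oriented concepts is sketched below; the domain and class names are hypothetical illustrations, not taken from the pilot projects.

```cpp
// Hedged sketch: a knowledge domain rendered as an abstract interface, a
// specialization of that domain as a concrete subclass, and the architecture
// as a composition of such specializations. All names are hypothetical.
#include <iostream>
#include <memory>

struct SchedulingDomain {                 // a knowledge domain
    virtual ~SchedulingDomain() = default;
    virtual void schedule() = 0;
};

struct PriorityScheduling : SchedulingDomain {   // a specialization
    void schedule() override { std::cout << "priority-based scheduling\n"; }
};

struct PersistenceDomain {                // a second, independent domain
    virtual ~PersistenceDomain() = default;
    virtual void store() = 0;
};

struct FilePersistence : PersistenceDomain {
    void store() override { std::cout << "file-based storage\n"; }
};

// The architecture composes specializations of the two domains; swapping a
// specialization adapts the system within the boundaries of its domains.
struct System {
    std::unique_ptr<SchedulingDomain> scheduler;
    std::unique_ptr<PersistenceDomain> persistence;
};

int main() {
    System s{std::make_unique<PriorityScheduling>(),
             std::make_unique<FilePersistence>()};
    s.scheduler->schedule();
    s.persistence->store();
}
```

Crosscutting knowledge, for example a constraint that spans both domains, has no single home in such a hierarchy, which is one way an object-oriented mapping can fall short.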
Understanding Evolutionary Potential in Virtual CPU Instruction Set Architectures
We investigate fundamental decisions in the design of instruction set architectures for linear genetic programs, which are used both as model systems in evolutionary biology and as underlying solution representations in evolutionary computation. We subjected digital organisms with each tested architecture to seven different computational environments designed to present a range of evolutionary challenges. Our goal was to engineer a general-purpose architecture that would be effective under a broad range of evolutionary conditions. We evaluated six different types of architectural features for the virtual CPUs: (1) genetic flexibility: we allowed digital organisms to more precisely modify the function of genetic instructions; (2) memory: we provided an increased number of registers in the virtual CPUs; and (3) decoupled sensors and actuators: we separated input and output operations to enable greater control over data flow. We also tested a variety of methods to regulate expression: (4) explicit labels that allow programs to dynamically refer to specific genome positions, (5) position-relative search instructions, and (6) multiple new flow control instructions, including conditionals and jumps. Each of these features also adds complication to the instruction set and risks slowing evolution due to epistatic interactions. Two features (multiple argument specification and separated I/O) demonstrated substantial improvements in the majority of test environments. Some of the remaining tested modifications were detrimental, though most exhibited no systematic effects on evolutionary potential, highlighting the robustness of digital evolution. Combined, these observations enhance our understanding of how instruction architecture impacts evolutionary potential, enabling the creation of architectures that support more rapid evolution of complex solutions to a broad range of challenges.
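The flavor of this design space can be seen in a minimal sketch of such a virtual CPU; the instruction names, register count, and semantics below are invented for illustration and are not the paper's actual instruction set.

```cpp
// Hedged sketch: a tiny register-based virtual CPU executing a linear
// genome, illustrating the feature classes the abstract evaluates
// (register file size, conditionals, relative flow control).
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

enum class Op { Inc, Swap, IfLess, JumpBack, Nop };

struct Inst {
    Op op;
    int arg;  // register index or backward jump distance
};

struct VirtualCPU {
    std::array<int64_t, 4> reg{};  // "memory": a larger register file

    // Execute a genome for at most max_steps instructions.
    void run(const std::vector<Inst>& genome, int max_steps) {
        std::size_t ip = 0;
        for (int s = 0; s < max_steps && ip < genome.size(); ++s) {
            const Inst& i = genome[ip];
            switch (i.op) {
                case Op::Inc:
                    ++reg[static_cast<std::size_t>(i.arg) % reg.size()];
                    break;
                case Op::Swap:
                    std::swap(reg[0], reg[1]);
                    break;
                case Op::IfLess:  // conditional: skip next unless reg0 < reg1
                    if (!(reg[0] < reg[1])) ++ip;
                    break;
                case Op::JumpBack:  // position-relative flow control
                    ip -= std::min(ip, static_cast<std::size_t>(i.arg));
                    continue;  // do not advance ip below
                case Op::Nop:
                    break;
            }
            ++ip;
        }
    }
};
```

For instance, the three-instruction genome {Inc 0, IfLess, JumpBack 2} counts reg[0] up to reg[1] and then falls through, a small example of the conditional-plus-jump feature classes (5) and (6).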
Robustness-Driven Resilience Evaluation of Self-Adaptive Software Systems
An increasingly important requirement for certain classes of software-intensive systems is the ability to self-adapt their structure and behavior at run-time when reacting to changes that may occur in the system, its environment, or its goals. A major challenge related to self-adaptive software systems is the ability to provide assurances of their resilience when facing changes. Since in these systems the components that act as controllers of a target system incorporate highly complex software, there is a need to analyze the impact that controller failures might have on the services delivered by the system. In this paper, we present a novel approach for evaluating the resilience of self-adaptive software systems by applying robustness testing techniques to the controller to uncover failures that can affect system resilience. The approach for evaluating resilience, which is based on probabilistic model checking, quantifies the probability of satisfaction of system properties when the target system is subject to controller failures. The feasibility of the proposed approach is evaluated in the context of an industrial middleware system used to monitor and manage highly populated networks of devices, which was implemented using the Rainbow framework for architecture-based self-adaptation.
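The kind of query the probabilistic-model-checking step answers can be written in PCTL; the property below is an illustrative example of the style (an assumption, not one of the paper's actual properties):

```latex
% Illustrative PCTL-style query (assumed for exposition, not from the paper):
% the probability that the system delivers its service again within t time
% steps after a controller failure has been injected.
P_{=?}\left[\, \lozenge^{\le t}\;\mathit{serviceDelivered} \,\right]
```

Robustness testing supplies the fault injections at the controller interface, and the model checker then quantifies properties of this form over the resulting behavior.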
An empirical learning-based validation procedure for simulation workflow
A simulation workflow is a top-level model for the design and control of a simulation process. It connects multiple simulation components, with timing and interaction restrictions, to form a complete simulation system. Before the construction and evaluation of the component models, the validation of the upper-layer simulation workflow is of the utmost importance in a simulation system. However, methods specifically for validating simulation workflows are very limited; many of the existing validation techniques are domain-dependent, with cumbersome questionnaire design and expert scoring. Therefore, this paper presents an empirical learning-based validation procedure that implements a semi-automated evaluation of simulation workflows. First, representative features of general simulation workflows and their relations to validation indices are proposed. The calculation of workflow credibility based on the Analytic Hierarchy Process (AHP) is then introduced. To make full use of historical data and implement more efficient validation, four learning algorithms, back-propagation neural network (BPNN), extreme learning machine (ELM), evolving neo-fuzzy neuron (eNFN), and fast incremental Gaussian mixture network (FIGMN), are introduced for constructing the empirical relation between workflow credibility and workflow features. A case study on a landing-process simulation workflow is conducted to test the feasibility of the proposed procedure. The experimental results also provide a useful overview of how state-of-the-art learning algorithms perform on the credibility evaluation of simulation models.
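For context, the generic AHP machinery behind such a credibility score (the standard formulas only; the paper-specific feature and index hierarchy is not reproduced here) takes an $n \times n$ pairwise-comparison matrix $A$, derives weights from its principal eigenvector, checks consistency, and aggregates the leaf-index scores $s_i$:

```latex
% Standard AHP formulas (generic; the index hierarchy itself is the paper's):
A\,w = \lambda_{\max}\,w, \qquad
CI = \frac{\lambda_{\max} - n}{n - 1}, \qquad
CR = \frac{CI}{RI} \quad (\text{accept if } CR < 0.1),
\qquad
C = \sum_{i=1}^{n} w_i\, s_i
```

Here $w$ is the normalized weight vector, $RI$ the tabulated random consistency index, and $C$ the workflow credibility that the learning algorithms are then trained to predict directly from workflow features.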