Memory-Based High-Level Synthesis Optimizations Security Exploration on the Power Side-Channel
High-level synthesis (HLS) allows hardware designers to think algorithmically and not worry about low-level, cycle-by-cycle details. This provides the ability to quickly explore the architectural design space and tradeoffs between resource utilization and performance. Unfortunately, security evaluation is not a standard part of the HLS design flow. In this article, we aim to understand the effects of memory-based HLS optimizations on power side-channel leakage. We use Xilinx Vivado HLS to develop different cryptographic cores, implement them on a Spartan-6 FPGA, and collect power traces. We evaluate the designs with respect to resource utilization, performance, and information leakage through power consumption. We have two important observations and contributions. First, the choice of resource optimization directive results in different levels of side-channel vulnerabilities. Second, the partitioning optimization directive can greatly compromise the hardware cryptographic system through power side-channel leakage due to the deployment of memory control logic. We describe an evaluation procedure for power side-channel leakage and use it to make best-effort recommendations about how to design more secure architectures in the cryptographic domain.
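A common way to evaluate power traces for side-channel leakage, as the abstract above describes, is a per-sample Welch's t-test (TVLA-style) comparing traces from fixed versus random inputs. The sketch below is illustrative only and is not the evaluation procedure from the paper; the 4.5 threshold is the conventional TVLA significance level.

```python
import numpy as np

def welch_t_leakage(traces_fixed, traces_random, threshold=4.5):
    """Welch's t-test per sample point (TVLA-style leakage check).

    traces_*: 2D arrays of shape (n_traces, n_samples) of power
    measurements. A |t| value above ~4.5 is commonly taken as
    evidence of leakage at that sample point.
    """
    m1, m2 = traces_fixed.mean(axis=0), traces_random.mean(axis=0)
    v1, v2 = traces_fixed.var(axis=0, ddof=1), traces_random.var(axis=0, ddof=1)
    n1, n2 = len(traces_fixed), len(traces_random)
    t = (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)
    return np.abs(t) > threshold  # boolean mask of leaky sample points

# Toy usage with synthetic noise traces: no real leakage present.
rng = np.random.default_rng(0)
fixed = rng.normal(0.0, 1.0, size=(1000, 64))
rand = rng.normal(0.0, 1.0, size=(1000, 64))
leaky = welch_t_leakage(fixed, rand)
```

With purely random synthetic traces, essentially no sample points should cross the threshold; a real evaluation would use measured traces from the FPGA.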
You can't always sketch what you want: Understanding Sensemaking in Visual Query Systems
Visual query systems (VQSs) empower users to interactively search for line
charts with desired visual patterns, typically specified using intuitive
sketch-based interfaces. Despite decades of past work on VQSs, these efforts
have not translated to adoption in practice, possibly because VQSs are largely
evaluated in unrealistic lab-based settings. To remedy this gap in adoption, we
collaborated with experts from three diverse domains---astronomy, genetics, and
material science---via a year-long user-centered design process to develop a
VQS that supports their workflow and analytical needs, and evaluate how VQSs
can be used in practice. Our study results reveal that ad-hoc sketch-only
querying is not as commonly used as prior work suggests, since analysts are
often unable to precisely express their patterns of interest. In addition, we
characterize three essential sensemaking processes supported by our enhanced
VQS. We discover that participants employ all three processes, but in different
proportions, depending on the analytical needs in each domain. Our findings
suggest that all three sensemaking processes must be integrated in order to
make future VQSs useful for a wide range of analytical inquiries.
Comment: Accepted for presentation at IEEE VAST 2019, to be held October 20-25 in Vancouver, Canada. The paper will also be published in a special issue of IEEE Transactions on Visualization and Computer Graphics (TVCG).
A framework for selecting workflow tools in the context of composite information systems
When an organization faces the need of integrating some workflow-related activities in its information system, it becomes necessary to have at hand some well-defined informational model to be used as a framework for determining the selection criteria onto which the requirements of the organization can be mapped. Some proposals exist that provide such a framework, notably the WfMC reference model, but they are designed to be applicable when workflow tools are selected independently from other software, and departing from a set of well-known requirements. Often this is not the case: workflow facilities are needed as a part of the procurement of a larger, composite information system, and therefore the general goals of the system have to be analyzed, assigned to its individual components, and further detailed. We propose in this paper the MULTSEC method, which analyzes the initial goals of the system, determines the types of components that form the system architecture, builds quality models for each type, and then maps the goals into detailed requirements which can be measured using quality criteria. We develop in some detail the quality model (compliant with the ISO/IEC 9126-1 quality standard) for the workflow type of tools; we show how the quality model can be used to refine and clarify the requirements in order to guarantee a highly reliable selection result; and we use it to evaluate two particular workflow solutions available on the market (kept anonymous in the paper). We develop our proposal using a particular selection experience we have recently been involved in, namely the procurement of a document management subsystem to be integrated in an academic data management information system for our university.
A Survey on Evaluation Factors for Business Process Management Technology
Estimating the value of business process management (BPM) technology is a difficult task to accomplish. Computerized business processes have a strong impact on an organization, and BPM projects have a long-term cost amortization.
To systematically analyze BPM technology from an economic-driven perspective, we are currently developing an evaluation framework in the EcoPOST project. In order to empirically validate the relevance of assumed evaluation factors (e.g., process knowledge, business process redesign, end user fears, and communication), we have conducted an online survey among 70 BPM experts from more than 50 industrial and academic organizations. This paper summarizes the results of this survey. Our results help both researchers and practitioners to better understand the evaluation factors that determine the value of BPM technology.
A safety analysis approach to clinical workflows: application and evaluation
Clinical workflows are safety-critical workflows, as they have the potential to cause harm or death to patients. Their safety needs to be considered as early as possible in the development process. Effective safety analysis methods are required to ensure the safety of these high-risk workflows, because errors that occur during routine workflow can propagate within the workflow to result in harmful failures of the system's output. This paper shows how to apply an approach for safety analysis of clinical workflows to analyse the safety of the workflow within a radiology department, and evaluates the approach in terms of usability and benefits. The outcomes of using this approach include identification of the root causes of hazardous workflow failures that may put patients' lives at risk. We show that the approach is applicable to this area of healthcare and is able to present added value through detailed information on possible failures, on both their causes and effects; therefore, it has the potential to improve the safety of radiology and other clinical workflows.
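The propagation of errors through a workflow, as described above, can be pictured as reachability over a directed graph of workflow steps. The sketch below is a minimal illustration under that assumption; the step names are hypothetical and are not taken from the paper.

```python
# Hypothetical radiology workflow as a directed graph: step -> next steps.
workflow = {
    "request_scan": ["schedule"],
    "schedule": ["acquire_images"],
    "acquire_images": ["report"],
    "report": [],
}

def propagate_failure(workflow, failed_step):
    """Return every downstream step that a failure at failed_step can reach."""
    affected, stack = set(), [failed_step]
    while stack:
        step = stack.pop()
        for nxt in workflow[step]:
            if nxt not in affected:
                affected.add(nxt)
                stack.append(nxt)
    return affected

# A failure while scheduling can reach image acquisition and the report.
print(sorted(propagate_failure(workflow, "schedule")))  # ['acquire_images', 'report']
```

A real safety analysis would attach failure modes and severities to each step rather than treating propagation as all-or-nothing, but the reachability idea is the same.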
An Assurance Framework for Independent Co-assurance of Safety and Security
Integrated safety and security assurance for complex systems is difficult for
many technical and socio-technical reasons such as mismatched processes,
inadequate information, differing use of language and philosophies, etc. Many
co-assurance techniques rely on disregarding some of these challenges in order
to present a unified methodology. Even with this simplification, no methodology
has been widely adopted primarily because this approach is unrealistic when met
with the complexity of real-world system development.
This paper presents an alternate approach by providing a Safety-Security
Assurance Framework (SSAF) based on a core set of assurance principles. This is
done so that safety and security can be co-assured independently, as opposed to
unified co-assurance which has been shown to have significant drawbacks. This
also allows for separate processes and expertise from practitioners in each
domain. With this structure, the focus is shifted from simplified unification
to integration through exchanging the correct information at the right time
using synchronisation activities.
Anytime Cognition: An information agent for emergency response
Planning under pressure in time-constrained environments while relying on uncertain information is a challenging task. This is particularly true for planning the response during an ongoing disaster in an urban area, whether a natural disaster or a deliberate attack on the civilian population. As the various activities pertaining to the emergency response need to be coordinated in response to multiple reports from the disaster site, a user quickly becomes cognitively overloaded. To address this issue, we designed the Anytime Cognition (ANTICO) concept to assist human users working in time-constrained environments by maintaining a manageable level of cognitive workload over time. Based on the ANTICO concept, we develop an agent framework for proactively managing a user's changing information requirements by integrating information management techniques with probabilistic plan recognition. In this paper, we describe a prototype emergency response application in the context of a subset of the attacks devised by the U.S. Department of Homeland Security.
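Probabilistic plan recognition, mentioned in the abstract above, can be sketched in its simplest form as a Bayesian update over candidate plans given an observed action. The plan and action names below are hypothetical illustrations, not taken from the ANTICO system.

```python
def update_plan_beliefs(prior, likelihoods, observation):
    """One Bayesian update step for probabilistic plan recognition.

    prior: dict mapping plan -> prior probability.
    likelihoods: dict mapping plan -> {action: P(action | plan)}.
    Unseen actions get a small floor probability to avoid zeroing out a plan.
    """
    posterior = {}
    for plan, p in prior.items():
        posterior[plan] = p * likelihoods[plan].get(observation, 1e-6)
    z = sum(posterior.values())
    return {plan: p / z for plan, p in posterior.items()}

# Hypothetical example: two candidate response plans, one observed action.
prior = {"evacuate": 0.5, "contain": 0.5}
likelihoods = {
    "evacuate": {"close_roads": 0.7, "deploy_medics": 0.3},
    "contain": {"close_roads": 0.2, "deploy_medics": 0.8},
}
belief = update_plan_beliefs(prior, likelihoods, "close_roads")
# belief["evacuate"] is now 0.35 / 0.45, i.e. about 0.78
```

Repeating this update as actions arrive sharpens the belief over the user's plan, which an agent can then use to pre-fetch the information that plan will need.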