
    Perspicuity and Granularity in Refinement

    This paper reconsiders refinements which introduce actions on the concrete level that were not present at the abstract level. It draws a distinction between concrete actions which are "perspicuous" at the abstract level, and changes of granularity of actions between different levels of abstraction. The main contribution of this paper lies in exploring the relation between these different methods of "action refinement" and the basic refinement relation that is used. In particular, it shows how the "refining skip" method is incompatible with failures-based refinement relations, and that, consequently, some decisions in the design of Event-B refinement are entangled. (Comment: In Proceedings Refine 2011, arXiv:1106.348)
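
    To make the distinction concrete, here are standard CSP-style definitions of the two kinds of refinement relation at issue (a sketch for orientation only, assuming the usual traces/failures semantics; the paper's own formalisation may differ):

        % Trace refinement: every concrete trace is already an abstract trace.
        A \sqsubseteq_{\mathrm{tr}} C \iff \mathrm{traces}(C) \subseteq \mathrm{traces}(A)

        % Failures refinement also constrains refusals, where
        % (s, X) \in \mathrm{failures}(P) iff P can refuse every event in X
        % after performing trace s.
        A \sqsubseteq_{\mathrm{f}} C \iff \mathrm{failures}(C) \subseteq \mathrm{failures}(A)

    Roughly, a new concrete action treated as "refining skip" adds no new abstract traces once it is abstracted away, so trace-based refinement can tolerate it; but a concrete state that refuses every abstract event while waiting to perform the new action introduces a failure with no abstract counterpart, which is the kind of incompatibility the paper analyses.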

    Validation of Coding Schemes and Coding Workbench

    This report presents the methodology and results of the validation of the MATE best practice coding schemes and the MATE workbench. The validation phase covered the period from September 1999 to February 2000, and involved project partners as well as Advisory Panel members who kindly volunteered to act as external evaluators. The first part of the report focuses on the evaluation of the theoretical work in MATE, while the second part concentrates on the workbench. In both cases, a questionnaire was used as the core tool for obtaining feedback from evaluators. A major problem was the short time available for evaluation, which meant that less feedback than originally expected could be obtained. Evaluation of MATE results will continue after the end of the project.

    Study on heuristic usability evaluation for mobile applications

    Usability guidelines are a useful tool for developers to improve interaction with systems. They draw on knowledge from the different disciplines related to usability and provide solutions and best practices for achieving usability objectives. Heuristic evaluation is one of the most widely used methods for evaluating user interfaces. The objective of this study is to enrich the heuristic evaluation process with design guidelines, focusing it on the evaluation of applications for mobile devices, and to generate a homogeneous classification of guideline content, so that the solutions and good practices provided by the guidelines can be incorporated from the design and development process onwards. To achieve these objectives, this work provides a method for generating heuristics for mobile applications, with which four applications were evaluated, and a web tool that gives access to the content of the guidelines through the homogeneous classification. The results showed the ease and utility of performing heuristic evaluations using a set of heuristics focused on mobile applications.
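
    The abstract leaves the scoring of evaluation results implicit; a common convention, shown here purely as an invented illustration (not the study's own procedure), is to have several evaluators rate each violated heuristic on Nielsen's 0-4 severity scale and rank heuristics by mean severity:

        from collections import defaultdict
        from statistics import mean

        # (evaluator, heuristic, severity on Nielsen's 0-4 scale) -- invented data
        findings = [
            ("eval1", "visibility of system status", 3),
            ("eval2", "visibility of system status", 2),
            ("eval1", "finger-friendly touch targets", 4),
            ("eval3", "finger-friendly touch targets", 3),
        ]

        # Group severity ratings by heuristic across all evaluators.
        by_heuristic = defaultdict(list)
        for _, heuristic, severity in findings:
            by_heuristic[heuristic].append(severity)

        # Rank heuristics by mean severity, worst first.
        for heuristic, scores in sorted(by_heuristic.items(), key=lambda kv: -mean(kv[1])):
            print(f"{mean(scores):.1f}  {heuristic}  ({len(scores)} reports)")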

    A framework for the analysis and evaluation of enterprise models

    Bibliography: leaves 264-288. The purpose of this study is the development and validation of a comprehensive framework for the analysis and evaluation of enterprise models. The study starts with an extensive literature review of modelling concepts and an overview of the various reference disciplines concerned with enterprise modelling. This overview is more extensive than usual in order to accommodate readers from different backgrounds. The proposed framework is based on the distinction between the syntactic, semantic and pragmatic model aspects, and is populated with evaluation criteria drawn from an extensive literature survey. In order to operationalize and empirically validate the framework, an exhaustive survey of enterprise models was conducted. From this survey, an XML database of more than twenty relatively large, publicly available enterprise models was constructed. A strong emphasis was placed on the interdisciplinary nature of this database, and models were drawn from ontology research, linguistics and analysis patterns, as well as the traditional fields of data modelling, data warehousing and enterprise systems. The resultant database forms the test bed for the detailed framework-based analysis, and its public availability should constitute a useful contribution to the modelling research community. The bulk of the research is dedicated to implementing and validating specific analysis techniques to quantify the various model evaluation criteria of the framework. The aim for each analysis technique is that it can, where possible, be automated and generalised to other modelling domains. The syntactic measures and analysis techniques originate largely from the disciplines of systems engineering, graph theory and computer science. Various metrics to measure model hierarchy, architecture and complexity are tested and discussed. It is found that many are not particularly useful or valid for enterprise models; hence some new measures are proposed to assist with model visualization, and an original "model signature" consisting of three key metrics is proposed. Perhaps the most significant contribution of the research lies in the development and validation of a significant number of semantic analysis techniques, drawing heavily on current developments in lexicography, linguistics and ontology research. Some novel and interesting techniques are proposed to measure, inter alia, domain coverage, model genericity, quality of documentation, perspicuity and model similarity. Model similarity in particular is explored in depth, by means of various similarity and clustering algorithms as well as ways to visualize the similarity between models. Finally, a number of pragmatic analysis techniques are applied to the models. These include face validity, degree of use, authority of the model author, availability, cost, flexibility, adaptability, model currency, maturity and degree of support. This analysis relies mostly on searching for and ranking certain specific information details, often involving a degree of subjective interpretation, although more specific quantitative procedures are suggested for some of the criteria. To aid future researchers, a separate chapter lists some promising analysis techniques that were investigated but found to be problematic from a methodological perspective.
    More interestingly, this chapter also presents a strong conceptual case for how the proposed framework and the analysis techniques associated with its various criteria can be applied to many other information systems research areas. The case is argued on the grounds of the underlying isomorphism between the various research areas, and illustrated by suggesting the application of the framework to evaluate web sites, algorithms, software applications, programming languages, system development methodologies and user interfaces.
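
    As a toy instance of a model-similarity measure of the kind surveyed here (the thesis's techniques are richer; the names and data below are invented), two models' concept vocabularies can be compared by cosine similarity over term vectors:

        from collections import Counter
        import math

        def cosine_similarity(terms_a, terms_b):
            """Cosine similarity between bag-of-words term vectors."""
            va = Counter(t.lower() for t in terms_a)
            vb = Counter(t.lower() for t in terms_b)
            dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
            norm = math.sqrt(sum(c * c for c in va.values())) * \
                   math.sqrt(sum(c * c for c in vb.values()))
            return dot / norm if norm else 0.0

        # Two invented enterprise models, described by their entity names.
        model_a = ["Customer", "Order", "Invoice", "Product", "Shipment"]
        model_b = ["Client", "Order", "Invoice", "Product", "Warehouse"]
        print(f"{cosine_similarity(model_a, model_b):.2f}")  # -> 0.60

    Pairwise scores of this kind are the natural input to the clustering and similarity-visualization steps the abstract mentions.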

    Understanding, Explaining, and Deriving Refinement

    Much of what drove us in over twenty years of research in refinement, starting with Z in particular, was the desire to understand where refinement rules came from. The relational model of refinement provided a solid starting point which allowed the derivation of Z refinement rules. Not only did this explain and verify the existing rules - more importantly, it also allowed alternative derivations for different and generalised notions of refinement. In this chapter, we briefly describe the context of our early efforts in this area and Susan Stepney's role in this, before moving on to the motivation and exploration of a recently developed primitive model of refinement: concrete state machines with anonymous transitions

    A Lifecycle for User Experience Management in Agile Development

    Context. Agile methods are increasingly being used by companies to develop digital products and services faster and more effectively. Today's users demand not only products that are easy to use, but also products with a good User Experience (UX). Agile methods by themselves do not directly support the development of products with a good user experience; in combination with UX activities, however, a good UX can potentially be achieved. Objective. The objective of this PhD thesis is to develop a UX Lifecycle for managing the user experience in the context of Agile methods. With this UX Lifecycle, Agile teams can manage the UX of their product in a targeted way. Method. We developed the UX Lifecycle step by step, following the Design Science Research Methodology. First, we conducted a Structured Literature Review (SLR) to determine the state of the art of UX management; the SLR concluded in a gap analysis. On this basis, we derived requirements for UX management, which were then implemented in the UX Lifecycle. In developing the UX Lifecycle, we devised additional methods (UX Poker, the UEQ KPI, and Importance-Performance Analysis (IPA)) to be used when deploying it. Each of these methods was validated in studies with a total of 497 respondents from three countries (Germany, England, and Spain). Finally, we validated the UX Lifecycle as a whole with a Delphi study involving 24 international experts from four countries (Germany, Argentina, Spain, and Poland). Results. The iterative UX Lifecycle consists of five steps: an initial Step 0 ‘Preparation’, Step 1 ‘UX Poker’ (before development/estimated UX), Step 2 ‘Evaluate Prototype’ (during development/probable UX), Step 3 ‘Evaluate Product Increment’ (after development/implemented UX), and a subsequent Step 4 ‘UX Retrospective’. With these five steps, the UX Lifecycle provides the structure for continuously measuring and evaluating the UX in the various phases, making it possible to develop the UX in a targeted manner and to check it permanently. With the UX Poker method, the user experience can be estimated by the Agile team in the early phases of development. The evaluation study of UX Poker indicated that it can be used to estimate the UX of user stories; in addition, UX Poker sparks a discussion about UX that results in a common understanding of the UX of the product. To interpret the results from the evaluation of a prototype or product increment, we developed or derived the User Experience Questionnaire KPI and the Importance-Performance Analysis. In a first study, we applied the two methods successfully and, in combination with established UEQ methods, derived recommendations for action regarding the improvement of the UX; this would not have been possible without them. The Delphi study validating the UX Lifecycle reached consensus after two rounds, and its ratings and comments lead to the conclusion that the UX Lifecycle has a sufficiently positive effect on UX management. Conclusion. The goal-oriented focus on UX factors and their improvement, as propagated in the UX Lifecycle, is a good way of implementing UX management in a targeted manner. By comparing the results from UX Poker, the evaluation of the prototype, and the product increment, the Agile team can learn more about developing a better UX within a UX retrospective.
    The use of individual components of the UX Lifecycle, such as UX Poker or the Importance-Performance Analysis, already helps an Agile team to improve the user experience. But only in combination with the UX Lifecycle and the individual methods and approaches presented in this PhD thesis is targeted management of the user experience possible, in our view. This was the initial idea of this PhD thesis, and we are convinced that we have implemented it.
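
    As a toy illustration of the Importance-Performance Analysis mentioned above, using the classic mean-centred quadrants (the factor names echo common UX questionnaires, but all numbers are invented and the thesis derives its own variant):

        from statistics import mean

        # Hypothetical UX factors with (importance, performance) on a 1-7 scale.
        factors = {
            "efficiency":    (6.2, 5.8),
            "dependability": (5.9, 4.1),
            "novelty":       (3.8, 5.2),
            "stimulation":   (4.0, 3.5),
        }

        imp_mean = mean(i for i, _ in factors.values())
        perf_mean = mean(p for _, p in factors.values())

        def quadrant(importance, performance):
            """Classic IPA quadrants, centred on the grand means."""
            if importance >= imp_mean:
                return "concentrate here" if performance < perf_mean else "keep up the good work"
            return "low priority" if performance < perf_mean else "possible overkill"

        for name, (i, p) in factors.items():
            print(f"{name:14s} -> {quadrant(i, p)}")

    Factors landing in the "concentrate here" quadrant (important but underperforming) are the natural candidates for recommendations for action.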

    Approximate model composition for explanation generation

    This thesis presents a framework for the formulation of knowledge models to support the generation of explanations for engineering systems that are represented by the resulting models. Such models are automatically assembled from instantiated generic component descriptions, known as model fragments. The model fragments are of sufficient detail to satisfy, in general, the requirements of information content identified by the user asking for explanations. Through a combination of fuzzy-logic-based evidence preparation, which exploits the history of prior user preferences, and an approximate reasoning inference engine with a Bayesian evidence propagation mechanism, different uncertainty sources can be handled. Model fragments, each representing structural or behavioural aspects of a component of the domain system of interest, are organised in a library. Those fragments that represent the same domain system component, albeit with different representation detail, form parts of the same assumption class in the library. Selected fragments are assembled to form an overall system model, prior to extraction of any textual information upon which to base the explanations. The thesis proposes and examines the techniques that support the fragment selection mechanism and the assembly of these fragments into models. In particular, a Bayesian network-based model fragment selection mechanism is described that forms the core of the work. The network structure is determined manually prior to any inference, based on schematic information regarding the connectivity of the components present in the domain system under consideration. The elicitation of network probabilities, on the other hand, is completely automated using probability elicitation heuristics. These heuristics aim to provide the information required to select fragments which are maximally compatible with the given evidence of the fragments preferred by the user. Given such initial evidence, an existing evidence propagation algorithm is employed. The preparation of the evidence for the selection of certain fragments, based on user preference, is performed by a fuzzy reasoning evidence fabrication engine. This engine uses a set of fuzzy rules and standard fuzzy reasoning mechanisms, attempting to guess the information needs of the user and suggesting the selection of fragments of sufficient detail to satisfy those needs. Once the evidence is propagated, a single fragment is selected for each of the domain system components and hence the final model of the entire system is constructed. Finally, a highly configurable XML-based mechanism is employed to extract explanation content from the newly formulated model and to structure the explanatory sentences for the final explanation that will be communicated to the user. The framework is illustratively applied to a number of domain systems and is compared qualitatively to existing compositional modelling methodologies. A further empirical assessment of the performance of the evidence propagation algorithm is carried out to determine its performance limits. Performance is measured against the number of fragments that represent each of the components of a large domain system, and the amount of connectivity permitted in the Bayesian network between the nodes that stand for the selection or rejection of these fragments. Based on this assessment, recommendations are made as to how the framework may be optimised to cope with real-world applications.
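
    A minimal sketch of the selection idea at the core of this framework, reduced to a single assumption class and plain Bayes' rule (the thesis uses a full Bayesian network with evidence propagation; all names and numbers here are invented):

        def posterior(prior, likelihood):
            """Bayes' rule: prior maps fragment -> P(f); likelihood maps fragment -> P(evidence | f)."""
            unnorm = {f: prior[f] * likelihood[f] for f in prior}
            z = sum(unnorm.values())
            return {f: p / z for f, p in unnorm.items()}

        # Assumption class for a hypothetical 'pump' component: three
        # alternative fragments at different levels of representation detail.
        prior = {"pump_abstract": 0.5, "pump_behavioural": 0.3, "pump_detailed": 0.2}

        # Evidence prepared from user preferences (here: the fuzzy step has
        # suggested the user wants high detail) as per-fragment likelihoods.
        likelihood = {"pump_abstract": 0.1, "pump_behavioural": 0.4, "pump_detailed": 0.9}

        post = posterior(prior, likelihood)
        selected = max(post, key=post.get)  # one fragment per component
        print(post, "->", selected)         # pump_detailed wins here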

    Parallel Hierarchies: Interactive Visualization of Multidimensional Hierarchical Aggregates

    Exploring multi-dimensional hierarchical data is a long-standing problem present in a wide range of fields such as bioinformatics, software systems, social sciences and business intelligence. While each hierarchical dimension within these data structures can be explored in isolation, critical information lies in the relationships between dimensions. Existing approaches can either simultaneously visualize multiple non-hierarchical dimensions, or only one or two hierarchical ones; the challenge of visualizing multi-dimensional hierarchical data remains open. To address this problem, we developed a novel data visualization approach -- Parallel Hierarchies -- which we demonstrate on a real-life SAP SE product called SAP Product Lifecycle Costing. The starting point of the research is a thorough customer-driven requirements engineering phase, including an iterative design process. To avoid restricting ourselves to a domain-specific solution, we abstract the data and tasks gathered from users and demonstrate the generality of the approach by applying Parallel Hierarchies to datasets from bioinformatics and the social sciences. Moreover, we report on a qualitative user study conducted in an industrial scenario with 15 experts from 9 different companies. As a result of this co-innovation experience, several SAP customers requested a product feature based on our solution, and the integration of Parallel Hierarchies as a standard diagram type into the SAP Analytics Cloud platform is in progress. This thesis further introduces different uncertainty representation methods applicable to Parallel Hierarchies and, more generally, to flow diagrams. We also present a visual comparison taxonomy for time series of hierarchically structured data with one or multiple dimensions, and propose several visual solutions for comparing hierarchies employing flow diagrams. Finally, after presenting two application examples of Parallel Hierarchies on industrial datasets, we detail two validation methods to examine the effectiveness of the visualization solution.
    In particular, we introduce a novel design validation table to assess the perceptual aspects of eight different visualization solutions, including Parallel Hierarchies.
    Table of contents:
    1 Introduction: 1.1 Motivation and Problem Statement; 1.2 Research Goals; 1.3 Outline and Contributions
    2 Foundations of Visualization: 2.1 Information Visualization (2.1.1 Terms and Definition; 2.1.2 What: Data Structures; 2.1.3 Why: Visualization Tasks; 2.1.4 How: Visualization Techniques; 2.1.5 How: Interaction Techniques); 2.2 Visual Perception (2.2.1 Visual Variables; 2.2.2 Attributes of Preattentive and Attentive Processing; 2.2.3 Gestalt Principles); 2.3 Flow Diagrams (2.3.1 Classifications of Flow Diagrams; 2.3.2 Main Visual Features); 2.4 Summary
    3 Related Work: 3.1 Cross-tabulating Hierarchical Categories (3.1.1 Visualizing Categorical Aggregates of Item Sets; 3.1.2 Hierarchical Visualization of Categorical Aggregates; 3.1.3 Visualizing Item Sets and Their Hierarchical Properties; 3.1.4 Hierarchical Visualization of Categorical Set Aggregates); 3.2 Uncertainty Visualization (3.2.1 Uncertainty Taxonomies; 3.2.2 Uncertainty in Flow Diagrams); 3.3 Time-Series Data Visualization (3.3.1 Time & Data; 3.3.2 User Tasks; 3.3.3 Visual Representation); 3.4 Summary
    4 Requirement Engineering Phase: 4.1 Introduction; 4.2 Environment (4.2.1 The Product; 4.2.2 The Customers and Development Methodology; 4.2.3 Lessons Learned); 4.3 Visualization Requirements for Product Costing (4.3.1 Current Visualization Practice; 4.3.2 Visualization Tasks; 4.3.3 Data Structure and Size; 4.3.4 Early Visualization Prototypes; 4.3.5 Challenges and Lessons Learned); 4.4 Data and Task Abstraction (4.4.1 Data Abstraction; 4.4.2 Task Abstraction); 4.5 Summary and Outlook
    5 Parallel Hierarchies: 5.1 Introduction; 5.2 The Parallel Hierarchies Technique (5.2.1 The Individual Axis: Showing Hierarchical Categories; 5.2.2 Two Interlinked Axes: Showing Pairwise Frequencies; 5.2.3 Multiple Linked Axes: Propagating Frequencies; 5.2.4 Fine-tuning Parallel Hierarchies through Reordering); 5.3 Design Choices; 5.4 Applying Parallel Hierarchies (5.4.1 US Census Data; 5.4.2 Yeast Gene Ontology Annotations); 5.5 Evaluation (5.5.1 Setup of the Evaluation; 5.5.2 Procedure of the Evaluation; 5.5.3 Results from the Evaluation; 5.5.4 Validity of the Evaluation); 5.6 Summary and Outlook
    6 Visualizing Uncertainty in Flow Diagrams: 6.1 Introduction; 6.2 Uncertainty in Product Costing (6.2.1 Background; 6.2.2 Main Causes of Bad Quality in Costing Data); 6.3 Visualization Concepts; 6.4 Uncertainty Visualization using Ribbons (6.4.1 Selected Visualization Techniques; 6.4.2 Study Design and Procedure; 6.4.3 Results; 6.4.4 Discussion); 6.5 Revised Visualization Approach using Ribbons (6.5.1 Application to Sankey Diagram; 6.5.2 Application to Parallel Sets; 6.5.3 Application to Parallel Hierarchies); 6.6 Uncertainty Visualization using Nodes (6.6.1 Visual Design of Nodes; 6.6.2 Expert Evaluation); 6.7 Summary and Outlook
    7 Visual Comparison Task: 7.1 Introduction; 7.2 Comparing Two One-dimensional Time Steps (7.2.1 Problem Statement; 7.2.2 Visualization Design); 7.3 Comparing Two N-dimensional Time Steps; 7.4 Comparing Several One-dimensional Time Steps; 7.5 Summary and Outlook
    8 Parallel Hierarchies in Practice: 8.1 Application to Plausibility Check Task (8.1.1 Plausibility Check Process; 8.1.2 Visual Exploration of Machine Learning Results); 8.2 Integration into SAP Analytics Cloud (8.2.1 SAP Analytics Cloud; 8.2.2 Ocean to Table Project); 8.3 Summary and Outlook
    9 Validation: 9.1 Introduction; 9.2 Nested Model Validation Approach; 9.3 Perceptual Validation of Visualization Techniques (9.3.1 Design Validation Table; 9.3.2 Discussion); 9.4 Summary and Outlook
    10 Conclusion and Outlook: 10.1 Summary of Findings; 10.2 Discussion; 10.3 Outlook
    Appendices: A Questionnaires of the Evaluation; B Survey of the Quality of Product Costing Data; C Questionnaire of Current Practice; Bibliography
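
    Returning to the technique itself: the ribbon widths in a Parallel Hierarchies view are pairwise frequencies between the categories of two hierarchical dimensions, aggregated at the currently drilled-down level of each axis (cf. Section 5.2.2 above). A toy sketch of that aggregation, with invented data rather than SAP code:

        from collections import Counter

        # Items: (hierarchy path in dimension A, hierarchy path in dimension B, value).
        items = [
            (("Europe", "Germany"), ("Hardware", "Sensors"), 4),
            (("Europe", "France"),  ("Hardware", "Sensors"), 2),
            (("Europe", "Germany"), ("Software",), 5),
            (("Asia", "Japan"),     ("Hardware", "Actuators"), 3),
        ]

        def pairwise_frequencies(items, level_a, level_b):
            """Aggregate item values by category prefixes at the chosen drill level per axis."""
            ribbons = Counter()
            for path_a, path_b, value in items:
                ribbons[(path_a[:level_a], path_b[:level_b])] += value
            return ribbons

        # Both axes drilled to their top level: one ribbon per category pair.
        for (a, b), width in pairwise_frequencies(items, 1, 1).items():
            print(f"{'/'.join(a)} -> {'/'.join(b)}: {width}")

    Drilling an axis deeper simply increases its prefix length, splitting each ribbon into the frequencies of the child categories.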

    A methodology for hardware-software codesign

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (pages 150-156). Special purpose hardware is vital to embedded systems, as it can simultaneously improve performance while reducing power consumption. The integration of special purpose hardware into applications running in software is difficult for a number of reasons. Some of the difficulty is due to the difference between the models used to program hardware and software, but great effort is also required to coordinate the simultaneous execution of the application running on the microprocessor with the accelerated kernel(s) running in hardware. To further compound the problem, current design methodologies for embedded applications require an early determination of the design partitioning, which allows hardware and software to be developed simultaneously, each adhering to a rigid interface contract. This approach is problematic because often a good hardware-software decomposition is not known until deep into the design process. Fixed interfaces and the burden of reimplementation prevent the migration of functionality motivated by repartitioning. This thesis presents a two-part solution to the integration of special purpose hardware into applications running in software. The first part addresses the problem of generating infrastructure for hardware-accelerated applications. We present a methodology in which the application is represented as a dataflow graph and the computation at each node is specified for execution either in software or as specialized hardware, using the programmer's language of choice. An interface compiler has been implemented which takes as input the FIFO edges of the graph and generates code to connect all the different parts of the program, including those which communicate across the hardware/software boundary. This methodology, which we demonstrate on an FPGA platform, enables programmers to effectively exploit hardware acceleration without ever leaving the application space. The second part of this thesis presents an implementation of the Bluespec Codesign Language (BCL) to address the difficulty of experimenting with hardware/software partitioning alternatives. Based on guarded atomic actions, BCL can be used to specify both hardware and low-level software. BCL builds on Bluespec SystemVerilog (BSV), for which a hardware compiler from Bluespec Inc. is commercially available, and has been augmented with extensions to support more efficient software generation. In BCL, the programmer specifies the entire design, including the partitioning, allowing the compiler to synthesize efficient software and hardware, along with transactors for communication between the partitions. The benefit of using a single language to express the entire design is that a programmer can easily experiment with many different hardware/software decompositions without needing to rewrite the application code. Used together, the BCL and interface compilers represent a comprehensive solution to the task of integrating specialized hardware into an application. By Myron King. Ph.D.
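
    A toy rendering of the structure described in the first part (hypothetical names, not the thesis's actual toolchain): the application is a dataflow graph whose nodes are tagged for hardware or software execution, every edge is a FIFO, and the edges that cross the hw/sw boundary are exactly where interface code must be generated:

        from collections import deque

        class Node:
            def __init__(self, name, target):  # target is "hw" or "sw"
                self.name, self.target = name, target

        class Fifo:
            def __init__(self, src, dst):
                self.src, self.dst, self.queue = src, dst, deque()
            def crosses_boundary(self):
                # An edge whose endpoints run on different targets needs
                # generated hw/sw communication code.
                return self.src.target != self.dst.target

        producer = Node("capture", "sw")
        kernel   = Node("fft",     "hw")   # the accelerated kernel
        consumer = Node("render",  "sw")

        for edge in (Fifo(producer, kernel), Fifo(kernel, consumer)):
            kind = "hw/sw transactor" if edge.crosses_boundary() else "plain FIFO"
            print(f"{edge.src.name} -> {edge.dst.name}: generate {kind}")

    Repartitioning then amounts to retagging a node's target: the set of boundary-crossing edges changes, and the interface code is regenerated rather than rewritten by hand.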