    Measuring usability for application software using the quality in use integration measurement model

    User interfaces of application software are designed to make user interaction as efficient and as simple as possible. The market accessibility of any application software is determined by the usability of its user interfaces: a poorly designed user interface has little value no matter how powerful the program is. It is therefore important to measure usability during the system development lifecycle in order to avoid user disappointment. Various methods and standards that help measure usability have been developed. However, these methods define usability inconsistently, which makes software engineers hesitant to adopt them. The Quality in Use Integrated Measurement (QUIM) model is a consolidated approach that hierarchically decomposes usability into 10 factors, 26 criteria, and 127 metrics, which helps developers with little or no background in usability metrics. Among the 127 metrics of QUIM, essential efficiency (EE) is the most specific metric for measuring the usability of user interfaces through an equation. This study presents a comparative analysis of three case studies that use the QUIM model to measure usability in terms of EE: (1) a Public University Registration System, (2) a Restaurant Menu Ordering System, and (3) an ATM system. The comparison is based on the percentage of EE for each use case in each use case diagram. The results reveal that the user interface design of the Restaurant Menu Ordering System scored the highest percentage of EE, making it the most user-friendly application software among the three.
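
    To make the EE metric concrete, the following is a minimal sketch assuming the definition commonly attributed to Constantine and Lockwood, EE = 100 × (essential use-case steps / enacted steps); the function and the step counts below are hypothetical, not taken from the study.

        # Minimal sketch of an essential efficiency (EE) calculation, assuming
        # the common definition EE = 100 * (essential steps / enacted steps).
        # The use-case data below are hypothetical, not from the three case
        # studies compared in the paper.

        def essential_efficiency(essential_steps: int, enacted_steps: int) -> float:
            """Percentage of the user's steps that are essential to the task."""
            if enacted_steps <= 0:
                raise ValueError("enacted_steps must be positive")
            return 100.0 * essential_steps / enacted_steps

        # Hypothetical use case: ordering a dish needs 4 essential steps, but
        # the concrete user interface makes the user perform 6 steps.
        print(f"EE = {essential_efficiency(4, 6):.1f}%")  # EE = 66.7%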

    Obvious: a meta-toolkit to encapsulate information visualization toolkits. One toolkit to bind them all

    This article describes “Obvious”: a meta-toolkit that abstracts and encapsulates information visualization toolkits implemented in the Java language. It aims to unify their use and to postpone the choice of concrete toolkit(s) until later in the development of visual analytics applications. We also report on the lessons learned while wrapping popular toolkits with Obvious, namely Prefuse, the InfoVis Toolkit, parts of Improvise, JUNG, and other data management libraries. We show several examples of the use of Obvious and of how the different toolkits can be combined, for instance by sharing their data models. We also show how Weka and RapidMiner, two popular machine-learning toolkits, have been wrapped with Obvious and can be used directly with all the other wrapped toolkits. We expect Obvious to start a co-evolution process: Obvious is meant to evolve as more components of information visualization systems become consensual. It is also designed to help information visualization systems adhere to best practices, providing a higher level of interoperability and leveraging the domain of visual analytics.
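
    As an illustration of the encapsulation idea, the sketch below (hypothetical Python, not the actual Obvious API, which is Java; none of the class or method names are taken from it) shows how a toolkit-neutral table interface plus one adapter per wrapped toolkit lets backends share a data model.

        # Hypothetical sketch of the meta-toolkit idea: a toolkit-neutral
        # table interface plus one adapter per wrapped toolkit, so that data
        # loaded through one backend can be handed to any other.

        from abc import ABC, abstractmethod

        class Table(ABC):
            """Toolkit-neutral tabular data model shared by all backends."""
            @abstractmethod
            def rows(self):
                ...

        class PrefuseTableAdapter(Table):
            """Wraps a (here simulated) Prefuse-style table object."""
            def __init__(self, backing_rows):
                self._rows = backing_rows
            def rows(self):
                return iter(self._rows)

        class JungGraphAdapter(Table):
            """Wraps a (here simulated) JUNG-style edge list as a table."""
            def __init__(self, edges):
                self._edges = edges
            def rows(self):
                return iter(self._edges)

        def count_rows(table: Table) -> int:
            """Any algorithm written against Table works with every adapter."""
            return sum(1 for _ in table.rows())

        print(count_rows(PrefuseTableAdapter([("a", 1), ("b", 2)])))  # 2
        print(count_rows(JungGraphAdapter([("a", "b")])))             # 1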

    Feasibility study of an Integrated Program for Aerospace-vehicle Design (IPAD) system. Volume 6: Implementation schedule, development costs, operational costs, benefit assessment, impact on company organization, spin-off assessment, phase 1, tasks 3 to 8

    A baseline implementation plan, including alternative implementation approaches for critical software elements and variants to the plan, was developed. The basic philosophy aimed at: (1) a progressive release of capability for three major computing systems, (2) an end product that was a working tool, (3) participation by industry, government agencies, and universities, and (4) emphasis on the development of critical elements of the IPAD framework software. The results of these tasks indicate an IPAD first-release capability 45 months after go-ahead, a five-year total implementation schedule, and a total development cost of 2027 man-months and 1074 computer hours. Several areas of operational cost increase were identified, mainly due to the additional equipment needed and additional computer overhead. The benefits of an IPAD system relate mainly to potential savings in engineering man-hours, reduction of design-cycle calendar time, and indirect upgrading of product quality and performance.

    An Intelligent Data Mining System to Detect Health Care Fraud

    The chapter begins with an overview of the types of healthcare fraud. Next, there is a brief discussion of issues with current fraud detection approaches. The chapter then develops information-technology-based approaches and illustrates how these technologies can improve current practice. Finally, there is a summary of the major findings and their implications for healthcare practice.

    Flattening an object algebra to provide performance

    Algebraic transformation and optimization techniques have been the method of choice in relational query execution, but applying them in object-oriented (OO) DBMSs is difficult due to the complexity of OO query languages. This paper demonstrates that the problem can be simplified by mapping an OO data model to the binary relational model implemented by Monet, a state-of-the-art database kernel. We present a generic mapping scheme to flatten data models and study the case of a straightforward OO model. We show how flattening enabled us to implement a query algebra using only a very limited set of simple operations. The required primitives and query execution strategies are discussed, and their performance is evaluated on the 1-GByte TPC-D benchmark (the Transaction Processing Performance Council's Benchmark D), showing that our divide-and-conquer approach yields excellent results.
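
    The flattening step can be pictured in a few lines: under a binary relational model such as Monet's, each object attribute is stored as its own table of (oid, value) pairs, and queries become selections and joins over these narrow tables. The example below is a hypothetical illustration, not Monet's actual interface.

        # Minimal sketch of vertical decomposition into binary relations:
        # every object attribute becomes its own (oid, value) table, as in a
        # binary relational model. The data and helper are hypothetical.

        persons = [
            {"oid": 1, "name": "Ada",  "age": 36},
            {"oid": 2, "name": "Alan", "age": 41},
        ]

        def flatten(objects, attribute):
            """Binary table: (oid, value) pairs for a single attribute."""
            return [(o["oid"], o[attribute]) for o in objects]

        name_bat = flatten(persons, "name")   # [(1, 'Ada'), (2, 'Alan')]
        age_bat  = flatten(persons, "age")    # [(1, 36), (2, 41)]

        # A query such as "names of persons older than 40" becomes a select
        # on one binary table joined on oid with another:
        old_oids = {oid for oid, age in age_bat if age > 40}
        print([name for oid, name in name_bat if oid in old_oids])  # ['Alan']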

    Tupleware: Redefining Modern Analytics

    There is a fundamental discrepancy between the targeted and actual users of current analytics frameworks. Most systems are designed for the data and infrastructure of the Googles and Facebooks of the world: petabytes of data distributed across large cloud deployments consisting of thousands of cheap commodity machines. Yet the vast majority of users operate clusters ranging from a few to a few dozen nodes, analyze relatively small datasets of up to a few terabytes, and perform primarily compute-intensive operations. Targeting these users fundamentally changes the way we should build analytics systems. This paper describes the design of Tupleware, a new system specifically aimed at the challenges faced by the typical user. Tupleware's architecture brings together ideas from the database, compiler, and programming languages communities to create a powerful end-to-end solution for data analysis. We propose novel techniques that consider the data, computations, and hardware together to achieve maximum performance on a case-by-case basis. Our experimental evaluation quantifies the impact of our novel techniques and shows orders-of-magnitude performance improvement over alternative systems.
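
    One ingredient of such a database-plus-compiler design, fusing a whole pipeline into a single loop instead of materializing each operator's output, can be sketched as follows. This is a generic illustration of operator fusion, not Tupleware's actual code generator.

        # Hypothetical sketch of operator fusion: instead of materializing the
        # output of each operator (map, filter, aggregate), a single fused pass
        # does all the work, which is what a pipeline compiler would emit.

        data = [1.5, 3.0, 0.5, 4.0, 2.5]

        # Interpreted, operator-at-a-time style: two intermediate lists.
        squared  = [x * x for x in data]
        filtered = [x for x in squared if x > 4.0]
        total    = sum(filtered)

        # Fused, compiled style: one loop, no intermediate results.
        fused_total = 0.0
        for x in data:
            y = x * x
            if y > 4.0:
                fused_total += y

        assert total == fused_total  # both styles compute the same answer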