
    Business intelligence’s self-service tools evaluation

    Selecting software in a large company is not an easy task. In the Business Intelligence area the decision is critical, since the resources needed to implement a tool are substantial and require the participation of actors across the organization. We propose adopting the systemic quality model to perform a neutral comparison of four business intelligence self-service tools. To assess quality, we consider eight characteristics and eighty-two metrics. We built a methodology for evaluating self-service BI tools by adapting the systemic quality model. As an example, we evaluated four tools selected from the full set of business intelligence platforms, following a rigorous procedure. The assessment yielded two tools with the maximum quality level. To distinguish between them, we applied a more restrictive evaluation by increasing the required level of satisfaction. This left a single tool with the maximum quality level, while the other was rejected according to the rules established in the methodology. The methodology works well for this type of software, supporting detailed analysis and neutral selection of the software to be used for implementation.
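
    A minimal sketch of the kind of aggregation such a quality-model evaluation might use is given below. It is not the authors' implementation: the characteristic names, metric scores, aggregation rule, and satisfaction thresholds are illustrative assumptions, chosen only to show how raising the required level of satisfaction can separate two tools that both reach the maximum quality level at a lower threshold.

```python
# Hypothetical sketch (not the paper's implementation): aggregating metric
# scores into per-characteristic satisfaction and a coarse quality level,
# in the spirit of a systemic-quality-model evaluation. All names, weights,
# and thresholds below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Characteristic:
    name: str
    metric_scores: list[float]  # each metric scored 0.0-1.0 for one tool

def characteristic_satisfied(ch: Characteristic, threshold: float) -> bool:
    """A characteristic counts as satisfied when its mean metric score meets the threshold."""
    return sum(ch.metric_scores) / len(ch.metric_scores) >= threshold

def quality_level(characteristics: list[Characteristic], threshold: float) -> str:
    """Map the number of satisfied characteristics to a coarse quality level."""
    satisfied = sum(characteristic_satisfied(c, threshold) for c in characteristics)
    if satisfied == len(characteristics):
        return "maximum"
    if satisfied >= len(characteristics) // 2:
        return "intermediate"
    return "rejected"

# Invented example: both tools reach the maximum level at threshold 0.7;
# being "more restrictive" (threshold 0.8) separates them.
tool_a = [Characteristic("functionality", [0.90, 0.80, 0.95]),
          Characteristic("usability", [0.85, 0.90])]
tool_b = [Characteristic("functionality", [0.80, 0.75, 0.80]),
          Characteristic("usability", [0.70, 0.78])]
for name, tool in [("Tool A", tool_a), ("Tool B", tool_b)]:
    print(name, quality_level(tool, threshold=0.7), quality_level(tool, threshold=0.8))
```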

    The Impact of the User Interface on Simulation Usability and Solution Quality

    This research outlines a study performed to determine the effects of user interface design variations on the usability and solution quality of complex, multivariate discrete-event simulations. Specifically, the study examined four key research questions: what are the user interface considerations for a given simulation model; what are the current best practices in user interface design for simulations; how is usability best evaluated for simulation interfaces; and what are the measured effects of varying levels of interface-element usability on simulation operations such as data entry and solution analysis. The overall goal of the study was to show the benefit of applied usability practices in simulation design, supported by experimental evidence from testing two alternative simulation user interfaces designed with varying usability. The study employed directed research in usability and simulation design to support the design of an experiment that addressed the core problem of interface effects on simulation. In keeping with the goal of demonstrating usability practices, the experimental procedures were analogous to the development processes recommended in the supporting literature for usability-based design lifecycles. Steps included user and task analysis, concept and use modeling, paper prototypes of user interfaces for initial usability assessment, interface development and assessment, and user-based testing of actual interfaces with an actual simulation model. The experimental tests employed two interfaces designed with selected usability variations, each interacting with the same core simulation model. The experimental steps were followed by an analysis of the quantitative and qualitative data gathered, including data entry time, interaction errors, solution quality measures, and user acceptance data. The study resulted in mixed support for the hypotheses that improvements in the usability of simulation interface elements would improve data entry, solution quality, and overall simulation interactions: evidence for data entry was mixed, evidence for solution quality was positive to neutral, and evidence for overall usability was very positive. As a secondary benefit, the study demonstrated the application of usability-based interface design best practices and processes that could provide guidelines for increasing the usability of future discrete-event simulation interface designs. Examination of the study results also provided suggestions for possible future research on the topics investigated.
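
    As an illustrative sketch only (not the study's actual analysis code), a comparison of one quantitative measure between the two interface variants could take the following form. The per-participant data entry times are invented, and the use of an independent-samples t-test is an assumption made purely to show the shape of such an analysis.

```python
# Hypothetical analysis sketch: comparing data entry time between two interface
# variants with an independent-samples t-test. The sample values are invented.

from statistics import mean, stdev
from scipy import stats

# Hypothetical per-participant data entry times (seconds) for each interface.
baseline_interface = [212, 198, 240, 225, 231, 204, 219, 228]
improved_interface = [181, 175, 196, 188, 202, 179, 190, 185]

t_stat, p_value = stats.ttest_ind(baseline_interface, improved_interface)

print(f"baseline: mean={mean(baseline_interface):.1f}s sd={stdev(baseline_interface):.1f}")
print(f"improved: mean={mean(improved_interface):.1f}s sd={stdev(improved_interface):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would suggest a reliable difference
```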