
    Recommended Practices for Spreadsheet Testing

    This paper presents the authors' recommended practices for spreadsheet testing. Documented spreadsheet error rates are unacceptably high in corporations today. Although improvements are needed throughout the systems development life cycle, credible improvement programs must include comprehensive testing. Several forms of testing are possible, but logic inspection is recommended for module testing. Logic inspection appears to be feasible for spreadsheet developers to carry out, and it appears to be safe and effective. Comment: 12 Pages, Extensive References

    Automating Spreadsheet Discovery & Risk Assessment

    Many articles have been published, and many mishaps reported, about the risks of uncontrolled spreadsheets in today's business environment, including non-compliance, operational risk, errors, and fraud, all leading to significant loss events. Spreadsheets fall into the realm of end-user developed applications and often lack the safeguards and controls an IT organization would enforce for enterprise applications. There is also an overall lack of software programming discipline in how spreadsheets are developed. However, before an organization can apply proper controls and discipline to critical spreadsheets, an accurate and living inventory of spreadsheets across the enterprise must be created, and all critical spreadsheets must be identified. As such, this paper proposes an automated approach to the initial stages of the spreadsheet management lifecycle - discovery, inventory and risk assessment. Without the use of technology, these phases are often treated as a one-off project. By leveraging technology, they become a sustainable business process. Comment: 7 Pages, 6 Colour Figures
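
    As a rough illustration of the discovery, inventory and risk-assessment phases the paper automates, the sketch below walks a directory tree, inventories spreadsheet files and assigns each a simple heuristic risk score; the file extensions, thresholds and scoring weights are illustrative assumptions, not the paper's method.

        # Minimal sketch of spreadsheet discovery and heuristic risk scoring.
        # The extensions, size threshold, staleness threshold and weights are
        # illustrative assumptions; they are not taken from the paper.
        import os
        import time

        SPREADSHEET_EXTENSIONS = {".xls", ".xlsx", ".xlsm", ".xlsb", ".csv"}

        def discover_spreadsheets(root):
            """Walk a directory tree and inventory every spreadsheet-like file."""
            inventory = []
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    if os.path.splitext(name)[1].lower() in SPREADSHEET_EXTENSIONS:
                        path = os.path.join(dirpath, name)
                        info = os.stat(path)
                        inventory.append({
                            "path": path,
                            "size_bytes": info.st_size,
                            "days_since_modified": (time.time() - info.st_mtime) / 86400,
                        })
            return inventory

        def risk_score(entry):
            """Crude heuristic: large, stale and macro-enabled files score higher."""
            score = 0
            if entry["size_bytes"] > 1_000_000:          # large workbooks tend to be complex
                score += 2
            if entry["days_since_modified"] > 365:       # stale files often have no clear owner
                score += 1
            if entry["path"].lower().endswith(".xlsm"):  # macros add operational risk
                score += 2
            return score

        if __name__ == "__main__":
            for entry in sorted(discover_spreadsheets("."), key=risk_score, reverse=True):
                print(risk_score(entry), entry["path"])

    Run periodically over a shared drive, a listing like this gives a first, sustainable cut at the living inventory the paper calls for, to be refined with richer risk criteria.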

    Facing the Facts

    Human error research on overconfidence supports the benefits of early visibility of defects and disciplined development. If risk to the enterprise is to be reduced, individuals need to become aware of the real quality of their work. Several cycles of inspection and defect removal are inevitable. Software Quality Management measurements of defect density and removal efficiency are applicable. Research on actual spreadsheet error rates shows data consistent with other software, depending on the extent to which the work product was reviewed before inspection. The paper argues that an investment in early review time is paid back through savings in project delay and in expensive errors in use. 'If debugging is the process of removing bugs, then programming must be the process of putting them in' - Anon. Comment: 7 pages, 1 table, 1 figure
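
    To make the Software Quality Management measurements mentioned above concrete, the short sketch below computes defect density and defect removal efficiency under their standard definitions (defects per unit of work product, and defects removed in review as a fraction of all defects eventually found); the figures are invented for illustration and are not taken from the paper.

        # Illustrative calculation of two standard Software Quality Management
        # metrics; the numbers are invented, not taken from the paper.

        def defect_density(defects_found, formula_cells):
            """Defects per formula cell (any work-product size measure could be used)."""
            return defects_found / formula_cells

        def defect_removal_efficiency(removed_in_review, found_later_in_use):
            """Fraction of all known defects that were removed before release."""
            return removed_in_review / (removed_in_review + found_later_in_use)

        # Example: 400 formula cells, 12 defects found and fixed during review,
        # and 3 more errors surfacing later in use.
        print(defect_density(12 + 3, 400))         # 0.0375 defects per formula cell
        print(defect_removal_efficiency(12, 3))    # 0.8, i.e. 80% removal efficiency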

    Establishing and Measuring Standard Spreadsheet Practices for End-Users

    This paper offers a brief review of the cognitive verbs typically used in the literature to describe standard spreadsheet practices. The verbs identified are categorised in terms of Bloom's Taxonomy of Hierarchical Levels, then rated and arranged to distinguish some of their qualities and characteristics. Some measurement items are then evaluated to see how well computerised test question items validate or reinforce training or certification. The paper considers how establishing standard practices in spreadsheet training and certification can help reduce some of the risks associated with spreadsheets and help promote productivity. Comment: 15 Pages, 5 Tables, 9 Colour Figures

    Defending the future: An MSc module in End User Computing Risk Management

    This paper describes the rationale, curriculum and subject matter of a new MSc module taught on the MSc Finance and Information Management course at the University of Wales Institute, Cardiff. Research on spreadsheet risk now has some penetration in the academic literature, and there is a growing body of knowledge on spreadsheet error, human factors, spreadsheet engineering, "best practice", spreadsheet risk management and the various techniques used to mitigate spreadsheet errors. This new MSc module in End User Computing Risk Management is an attempt to pull this research and practitioner experience together to arm the next generation of finance spreadsheet champions with the relevant knowledge, techniques and a critical perspective on an emerging discipline. Comment: 9 Pages, 1 Table

    An Insight into Spreadsheet User Behaviour through an Analysis of EuSpRIG Website Statistics

    The European Spreadsheet Risks Interest Group (EuSpRIG) has maintained a website almost since its inception in 2000. We present here longitudinal and cross-sectional statistics from the website log in order to shed some light upon end-user activity in the EuSpRIG domain. Comment: 10 Pages, 5 Tables, 2 Colour Figures, ISBN 978-0-9566256-9-

    Error Estimation in Large Spreadsheets using Bayesian Statistics

    Spreadsheets are ubiquitous in business, with the financial sector particularly heavily reliant on the technology. It is known that the level of spreadsheet error can be high and that it is often necessary to review spreadsheets using a structured methodology which includes a cell-by-cell examination of the spreadsheet. This paper outlines early research into the use of Bayesian statistical methods to estimate the level of error in large spreadsheets during cell-by-cell examination, based on expert knowledge and partial spreadsheet test data. The estimate can aid the decision as to the quality of the spreadsheet and whether further testing is necessary. Comment: 12 Pages, 5 Colour Figures
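
    A minimal sketch of the kind of estimate described above, assuming a Beta-Binomial model: an expert prior over the cell error rate is updated with the results of a partial cell-by-cell inspection. The prior parameters and inspection counts are illustrative assumptions, not the paper's actual model or data.

        # Beta-Binomial sketch: estimate a spreadsheet's cell error rate from an
        # expert prior plus partial inspection data. Illustrative assumption only;
        # this is not the paper's actual model or data.

        def posterior_error_rate(prior_alpha, prior_beta, cells_inspected, errors_found):
            """Posterior mean of the cell error rate under a Beta(alpha, beta) prior."""
            alpha = prior_alpha + errors_found
            beta = prior_beta + (cells_inspected - errors_found)
            return alpha / (alpha + beta)

        # Expert prior: roughly a 2% error rate, worth about 100 cells of evidence.
        # Partial test data: 250 cells inspected so far, 9 errors found.
        estimate = posterior_error_rate(2, 98, 250, 9)
        print(f"Estimated cell error rate: {estimate:.3f}")   # ~0.031

    The full Beta posterior also gives the uncertainty around this estimate, which is what would inform the decision on whether further testing is worthwhile.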

    Thinking is Bad: Implications of Human Error Research for Spreadsheet Research and Practice

    In the spreadsheet error community, both academics and practitioners have generally ignored the rich findings produced by a century of human error research. These findings can suggest ways to reduce errors; we can then test these suggestions empirically. In addition, human error research suggests that several common prescriptions and expectations for reducing errors are likely to be incorrect. Among the key conclusions from human error research are that thinking is bad, that spreadsheets are not the cause of spreadsheet errors, and that reducing errors is extremely difficult. Comment: 12 pages including references

    Towards a Spreadsheet Engineering

    In this paper, we report some on-going focused research, but we are also keen to set it in the context of a proposed bigger picture, as follows. There is a certain depressing pattern in the attitude of industry to spreadsheet error research, and a certain pattern in conferences highlighting these issues. Is it not high time to move on from measuring spreadsheet errors to developing an armoury of disciplines and controls? In short, we propose the need to rigorously lay the foundations of a spreadsheet engineering discipline. Clearly, multiple research teams would be required to tackle such a big task. This suggests the need for both national and international collaborative research, since any given group can only address a small segment of the whole. There are already a small number of examples of such on-going international collaborative research. Having established the need for a directed research effort, the rest of the paper attempts to act as an exemplar in demonstrating and applying this focus. With regard to one such area of research, Panko (2005) stated in a recent paper that "...group development and testing appear to be promising areas to pursue". Of particular interest to us are some gaps in the published research record on techniques to reduce errors. We further report on the topics of techniques for cross-checking, the effects of time constraints, and some aspects of developer perception. Comment: 12 Pages, One Figure

    The Detection of Human Spreadsheet Errors by Humans versus Inspection (Auditing) Software

    Previous spreadsheet inspection experiments have had human subjects look for seeded errors in spreadsheets. In this study, subjects attempted to find errors in human-developed spreadsheets, to avoid the potential artifacts created by error seeding. Human subject success rates were compared to the success rates for error flagging by spreadsheet static analysis tools (SSATs) applied to the same spreadsheets. The human error detection results were comparable to those of studies using error seeding. However, Excel Error Check and Spreadsheet Professional were almost useless for correctly flagging natural (human) errors in this study. Comment: 14 Pages, 4 Figures
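
    As a small illustration of the comparison being drawn, the sketch below computes detection rates for human inspectors and for a static analysis tool over the same set of errors; the counts are invented for illustration and are not the study's results.

        # Toy comparison of error-detection rates for human inspectors versus a
        # spreadsheet static analysis tool (SSAT). Counts are invented, not the
        # study's actual results.

        def detection_rate(errors_detected, errors_present):
            return errors_detected / errors_present

        errors_present = 20        # natural (human-made) errors in the spreadsheets
        human_found = 13           # errors found by human inspection
        ssat_flagged = 2           # errors correctly flagged by the tool

        print(f"Human detection rate: {detection_rate(human_found, errors_present):.0%}")
        print(f"SSAT detection rate:  {detection_rate(ssat_flagged, errors_present):.0%}")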