
    Impact of Errors in Operational Spreadsheets

    All users of spreadsheets struggle with the problem of errors. Errors are thought to be prevalent in spreadsheets, and in some instances they have cost organizations millions of dollars. In a previous study of 50 operational spreadsheets we found errors in 0.8% to 1.8% of all formula cells, depending on how errors are defined. In the current study we estimate the quantitative impact of errors in 25 operational spreadsheets from five different organizations. We find that many errors have no quantitative impact on the spreadsheet. Those that do have an impact often affect unimportant portions of the spreadsheet. The remaining errors sometimes have substantial impacts on key aspects of the spreadsheet. This paper provides the first fully documented evidence on the quantitative impact of errors in operational spreadsheets.
    Comment: 12 pages, including references
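
    The error rates quoted above are expressed per formula cell, so the denominator is simply a count of formula cells. As a minimal sketch of that arithmetic (not the authors' audit procedure), the following Python snippet uses openpyxl and a placeholder workbook path, treating any cell whose stored value is a string beginning with "=" as a formula cell; the error count is likewise a placeholder.

```python
# Minimal sketch: count formula cells in a workbook and express a
# hypothetical error tally as a rate, mirroring "errors per formula cell".
# Assumes openpyxl is installed; "model.xlsx" is a placeholder path.
from openpyxl import load_workbook

wb = load_workbook("model.xlsx", data_only=False)  # keep formulas, not cached values

formula_cells = 0
for ws in wb.worksheets:
    for row in ws.iter_rows():
        for cell in row:
            if isinstance(cell.value, str) and cell.value.startswith("="):
                formula_cells += 1

errors_found = 12  # placeholder: errors identified by whatever review process is used
rate = errors_found / formula_cells if formula_cells else 0.0
print(f"{formula_cells} formula cells, error rate {rate:.2%}")
```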

    In Pursuit of Spreadsheet Excellence

    The first fully documented study into the quantitative impact of errors in operational spreadsheets identified an interesting anomaly. One of the five organisations participating in the study contributed a set of five spreadsheets of such quality that they set the organisation apart in a statistical sense. This virtuoso performance gave rise to a simple sampling test, the Clean Sheet Test, which can be used to evaluate objectively whether an organisation is in control of the spreadsheets it uses in important processes such as financial reporting.
    Comment: 7 pages, 2 tables. To appear in Proc. European Spreadsheet Risks Interest Group, 200
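
    The abstract does not spell out the test's parameters, but the underlying sampling logic can be sketched: under an assumed binomial model, if a spreadsheet's formula cells have error rate p, a random sample of n cells is "clean" with probability (1 - p)^n, so the sample size can be chosen to make a clean result meaningful. The numbers below are purely illustrative, not the published test.

```python
# Illustrative sampling arithmetic (assumed binomial model, not the published
# Clean Sheet Test parameters): how large must a clean sample be before we can
# be reasonably confident the cell error rate is below some threshold?
import math

def sample_size_for_clean_test(max_error_rate: float, confidence: float) -> int:
    """Smallest n such that a sample of n error-free cells rules out an
    error rate of max_error_rate or worse at the given confidence level."""
    # P(no errors in n cells | rate p) = (1 - p)^n  <=  1 - confidence
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_error_rate))

# Example: to claim with 95% confidence that fewer than 1% of formula cells
# are wrong, every cell in a random sample of this size must check out.
print(sample_size_for_clean_test(0.01, 0.95))  # ~299 cells
```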

    Error Estimation in Large Spreadsheets using Bayesian Statistics

    Spreadsheets are ubiquitous in business, with the financial sector particularly heavily reliant on the technology. It is known that the level of spreadsheet error can be high and that it is often necessary to review spreadsheets using a structured methodology that includes a cell-by-cell examination. This paper outlines early research into the use of Bayesian statistical methods to estimate the level of error in large spreadsheets during cell-by-cell examination, based on expert knowledge and partial spreadsheet test data. The estimate can inform the judgement of whether the spreadsheet is of acceptable quality and whether further testing is necessary.
    Comment: 12 pages, 5 colour figures
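
    The abstract does not give the model itself; a common way to combine an expert prior with partial cell-by-cell inspection data is a Beta-Binomial update, sketched below with invented numbers as one plausible reading of the approach.

```python
# Sketch of a Beta-Binomial update: an expert prior on the formula-cell error
# rate is revised with partial cell-by-cell inspection results. The prior and
# the counts are invented for illustration; the paper's actual model may differ.
alpha, beta = 2.0, 98.0          # prior roughly centred on a 2% error rate
cells_inspected = 400            # partial examination so far
errors_observed = 3

alpha_post = alpha + errors_observed
beta_post = beta + (cells_inspected - errors_observed)

posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"Posterior mean error rate: {posterior_mean:.3%}")
# A high posterior estimate would argue for continuing the cell-by-cell review;
# a low one may justify stopping early.
```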

    The Importance and Criticality of Spreadsheets in the City of London

    Spreadsheets have been with us in their present form for over a quarter of a century. We have become so used to them that we forget we are using them at all. It may serve us well to stand back for a moment to review where, when and how we use spreadsheets in the financial markets and elsewhere, in order to inform research that may guide their future development. This article brings together the experiences of a number of senior practitioners who have spent much of their careers working with large spreadsheets that have been, and continue to be, used to support major financial transactions and manage large institutions in the City of London. The author suggests that the City of London is presently exposed to significant reputational risk through the continued uncontrolled use of critical spreadsheets in the financial markets and elsewhere.
    Comment: 11 pages, with references

    Managing Critical Spreadsheets in a Compliant Environment

    The use of uncontrolled financial spreadsheets can expose organizations to unacceptable business and compliance risks, including errors in the financial reporting process, spreadsheet misuse and fraud, and significant operational errors. These risks have been well documented and thoroughly researched. With the advent of regulatory mandates such as SOX 404 and FDICIA in the U.S., and MiFID, Basel II and the Combined Code in the UK and Europe, leading tax and audit firms now recommend that organizations automate their internal controls over critical spreadsheets and other end-user computing applications, including Microsoft Access databases. At a minimum, auditors mandate version control, change control and access control for operational spreadsheets, with more advanced controls for critical financial spreadsheets. This paper summarises the key issues in establishing and maintaining control of business-critical spreadsheets.
    Comment: 4 pages

    In Search of a Taxonomy for Classifying Qualitative Spreadsheet Errors

    Most organizations use large and complex spreadsheets that are embedded in their mission-critical processes and used for decision-making. Identifying the various types of errors that can be present in these spreadsheets is therefore an important control that organizations can use to govern their spreadsheets. In this paper, we propose a taxonomy for categorizing qualitative errors in spreadsheet models that offers a framework for evaluating the readiness of a spreadsheet model before it is released for use by others in the organization. The classification was developed from the types of qualitative errors identified in the literature and the errors committed by end-users in developing a spreadsheet model for Panko's (1996) "Wall problem". Closer inspection of these errors reveals four logical groupings, yielding four categories of qualitative errors. The usability and limitations of the proposed taxonomy and areas for future extension are discussed.
    Comment: Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2011 ISBN 978-0-9566256-9-

    Controlling the Information Flow in Spreadsheets

    There is no denying that spreadsheets have become critical to operational processes including financial reporting, budgeting, forecasting, and analysis. Microsoft Excel has essentially become a scratch pad and a data browser that can quickly be put to use for information gathering and decision-making. However, there is little control over how data comes into Excel and how it gets updated. The information supply chain feeding into Excel remains ad hoc and outside centralized IT control. This paper discusses some of the pitfalls of the data collection and maintenance process in Excel. It then suggests service-oriented architecture (SOA) based information gathering and control techniques to ameliorate the pitfalls of this scratch pad while improving the integrity of data, boosting the productivity of business users, and building controls to satisfy the requirements of Section 404 of the Sarbanes-Oxley Act.
    Comment: 9 pages
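
    The paper's SOA techniques are not detailed in the abstract; as one hedged illustration of replacing ad hoc copy-and-paste with a controlled feed, the sketch below pulls data from a hypothetical internal service and writes it into a workbook in a single repeatable step. The URL, JSON field names and output filename are all placeholders, not part of the paper.

```python
# Sketch of a controlled data feed into Excel: instead of pasting figures by
# hand, pull them from a (hypothetical) internal service and write a dated,
# reproducible workbook. URL, JSON fields and filename are all placeholders.
import datetime
import requests
from openpyxl import Workbook

resp = requests.get("https://reporting.example.internal/api/balances", timeout=30)
resp.raise_for_status()
records = resp.json()  # assumed shape: a list of {"account": ..., "balance": ...}

wb = Workbook()
ws = wb.active
ws.append(["Account", "Balance", "Retrieved"])
stamp = datetime.date.today().isoformat()
for rec in records:
    ws.append([rec["account"], rec["balance"], stamp])

wb.save(f"balances_{stamp}.xlsx")  # every refresh leaves an auditable artefact
```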

    The Detection of Human Spreadsheet Errors by Humans versus Inspection (Auditing) Software

    Previous spreadsheet inspection experiments have had human subjects look for seeded errors in spreadsheets. In this study, subjects attempted to find errors in human-developed spreadsheets, avoiding the potential artifacts created by error seeding. Human success rates were compared with the rates at which spreadsheet static analysis tools (SSATs) correctly flagged errors in the same spreadsheets. The human error detection results were comparable to those of studies using error seeding. However, Excel Error Check and Spreadsheet Professional were almost useless for correctly flagging natural (human) errors in this study.
    Comment: 14 pages, 4 figures
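
    One check that such tools (and Excel's own Error Check) commonly perform is flagging a formula that breaks the pattern of its neighbours in a row or column. The toy sketch below applies that single heuristic to a list of formula strings; it is a hedged illustration of the idea, not a reimplementation of the tools studied.

```python
# Toy version of one rule that spreadsheet static analysis tools apply:
# flag a formula that does not match the pattern of its neighbours in a
# column. Formulas are normalised by stripping row numbers; this is a
# simplified illustration, not how any particular tool is implemented.
import re
from collections import Counter

def normalise(formula: str) -> str:
    return re.sub(r"\d+", "#", formula)       # "=B4*C4" -> "=B#*C#"

def flag_inconsistent(column_formulas: list[str]) -> list[int]:
    shapes = [normalise(f) for f in column_formulas]
    majority, _ = Counter(shapes).most_common(1)[0]
    return [i for i, s in enumerate(shapes) if s != majority]

column = ["=B2*C2", "=B3*C3", "=1200", "=B5*C5"]  # third cell overwritten with a constant
print(flag_inconsistent(column))  # [2]
```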

    Sarbanes-Oxley: What About all the Spreadsheets?

    The Sarbanes-Oxley Act of 2002 has finally forced corporations to examine the validity of their spreadsheets. They are beginning to understand the spreadsheet error literature, including what it tells them about the need for comprehensive spreadsheet testing. However, controlling for fraud will require a completely new set of capabilities, and a great deal of new research will be needed to develop fraud control capabilities. This paper discusses the riskiness of spreadsheets, which can now be quantified to a considerable degree. It then discusses how to use control frameworks to reduce the dangers created by spreadsheets. It focuses especially on testing, which appears to be the most crucial element in spreadsheet controls.
    Comment: 45 pages, 7 figures

    An Exploratory Analysis of the Impact of Named Ranges on the Debugging Performance of Novice Users

    This paper describes an exploratory empirical study of the effect of named ranges on spreadsheet debugging performance. Named ranges are advocated in both academia and industry, yet no experimental evidence has been cited to back up these recommendations. The paper reports an exploratory experiment involving 21 participants that assesses the performance of novices debugging a spreadsheet containing named ranges. The results are compared with the performance of a different set of novices debugging the same spreadsheet without named ranges. The findings suggest that novice users debug, on average, significantly fewer errors if the spreadsheet contains named ranges. The purpose of the investigative study is to derive a detailed and coherent set of research questions regarding the impact of range names on the debugging performance and behaviour of spreadsheet users. These will be answered through future controlled experiments.
    Comment: 13 pages, 4 figures. Winner of the 2009 David Chadwick Prize for Best Student Paper