
    Source Code Protection for Applications Written in Microsoft Excel and Google Spreadsheet

    Spreadsheets are used to develop application software that is distributed to users. Unfortunately, the users often have the ability to change the programming statements (“source code”) of the spreadsheet application. This causes a host of problems. By critically examining the suitability of spreadsheet computer programming languages for application development, six “application development features” are identified, with source code protection being the most important. We investigate the status of these features and discuss how they might be implemented in the dominant Microsoft Excel spreadsheet and in the new Google Spreadsheet. Although Google Spreadsheet currently provides no source code control, its web-centric delivery model offers technical advantages for future provision of a rich set of features. Excel has a number of tools that can be combined to provide “pretty good protection” of source code, but weak passwords reduce its robustness. User access to Excel source code must be considered a programmer choice rather than an attribute of the spreadsheet

    Accuracy in Spreadsheet Modelling Systems

    Accuracy in spreadsheet modelling systems can be reduced due to difficulties with the inputs, the model itself, or the spreadsheet implementation of the model. When the true outputs from the system are unknowable, accuracy is evaluated subjectively. Less than perfect accuracy can be acceptable depending on the purpose of the model, problems with inputs, or resource constraints. Users build modelling systems iteratively, and choose to allocate limited resources to the inputs, the model, the spreadsheet implementation, and to employing the system for business analysis. When making these choices, users can suffer from expectation bias and diagnosis bias. Existing research results tend to focus on errors in the spreadsheet implementation. Because industry has tolerance for system inaccuracy, errors in spreadsheet implementations may not be a serious concern. Spreadsheet productivity may be of more interest

    Spreadsheets Grow Up: Three Spreadsheet Engineering Methodologies for Large Financial Planning Models

    Many large financial planning models are written in a spreadsheet programming language (usually Microsoft Excel) and deployed as a spreadsheet application. Three groups, FAST Alliance, Operis Group, and BPM Analytics (under the name “Spreadsheet Standards Review Board”) have independently promulgated standardized processes for efficiently building such models. These spreadsheet engineering methodologies provide detailed guidance on design, construction process, and quality control. We summarize and compare these methodologies. They share many design practices, and standardized, mechanistic procedures to construct spreadsheets. We learned that a written book or standards document is by itself insufficient to understand a methodology. These methodologies represent a professionalization of spreadsheet programming, and can provide a means to debug a spreadsheet that contains errors. We find credible the assertion that these spreadsheet engineering methodologies provide enhanced productivity, accuracy and maintainability for large financial planning models

    A Paradigm for Spreadsheet Engineering Methodologies

    Spreadsheet engineering methodologies are diverse and sometimes contradictory. It is difficult for spreadsheet developers to identify a spreadsheet engineering methodology that is appropriate for their class of spreadsheet, with its unique combination of goals, type of problem, and available time and resources. There is a lack of well-organized, proven methodologies with known costs and benefits for well-defined spreadsheet classes. It is difficult to compare and critically evaluate methodologies. We present a paradigm for organizing and interpreting spreadsheet engineering recommendations. It systematically addresses the myriad choices made when developing a spreadsheet, and explicitly considers resource constraints and other development parameters. This paradigm provides a framework for evaluation, comparison, and selection of methodologies, and a list of essential elements for developers or codifiers of new methodologies. This paradigm identifies gaps in our knowledge that merit further research

    Research Strategy and Scoping Survey on Spreadsheet Practices

    We propose a research strategy for creating and deploying prescriptive recommendations for spreadsheet practice. Empirical data on usage can be used to create a taxonomy of spreadsheet classes. Within each class, existing practices and ideal practices can be combined into proposed best practices for deployment. As a first step we propose a scoping survey to gather non-anecdotal data on spreadsheet usage. The scoping survey will interview people who develop spreadsheets. We will investigate the determinants of spreadsheet importance, identify current industry practices, and document existing standards for creation and use of spreadsheets. The survey will provide insight into user attributes, spreadsheet importance, and current practices. Results will be valuable in themselves, and will guide future empirical research

    The Lookup Technique to Replace Nested-IF Formulas in Spreadsheet Programming

    Spreadsheet programmers often implement contingent logic using a nested-IF formula even though this technique is difficult to test and audit and is believed to be risky. We interpret the programming of contingent logic in spreadsheets in the context of traditional computer programming. We investigate the “lookup technique” as an alternative to nested-IF formulas, describe its benefits for testing and auditing, and define its limitations. The lookup technique employs four distinct principles: 1) make logical tests visible; 2) make outcomes visible; 3) make logical structure visible; and 4) replace a multi-function nested-IF formula with a single-function lookup formula. It can be used only for certain simple contingent logic. We describe how the principles can be applied in more complex situations, and suggest avenues for further research
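
    The lookup technique described above is specific to spreadsheet formulas, but its core principle, replacing a chain of nested conditionals with a visible threshold/outcome table resolved by a single lookup, carries over to any language. A minimal Python sketch of the idea follows; the thresholds and labels are invented for illustration and are not taken from the paper:

```python
# Contingent logic as a nested-IF chain: compact, but the tests, outcomes,
# and structure are all buried inside one expression (hard to audit).
def grade_nested_if(score):
    return "Low" if score < 10 else ("Medium" if score < 20 else "High")

# The lookup technique: thresholds and outcomes laid out as a visible table,
# resolved by a single scan (analogous to a spreadsheet LOOKUP with
# approximate match over a sorted first column).
GRADE_TABLE = [
    (0, "Low"),       # (threshold, outcome); rows sorted by threshold
    (10, "Medium"),
    (20, "High"),
]

def grade_lookup(score, table=GRADE_TABLE):
    result = None
    for threshold, outcome in table:
        if score >= threshold:
            result = outcome  # keep the last threshold not exceeding score
    return result
```

    Laying the logical tests and outcomes out as data, rather than burying them in nested branches, is what makes the table easy to test and audit: adding a band means adding a row, not restructuring a formula.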

    Decreased proliferation of human melanoma cell lines caused by antisense RNA against translation factor eIF-4A1

    Control of translation initiation has been recognised as a critical checkpoint for cell proliferation and tumorigenesis. In human melanoma cells, we have previously reported consistent overexpression of translation initiation factor eIF-4A1. Here, we investigated its significance for the control of melanoma cell growth by transfection of antisense constructs. The tetracycline-inducible expression system was established in melanoma cells, and three fragments of the 5′-, central, and 3′-portion of the eIF-4A1 cDNA were subcloned in antisense and in sense orientation behind a tetracycline-inducible promoter. A significant decrease in proliferation (up to 10%) was obtained after transient transfection and induction of antisense RNA directed against the 5′- and central portions, whereas no effects were seen after induction of the 3′-fragment or the sense controls. After doxycycline induction, cell clones stably transfected with the central antisense fragment showed reduced expression of endogenous eIF-4A1 mRNA, correlated with decreased proliferation rates (up to 6%). These data demonstrate the applicability of antisense strategies against translation factors in melanoma cells. Translation initiation factor eIF-4A1 contributes to the control of melanoma cell proliferation and should be taken into consideration when designing new therapeutic approaches targeting translational control

    First observations of separated atmospheric ν_μ and ν̄_μ events in the MINOS detector

    The complete 5.4 kton MINOS far detector has been taking data since the beginning of August 2003 at a depth of 2070 meters water-equivalent in the Soudan mine, Minnesota. This paper presents the first MINOS observations of ν_μ and ν̄_μ charged-current atmospheric neutrino interactions based on an exposure of 418 days. The ratio of upward- to downward-going events in the data is compared to the Monte Carlo expectation in the absence of neutrino oscillations, giving R^{data}_{up/down}/R^{MC}_{up/down} = 0.62^{+0.19}_{-0.14} (stat.) ± 0.02 (sys.). An extended maximum likelihood analysis of the observed L/E distributions excludes the null hypothesis of no neutrino oscillations at the 98% confidence level. Using the curvature of the observed muons in the 1.3 T MINOS magnetic field, ν_μ and ν̄_μ interactions are separated. The ratio of ν̄_μ to ν_μ events in the data is compared to the Monte Carlo expectation assuming neutrinos and antineutrinos oscillate in the same manner, giving R^{data}_{ν̄_μ/ν_μ}/R^{MC}_{ν̄_μ/ν_μ} = 0.96^{+0.38}_{-0.27} (stat.) ± 0.15 (sys.), where the errors are the statistical and systematic uncertainties. Although the statistics are limited, this is the first direct observation of atmospheric neutrino interactions separately for ν_μ and ν̄_μ

    Syndromics: A Bioinformatics Approach for Neurotrauma Research

    Substantial scientific progress has been made in the past 50 years in delineating many of the biological mechanisms involved in the primary and secondary injuries following trauma to the spinal cord and brain. These advances have highlighted numerous potential therapeutic approaches that may help restore function after injury. Despite these advances, bench-to-bedside translation has remained elusive. Translational testing of novel therapies requires standardized measures of function for comparison across different laboratories, paradigms, and species. Although numerous functional assessments have been developed in animal models, it remains unclear how to best integrate this information to describe the complete translational “syndrome” produced by neurotrauma. The present paper describes a multivariate statistical framework for integrating diverse neurotrauma data and reviews the few papers to date that have taken an information-intensive approach for basic neurotrauma research. We argue that these papers can be described as the seminal works of a new field that we call “syndromics”, which aims to apply informatics tools to disease models to characterize the full set of mechanistic inter-relationships from multi-scale data. In the future, centralized databases of raw neurotrauma data will enable better syndromic approaches and aid future translational research, leading to more efficient testing regimens and more clinically relevant findings
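
    The multivariate framework is described only at a high level in this abstract. As one hedged illustration of the underlying idea, that heterogeneous outcome measures must be standardized onto a common, sign-aligned scale before they can be combined, the following Python sketch builds a composite score from invented assessment data; all measure names and values are hypothetical, not from the paper:

```python
import statistics

# Hypothetical functional outcome measures for five animals in a
# neurotrauma study; names and values are illustrative only.
measures = {
    "bbb_locomotor": [2.0, 5.0, 9.0, 14.0, 18.0],
    "grip_strength": [10.0, 22.0, 35.0, 51.0, 60.0],
    "lesion_volume": [9.5, 7.0, 5.5, 3.0, 1.5],  # lower is better
}

def zscores(xs):
    """Standardize a measure so different units become comparable."""
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

# Sign-align so that higher always means better recovery, then average
# the standardized measures into one composite "syndromic" score per animal.
signs = {"bbb_locomotor": 1, "grip_strength": 1, "lesion_volume": -1}
z = {k: [signs[k] * v for v in zscores(xs)] for k, xs in measures.items()}
n_animals = len(next(iter(measures.values())))
syndromic_score = [statistics.mean(z[k][i] for k in z) for i in range(n_animals)]
```

    This is deliberately simpler than the multivariate statistics (e.g. principal component analysis) applied in the syndromics literature; the point is only that disparate functional assessments need a common scale and consistent direction before any integrated "syndrome" description is possible.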

    Positive Darwinian Selection in the Piston That Powers Proton Pumps in Complex I of the Mitochondria of Pacific Salmon

    The mechanism of oxidative phosphorylation is well understood, but evolution of the proteins involved is not. We combined phylogenetic, genomic, and structural biology analyses to examine the evolution of twelve mitochondrially encoded proteins of closely related, yet phenotypically diverse, Pacific salmon. Two separate analyses identified the same seven positively selected sites in ND5. A strong signal was also detected at three sites of ND2. An energetic coupling analysis revealed several structures in the ND5 protein that may have co-evolved with the selected sites. These data implicate Complex I, specifically the piston arm of ND5 where it connects the proton pumps, as important in the evolution of Pacific salmon. Finally, the lineage leading to Chinook salmon experienced rapid evolution at the piston arm