107,091 research outputs found

    PINT: A Modern Software Package for Pulsar Timing

    Over the past few decades, the measurement precision of some pulsar-timing experiments has advanced from ~10 μs to ~10 ns, revealing many subtle phenomena. Such high precision demands both careful data handling and sophisticated timing models to avoid systematic error. To achieve these goals, we present PINT (PINT Is Not Tempo3), a high-precision Python pulsar-timing data-analysis package, which is hosted on GitHub and available on the Python Package Index (PyPI) as pint-pulsar. PINT is well tested, validated, object oriented, and modular, enabling interactive data analysis and providing an extensible and flexible development platform for timing applications. It uses well-debugged public Python packages (e.g., the NumPy and Astropy libraries), modern software development practices (e.g., version control and efficient development with git and GitHub), and a continually expanding test suite for improved reliability, accuracy, and reproducibility. PINT was developed and implemented without referring to, copying, or transcribing code from other traditional pulsar-timing software packages (e.g., TEMPO and TEMPO2), and it therefore provides a robust tool for cross-checking timing analyses and simulating pulse arrival times. In this paper, we describe the design, use, and validation of PINT, and we compare timing results between it and TEMPO and TEMPO2.
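
    To give a concrete sense of the interactive workflow the abstract describes, here is a minimal fitting session sketched against pint-pulsar's documented high-level interface. The .par and .tim file names are placeholders, and the exact calls should be checked against the package's current documentation.

    ```python
    # Illustrative sketch of a PINT fitting session (not taken from the paper).
    # Assumes pint-pulsar is installed; "pulsar.par" and "pulsar.tim" are
    # placeholder names for a timing-model file and a TOA file.
    from pint.models import get_model      # build a timing model from a .par file
    from pint.toa import get_TOAs          # load pulse times of arrival (TOAs)
    from pint.residuals import Residuals   # compute timing residuals
    from pint.fitter import WLSFitter      # weighted least-squares fitter

    model = get_model("pulsar.par")             # parse the timing-model parameters
    toas = get_TOAs("pulsar.tim", model=model)  # read and clock-correct the TOAs

    # Pre-fit residuals: observed minus model-predicted arrival times.
    prefit = Residuals(toas, model)
    print(f"Pre-fit RMS: {prefit.time_resids.std().to('us')}")

    # Fit the free parameters of the model to the TOAs and report again.
    fitter = WLSFitter(toas, model)
    fitter.fit_toas()
    print(f"Post-fit RMS: {fitter.resids.time_resids.std().to('us')}")
    ```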

    An Approach for the Empirical Validation of Software Complexity Measures

    Software metrics are widely accepted tools for controlling and assuring software quality. A large number of software metrics covering a variety of content can be found in the literature; however, most of them are not adopted in industry because they are seen as irrelevant to practitioners' needs and as unsupported, and the major reason behind this is improper empirical validation. This paper tries to identify possible root causes of the improper empirical validation of software metrics. A practical model for the empirical validation of software metrics is proposed along with the root causes. The model is validated by applying it to recently proposed and well-known metrics.
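
    The abstract does not spell out the proposed model's individual steps, so the sketch below illustrates just one generic empirical-validation step: testing whether a candidate metric correlates with an external quality indicator. The metric values and defect counts are invented for illustration only.

    ```python
    # Hypothetical illustration of one empirical-validation step: check whether
    # a complexity metric correlates with defect counts. The data are invented;
    # the paper's actual validation model is richer than this single test.
    from scipy.stats import spearmanr

    # (metric value, defects found) per module -- placeholder numbers only.
    cyclomatic = [3, 7, 12, 4, 21, 9, 15, 6, 18, 2]
    defects    = [0, 1,  3, 0,  6, 2,  4, 1,  5, 0]

    rho, p_value = spearmanr(cyclomatic, defects)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

    # A metric showing no significant association with any quality attribute
    # fails this validation step and should not be recommended to industry.
    ALPHA = 0.05
    print("validated" if p_value < ALPHA else "not validated")
    ```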

    A framework for the simulation of structural software evolution

    As functionality is added to an aging piece of software, its original design and structure tend to erode. This can lead to high coupling, low cohesion, and other undesirable effects associated with spaghetti architectures. The underlying forces that cause such degradation have been the subject of much research. However, progress in this field is slow, because its complexity makes it difficult to isolate the causal flows leading to these effects. This is further complicated by the difficulty of generating empirical data in sufficient quantity and of attributing such data to specific points in the causal chain. This article describes a framework for simulating the structural evolution of software. A complete simulation model is built by incrementally adding modules to the framework, each of which contributes an individual evolutionary effect. These effects are then combined to form a multifaceted simulation that evolves a fictitious code base in a manner approximating real-world behavior. We describe the underlying principles and structures of our framework from a theoretical and user perspective; a validation of a simple set of evolutionary parameters is then provided, and three empirical software studies generated from open-source software (OSS) are used to support claims and generated results. The research illustrates how simulation can be used to investigate a complex and under-researched area of the development cycle. It also shows the value of incorporating certain human traits into a simulation: factors that, in real-world system development, can significantly influence evolutionary structures.
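
    As an illustration of the kind of module such a framework might compose (our own sketch, not the authors' code), the following toy simulation grows a fictitious code base one module at a time, with preferential attachment standing in for a single evolutionary effect that concentrates coupling in a few heavily used modules.

    ```python
    # Toy sketch of one "evolutionary effect" module: new modules couple to
    # existing ones with probability proportional to current fan-in, so a few
    # god modules emerge over time, mimicking structural erosion.
    import random

    def evolve(steps: int, links_per_module: int = 2, seed: int = 42):
        random.seed(seed)
        fan_in = [0]                      # module 0 seeds the system
        for _ in range(steps):
            new = len(fan_in)             # index of the module being added
            fan_in.append(0)
            # Preferential attachment: heavily used modules attract coupling.
            weights = [f + 1 for f in fan_in[:new]]
            for target in random.choices(range(new), weights=weights,
                                         k=links_per_module):
                fan_in[target] += 1
        return fan_in

    fan_in = evolve(steps=200)
    print("max fan-in: ", max(fan_in))               # a few hubs dominate
    print("mean fan-in:", sum(fan_in) / len(fan_in))
    ```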

    Software development: A paradigm for the future

    A new paradigm for software development that treats software development as an experimental activity is presented. It provides built-in mechanisms for learning how to develop software better and for reusing previous experience in the form of knowledge, processes, and products. It uses models and measures to aid in the tasks of characterization, evaluation, and motivation. An organizational scheme is proposed for separating the project-specific focus from the organization's learning and reuse focuses of software development. The implications of this approach for corporations, research, and education are discussed, and some research activities currently underway at the University of Maryland that support this approach are presented.

    Annotated bibliography of Software Engineering Laboratory literature

    An annotated bibliography of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory is given. More than 100 publications are summarized. These publications cover many areas of software engineering and range from research reports to software documentation. All materials have been grouped into eight general subject areas for easy reference: The Software Engineering Laboratory; The Software Engineering Laboratory: Software Development Documents; Software Tools; Software Models; Software Measurement; Technology Evaluations; Ada Technology; and Data Collection. Subject and author indexes further classify these documents by specific topic and individual author.

    Annotated bibliography of software engineering laboratory literature

    An annotated bibliography of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory is given. More than 100 publications are summarized. These publications cover many areas of software engineering and range from research reports to software documentation. This document has been updated and reorganized substantially since the original version (SEL-82-006, November 1982). All materials have been grouped into eight general subject areas for easy reference: the Software Engineering Laboratory; the Software Engineering Laboratory-software development documents; software tools; software models; software measurement; technology evaluations; Ada technology; and data collection. Subject and author indexes further classify these documents by specific topic and individual author.

    Annotated bibliography of software engineering laboratory literature

    An annotated bibliography is presented of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory. The bibliography was updated and reorganized substantially since the original version (SEL-82-006, November 1982). All materials were grouped into eight general subject areas for easy reference: (1) The Software Engineering Laboratory; (2) The Software Engineering Laboratory: Software Development Documents; (3) Software Tools; (4) Software Models; (5) Software Measurement; (6) Technology Evaluations; (7) Ada Technology; and (8) Data Collection. Subject and author indexes further classify these documents by specific topic and individual author.

    Quantitative Assessment of the Impact of Automatic Static Analysis Issues on Time Efficiency

    Background: Automatic Static Analysis (ASA) tools analyze source code and look for code patterns (a.k.a. smells) that might cause defective behavior or degrade other dimensions of software quality, e.g., efficiency. There are many potentially negative code patterns, and ASA tools typically report a huge list of them even in small programs. Moreover, little evidence is available so far about the negative impact on performance of the code patterns identified by such tools. As a consequence, programmers cannot appreciate the benefits of ASA tools and tend not to include them in their workflow. Aims: To quantitatively assess the impact of issues signaled by ASA tools on time efficiency. Method: We select 20 issues, and for each of them we set up two source code fragments: one containing the issue and a corresponding refactored version, functionally identical but without the issue. We set up three different platforms, isolated from the network and other user programs; we then execute the code fragments and measure the execution time of both code versions. Results: We find that eleven issues have an actual negative impact on performance. We also compute, for each issue, an estimate of the delay introduced by a single execution. Conclusions: We produce a set of issues with a verified negative impact on performance. They can be checked easily with an analysis tool, and the code can be refactored to obtain provably more efficient code. We also provide the estimated delay cost of each issue in the environments where we conducted the tests. These results can be improved with the help of other researchers: repeating the tests on several platforms would make it possible to build up a wider benchmark.
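
    The measurement method lends itself to a simple harness. The sketch below mirrors it in Python using the standard timeit module: time a fragment containing an issue against its refactored twin, then estimate the per-execution delay. The chosen issue (string concatenation in a loop) is our own stand-in, not necessarily one of the paper's 20, and the study itself ran on dedicated, isolated platforms rather than a workstation.

    ```python
    # Minimal sketch of the paper's measurement idea: run the flagged fragment
    # and its refactored twin many times, take the best of several repeats, and
    # estimate the delay a single execution of the issue introduces.
    import timeit

    with_issue = """
    s = ""
    for i in range(1000):
        s += str(i)          # repeated concatenation: the flagged pattern
    """

    refactored = """
    s = "".join(str(i) for i in range(1000))   # functionally identical, no issue
    """

    t_issue = min(timeit.repeat(with_issue, number=1000, repeat=5))
    t_fixed = min(timeit.repeat(refactored, number=1000, repeat=5))
    print(f"with issue:  {t_issue:.4f} s")
    print(f"refactored:  {t_fixed:.4f} s")
    print(f"per-execution delay estimate: "
          f"{(t_issue - t_fixed) / 1000 * 1e6:.1f} us")
    ```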