    The 2GCHAS: A high productivity software development environment

    To the user, the most visible feature of the Transportable Applications Executive (TAE) is its very powerful user interface. To the programmer, TAE's user interface, proc concept, standardized interface definitions, and hierarchy search provide a set of tools for rapidly prototyping or developing production software. The 2GCHAS (Second Generation Comprehensive Helicopter Analysis System) project has extended and enhanced these mechanisms, creating a powerful, high-productivity programming environment in which the 2GCHAS development environment is 2GCHAS itself, and in which a sustained rate above 30 delivered source instructions per programmer day has been achieved for certified, documented, and tested software. The 2GCHAS environment is not limited to helicopter analysis but is applicable to other disciplines where software development is important.

    Research Articles in Simplified HTML: a Web-first format for HTML-based scholarly articles

    Purpose. This paper introduces Research Articles in Simplified HTML (RASH), a Web-first format for writing HTML-based scholarly papers, accompanied by the RASH Framework, a set of tools for interacting with RASH-based articles. The paper also presents an evaluation that involved authors and reviewers of RASH articles submitted to the SAVE-SD 2015 and SAVE-SD 2016 workshops. Design. RASH has been developed aiming to: be easy to learn and use; share scholarly documents (and embedded semantic annotations) through the Web; and support its adoption within the existing publishing workflow. Findings. The evaluation study confirmed that RASH is ready to be adopted in workshops, conferences, and journals and can be learnt quickly by researchers who are familiar with HTML. Research Limitations. The evaluation study also highlighted some issues in the adoption of RASH, and of HTML formats in general, especially by less technically savvy users. Moreover, additional tools are needed, e.g., for enabling conversions from/to existing formats such as OpenXML. Practical Implications. RASH (and its Framework) is another step towards enabling formal representations of the meaning of an article's content, facilitating its automatic discovery, enabling its linking to semantically related articles, providing access to data within the article in actionable form, and allowing integration of data between papers. Social Implications. RASH addresses the needs of the various users of a scholarly article: researchers (focussing on its content), readers (experiencing new ways of browsing it), citizen scientists (reusing data formally defined within it through semantic annotations), and publishers (exploiting the advantages of new technologies as envisioned by the Semantic Publishing movement). Value. RASH helps authors focus on the organisation of their texts, supports them in semantically enriching the content of articles, and leaves the issues of validation, visualisation, conversion, and semantic data extraction to the tools developed within its Framework.
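    The abstract leaves the concrete markup and tooling to the RASH documentation and Framework. As a generic, hypothetical illustration of the kind of semantic data extraction such tooling performs (not RASH's actual markup conventions or any Framework component), the sketch below pulls JSON-LD annotation blocks out of an HTML article using only the Python standard library.

```python
# Hypothetical sketch: extract embedded JSON-LD annotation blocks from an HTML
# article. This only illustrates the general idea of semantic data extraction;
# it does not follow RASH's actual markup conventions or use its Framework.
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.annotations = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.annotations.append(json.loads(data))

SAMPLE = """
<html><head>
<script type="application/ld+json">
{"@type": "ScholarlyArticle", "name": "An example article", "author": "A. Author"}
</script>
</head><body><p>Article body...</p></body></html>
"""

extractor = JSONLDExtractor()
extractor.feed(SAMPLE)
print(extractor.annotations)
```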

    A study of the portability of an Ada system in the software engineering laboratory (SEL)

    A particular porting effort is discussed, and various statistics are given on the portability of Ada and on the total staff months (overall and by phase) required to accomplish the rehost. This effort is compared to past experiments on the rehosting of FORTRAN systems. The discussion includes an analysis of the types of errors encountered during the rehosting, the changes required to rehost the system, experiences with the Alsys IBM Ada compiler, the impediments encountered, and the lessons learned during this study.

    Update of GRASP/Ada reverse engineering tools for Ada

    The GRASP/Ada project (Graphical Representations of Algorithms, Structures, and Processes for Ada) has successfully created and prototyped a new algorithmic-level graphical representation of Ada software, the Control Structure Diagram (CSD). The primary impetus for creation of the CSD was to improve the comprehension efficiency of Ada software and, as a result, improve reliability and reduce costs. The emphasis was on the automatic generation of the CSD from Ada PDL or source code to support reverse engineering and maintenance. The CSD has the potential to replace traditional prettyprinted Ada source code. In Phase 1 of the GRASP/Ada project, the CSD graphical constructs were created and applied manually to several small Ada programs. A prototype (Version 1) was designed and implemented using FLEX and BISON running under VMS on a VAX-11/780. In Phase 2, the prototype was improved and ported to the Sun 4 platform under UNIX. A user interface was designed and partially implemented using the HP widget toolkit and the X Window System. In Phase 3, the user interface was extensively reworked using the Athena widget toolkit and the X Window System. The prototype was applied successfully to numerous Ada programs ranging in size from several hundred to several thousand lines of source code. Following Phase 3, the prototype was evaluated by software engineering students at Auburn University and then updated with significant enhancements to the user interface, including editing capabilities. Version 3.2 of the prototype was prepared for limited distribution to facilitate further evaluation. The current prototype provides the capability for the user to generate CSDs from Ada PDL or source code in a reverse engineering as well as a forward engineering mode, with a level of flexibility suitable for practical application.
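    The CSD itself is a graphical notation generated from Ada PDL or source code with a FLEX/BISON front end; none of that machinery is reproduced here. Purely as a loose, hypothetical analogy, the sketch below walks a Python AST and prints a nested outline of its control constructs, which conveys the flavour of deriving a control-structure view automatically from source code.

```python
# Loose illustration only: derive a nested control-structure outline from source.
# GRASP generates CSDs for Ada using FLEX/BISON; this sketch merely walks a
# Python AST with the standard library and prints the control constructs it finds.
import ast

CONTROL_NODES = (ast.FunctionDef, ast.If, ast.For, ast.While, ast.Try, ast.With)

def outline(node, depth=0):
    for child in ast.iter_child_nodes(node):
        if isinstance(child, CONTROL_NODES):
            label = type(child).__name__.lower()
            print("|  " * depth + "+- " + f"{label} (line {child.lineno})")
            outline(child, depth + 1)
        else:
            outline(child, depth)

SOURCE = """
def classify(xs):
    for x in xs:
        if x > 0:
            print("positive")
        else:
            print("non-positive")
"""

outline(ast.parse(SOURCE))
```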

    Combining Multiple Web Accessibility Evaluation Reports using Semantic Web Technologies

    This paper describes a process for automatically combining testing reports on the accessibility of Web applications, obtained from different testing tools that apply different Web accessibility standards. Interoperability is guaranteed by using Semantic Web technologies, which allow the reports to be described as RDF (Resource Description Framework) triples. The reports refer to elements of a knowledge base consisting of vocabularies, ontologies, and inference rules, in which the conceptual relations between accessibility standards, such as WCAG (Web Content Accessibility Guidelines) and Section 508 among others, are formalized beforehand. A software prototype that uses the Apache Jena framework to implement the process is presented.
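    The abstract names RDF and Apache Jena but does not give the report vocabulary or the prototype's code. As a rough Python illustration of the core idea, the sketch below merges two tool reports expressed as RDF into a single graph and queries them together with SPARQL; the ex: vocabulary and the correspondence triple are invented for the example, standing in for the paper's knowledge base of formalized relations between WCAG and Section 508.

```python
# Minimal sketch of combining accessibility reports as RDF (invented ex: vocabulary;
# the actual prototype uses Apache Jena and a knowledge base of ontologies and rules).
from rdflib import Graph

REPORT_TOOL_A = """
@prefix ex: <http://example.org/a11y#> .
ex:page1 ex:reportsViolation ex:v1 .
ex:v1 ex:ofCriterion ex:WCAG_1_1_1 ; ex:foundBy ex:toolA .
"""

REPORT_TOOL_B = """
@prefix ex: <http://example.org/a11y#> .
ex:page1 ex:reportsViolation ex:v2 .
ex:v2 ex:ofCriterion ex:Section508_1194_22_a ; ex:foundBy ex:toolB .
ex:Section508_1194_22_a ex:correspondsTo ex:WCAG_1_1_1 .
"""

combined = Graph()
combined.parse(data=REPORT_TOOL_A, format="turtle")
combined.parse(data=REPORT_TOOL_B, format="turtle")  # graph union = combined report

QUERY = """
PREFIX ex: <http://example.org/a11y#>
SELECT ?violation ?criterion ?tool WHERE {
  ex:page1 ex:reportsViolation ?violation .
  ?violation ex:ofCriterion ?criterion ; ex:foundBy ?tool .
}
"""
for row in combined.query(QUERY):
    print(row.violation, row.criterion, row.tool)
```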

    GRASP/Ada: Graphical Representations of Algorithms, Structures, and Processes for Ada. The development of a program analysis environment for Ada: Reverse engineering tools for Ada, task 2, phase 3

    The main objective is the investigation, formulation, and generation of graphical representations of algorithms, structures, and processes for Ada (GRASP/Ada). The task presented here, in which various graphical representations that can be extracted or generated from source code are described and categorized, is focused on reverse engineering. The following subject areas are covered: the system model; the control structure diagram generator; the object-oriented design diagram generator; the user interface; and the GRASP library.

    Quadruplex digital flight control system assessment

    Described are the development and validation of a double fail-operational digital flight control system architecture for critical pitch-axis functions. Architectural tradeoffs are assessed, system simulator modifications are described, and demonstration testing results are critiqued. Assessment tools and their application are also illustrated. Ultimately, system simulation, tailored to digital mechanization attributes, is shown to be essential to validating the airworthiness of full-time critical functions such as augmented fly-by-wire systems for relaxed-static-stability airplanes.
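    The abstract describes the quadruplex architecture only at a high level and does not give its mechanization. Purely as a generic, hypothetical illustration of how four redundant channels can provide double fail-operational behaviour, the sketch below applies mid-value selection across the channel outputs and excludes channels that miscompare with the voted value; the function names and the 0.5 threshold are invented for the example and are not taken from the paper.

```python
# Generic illustration of quadruplex voting (not the mechanization assessed in the
# paper): four redundant channel outputs are combined by mid-value selection, and a
# channel that miscompares with the voted value beyond a threshold is excluded.
def mid_value_select(values):
    """Median of the healthy channel outputs (mean of the middle two if even)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def vote(channels, healthy, threshold=0.5):
    """Return the voted output and updated health flags for one frame."""
    candidates = [v for v, ok in zip(channels, healthy) if ok]
    voted = mid_value_select(candidates)
    new_health = [ok and abs(v - voted) <= threshold
                  for v, ok in zip(channels, healthy)]
    return voted, new_health

# Example frame: channel 4 has failed hard-over; the voter masks it.
healthy = [True, True, True, True]
output, healthy = vote([1.02, 0.98, 1.01, 9.99], healthy)
print(output, healthy)   # ~1.015, last channel flagged unhealthy
```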

    ChatGPT v Bard v Bing v Claude 2 v Aria v human-expert. How good are AI chatbots at scientific writing? (ver. 23Q3)

    Historically, proficient writing was deemed essential for human advancement, with creative expression viewed as one of the hallmarks of human achievement. However, recent advances in generative AI have marked an inflection point in this narrative, including for scientific writing. This article provides a comprehensive analysis of the capabilities and limitations of six AI chatbots in scholarly writing in the humanities and archaeology. The methodology was based on tagging AI-generated content for quantitative accuracy and qualitative precision by human experts. Quantitative accuracy assessed factual correctness, while qualitative precision gauged the scientific contribution. While the AI chatbots, especially ChatGPT-4, demonstrated proficiency in recombining existing knowledge, they failed to generate original scientific content. As a side note, our results also suggest that with ChatGPT-4 the size of LLMs has plateaued. Furthermore, the paper underscores the intricate and recursive nature of human research. This process of transforming raw data into refined knowledge is computationally irreducible, which highlights the challenges AI chatbots face in emulating human originality in scientific writing. In conclusion, while large language models have revolutionised content generation, their ability to produce original scientific contributions in the humanities remains limited. We expect that this will change in the near future with the evolution of current LLM-based AI chatbots towards LLM-powered software.
    Comment: Non-peer-reviewed preprint. Includes graphical abstract, 8 figures. Appendices are linked and deposited at Zenodo.

    Spurious Regression and Econometric Trends

    This paper analyses the asymptotic and finite sample implications of different types of nonstationary behavior among the dependent and explanatory variables in a linear spurious regression model. We study cases when the nonstationarity in the dependent and explanatory variables is deterministic as well as stochastic. In particular, we derive the order in probability of the t-statistic in a linear regression equation under a variety of empirically relevant data generation processes, and show that the spurious regression phenomenon is present in all cases considered, when at least one of the variables behaves in a nonstationary way. Simulation experiments confirm our asymptotic results.
    Keywords: spurious regression, trends, unit roots, trend stationarity, structural breaks
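    As a small numerical companion to the abstract (not the authors' actual simulation design), the sketch below regresses one independent random walk on another and shows the average absolute t-statistic on the slope growing with the sample size, the familiar symptom of spurious regression between stochastically trending variables.

```python
# Illustration of the spurious regression phenomenon: two independent random walks,
# yet the OLS t-statistic on the slope diverges as the sample size grows.
# This is a generic demonstration, not the paper's simulation design.
import numpy as np

rng = np.random.default_rng(0)

def slope_t_stat(y, x):
    """OLS of y on a constant and x; return the t-statistic of the slope."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

for n in (100, 1000, 10000):
    t_stats = []
    for _ in range(200):
        y = np.cumsum(rng.standard_normal(n))   # independent random walk
        x = np.cumsum(rng.standard_normal(n))   # independent random walk
        t_stats.append(abs(slope_t_stat(y, x)))
    print(n, round(float(np.mean(t_stats)), 2))  # average |t| grows roughly like sqrt(n)
```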