
    Analyzing the Evolution of Inter-package Dependencies in Operating Systems: A Case Study of Ubuntu

    An Operating System (OS) combines multiple interdependent software packages, which usually have their own independently developed architectures. When a multitude of independent packages are placed together in an OS, an implicit inter-package architecture is formed. For an evolutionary effort, OS designers and developers can greatly benefit from fully understanding system-wide dependencies, focusing on individual files, specifically executable files and dynamically loadable libraries. We propose a framework, DepEx, aimed at discovering detailed package relations at the level of individual binary files and their associated evolutionary changes. We demonstrate the utility of DepEx by systematically investigating the evolution of a large-scale open-source OS, Ubuntu. DepEx enabled us to systematically acquire and analyze the dependencies in different versions of Ubuntu released between 2005 (5.04) and 2023 (23.04). Our analysis revealed various evolutionary trends in package management and their implications, based on the analysis of the 84 consecutive versions available for download (including beta versions). This study has enabled us to assert that DepEx can provide researchers and practitioners with a better understanding of implicit software dependencies, helping them improve the stability, performance, and functionality of their software and reduce the risk of issues arising during maintenance, updating, or migration.
    Comment: This paper is accepted for publication in the 17th international conference on Software Architecture
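    The binary-level extraction DepEx performs can be approximated with standard ELF and packaging tools. The following is a minimal sketch under assumed conditions (a Debian-based system with objdump and dpkg available), not the authors' implementation:

        import subprocess

        def needed_libraries(binary_path):
            """List the shared-library sonames (DT_NEEDED) of a dynamically linked ELF binary."""
            out = subprocess.run(["objdump", "-p", binary_path],
                                 capture_output=True, text=True, check=True).stdout
            return [line.split()[1] for line in out.splitlines()
                    if line.strip().startswith("NEEDED")]

        def owning_package(soname):
            """Ask dpkg which installed package ships a file matching the soname."""
            result = subprocess.run(["dpkg", "-S", soname],
                                    capture_output=True, text=True)
            lines = result.stdout.splitlines()
            return lines[0].split(":")[0] if result.returncode == 0 and lines else None

        # Example: map one executable's library dependencies back to packages.
        for lib in needed_libraries("/bin/ls"):
            print(lib, "->", owning_package(lib))

    Repeating this over every executable in a release, and then over successive releases, yields the kind of file-level dependency evolution data the paper analyzes.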

    Implementation of an XML-based user interface with applications in ice sheet modeling

    The scientific domain presents unique challenges to software developers. This thesis describes the application of design patterns to the problem of dynamically changing interfaces to scientific application software (GLIMMER, which performs ice sheet modeling). In its present form, GLIMMER uses a text configuration file to define model behavior, set parameters, and structure model input/output (I/O). The creation of the configuration file presents a significant problem to users due to its format and complexity. GLIMMER is still under development, and the number of changes to configuration parameters, parameter types, and parameter dependencies makes development of any single interface of use only for a short term. The application of design patterns described here resulted in an interface specification tool that generates multiple versions of a user interface usable across a wide variety of configuration parameter types, values, and dependencies. The resulting products leveraged design patterns and solved problems associated with design pattern usage not found in the specialized software engineering literature.
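    The interface-generation idea can be illustrated with a short sketch. The XML schema below is hypothetical (the thesis tool's actual specification format may differ); it assumes each parameter element carries a name, a type, and a default, and builds a Tkinter form from the specification at run time:

        import tkinter as tk
        import xml.etree.ElementTree as ET

        # Hypothetical interface specification; not the tool's real schema.
        SPEC = """<interface>
          <parameter name="ice_thickness" type="float" default="100.0"/>
          <parameter name="time_steps" type="int" default="500"/>
          <parameter name="output_file" type="str" default="run.nc"/>
        </interface>"""

        def build_form(root, spec_xml):
            """Create one labeled entry widget per <parameter> in the XML spec."""
            entries = {}
            for row, param in enumerate(ET.fromstring(spec_xml).iter("parameter")):
                tk.Label(root, text=param.get("name")).grid(row=row, column=0, sticky="w")
                entry = tk.Entry(root)
                entry.insert(0, param.get("default", ""))
                entry.grid(row=row, column=1)
                entries[param.get("name")] = entry
            return entries

        root = tk.Tk()
        fields = build_form(root, SPEC)
        # A real tool would validate entries against their declared types and
        # write the GLIMMER configuration file on save.
        root.mainloop()

    Because the widgets are derived from the specification rather than hard-coded, a change to the configuration parameters only requires updating the XML, which is the point of the design-pattern approach described above.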

    Distributed computing practice for large-scale science and engineering applications

    It is generally accepted that the ability to develop large-scale distributed applications has lagged seriously behind other developments in cyberinfrastructure. In this paper, we provide insight into how such applications have been developed and an understanding of why developing applications for distributed infrastructure is hard. Our approach is unique in the sense that it is centered around half a dozen existing scientific applications; we posit that these scientific applications are representative of the characteristics, requirements, and challenges of the bulk of current distributed applications on production cyberinfrastructure (such as the US TeraGrid). We provide a novel and comprehensive analysis of such distributed scientific applications. Specifically, we survey existing models and methods for large-scale distributed applications and identify commonalities, recurring structures, patterns, and abstractions. We find that many ad hoc solutions are employed to develop and execute distributed applications, which results in a lack of generality and in distributed applications that are neither extensible nor independent of infrastructure details. In our analysis, we introduce the notion of application vectors: a novel way of understanding the structure of distributed applications. Important contributions of this paper include identifying patterns derived from a wide range of real distributed applications, as well as an integrated approach to analyzing applications, programming systems, and patterns, resulting in the ability to provide a critical assessment of the current practice of developing, deploying, and executing distributed applications. Gaps and omissions in the state of the art are identified, and directions for future research are outlined.

    Flight Simulator Model Integration for Supporting Pilot-in-the-Loop Testing in Model-Based Rotorcraft Design

    Model-Based Design (MBD) enables iterative design practices and boosts the agility of air vehicle development programs. Flight simulators are extensively employed in these programs for evaluating the handling qualities of the designed platforms. To keep up with the agility provided by MBD, the integration of air vehicle models into fairly complex flight simulators needs to be addressed. The AVES Software Development Kit (SDK), which is the simulation software suite of the DLR Air Vehicle Simulator (AVES), enables tackling the model integration starting from the modeler’s desktop. Additionally, 2Simulate, which is the enabling real-time simulation infrastructure of the AVES SDK, provides an automated model integration workflow for MATLAB/Simulink models using Simulink Coder code generation facilities. This paper presents the successful employment of the AVES SDK and the 2Simulate model integration workflow for addressing integration challenges for Pilot-in-the-Loop Testing in AVES.
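    Simulink Coder conventionally generates C entry points of the form <model>_initialize, <model>_step, and <model>_terminate. The sketch below assumes those conventions and a hypothetical shared library name (rotor_model.so) built from the generated code; it only illustrates the fixed-rate stepping that a real-time framework such as 2Simulate automates, and is not the 2Simulate API:

        import ctypes
        import time

        # Hypothetical library built from Simulink Coder output; the exported
        # entry-point names follow the Coder convention <model>_<phase>.
        model = ctypes.CDLL("./rotor_model.so")
        model.RotorModel_initialize()

        STEP_SIZE = 0.01  # assumed model base rate, in seconds
        next_tick = time.monotonic()
        try:
            for _ in range(1000):
                model.RotorModel_step()  # advance the model one base-rate step
                next_tick += STEP_SIZE
                time.sleep(max(0.0, next_tick - time.monotonic()))  # hold real-time pace
        finally:
            model.RotorModel_terminate()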

    Design and Development of Software Tools for Bio-PEPA

    This paper surveys the design of software tools for the Bio-PEPA process algebra. Bio-PEPA is a high-level language for modelling biological systems such as metabolic pathways and other biochemical reaction networks. By providing tools for this modelling language, we hope to allow easier use of a range of simulators and model-checkers, thereby freeing the modeller from the responsibility of developing a custom simulator for the problem of interest. Further, by providing mappings to a range of different analysis tools, the Bio-PEPA language allows modellers to compare analysis results computed using independent numerical analysers, which enhances the reliability and robustness of the results.
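    The mapping such tools provide can be sketched for a toy reaction network. The code below is a generic Gillespie stochastic simulation under assumed mass-action kinetics, not Bio-PEPA's actual translation or syntax:

        import random

        # Toy reversible network: r1: A -> B at rate k1*A, r2: B -> A at rate k2*B.
        k1, k2 = 0.3, 0.1

        def gillespie(state, t_end=50.0):
            """One trajectory of the exact stochastic simulation algorithm."""
            t, trace = 0.0, [(0.0, dict(state))]
            while t < t_end:
                rates = [k1 * state["A"], k2 * state["B"]]
                total = sum(rates)
                if total == 0:
                    break
                t += random.expovariate(total)           # waiting time to next event
                if random.uniform(0, total) < rates[0]:  # choose r1 or r2
                    state["A"] -= 1; state["B"] += 1
                else:
                    state["A"] += 1; state["B"] -= 1
                trace.append((t, dict(state)))
            return trace

        for t, s in gillespie({"A": 100, "B": 0})[:5]:
            print(f"t={t:.3f}  A={s['A']}  B={s['B']}")

    The same network could equally be handed to an ODE solver; being able to run both analyses from one model description and compare the results is the interoperability benefit the paper describes.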

    Contribution to Software Development Method based on Generalized Requirement Approach

    Requirements gathering is one of the first steps in the software development process. Gathering business requirements, when the final product requirements are dictated by a known client, can be a difficult process. Even if the client knows their own business best, their idea of a new business product is often obscure and described in general terms that contribute to common misunderstandings among the participants. Business requirement verification, when the requirements are gathered using text and graphics, can be a slow, error-prone, and expensive process. Misunderstandings and omitted requirements cause the need for revisions and increase project costs and delays.

    This research work proposes a new approach to the business software development process, focused on the client’s understanding of how the business software development process works as well as on demonstrating business requirement practices during the requirement negotiation process. While the current software development process validates business requirements at the end of the development process, this method enables business requirement validation during the requirement negotiation phase. The business requirement negotiation process is guided by a set of predefined questions, which serve as guidelines for specifying a sufficient level of requirement detail to generate executable code that can demonstrate each requirement.

    Effective implementation of the proposed method requires the GRA Framework. Besides providing guidelines for requirement specification, the framework creates executables and provides the test environment for requirement demonstration. This dissertation implements an example framework built around a central repository that stores the data collected during the requirement negotiation process. Access to the repository is managed by a Web interface that enables a collaborative and paperless environment, so the data is stored in one place and updates are reflected to the stakeholders immediately. Executable code is produced by the Generator, a module providing general programming units that create source code files, databases, SQL statements, classes and methods, navigation menus, and demo applications, all from the data stored in the repository. The generated software can then be used for the business requirement demonstration.

    This method assumes that any further development process is built around the requirements repository, which provides continuous tracking of implementation changes. Besides readily documenting, tracking, and validating the requirements, the method addresses multiple requirement management syndromes, such as insufficiently detailed requirement descriptions, the IKIWISI Syndrome (“I’ll know it when I see it”), the Yes, But Syndrome (“That is not exactly what I mean”), and the Undiscovered Ruins Syndrome (“Now that I see it, I have another requirement to add”).
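    The Generator component can be illustrated with a minimal sketch. The record layout and the emitted artifacts below are hypothetical stand-ins for whatever the GRA repository actually stores:

        # Minimal sketch of a generator driven by a requirements repository record.
        requirement = {
            "entity": "Invoice",
            "fields": [("number", "TEXT"), ("amount", "REAL"), ("paid", "INTEGER")],
        }

        def generate_sql(req):
            """Emit a CREATE TABLE statement for the requirement's entity."""
            cols = ",\n    ".join(f"{name} {sqltype}" for name, sqltype in req["fields"])
            return f"CREATE TABLE {req['entity']} (\n    id INTEGER PRIMARY KEY,\n    {cols}\n);"

        def generate_class(req):
            """Emit a plain data class mirroring the entity's fields."""
            args = ", ".join(name for name, _ in req["fields"])
            body = "\n".join(f"        self.{n} = {n}" for n, _ in req["fields"])
            return f"class {req['entity']}:\n    def __init__(self, {args}):\n{body}"

        print(generate_sql(requirement))
        print(generate_class(requirement))

    Stakeholders can then exercise the generated artifacts during negotiation, which is what makes a requirement demonstrable before full development begins.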

    Coherent tool support for design-space exploration

    Embedded Systems Innovation by TNO has developed three generic tools to improve industrial applicability. POOSL provides an integrated editing, debugging, and validation environment for system modelling, combined with a simulator. TRACE is a tool for visualizing quantitative analysis results. Design Framework (DF) targets system architecting, including architectural views, workflow support, and the link of architectural reasoning to concrete modeling activities and artifacts. However, there was no integrated environment that supported these three tools working together. This project proposes a prototype that demonstrates cooperation between the three generic tools as an integrated environment for design-space exploration. The report describes the process applied to build the new integrated environment, Exploration Experiment (EE). EE provides a platform for defining an experiment by specifying a model and a sequence of executors; from DF, the user can execute a defined experiment and obtain the execution results automatically. In addition to the development of EE, this report covers the development of TRACE extensions, which add comparison of multiple Gantt charts and design-space visualizations, a standalone application, and an executable Java Archive (JAR) file.
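    The experiment structure EE exposes, a model handed through a sequence of executors, resembles a simple pipeline. The sketch below is a generic illustration with invented names, not the EE or DF API:

        from typing import Any, Callable

        # An "experiment" here is a model plus an ordered list of executors,
        # each consuming the previous stage's result. Names are illustrative.
        Executor = Callable[[Any], Any]

        def run_experiment(model: Any, executors: list[Executor]) -> Any:
            result = model
            for executor in executors:
                result = executor(result)  # e.g. simulate, then analyze, then report
            return result

        simulate = lambda m: {"model": m, "trace": [1, 3, 2]}
        analyze = lambda r: {**r, "makespan": max(r["trace"])}
        report = lambda r: f"{r['model']}: makespan={r['makespan']}"

        print(run_experiment("producer-consumer", [simulate, analyze, report]))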

    Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies

    Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases, and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries. This makes Grid application management and deployment a complex undertaking. Grid middlewares provide users with seamless computing ability and uniform access to resources in the heterogeneous Grid environment. Several software toolkits and systems, most of them the results of academic research projects, have been developed all over the world. This chapter will focus on four of these middlewares: UNICORE, Globus, Legion, and Gridbus. It also presents our implementation of a resource broker for UNICORE, as this functionality was not supported natively. A comparison of these systems on the basis of their architecture, implementation model, and several other features is included.
    Comment: 19 pages, 10 figures
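    Resource brokering of the kind mentioned above reduces to matching job requirements against resource capabilities. The sketch below is a generic illustration with invented attributes, not the UNICORE broker presented in the chapter:

        # Pick the least-loaded resource that satisfies a job's requirements.
        resources = [
            {"name": "clusterA", "cpus": 64, "os": "linux", "load": 0.72},
            {"name": "clusterB", "cpus": 16, "os": "linux", "load": 0.15},
            {"name": "winfarm", "cpus": 32, "os": "windows", "load": 0.40},
        ]

        def broker(job, resources):
            """Return the matching resource with the lowest current load, or None."""
            candidates = [r for r in resources
                          if r["cpus"] >= job["min_cpus"] and r["os"] == job["os"]]
            return min(candidates, key=lambda r: r["load"], default=None)

        job = {"min_cpus": 8, "os": "linux"}
        chosen = broker(job, resources)
        print(chosen["name"] if chosen else "no matching resource")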