
    The Requirements Editor RED


    Software Evolution for Industrial Automation Systems. Literature Overview


    Proceedings of the joint track "Tools", "Demos", and "Posters" of ECOOP, ECSA, and ECMFA, 2013


    2017 DWH Long-Term Data Management Coordination Workshop Report

On June 7 and 8, 2017, the Coastal Response Research Center (CRRC)[1], NOAA Office of Response and Restoration (ORR) and NOAA National Marine Fisheries Service (NMFS) Restoration Center (RC) co-sponsored the Deepwater Horizon Oil Spill (DWH) Long Term Data Management (LTDM) workshop at the ORR Gulf of Mexico (GOM) Disaster Response Center (DRC) in Mobile, AL. In the wake of the DWH Natural Resource Damage Assessment (NRDA) settlement, the focus has been on restoration planning, implementation and monitoring of on-going DWH-related research. This means that data management, accessibility, and distribution must be coordinated among various federal, state, local, non-governmental organization (NGO), academic, and private sector partners. The scope of DWH far exceeded that of any other spill in the U.S., with an immense amount of data (e.g., 100,000 environmental samples, 15 million publicly available records) gathered during the response and damage assessment phases of the incident, as well as data that continues to be produced from research and restoration efforts. The challenge with this influx of data is checking its quality, documenting its collection, storing it, integrating it into useful products, managing it and archiving it for long-term use. In addition, data must be available to the public in an easily queried and accessible format. Answering questions regarding the success of the restoration efforts will be based on data generated for years to come. The data sets must be readily comparable, representative and complete; be collected using cross-cutting field protocols; be as interoperable as possible; meet standards for quality assurance/quality control (QA/QC); and be unhindered by conflicting or ambiguous terminology. 
During the data management process for the NOAA Natural Resource Damage Assessment (NRDA) for the DWH disaster, NOAA developed a data management warehouse and visualization system that will be used as a long-term repository for accessing and archiving NRDA injury assessment data. This serves as a foundation for the restoration project planning and monitoring data for the next 15 or more years. The main impetus for this workshop was to facilitate public access to the DWH data collected and managed by all entities by developing linkages to, or data exchanges among, applicable GOM data management systems. There were 66 workshop participants (Appendix A) representing a variety of organizations, who met at NOAA’s GOM Disaster Response Center (DRC) in order to determine the characteristics of a successful common operating picture for DWH data, to understand the systems that are currently in place to manage DWH data, and to make DWH data interoperable among data generators, users and managers. The external partners for these efforts include, but are not limited to, the RESTORE Council, Gulf of Mexico Research Initiative (GoMRI), Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC), the National Academy of Sciences (NAS) Gulf Research Program, Gulf of Mexico Alliance (GOMA), and National Fish and Wildlife Foundation (NFWF). The workshop objectives were to: foster collaboration among the GOM partners with respect to data management and integration for restoration planning, implementation and monitoring; identify standards, protocols and guidance for LTDM being used by these partners for DWH NRDA, restoration, and public health efforts; obtain feedback and identify next steps for the work completed by the Environmental Disasters Data Management (EDDM) Working Groups; and work towards best practices on public distribution and access of this data. The workshop consisted of plenary presentations and breakout sessions. 
The workshop agenda (Appendix B) was developed by the organizing committee. The workshop presentation topics included: results of a pre-workshop survey, an overview of data generation, the uses of DWH long-term data, an overview of LTDM, an overview of existing LTDM systems, an overview of data management standards/protocols, results from the EDDM working groups, flow diagrams of existing data management systems, and a vision on managing big data. The breakout sessions included discussions of: issues/concerns for data stakeholders (e.g., data users, generators, managers), interoperability, ease of discovery/searchability, data access, data synthesis, data usability, and metadata/data documentation.
[1] A list of acronyms is provided on Page 1 of this report.

    Inverse software configuration management

Software systems are playing an increasingly important role in almost every aspect of today’s society such that they impact on our businesses, industry, leisure, health and safety. Many of these systems are extremely large and complex and depend upon the correct interaction of many hundreds or even thousands of heterogeneous components. Commensurate with this increased reliance on software is the need for high quality products that meet customer expectations, perform reliably and which can be cost-effectively and safely maintained. Techniques such as software configuration management have proved to be invaluable during the development process to ensure that this is the case. However, there are a very large number of legacy systems which were not developed under controlled conditions, but which still need to be maintained due to the heavy investment incorporated within them. Such systems are characterised by extremely high program comprehension overheads and the probability that new errors will be introduced during the maintenance process, often with serious consequences. To address the issues concerning maintenance of legacy systems this thesis has defined and developed a new process and associated maintenance model, Inverse Software Configuration Management (ISCM). This model centres on a layered approach to the program comprehension process through the definition of a number of software configuration abstractions. This information, together with the set of rules for reclaiming the information, is stored within an Extensible System Information Base (ESIB) via the definition of a Programming-in-the-Environment (PITE) language, the Inverse Configuration Description Language (ICDL). In order to assist the application of the ISCM process across a wide range of software applications and system architectures, the PISCES (Proforma Identification Scheme for Configurations of Existing Systems) method has been developed as a series of defined procedures and guidelines. 
To underpin the method and to offer a user-friendly interface to the process, a series of templates, the Proforma Increasing Complexity Series (PICS), has been developed. To enable the useful employment of these techniques on large-scale systems, the subject of automation has been addressed through the development of a flexible meta-CASE environment, the PISCES M4 (MultiMedia Maintenance Manager) system. Of particular interest within this environment is the provision of a multimedia user interface (MUI) to the maintenance process. As a means of evaluating the PISCES method and to provide feedback into the ISCM process, a number of practical applications have been modelled. In summary, this research has considered a number of concepts, some of which are innovative in themselves, others of which are used in an innovative manner. In combination these concepts may be considered to considerably advance the knowledge and understanding of the comprehension process during the maintenance of legacy software systems. A number of publications have already resulted from the research and several more are in preparation. Additionally, a number of areas for further study have been identified, some of which are already underway as funded research and development projects.

    Industrialising Software Development in Systems Integration

Compared to other disciplines, software engineering as of today is still dependent on the craftsmanship of highly-skilled workers. However, with constantly increasing complexity and efforts, existing software engineering approaches appear more and more inefficient. A paradigm shift towards industrial production methods seems inevitable. Recent advances in academia and practice have led to the availability of industrial key principles in software development as well. Specialization is represented in software product lines, standardization and systematic reuse are available with component-based development, and automation has become accessible through model-driven engineering. While each of the above is well researched in theory, only few cases of successful implementation in the industry are known. This becomes even more evident in specialized areas of software engineering such as systems integration. Today’s IT systems need to quickly adapt to new business requirements due to mergers and acquisitions and cooperations between enterprises. This certainly leads to integration efforts, i.e. joining different subsystems into a cohesive whole in order to provide new functionality. In such an environment, the application of industrial methods for software development seems even more important. Unfortunately, software development in this field is a highly complex and heterogeneous undertaking, as IT environments differ from customer to customer. In such settings, existing industrialization concepts would never break even due to one-time projects and thus insufficient economies of scale and scope. The present thesis, therefore, describes a novel approach for a more efficient implementation of the prior key principles while considering the characteristics of software development for systems integration. 
After identifying the characteristics of the field and their effects on currently known industrialization concepts, an organizational model for industrialized systems integration has thus been developed. It takes software product lines and adapts them in a way feasible for a systems integrator active in several business domains. The result is a three-tiered model consolidating recurring activities and reducing the efforts for individual product lines. For the implementation of component-based development, the present thesis assesses current component approaches and applies an integration metamodel to the most suitable one. This ensures a common understanding of systems integration across different product lines and thus facilitates component reuse, even across product line boundaries. The approach is furthermore aligned with the organizational model to depict in which way component-based development may be applied in industrialized systems integration. Automating software development in systems integration with model-driven engineering was found to be insufficient in its current state. The reason for this lies in insufficient tool chains and a lack of modelling standards. As an alternative, an XML-based configuration of products within a software product line has been developed. It models a product line and its products with the help of a domain-specific language and utilizes stylesheet transformations to generate compilable artefacts. The approach has been tested for its feasibility in an exemplary implementation following a real-world scenario. As not all aspects of industrialized systems integration could be simulated in a laboratory environment, the concept was furthermore validated during several expert interviews with industry representatives. Here, it was also possible to assess cultural and economic aspects. 
The thesis concludes with a detailed summary of the contributions to the field and suggests further areas of research in the context of industrialized systems integration.
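The XML-based product configuration described above can be sketched in miniature. The thesis's actual domain-specific language and stylesheet (XSLT) transformations are not reproduced here, so every element name, class name, and the generation step below are purely illustrative assumptions; a short Python stand-in for resolving a product's feature selection and emitting a compilable artefact might look like:

```python
import xml.etree.ElementTree as ET

# Hypothetical product-line description; the element names are
# illustrative only, not the DSL defined in the thesis.
PRODUCT_LINE = """
<productline name="billing">
  <feature id="invoice" class="InvoiceAdapter"/>
  <feature id="tax" class="TaxAdapter"/>
  <product name="basic">
    <uses feature="invoice"/>
  </product>
  <product name="premium">
    <uses feature="invoice"/>
    <uses feature="tax"/>
  </product>
</productline>
"""

def generate_artifact(xml_text: str, product_name: str) -> str:
    """Resolve a product's feature selection and emit a code stub,
    standing in for the stylesheet-transformation step."""
    root = ET.fromstring(xml_text)
    # Map feature ids to the component classes they contribute.
    features = {f.get("id"): f.get("class") for f in root.findall("feature")}
    product = next(p for p in root.findall("product")
                   if p.get("name") == product_name)
    classes = [features[u.get("feature")] for u in product.findall("uses")]
    lines = [f"// generated integration product: {product_name}"]
    lines += [f"register(new {cls}());" for cls in classes]
    return "\n".join(lines)

print(generate_artifact(PRODUCT_LINE, "premium"))
```

The point of the design, as the abstract describes it, is that the product line is modelled once and individual products are derived by transformation rather than hand-written per customer.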

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object oriented paradigm. Deciding what data and behaviour to form into a class and where to draw the line between its public and private details can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can also be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together – data classes are lacking in functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
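As a rough illustration of how metrics-driven detection of data classes and god classes can work, the sketch below applies simple threshold heuristics to per-class metrics. The metric names and thresholds are assumptions chosen for illustration only; they are not the heuristics of this paper or of Marinescu's detection strategies:

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    """Per-class measurements; names and thresholds are illustrative."""
    name: str
    methods: int                 # total number of methods
    accessors: int               # simple getters/setters among them
    foreign_data_accesses: int   # fields of other classes touched

def is_data_class(m: ClassMetrics) -> bool:
    # A class that is mostly accessors has little behaviour of its own.
    return m.methods > 0 and m.accessors / m.methods >= 0.8

def is_god_class(m: ClassMetrics, method_threshold: int = 40) -> bool:
    # Large, and reaching into many other classes' data.
    return m.methods >= method_threshold and m.foreign_data_accesses >= 10

classes = [
    ClassMetrics("Order", methods=10, accessors=9, foreign_data_accesses=0),
    ClassMetrics("OrderManager", methods=55, accessors=5,
                 foreign_data_accesses=23),
]
for m in classes:
    smells = [s for s, hit in [("data-class", is_data_class(m)),
                               ("god-class", is_god_class(m))] if hit]
    print(m.name, smells)
```

In a real tool the metrics would be extracted from the abstract syntax tree (the Eclipse plug-in described above works against Java source), and the thresholds calibrated against known-smelly code rather than fixed constants.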