17 research outputs found

    Using the Vickrey-Clarke-Groves Auction Mechanism for Enhanced Bandwidth Allocation in Tactical Data Networks

    A mechanism is an institution, such as an auction, a voting protocol, or a market, that defines the rules for how humans are allowed to interact and governs how collective decisions are made. Computational mechanisms arise where computational agents work on behalf of humans. This report describes an investigation of the potential for using computational mechanisms to improve the quality of a combat group's common operating picture in a setting where network bandwidth is scarce. Technical details are provided about a robust emulation of a tactical data network (based loosely on the Navy LINK-11) that was developed for the study. The report also outlines the basic principles of mechanism design, as well as the features of the Vickrey-Clarke-Groves (VCG) auction mechanism implemented for the study. The report describes how the VCG mechanism was used to allocate network bandwidth for sensor data fusion. Empirical results of the investigation are presented, and ideas for further exploration are offered. The overall conclusion of the study is that computational mechanism design is a promising alternative to traditional systems approaches to resource allocation in systems that are highly dynamic, involve many actors engaged in varying activities, and have varying—and possibly competing—goals.
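
    To make the mechanism concrete, below is a minimal sketch of the Clarke payment rule at the heart of VCG, applied to a toy auction of identical transmission slots among unit-demand bidders; the bidder names and values are invented, and the report's actual network model and mechanism are considerably richer.

    ```java
    import java.util.*;

    /** A minimal sketch of a Vickrey-Clarke-Groves auction for k identical
     *  bandwidth slots with unit-demand bidders. Illustrative only; the names
     *  and values are invented, not taken from the report's emulation. */
    public class VcgBandwidthAuction {

        /** Welfare of an efficient allocation: sum of the k largest bids. */
        static double bestWelfare(Map<String, Double> bids, int k) {
            return bids.values().stream()
                    .sorted(Comparator.reverseOrder())
                    .limit(k)
                    .mapToDouble(Double::doubleValue)
                    .sum();
        }

        public static void main(String[] args) {
            int slots = 2; // scarce bandwidth: two transmission slots this cycle
            Map<String, Double> bids = Map.of(
                    "radar", 9.0, "sonar", 7.0, "ewSensor", 4.0); // reported values

            // Efficient allocation: the `slots` highest bidders win.
            List<String> winners = bids.entrySet().stream()
                    .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                    .limit(slots)
                    .map(Map.Entry::getKey)
                    .toList();

            for (String w : winners) {
                // Clarke payment: the harm w's presence causes the others, i.e.
                // (others' best welfare without w) - (others' welfare when w wins).
                Map<String, Double> others = new HashMap<>(bids);
                others.remove(w);
                double welfareWithoutW = bestWelfare(others, slots);
                double welfareWithW = bestWelfare(others, slots - 1); // w holds one slot
                System.out.printf("%s wins and pays %.1f%n", w, welfareWithoutW - welfareWithW);
            }
        }
    }
    ```

    With bids of 9, 7, and 4 for two slots, both winners pay 4 (the displaced bid), which is what makes truthful reporting a dominant strategy under VCG.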

    Robustness Testing of Software-Intensive Systems: Explanation and Guide

    Many Department of Defense (DoD) programs engage in what has been called "happy-path testing" (that is, testing that is only meant to show that the system meets its functional requirements). While testing to ensure that requirements are met is necessary, tests aimed at ensuring that the system handles errors and failures appropriately are often neglected. Robustness has been defined by the Food and Drug Administration as "the degree to which a software system or component can function correctly in the presence of invalid inputs or stressful environmental conditions." This technical note provides guidance and procedures for performing robustness testing as part of DoD or federal acquisition programs that have a software component. It includes background on the need for robustness testing and describes how robustness testing fits into DoD acquisition, including source selection issues, development issues, and developmental and operational testing issues.
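
    For a flavor of what this looks like in practice, the sketch below shows JUnit 5 tests that deliberately feed a component invalid and stressful inputs rather than happy-path ones. The MessageParser stub is invented for the example and stands in for a real system component.

    ```java
    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    /** A sketch of robustness (beyond happy-path) tests for a made-up
     *  MessageParser; the stub below is included only so the example
     *  is self-contained. */
    class MessageParserRobustnessTest {

        /** Stub of the component under test; a real system's parser goes here. */
        static class MessageParser {
            static String parse(String raw) {
                if (raw == null) throw new IllegalArgumentException("null message");
                if (raw.length() < 8) throw new IllegalArgumentException("truncated message");
                return raw.substring(0, 8);  // pretend header extraction
            }
        }

        @Test
        void rejectsNullInputWithDefinedError() {
            // Robust behavior is a documented exception, not a crash or silent corruption.
            assertThrows(IllegalArgumentException.class, () -> MessageParser.parse(null));
        }

        @Test
        void rejectsTruncatedMessage() {
            assertThrows(IllegalArgumentException.class, () -> MessageParser.parse("HDR|"));
        }

        @Test
        void survivesOversizedInput() {
            // Stressful (not merely invalid) condition: far larger input than expected.
            assertDoesNotThrow(() -> MessageParser.parse("A".repeat(10_000_000)));
        }
    }
    ```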

    Pin Component Technology (V1.0) and Its C Interface

    Pin is a basic, simple component technology suitable for building embedded software applications. Pin implements the container idiom for software components. Containers provide a prefabricated shell in which custom code executes and through which all interactions between custom code and its external environment are mediated. Pin is a component technology for pure assembly—systems are assembled by selecting components and connecting their interfaces (which are composed of communication channels called pins). This report describes the main concepts of Pin and documents the C-language interface to Pin V1.0.
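
    The pure-assembly idiom can be illustrated with a short sketch: components expose named pins, and a system is built only by connecting output pins to input pins, with the container mediating every interaction. The code below is an analogy of that idiom under assumed names, not Pin's actual C interface.

    ```java
    import java.util.*;
    import java.util.function.Consumer;

    /** An analogy of the pure-assembly idiom: custom code lives behind pins,
     *  and assembly consists solely of wiring pins together. Not Pin's API. */
    class PinAssemblyDemo {

        /** A component's interface is just its pins; custom code sits behind them. */
        static class Component {
            final String name;
            final Map<String, Consumer<String>> inputPins = new HashMap<>();
            final Map<String, List<Consumer<String>>> outputPins = new HashMap<>();

            Component(String name) { this.name = name; }

            void onInput(String pin, Consumer<String> handler) { inputPins.put(pin, handler); }

            /** The container mediates every interaction: emitting on an output pin
             *  delivers only to input pins that were connected at assembly time. */
            void emit(String pin, String msg) {
                outputPins.getOrDefault(pin, List.of()).forEach(sink -> sink.accept(msg));
            }
        }

        /** Assembly step: wire one component's output pin to another's input pin. */
        static void connect(Component src, String outPin, Component dst, String inPin) {
            src.outputPins.computeIfAbsent(outPin, k -> new ArrayList<>())
               .add(dst.inputPins.get(inPin));
        }

        public static void main(String[] args) {
            Component sensor = new Component("sensor");
            Component logger = new Component("logger");
            logger.onInput("in", msg -> System.out.println("logger got: " + msg));

            connect(sensor, "out", logger, "in");   // pure assembly: no custom glue code
            sensor.emit("out", "temperature=21C");
        }
    }
    ```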

    Perspectives on Open Source Software

    Open source software (OSS) is emerging as the software community's next "silver bullet" and appears to be playing a significant role in the acquisition and development plans of the Department of Defense (DoD) and industry. Yet, as with all previous silver bullets, there are problems with blindly embracing the OSS paradigm. To become familiar with the benefits and pitfalls of using OSS, the Software Engineering Institute (SEI) undertook an internally funded study looking at it from various perspectives: 1) the user of OSS, 2) the developer of OSS, and 3) the organizations looking to deploy software systems composed (partially or completely) of OSS components. During the period of this study, members of the SEI technical staff hosted meetings, conducted interviews, participated in open source development activities, workshops, and conferences, and studied the available literature on the subject. Through these activities, the authors have been able to support, and sometimes refute, common perceptions about OSS. This report is the result of their study.

    Improving the Automated Detection and Analysis of Secure Coding Violations

    Coding errors cause the majority of software vulnerabilities. For example, 64% of the nearly 2,500 vulnerabilities in the National Vulnerability Database in 2004 were caused by programming errors. The CERT Division’s Source Code Analysis Laboratory (SCALe) offers conformance testing of C language software systems against the CERT C Secure Coding Standard and the CERT Oracle Secure Coding Standard for Java, using various analysis tools available from commercial software vendors. Unfortunately, the current SCALe analysis process and tools do not collect any statistics about the accuracy of the code analysis tools or about the coding violations they flag, such as frequency of occurrence. This paper describes the approach used to add the ability to collect and statistically analyze data regarding coding violations and tool characteristics, along with the initial results. The collected data will be used over time to improve the effectiveness of the SCALe analysis.
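
    As a rough sketch of the kind of data collection the paper adds, the example below tallies how often each rule is flagged and estimates per-tool precision from audited diagnostics. The record shape, tool names, and sample data are invented, not SCALe's actual schema; the rule IDs are real CERT C identifiers used only as examples.

    ```java
    import java.util.*;
    import java.util.stream.*;

    /** A sketch of collecting statistics over audited diagnostics: frequency
     *  of occurrence per rule, and a per-tool precision estimate. */
    public class ViolationStats {

        record Diagnostic(String rule, String tool, boolean confirmedTruePositive) {}

        public static void main(String[] args) {
            List<Diagnostic> audited = List.of(
                    new Diagnostic("INT31-C", "toolA", true),
                    new Diagnostic("INT31-C", "toolB", true),
                    new Diagnostic("EXP34-C", "toolA", false),
                    new Diagnostic("INT31-C", "toolA", true));

            // Frequency of occurrence per rule, a statistic the current process lacks.
            Map<String, Long> frequency = audited.stream()
                    .collect(Collectors.groupingBy(Diagnostic::rule, Collectors.counting()));

            // Per-tool precision estimate: confirmed diagnostics / all diagnostics.
            Map<String, Double> precision = audited.stream()
                    .collect(Collectors.groupingBy(Diagnostic::tool,
                            Collectors.averagingDouble(d -> d.confirmedTruePositive() ? 1 : 0)));

            System.out.println("rule frequency: " + frequency);
            System.out.println("tool precision: " + precision);
        }
    }
    ```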

    Predicting the Behavior of a Highly Configurable Component Based Real-Time System

    Software components and the technology supporting component-based software engineering contribute greatly to the rapid development and configuration of systems for a variety of application domains. Such domains extend beyond desktop office applications and information systems supporting e-commerce to include systems with real-time performance requirements and critical functionality. This paper discusses the results of an experiment that demonstrates the ability to predict deadline satisfaction of threads in a real-time system whose functionality depends on the configuration of the assembled software components. It presents the method used to abstract the large, legacy code base of the system software and the application software components in the system; the model of those abstractions, based on available architecture documentation and empirically based runtime observations; and the analysis of the predictions, which yielded objective confidence in the observations and in the model that formed the underlying basis for the predictions.
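
    For intuition about deadline-satisfaction predictions, the sketch below iterates the classical fixed-priority response-time recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j on an invented task set. The paper's own predictions were built from measured abstractions of the real system, not from this textbook analysis.

    ```java
    /** A sketch of classical fixed-priority response-time analysis, the standard
     *  recurrence behind "will every thread meet its deadline?" checks.
     *  The task set is invented for illustration. */
    public class ResponseTimeAnalysis {

        // Tasks sorted highest priority first: {worst-case exec time C, period T, deadline D}
        static final double[][] TASKS = { {1, 4, 4}, {2, 6, 6}, {3, 12, 12} };

        public static void main(String[] args) {
            for (int i = 0; i < TASKS.length; i++) {
                double C = TASKS[i][0], D = TASKS[i][2];
                double r = C, prev = -1;
                while (r != prev && r <= D) {          // iterate to a fixed point
                    prev = r;
                    r = C;
                    for (int j = 0; j < i; j++)        // interference from higher priorities
                        r += Math.ceil(prev / TASKS[j][1]) * TASKS[j][0];
                }
                System.out.printf("task %d: response time %.0f, deadline %s%n",
                        i, r, r <= D ? "met" : "missed");
            }
        }
    }
    ```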

    Proceedings of the System of Systems Interoperability Workshop

    The Software Engineering Institute has initiated an internal research and development effort to investigate interoperability between systems. As part of the research, a workshop was held in February 2003 with an advisory board of Department of Defense experts. A preliminary model of interoperability was presented and feedback on the model was requested. This technical note documents the model of interoperability presented and the findings from the workshop.

    An Enterprise Information System Data Architecture Guide

    Data architecture defines how data is stored, managed, and used in a system. It establishes common guidelines for data operations that make it possible to predict, model, gauge, and control the flow of data in the system. This is even more important when system components are developed by or acquired from different contractors or vendors. This report describes a sample data architecture in terms of a collection of generic architectural patterns that both define and constrain how data is managed in a system that uses the Java 2 Platform, Enterprise Edition (J2EE) and the Open Applications Group Integration Specification (OAGIS). Each of these data architectural patterns illustrates a common data operation and how it is implemented in a system.
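
    As one illustration of a generic data architectural pattern of this kind, the sketch below routes all data operations through a single repository contract so the flow of data stays uniform no matter which contractor built the calling component. The interface names are illustrative assumptions, not taken from the report or from OAGIS.

    ```java
    import java.util.*;

    /** A sketch of a pattern that both defines and constrains data operations:
     *  callers see one sanctioned contract, never the storage choice behind it. */
    public class DataAccessPatternDemo {

        /** The single sanctioned contract for data operations on business objects. */
        interface Repository<T> {
            Optional<T> findById(String id);
            void save(String id, T entity);
        }

        /** One concrete realization; a J2EE system might back this with entity
         *  beans or JDBC, but callers never see that choice. */
        static class InMemoryRepository<T> implements Repository<T> {
            private final Map<String, T> store = new HashMap<>();
            public Optional<T> findById(String id) { return Optional.ofNullable(store.get(id)); }
            public void save(String id, T entity) { store.put(id, entity); }
        }

        public static void main(String[] args) {
            Repository<String> orders = new InMemoryRepository<>();
            orders.save("PO-1", "10 crates of widgets");
            System.out.println(orders.findById("PO-1").orElse("not found"));
        }
    }
    ```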

    Maintaining Transactional Context: A Model Problem

    Due to their size and complexity, enterprise systems are often modernized incrementally, with new functionality developed and deployed in stages. As modernized functionality is deployed incrementally, transactions that were processed entirely in the legacy system may become distributed across both legacy and modernized components. In this report, we investigate the construction of adapters that can maintain a transactional context between legacy and modernized components during such a modernization effort. One technique that is particularly useful in technology and product evaluations is the use of model problems—focused experimental prototypes that reveal technology/product capabilities, benefits, and limitations in well-bounded ways. We describe a model problem constructed to verify that such an adapter mechanism is feasible and could be used to support the modernization of a legacy system. We also discuss the results of our investigation, including the problems we encountered during the construction of the model problem and the workarounds that were discovered.
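
    To suggest what such an adapter might look like, the sketch below shares one transactional context across a legacy step and a modernized step so that both commit or both roll back. Every interface shown is a hypothetical stand-in, not the report's actual design or any vendor API.

    ```java
    import java.util.UUID;

    /** A sketch of the adapter idea: one distributed transactional context
     *  spanning a legacy component and a modernized component. All interfaces
     *  here are hypothetical stand-ins. */
    public class TransactionalAdapterSketch {

        record TxContext(String txId) {}                 // opaque transaction handle

        interface TransactionManager {                   // hypothetical coordinator
            TxContext begin();
            void commit(TxContext tx);                   // e.g., via two-phase commit
            void rollback(TxContext tx);
        }

        interface LegacyAccountSystem { void debit(TxContext tx, String acct, long cents); }
        interface ModernLedgerService { void credit(TxContext tx, String acct, long cents); }

        /** The adapter: one business operation spanning old and new components. */
        static void transfer(TransactionManager tm, LegacyAccountSystem legacy,
                             ModernLedgerService modern, String from, String to, long cents) {
            TxContext tx = tm.begin();
            try {
                legacy.debit(tx, from, cents);   // legacy resource enlisted in tx
                modern.credit(tx, to, cents);    // modern resource enlisted in same tx
                tm.commit(tx);                   // both changes become visible together
            } catch (RuntimeException e) {
                tm.rollback(tx);                 // neither side keeps a partial result
                throw e;
            }
        }
    }
    ```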
