
    The determination of measures of software reliability

    Measurement of software reliability was carried out during the development of database software for a multi-sensor tracking system. The failure ratio and failure rate were found to be consistent measures. Trend lines could be established from these measurements that provide good visualization of progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible applications of these findings for line management, project managers, functional management, and regulatory agencies are discussed. Steps for simplifying the measurement process and for using these data to predict operational software reliability are outlined.
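As an illustration of the two measures named in this abstract, the sketch below computes a failure ratio (failures per run) and a failure rate (failures per machine hour) and checks for the downward trend that signals progress. The weekly counts are invented for illustration; they are not from the original study.

```python
# Hypothetical weekly bookkeeping; none of these numbers come from the paper.
runs     = [50, 60, 55, 70, 80]   # run submissions per week
failures = [20, 18, 12, 10, 6]    # observed failures per week
hours    = [40, 45, 42, 50, 55]   # machine hours per week

failure_ratio = [f / r for f, r in zip(failures, runs)]   # failures per run
failure_rate  = [f / h for f, h in zip(failures, hours)]  # failures per hour

# A monotonically decreasing series across weeks is the kind of trend line
# the abstract describes as visualizing progress on the job as a whole.
improving = all(a >= b for a, b in zip(failure_ratio, failure_ratio[1:]))
```

Plotting both series per module, rather than only in aggregate, gives the per-module view the abstract mentions.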

    In Media Res

    We are inundated by a constant feed of media that responds and adapts in real time to the impulses of our psyches and the dimensions of our devices. Beneath the surface, this stream of information is directed by hidden, automated controls and steered by political agendas. The transmission of information has evolved into a spiral of entropy, and the boundaries between author, content, platform, and receiver have blurred. This reductive space of responsive media is a catalyst for immense political and cultural change, causing us to question our notions of authority, truth, and reality.

    Understanding Enterprise Risk Across an Acquisition Portfolio: A Grounded Theory Approach

    Every acquisition program contains risks. But what impact do these risks have on the entire portfolio of acquisition activities? What does risk at the enterprise level really mean? For example, risk collectively could portend great danger to the acquisition manager’s overall portfolio that might otherwise be masked by traditional program performance analysis. Alternatively, these risks might represent opportunities to achieve greater results when analyzed from a portfolio perspective. An initial review of the literature suggests that most leaders are unable to articulate the risk carried by their portfolio of product development activities or what it means to them. However, the same literature suggests they strongly desire this capability. Beginning with a review of the applicable literature in the areas of risk, product development (acquisition), and product portfolio management, portfolio-level risk applications are found to be sparse and ill-conceived. Initial analysis of interviews with portfolio leaders involved in military product development activities within portfolios of large, complex system development will be presented, with a discussion of the implications of enterprise risk for product portfolio management.

    Systems Engineering Leading Indicators Guide, Version 2.0

    The Systems Engineering Leading Indicators Guide editorial team is pleased to announce the release of Version 2.0. Version 2.0 supersedes Version 1.0, which was released in July 2007 and was the result of a project initiated by the Lean Advancement Initiative (LAI) at MIT in cooperation with the International Council on Systems Engineering (INCOSE), Practical Software and Systems Measurement (PSM), and the Systems Engineering Advancement Research Initiative (SEAri) at MIT. A leading indicator is a measure for evaluating how effectively a specific project activity is likely to affect system performance objectives. A leading indicator may be an individual measure or a collection of measures and associated analysis that is predictive of future systems engineering performance. Systems engineering performance itself could be an indicator of future project execution and system performance. Leading indicators aid leadership in delivering value to customers and end users and help identify interventions and actions to avoid rework and wasted effort. Conventional measures provide status and historical information; leading indicators draw on trend information to allow predictive analysis, so that the outcomes of certain activities can be forecast. Trends are analyzed for insight into both the entity being measured and potential impacts on other entities. This provides leaders with the data they need to make informed decisions and, where necessary, take preventive or corrective action during the program in a proactive manner. The Version 2.0 guide adds five new leading indicators to the previous 13, for a new total of 18 indicators. The guide addresses feedback from users of the previous version, as well as lessons learned from implementation and industry workshops. The document format has been improved for usability, and several new appendices provide application information and techniques for determining correlations of indicators. Tailoring of the guide for effective use is encouraged. Additional collaborating organizations involved in Version 2.0 include the Naval Air Systems Command (NAVAIR), US Department of Defense Systems Engineering Research Center (SERC), and National Defense Industrial Association (NDIA) Systems Engineering Division (SED). Many leading measurement and systems engineering experts from government, industry, and academia volunteered their time to work on this initiative.
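As an illustration of how trend information supports the predictive analysis described above, the sketch below fits a least-squares line to an indicator time series and extrapolates it one period ahead. The indicator name and values are invented for illustration; they are not taken from the guide.

```python
# Fit y = slope * x + intercept by ordinary least squares over the periods.
def linear_trend(values):
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical indicator: fraction of requirements changed per reporting period.
requirements_volatility = [0.30, 0.27, 0.22, 0.18, 0.15]

slope, intercept = linear_trend(requirements_volatility)
# Extrapolate one period beyond the observed data.
forecast = slope * len(requirements_volatility) + intercept
```

A negative slope here is the kind of favorable trend a leading indicator is meant to surface early, before conventional status measures would show it.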

    Products of Their Environment? Nuclear Proliferation and the Emerging Multipolar International System

    The world is shifting from the unipolar system that followed the end of the Cold War to a multipolar system ushered in by the rise of the rest. This change in the global structure has led some analysts to predict an increase in nuclear weapons proliferation caused by increased uncertainty and a decrease in alliances and security assurances. Nuclear proliferation, however, will not increase, because these predictions are founded upon realist assumptions that inaccurately predict the characteristics of the emerging multipolar system and misread the calculations states make with regard to nuclear weapons programs. I review a variety of literature on international politics theory and nuclear weapons to form a theoretical framework, and use Iran and Turkey as case studies to test my hypothesis.

    Optimization of Air Defense System Deployment Against Reconnaissance Drone Swarms

    Due to their advantages in flexibility, scalability, survivability, and cost-effectiveness, drone swarms have been increasingly used for reconnaissance tasks and have posed great challenges to their opponents on modern battlefields. This paper studies an optimization problem for deploying air defense systems against reconnaissance drone swarms. Given a set of available air defense systems, the problem determines the location of each air defense system in a predetermined region such that the cost for enemy drones to pass through the region is maximized. The cost is calculated based on a counterpart drone path planning problem. To solve this adversarial problem, we first propose an exact iterative search algorithm for small problem instances, and then propose an evolutionary framework that uses a specific encoding-decoding scheme for large problem instances. We implement the evolutionary framework with six popular evolutionary algorithms. Computational experiments on a set of different test instances validate the effectiveness of our approach for defending against reconnaissance drone swarms.
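The exact search for small instances can be illustrated with a toy version of the problem: place k defense systems on a small grid so that the cheapest path a drone can take from the top edge to the bottom edge is as expensive as possible. The grid model, the cell costs, and single-cell defense coverage are invented simplifications, not the paper's formulation.

```python
import heapq
import itertools

def cheapest_path_cost(grid_n, defended):
    """Dijkstra from any top-row cell to any bottom-row cell.

    Entering a defended cell costs 5 (hypothetical engagement cost),
    an undefended cell costs 1.
    """
    dist = {}
    pq = [(5 if (0, c) in defended else 1, (0, c)) for c in range(grid_n)]
    heapq.heapify(pq)
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) in dist:
            continue
        dist[(r, c)] = d
        if r == grid_n - 1:          # reached the bottom edge
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < grid_n and 0 <= nc < grid_n and (nr, nc) not in dist:
                step = 5 if (nr, nc) in defended else 1
                heapq.heappush(pq, (d + step, (nr, nc)))
    return float("inf")

def best_deployment(grid_n, k):
    """Exhaustive search over placements, maximizing the drone's cheapest path."""
    cells = list(itertools.product(range(grid_n), range(grid_n)))
    return max(itertools.combinations(cells, k),
               key=lambda placement: cheapest_path_cost(grid_n, set(placement)))
```

On a 3x3 grid with 3 systems, the optimum blocks an entire row, forcing every crossing to pay the engagement cost at least once; the paper's evolutionary framework replaces the exhaustive outer loop when the instance is too large to enumerate.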

    The Effect of Task Load, Automation Reliability, and Environment Complexity on UAV Supervisory Control Performance

    Over the last decade, military unmanned aerial vehicles (UAVs) have experienced exponential growth and now comprise over 40% of military aircraft. However, since most military UAVs require multiple operators (usually an air vehicle operator, payload operator, and mission commander), the proliferation of UAVs has created a manpower burden within the U.S. military. Fortunately, simultaneous advances in UAV automation have enabled a switch from direct control to supervisory control; future UAV operators will no longer directly control a single UAV subsystem but, rather, will control multiple advanced, highly autonomous UAVs. However, research is needed to better understand operator performance in a complex UAV supervisory control environment. The Naval Research Lab (NRL) developed SCOUT™ (Supervisory Control Operations User Testbed) to realistically simulate the supervisory control tasks that a future UAV operator will likely perform in a dynamic, uncertain setting under highly variable time constraints. The study reported herein used SCOUT to assess the effects of task load, environment complexity, and automation reliability on UAV operator performance and automation dependence. The effects of automation reliability on participants’ subjective trust ratings and the possible dissociation between task load and subjective workload ratings were also explored. Eighty-one Navy student pilots completed a 34:15-minute pre-scripted SCOUT scenario, during which they managed three helicopter UAVs. To meet mission goals, they decided how best to allocate the UAVs to locate targets while they maintained communications, updated UAV parameters, and monitored their sensor feeds and airspace. After completing training on SCOUT, participants were randomly sorted into low and high automation reliability groups. Within each group, task load (the number of messages and vehicle status updates that had to be made and the number of new targets that appeared) and environment complexity (the complexity of the payload monitoring task) were varied between low and high levels over the course of the scenario. Participants’ throughput, accuracy, and expected value in response to mission events were used to assess their performance. In addition, participants rated their subjective workload and fatigue using the Crew Status Survey. Finally, a four-item survey modeled after Lee and Moray’s (1994) validated scale was used to assess participants’ trust in the payload task automation and their self-confidence that they could have manually performed the payload task. This study contributed to the growing body of knowledge on operator performance within a UAV supervisory control setting. More specifically, it provided experimental evidence of the relationship between operator task load, task complexity, and automation reliability and their effects on operator performance, automation dependence, and operators’ subjective experiences of workload and fatigue. It also explored the relationship between automation reliability and operators’ subjective trust in that automation. The immediate goal of this research effort is to contribute to the development of a suite of domain-specific performance metrics to enable the development and/or testing and evaluation of future UAV ground control stations (GCS), particularly new work support tools and data visualizations. Long-term goals include the potential augmentation of the current Aviation Selection Test Battery (ASTB) to better select future UAV operators and operational use of the metrics to determine mission-specific manpower requirements. In the far future, UAV-specific performance metrics could also contribute to the development of a dynamic task allocation algorithm for distributing control of UAVs amongst a group of operators.

    Error Characterization of Vision-Aided Navigation Systems

    The goal of this work is to characterize the errors committed by an Image Aided Navigation (IAN) algorithm developed for use as a navigation tool in GPS-denied areas. The filter under study was developed by the Air Force Institute of Technology's Advanced Navigation Technology center and has been the focus of numerous research efforts. Unfortunately, these studies have all been based on single runs or simulations, and such results may not be indicative of the true filter performance. This problem extends to IAN publications in general; no analysis of IAN based upon a sizable real-world data collection appears in the literature. This issue is addressed by applying Monte Carlo analysis methods to a 100-run data set collected using a joystick-controlled robot outfitted with an inertial unit and stereo cameras. The averaged error magnitudes are found to be within 1 m RMSE. In addition, optimism in the filter-computed covariance is verified. Finally, two instances of filter divergence are explored, with the causes traced to feature matching errors. The results of this work will support future research efforts by providing a baseline measure of filter performance against which prospective enhancements may be compared.
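The Monte Carlo analysis described here, ensemble-averaged error magnitudes plus a consistency check on the filter-reported covariance, can be sketched as follows. The per-run errors and the reported sigma are simulated with invented numbers, not taken from the actual robot data collection.

```python
import math
import random

random.seed(0)
n_runs, n_steps = 100, 50
reported_sigma = 0.5   # filter claims a 0.5 m standard deviation
true_sigma = 0.8       # simulated actual spread is larger: an optimistic filter

# One position-error trace per run (signed errors in meters).
runs = [[random.gauss(0.0, true_sigma) for _ in range(n_steps)]
        for _ in range(n_runs)]

# RMSE at each time step, averaged over the ensemble of runs.
rmse = [math.sqrt(sum(run[t] ** 2 for run in runs) / n_runs)
        for t in range(n_steps)]

# Fraction of errors inside the filter's ±2-sigma bound; a consistent
# covariance would give roughly 0.95, an optimistic one noticeably less.
inside = sum(abs(e) <= 2 * reported_sigma for run in runs for e in run)
coverage = inside / (n_runs * n_steps)
```

A coverage well below the nominal 95% is the kind of evidence the abstract cites when it reports that optimism in the filter-computed covariance was verified.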

    Approaching Dynamic PSA within CANDU 6 NPP

    This dissertation outlines the applications that are the subject of the work and lays out its content. Chapter 1 reviews the main concepts of conventional PSA, gives a short history of Dynamic PSA (DPSA), and presents a non-exhaustive DPSA state of the art covering recent and future developments. Chapter 2 presents the first application of the thesis, which introduces the Integrated Dynamic Decision Analysis (IDDA) code, the main tool used in approaching Dynamic PSA. Starting from a description that reflects the level of knowledge about the system, the IDDA code develops all event scenarios compatible with that description, in terms of both logical construction and probabilistic coherence. By describing the system configuration and operation in a logically consistent manner, all the information is worked out by the code and made available to the analyst in terms of system unavailability, minimal cut sets, and associated uncertainty. The code also allows the association of different consequences that may be of interest to the analyst. The consequences can be of any type, such as economic cost or equipment outage time; for instance, an outage time can be assigned to certain components of the system and the “expected risk” then calculated. The association of consequences provides the inputs for a sound decision-making process. Chapter 3 presents the core applications of the present work, whose purpose is coupling the logic-probabilistic model of the system or plant with the associated phenomenology of the primary heat transport system of a generic CANDU 6 NPP. The first application couples the logic-probabilistic model of the Emergency Water Supply (EWS) system with the associated phenomenology of the primary heat transport system of the CANDU 6 NPP. The plant transient considered is the total loss of main feedwater with or without the coincident failure of the Emergency Water Supply System. The second application considers a CANDU 6 station blackout, that is, the loss of all AC power sources existing on the site, as the plant transient. The transient scenarios consider the possibility of recovering the offsite grid and of using mobile diesel generators to mitigate the accident consequences. The purpose is to challenge the plant design and response and to check whether the conditions of a severe accident are reached; the plant response is challenged over both short and long periods of time. The IDDA code allows interfacing the logic-probabilistic model of the system with the plant response in time, and therefore with the evolution in time of the plant process variables. This makes it possible to build sequences of events related by cause-consequence reasoning, each giving rise to a scenario with its own development and consequences, and thus to learn not only which sequences of events take place but also the real environment in which they take place. Associating the system sequences that lead to system unavailability on demand with the resulting phenomenology proves to be a useful tool for decision making, both in the design phase and over the entire power plant lifetime. Chapter 4 presents future applications that could be developed with the present Dynamic PSA approach. A particular application could be the optimization or development of robust plant emergency operating procedures (EOPs): coupling the logic-probabilistics of the plant configurations corresponding to an EOP with the associated phenomenology of the primary heat transport systems, with consideration for the plant safety systems. Such an application could highlight situations where the plant fails either because of hardware failures or because of system dynamics, and reveal situations where changes of hardware state drive the process variables out of the system domain. A timeline of the process variables characterizing the plant state would reveal the time windows operators have for intervention in order to avoid potentially catastrophic conditions. Weak points in the EOP could then be identified and resolutions for their improvement provided on the basis of sensitivity analyses. Chapter 5 presents the conclusions and insights of the work and outlines possible improvements to the proposed methodology.
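The "expected risk" association mentioned for Chapter 2 can be sketched as a probability-weighted sum of consequences over minimal cut sets. The cut sets, probabilities, and outage hours below are invented for illustration; they are not from the IDDA model or the CANDU 6 analysis.

```python
# Hypothetical minimal cut sets: each maps to (occurrence probability,
# consequence in equipment outage hours).
cut_sets = {
    ("pump_A_fails", "pump_B_fails"):      (1.2e-4, 72.0),
    ("valve_stuck", "operator_error"):     (3.0e-5, 24.0),
    ("loss_of_offsite_power", "dg_fails"): (5.0e-6, 168.0),
}

# Rare-event approximation: system unavailability as the sum of cut-set
# probabilities (cross terms are negligible at these magnitudes).
unavailability = sum(p for p, _ in cut_sets.values())

# Expected risk: probability-weighted outage hours, the quantity the
# text describes as input to the decision-making process.
expected_risk = sum(p * c for p, c in cut_sets.values())
```

Ranking cut sets by their contribution to this sum is one simple way such results feed design-phase decisions.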