
    Management issues in systems engineering

    When applied to a system, the doctrine of successive refinement is a divide-and-conquer strategy: complex systems are successively divided into pieces that are less complex, until they are simple enough to be conquered. This decomposition results in several structures for describing the product system and the producing system. These structures play important roles in systems engineering and project management, and many of the remaining sections in this chapter are devoted to describing some of these key structures. Structures that describe the product system include, but are not limited to, the requirements tree, the system architecture, and certain symbolic information such as system drawings, schematics, and databases. The structures that describe the producing system include the project's work breakdown structure, schedules, cost accounts, and organization.
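The successive-refinement idea above can be sketched as a small recursive decomposition. Everything here is illustrative: the complexity scores, the halving splitter, and the stopping threshold are assumptions standing in for real engineering judgment, not part of the chapter.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Element:
    """A node in a product or work breakdown structure."""
    name: str
    complexity: int                      # hypothetical complexity score
    children: List["Element"] = field(default_factory=list)

def refine(element: Element, threshold: int,
           split: Callable[[Element], List[Element]]) -> None:
    """Successively divide an element until each piece falls below the
    complexity threshold, i.e. is 'simple enough to be conquered'."""
    if element.complexity <= threshold:
        return
    for part in split(element):
        element.children.append(part)
        refine(part, threshold, split)

def halve(e: Element) -> List[Element]:
    # Hypothetical splitter: divide an element into two equal halves.
    half = e.complexity // 2
    return [Element(f"{e.name}.{i}", half) for i in (1, 2)]

def leaves(e: Element) -> List[Element]:
    """Collect the 'conquered' leaf elements of the decomposition."""
    if not e.children:
        return [e]
    return [leaf for c in e.children for leaf in leaves(c)]

system = Element("spacecraft", 8)
refine(system, threshold=2, split=halve)
print(len(leaves(system)))  # 4 leaf elements, each of complexity 2
```

The leaf set corresponds to the bottom of a requirements tree or work breakdown structure; the tree built along the way is the structure the chapter says both product and producing systems share.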

    Regulator audit framework

    Summary: The framework set out in this paper provides guidance for auditing the performance of regulators in regard to the compliance costs they impose on business and other regulated entities. It complements other frameworks used to assess the performance of regulators in regard to their efficiency and effectiveness, and processes for ex ante assessment of the impact of proposed regulations. The framework should be applied within institutional arrangements that establish the authority, resources, and mechanisms to hold regulators to account. For audits to improve regulator performance in this regard, they need to:
    - develop an audit plan in consultation with business and other stakeholders; this document should set out how the regulator will reduce compliance costs (good practice indicators) and how achievement of this objective will be assessed (metrics)
    - reward good performance and sanction poor performance
    - comply with, and report against, the high-level principles for good performance
    - be public documents, with the audit plans and reports made available on the regulator's website.
    In order for the audits to be undertaken in an effective and efficient way they should:
    - focus on the principles and particular areas of regulator behaviour that have the greatest effect on the cost of compliance for the businesses they regulate (these will differ across regulators)
    - select good practice indicators that best reflect regulator behaviour that minimises compliance costs while still achieving the objectives of the regulation
    - provide metrics at the highest level possible to demonstrate the satisfaction of the principle or indicator, utilising data and information from existing sources where available
    - require auditors to 'triangulate' information in forming a view of the satisfactory achievement of a principle
    - be included as a separate module in external audits that examine broader areas of performance of the regulator and regulation.
    As part of the broader system that promotes regulation reform and reduces regulatory burden, oversight is needed to:
    - ensure that audit plans are prepared and that both plans and audit reports are published
    - coordinate the development of audit plans and audits to minimise the costs to business of participating in the process, and prioritise resources to where the potential for improvement is greatest
    - facilitate feedback on the quality of the regulations and the need for reform
    - publish a report card facilitating comparison of the performance of regulators and lessons on approaches that have worked well in reducing compliance costs.

    Technology Readiness Levels at 40: a study of state-of-the-art use, challenges, and opportunities

    The technology readiness level (TRL) scale was introduced by NASA in the 1970s as a tool for assessing the maturity of technologies during complex system development. TRL data have been used to make multi-million dollar technology management decisions in programs such as NASA's Mars Curiosity Rover. This scale is now a de facto standard used for technology assessment and oversight in many industries, from power systems to consumer electronics. Low TRLs have been associated with significantly reduced timeliness and increased costs across a portfolio of US Department of Defense programs. However, anecdotal evidence raises concerns about many of the practices related to TRLs. We study TRL implementations based on semi-structured interviews with employees from seven different organizations and examine documentation collected from industry standards and organizational guidelines related to technology development and demonstration. Our findings consist of 15 challenges observed in TRL implementations that fall into three different categories: system complexity, planning and review, and validity of assessment. We explore research opportunities for these challenges and posit that addressing these opportunities, either singly or in groups, could improve decision processes and performance outcomes in complex engineering projects.
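For readers unfamiliar with the scale the abstract discusses, the nine NASA TRL levels (paraphrased from NASA's published definitions) can be captured in a small lookup, together with a hypothetical portfolio check of the kind a program office might run. The component names, their TRLs, and the choice of TRL 6 as a maturity gate are illustrative assumptions, not data from the study.

```python
# The nine NASA TRL levels, paraphrased from NASA's published definitions.
TRL_LABELS = {
    1: "basic principles observed and reported",
    2: "technology concept and/or application formulated",
    3: "analytical and experimental proof of concept",
    4: "component validation in a laboratory environment",
    5: "component validation in a relevant environment",
    6: "system/subsystem model or prototype demonstration in a relevant environment",
    7: "system prototype demonstration in an operational environment",
    8: "actual system completed and qualified through test and demonstration",
    9: "actual system proven through successful mission operations",
}

def immature(components: dict, gate: int = 6) -> list:
    """Return, sorted, the components whose TRL falls below the gate."""
    return sorted(name for name, trl in components.items() if trl < gate)

# Hypothetical portfolio: assessed TRL per component.
portfolio = {"sensor": 7, "battery": 4, "antenna": 6, "flight_software": 3}
print(immature(portfolio))  # ['battery', 'flight_software']
```

The paper's point is that such assessments are harder than this lookup suggests; the 15 challenges it identifies concern how the single TRL number per component is arrived at and reviewed.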

    Unconventional machine learning of genome-wide human cancer data

    Recent advances in high-throughput genomic technologies coupled with exponential increases in computer processing and memory have allowed us to interrogate the complex aberrant molecular underpinnings of human disease from a genome-wide perspective. While the deluge of genomic information is expected to increase, a bottleneck in conventional high-performance computing is rapidly approaching. Inspired in part by recent advances in physical quantum processors, we evaluated several unconventional machine learning (ML) strategies on actual human tumor data. Here we show for the first time the efficacy of multiple annealing-based ML algorithms for classification of high-dimensional, multi-omics human cancer data from the Cancer Genome Atlas. To assess algorithm performance, we compared these classifiers to a variety of standard ML methods. Our results indicate the feasibility of using annealing-based ML to provide competitive classification of human cancer types and associated molecular subtypes, and superior performance with smaller training datasets, thus providing compelling empirical evidence for the potential future application of unconventional computing architectures in the biomedical sciences.
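The annealers the paper evaluates are physical or digital devices and are not reproduced here; as a rough intuition for "annealing-based" optimization of a classifier, the following is a classical simulated-annealing sketch on invented, linearly separable two-class toy data. The data, cooling schedule, and proposal distribution are all illustrative assumptions.

```python
import math
import random

# Classical simulated annealing: search for linear-classifier weights
# that minimize misclassifications on toy two-class data. Improvements
# are always accepted; worsening moves are accepted with a probability
# that shrinks as the temperature cools (the annealing step).
random.seed(0)

data = [([1.0, 2.0], 1), ([2.0, 1.5], 1),
        ([-1.0, -2.0], 0), ([-2.0, -1.0], 0)]

def errors(w):
    """Count points on the wrong side of the hyperplane w . x = 0."""
    return sum(((w[0] * x[0] + w[1] * x[1]) >= 0) != (y == 1)
               for x, y in data)

w = [random.uniform(-1, 1) for _ in range(2)]
best = list(w)
temp = 1.0
while temp > 1e-3:
    cand = [wi + random.gauss(0, temp) for wi in w]   # perturb weights
    delta = errors(cand) - errors(w)
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        w = cand
        if errors(w) < errors(best):
            best = list(w)
    temp *= 0.95                                      # cool down

print(errors(best))
```

Quantum and digital annealers pose the same kind of energy-minimization problem in hardware (typically as a QUBO), which is what lets them act as trainers for classification models.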

    The Integrated Medical Model: Statistical Forecasting of Risks to Crew Health and Mission Success

    The Integrated Medical Model (IMM) helps capture and use organizational knowledge across the space medicine, training, operations, engineering, and research domains. The IMM uses this domain knowledge in the context of a mission and crew profile to forecast crew health and mission success risks. The IMM is most helpful in comparing the risk of two or more mission profiles, not as a tool for predicting absolute risk. The process of building the IMM adheres to Probabilistic Risk Assessment (PRA) techniques described in NASA Procedural Requirement (NPR) 8705.5, and uses current evidence-based information to establish a defensible position for making decisions that help ensure crew health and mission success. The IMM quantitatively describes the following input parameters: 1) medical conditions and likelihood, 2) mission duration, 3) vehicle environment, 4) crew attributes (e.g., age, sex), 5) crew activities (e.g., EVAs, lunar excursions), 6) diagnosis and treatment protocols (e.g., medical equipment, consumables, pharmaceuticals), and 7) Crew Medical Officer (CMO) training effectiveness. It is worth reiterating that the IMM uses the data sets above as inputs. Many other risk management efforts stop at determining only likelihood. The IMM is unique in that it models not only likelihood but also risk mitigations, as well as subsequent clinical outcomes based on those mitigations. Once the mathematical relationships among the above parameters are established, the IMM uses a Monte Carlo simulation technique (a random sampling of the inputs as described by their statistical distribution) to determine the probable outcomes. Because the IMM is a stochastic model (i.e., the input parameters are represented by various statistical distributions depending on the data type), when the mission is simulated 10-50,000 times with a given set of medical capabilities (risk mitigations), a prediction of the most probable outcomes can be generated.
For each mission, the IMM tracks which conditions occurred and decrements the pharmaceuticals and supplies required to diagnose and treat these medical conditions. If supplies are depleted, then the medical condition goes untreated, and crew and mission risk increase. The IMM currently models approximately 30 medical conditions. By the end of FY2008, the IMM will be modeling over 100 medical conditions, approximately 60 of which have been recorded as occurring during short- and long-duration space missions.
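The occurrence-and-decrement loop described above can be illustrated with a minimal Monte Carlo sketch. The condition names, occurrence probabilities, supply counts, and kit contents below are invented for illustration; the real IMM draws its inputs from evidence-based statistical distributions, not fixed per-mission probabilities.

```python
import random

random.seed(42)

# Hypothetical conditions: (per-mission occurrence probability, units
# of supplies needed to treat one occurrence).
CONDITIONS = {
    "headache":  (0.60, 1),
    "skin_rash": (0.30, 2),
    "back_pain": (0.40, 1),
}

def simulate_mission(supplies: dict) -> int:
    """Run one mission: return the number of untreated occurrences."""
    stock = dict(supplies)          # fresh kit each simulated mission
    untreated = 0
    for name, (p, units) in CONDITIONS.items():
        if random.random() < p:                 # condition occurs
            if stock.get(name, 0) >= units:
                stock[name] -= units            # treat: decrement supplies
            else:
                untreated += 1                  # depleted: risk increases
    return untreated

# A kit that carries nothing for back pain, to show supply depletion.
kit = {"headache": 1, "skin_rash": 2, "back_pain": 0}
runs = 10_000
risk = sum(simulate_mission(kit) > 0 for _ in range(runs)) / runs
print(round(risk, 2))  # close to P(back_pain occurs) = 0.40
```

Re-running the simulation with a different kit (a different set of risk mitigations) and comparing the two risk estimates mirrors the IMM's intended use: comparing mission profiles rather than predicting absolute risk.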

    A fuzzy clustering methodology to analyze interfaces and assess integration risks in large-scale systems

    “Interface analysis and integration risk assessment for a large-scale, complex system is a difficult systems engineering task, but critical to the success of engineering systems with extraordinary capabilities. When dealing with large-scale systems there is little time for data gathering, the analysis can be overwhelmed by unknowns, and important factors are sometimes not measurable because of the complexities of the interconnections within the system. This research examines the significance of interface analysis and management, identifies weaknesses in the literature on risk assessment for complex systems, and exploits the benefits of soft computing approaches both in interface analysis for a complex system and in the risk assessment of system integration readiness. The research aims to address some of the interface analysis challenges in a large-scale system development lifecycle, such as those often experienced in aircraft development. The resulting product of this research contributes to systems engineering by providing an easy-to-use interface assessment methodology that allows a trained systems engineer to break the system into communities of dense interfaces and determine the integration readiness and risks based on those communities. As a proof of concept this methodology is applied to a power seat system in a commercial aircraft with data from the Critical Design Review”--Abstract, page iv
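The thesis's fuzzy clustering is not reproduced here; as a crude stand-in for the idea of "communities of dense interfaces", the sketch below keeps only interface edges above a coupling threshold and takes connected components via union-find. The component names, coupling weights, and threshold are invented for illustration.

```python
# Hypothetical interface graph: (component_a, component_b) -> coupling
# strength in [0, 1], e.g. from a design structure matrix.
INTERFACES = {
    ("motor", "gearbox"): 0.9,
    ("gearbox", "seat_frame"): 0.8,
    ("controller", "motor"): 0.7,
    ("controller", "cabin_bus"): 0.2,
    ("cabin_bus", "power_supply"): 0.9,
}

def communities(interfaces: dict, threshold: float = 0.5) -> list:
    """Group components into communities of dense (>= threshold) interfaces."""
    parent = {}

    def find(x):                       # union-find with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), weight in interfaces.items():
        find(a); find(b)               # register both endpoints
        if weight >= threshold:
            parent[find(a)] = find(b)  # union: same dense community

    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return sorted(sorted(g) for g in groups.values())

print(communities(INTERFACES))
```

Each resulting community is a candidate unit for assessing integration readiness: many strong interfaces inside a community, few across communities. A fuzzy approach, by contrast, would let a component belong to several communities with graded membership.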

    Identifying Design Strategies to Mitigate the Risk Introduced into New Product Development by Suppliers

    For every organization, an efficient and effective product development process is key to generating and managing growth opportunities. Strategic relationships with key suppliers and partners are often required because organizations do not have all the competencies that are crucial to the development of a product. This is particularly true for Original Design Manufacturer (ODM) and Joint Development Manufacturer (JDM) supplier relationships, which are characterized by a high degree of supplier involvement in every stage of product development. If the interactions with these key suppliers are not managed properly, there is significant risk that the endeavor will miss budget, schedule, and cost goals, particularly for complex systems. Little attention in the literature, however, has been given to the risk introduced by suppliers into the product development process, or to mitigating this risk through appropriate design strategies. This thesis addresses the need for a risk assessment methodology that not only identifies areas of concern but also identifies potential design strategies to mitigate risk. In this work, metrics are derived to quantify the relative importance, degree of change, difficulty of change, and degree of coupling for engineering metrics (EMs) at the system and subsystem levels. From these metrics, a framework is developed to quantitatively assess the risk due to supplier interactions. In addition, design strategies identified in the literature are characterized in terms of these same metrics to determine the design strategy most suited to mitigating the risk associated with a particular EM. Finally, a case study is presented for the hypothetical development of a 3D printer to assess the initial feasibility and utility of the framework.
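The four quantities the abstract names can be combined into a per-metric risk score; the multiplicative form below and all of the numbers are assumptions made for illustration (keeping with the abstract's 3D-printer case study), not the thesis's actual formula or data.

```python
from dataclasses import dataclass

@dataclass
class EngineeringMetric:
    """One engineering metric (EM) scored on the four factors from the
    abstract, each normalized to [0, 1]."""
    name: str
    importance: float   # relative importance
    change: float       # expected degree of change
    difficulty: float   # difficulty of change
    coupling: float     # degree of coupling to other EMs

    def risk(self) -> float:
        # Illustrative aggregation: risk grows with every factor.
        return self.importance * self.change * self.difficulty * self.coupling

# Hypothetical EMs for a 3D printer developed with supplier involvement.
ems = [
    EngineeringMetric("print_resolution", 0.9, 0.8, 0.7, 0.6),
    EngineeringMetric("frame_stiffness", 0.6, 0.2, 0.5, 0.3),
    EngineeringMetric("extruder_temp", 0.8, 0.9, 0.4, 0.9),
]
ranked = sorted(ems, key=lambda e: e.risk(), reverse=True)
print([e.name for e in ranked])  # riskiest EM first
```

Ranking EMs this way identifies where supplier-driven change is both likely and costly, which is the point at which the thesis matches design strategies (scored on the same factors) to the riskiest metrics.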

    Translating Video Recordings of Mobile App Usages into Replayable Scenarios

    Screen recordings of mobile applications are easy to obtain and capture a wealth of information pertinent to software developers (e.g., bugs or feature requests), making them a popular mechanism for crowdsourced app feedback. Thus, these videos are becoming a common artifact that developers must manage. In light of unique mobile development constraints, including swift release cycles and rapidly evolving platforms, automated techniques for analyzing all types of rich software artifacts provide benefit to mobile developers. Unfortunately, automatically analyzing screen recordings presents serious challenges, due to their graphical nature, compared to other types of (textual) artifacts. To address these challenges, this paper introduces V2S, a lightweight, automated approach for translating video recordings of Android app usages into replayable scenarios. V2S is based primarily on computer vision techniques and adapts recent solutions for object detection and image classification to detect and classify user actions captured in a video, and convert these into a replayable test scenario. We performed an extensive evaluation of V2S involving 175 videos depicting 3,534 GUI-based actions collected from users exercising features and reproducing bugs from over 80 popular Android apps. Our results illustrate that V2S can accurately replay scenarios from screen recordings, and is capable of reproducing approximately 89% of our collected videos with minimal overhead. A case study with three industrial partners illustrates the potential usefulness of V2S from the viewpoint of developers. Comment: In the proceedings of the 42nd International Conference on Software Engineering (ICSE '20), 13 pages.
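V2S's object-detection pipeline is not reproduced here; this sketch shows only the final translation step the abstract describes, turning an (assumed) list of detected user actions into `adb shell input` commands that could replay them on an Android device. The action format and coordinates are illustrative assumptions.

```python
def to_adb(actions: list) -> list:
    """Translate detected user actions into adb shell input commands."""
    cmds = []
    for act in actions:
        if act["type"] == "tap":
            cmds.append(f"adb shell input tap {act['x']} {act['y']}")
        elif act["type"] == "swipe":
            # adb's swipe takes start x/y, end x/y, and duration in ms.
            cmds.append(
                f"adb shell input swipe {act['x1']} {act['y1']} "
                f"{act['x2']} {act['y2']} {act['ms']}"
            )
    return cmds

# Hypothetical detections for two video frames: a tap, then a scroll.
detected = [
    {"type": "tap", "x": 540, "y": 1200},
    {"type": "swipe", "x1": 540, "y1": 1600, "x2": 540, "y2": 400, "ms": 300},
]
for cmd in to_adb(detected):
    print(cmd)
```

The hard part, which V2S handles with object detection and image classification, is producing a reliable `detected` list from raw video frames in the first place.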