62 research outputs found

    Two-dimensional Pareto frontier forecasting for technology planning and roadmapping

    Technology evolution forecasting based on historical data processing is a useful tool for quantitative analysis in technology planning and roadmapping. While previous efforts focused mainly on one-dimensional forecasting, real technical systems require the simultaneous evaluation of multiple, conflicting figures of merit, such as cost and performance. This paper presents a methodology for technology forecasting based on Pareto (efficient) frontier estimation algorithms and multiple regressions in the presence of at least two conflicting figures of merit. A tool was developed on the basis of the approach presented in this paper, and the methodology is illustrated with a case study from the automotive industry. The paper also validates the methodology and estimates the forecast accuracy using a backward testing procedure.
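
    As an illustration of the core computational step, the sketch below extracts the efficient (Pareto) frontier from historical observations of two conflicting figures of merit, here cost to be minimized and performance to be maximized. This is a minimal Python sketch, not the paper's tool; the data and FOM names are hypothetical.

```python
# Minimal sketch: extract the 2D Pareto frontier from historical
# technology data points (cost to minimize, performance to maximize).
# Illustrative only; data and figure-of-merit names are hypothetical.

def pareto_frontier(points):
    """Return the (cost, performance) points not dominated by any other
    point, i.e., no other point has lower cost AND higher performance."""
    ordered = sorted(points, key=lambda p: (p[0], -p[1]))  # cost asc, perf desc
    frontier, best_perf = [], float("-inf")
    for cost, perf in ordered:
        if perf > best_perf:           # strictly improves on all cheaper points
            frontier.append((cost, perf))
            best_perf = perf
    return frontier

# Hypothetical historical observations: (cost, performance)
history = [(10, 3.0), (12, 4.1), (9, 2.2), (15, 4.0), (11, 3.9)]
print(pareto_frontier(history))  # [(9, 2.2), (10, 3.0), (11, 3.9), (12, 4.1)]
```

    Forecasting then proceeds by estimating how this frontier shifts over time, e.g., by regressing frontier points against their observation dates.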

    Model-based approaches for technology planning and roadmapping: Technology forecasting and game-theoretic modeling

    This paper proposes a novel model-based approach to technology planning and roadmapping, consisting of two complementary steps: technology forecasting and game-theoretic planning. The inherent uncertainty of target technology performances, timelines, and risks impacts the roadmapping process; reducing this uncertainty is a major challenge and enables the elaboration of different options and scenarios. A formal methodology is proposed for quantitative forecasting in a multi-dimensional space (different performance metrics and time) based on past technology development trends. The method adopts concepts and approaches from econometrics and is formulated as a convex optimization problem with different constraints on the frontier’s shape. It provides useful product-line assessment benchmarks and helps to set reasonable goals for future technology developments. Game-theoretic planning addresses the strategic decisions to be taken, considering the technology landscape, markets, and competition. These strategic decisions in turn affect other companies as well, which is the basis for the application of game theory, in the form of best-response functions, to determine the subsequent reactions and movements of rivals in a technological landscape. The result is a simulation of a sequential game in technology space, allowing the evaluation of possible technological development pathways and the determination of optimal models on the Pareto frontiers, potential targets for technology roadmapping.
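
    The game-theoretic step can be illustrated with a toy sequential best-response iteration: each firm repeatedly chooses the technology target that is optimal given its rival's last move, until the moves converge. The linear best-response functions below are invented purely for illustration; the paper's model operates on multi-dimensional technology frontiers.

```python
# Toy sketch of sequential best-response dynamics in a one-dimensional
# "technology space". The payoff-maximizing response functions are
# invented for illustration; the paper's actual model is richer.

def best_response_a(b):
    # Hypothetical: firm A's optimal performance target given B's target.
    return 0.5 * b + 1.0

def best_response_b(a):
    # Hypothetical: firm B's optimal performance target given A's target.
    return 0.5 * a + 2.0

a, b = 0.0, 0.0
for step in range(20):                 # alternate moves and counter-moves
    a = best_response_a(b)
    b = best_response_b(a)

print(f"approximate equilibrium: a={a:.3f}, b={b:.3f}")
# Fixed point of a = 0.5b + 1 and b = 0.5a + 2: a ~ 2.667, b ~ 3.333
```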

    Comparative analysis of two-dimensional data-driven efficient frontier estimation algorithms

    In this paper we show how the mathematical apparatus originally developed in the field of econometrics and portfolio optimization can be utilized for purposes of conceptual design, requirements engineering, and technology roadmapping. We compare popular frontier estimation models and propose an efficient and robust nonparametric estimation algorithm for two-dimensional frontier approximation. The proposed model relaxes the convexity assumptions and thus enables estimating a broader range of possible technology frontier shapes compared to the state of the art. Using simulated datasets, we show how the accuracy and robustness of alternative methods such as Data Envelopment Analysis and nonparametric and parametric statistical models depend on the size of the dataset and on the shape of the frontier.
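
    In the spirit of this comparison, one way to avoid the convexity assumption built into Data Envelopment Analysis is a free-disposal-hull (FDH) style estimator, which fits a monotone staircase rather than a convex hull. The sketch below is a generic FDH estimator on simulated data, not necessarily the paper's proposed algorithm.

```python
import numpy as np

# Sketch of a free-disposal-hull (FDH) style frontier estimate: a
# staircase through the data that, unlike DEA, imposes no convexity.
# Generic illustration on simulated data, not the paper's algorithm.

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)                      # input (e.g., cost)
y = np.sqrt(x) * (1 - 0.3 * np.sin(6 * x))     # nonconvex "true" frontier
y = y - rng.uniform(0, 0.3, 50)                # observations lie below it

def fdh_frontier(x_obs, y_obs, grid):
    """For each grid point, the frontier value is the best output achieved
    by any observation using no more input (free disposability)."""
    return np.array([y_obs[x_obs <= g].max() if (x_obs <= g).any() else -np.inf
                     for g in grid])

grid = np.linspace(0.05, 1.0, 20)
print(np.round(fdh_frontier(x, y, grid), 3))
```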

    Competition-driven figures of merit in technology roadmap planning

    Motivated by an intense technological environment that requires early and accurate analysis, this paper proposes competition-based figures-of-merit analysis in the context of technology planning and roadmapping. Competition-driven figures of merit are used in this context to benchmark the evolution of technology in an industrial sector while accounting for competitive forces, using a game-theoretic approach. The automotive industry is studied as a case of a highly competitive commercial enterprise. Through the application of several tools for the preliminary analysis of the competition level, the competition-driven FOMs are distinguished.
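
    The abstract does not name its preliminary competition-analysis tools; one standard possibility is the Herfindahl-Hirschman index over market shares, sketched below with invented share figures. This is an assumption made for illustration, not necessarily a tool used in the paper.

```python
# Hypothetical sketch: the Herfindahl-Hirschman index (HHI) as one
# standard preliminary measure of competition intensity in a sector.
# Market shares are invented; the paper may use different tools.

shares = {"maker_a": 0.30, "maker_b": 0.25, "maker_c": 0.20,
          "maker_d": 0.15, "maker_e": 0.10}

hhi = sum(s ** 2 for s in shares.values())   # 1/N (even split) .. 1 (monopoly)
print(f"HHI = {hhi:.3f}")                    # 0.225 -> moderately concentrated
```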

    System of Systems Stakeholder Planning in a Multi-Stakeholder, Multi-Objective, and Uncertain Environment

    The United States defense planning process is currently conducted in a partially consolidated manner driven by the Joint Capabilities Integration and Development System (JCIDS) process. Decisions to invest in technology, develop systems, and acquire assets are made by individual services, with coordination at the higher joint level. These individual services' decisions are made in an environment where resource allocation and need are influenced by external stakeholders (e.g., shared system development costs, additional levied requirements, and complementary system development). The future outcome of any given decision is subject to a high degree of uncertainty stemming from both the stakeholder execution of a decision and the environment in which that execution will take place. Uncertainty in execution stems from Technology Readiness Level (TRL) advancement, development timelines, acquisition timelines, and final deployed performance. Environmental uncertainty factors include future stakeholder resource availability, the future threat environment, cooperative stakeholder decisions, and mirrored adversary decisions. The defense planning problem can be described as an acknowledged System of Systems (SoS) planning problem. Today, methodologies exist that individually address SoS engineering processes, the evaluation of SoS performance, and deterministic SoS evolution. However, few approaches holistically address the SoS planning and evolution problem at the level needed to assist individual defense stakeholders in strategic planning. Current approaches do not address the impact of multiple stakeholder decisions, multiple goals for each stakeholder, the uncertainty of decision outcomes, and the temporal component of strategic decision making. This thesis develops and tests a methodology to address defense stakeholder planning in a multi-stakeholder, multi-objective, and uncertain environment. First, a decision space is populated and captured by sampling a game framework that represents multiple stakeholder decisions as well as decision outcomes over time. A compressed Markov Decision Process (MDP) based meta-model is then constructed using state-space consolidation techniques. The meta-model is evaluated using a risk-based policy development algorithm derived from combining traditional Reinforcement Learning (RL) techniques with mean-variance portfolio theory. Policy sensitivity to stakeholder risk-tolerance levels is used to develop state-based risk-tolerance sensitivity profiles and to identify Pareto-efficient actions. The risk-tolerance sensitivity profiles are used to evaluate both state spaces and decision spaces in order to provide stakeholders with risk-based insights, or rule sets, that support immediate decision making and risk-based stakeholder playbook development. The capability of the risk-based policy algorithm is tested using both elementary and complex scenarios, demonstrating that the algorithm can extract Pareto-efficient decisions as a function of risk tolerance. The state-space compression is tested by comparing the loss of information between the risk-based policy solutions for the uncompressed and compressed state spaces. The full methodology is then demonstrated on a full-complexity scenario based on the joint development by France, Germany, and Spain of the SoS-based Future Combat Air System (FCAS). This scenario is used to baseline the risk-based methodology against current optimal-policy solution techniques, demonstrating a significant increase in derived insights relative to optimal policy solutions in a high-uncertainty scenario.
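
    The risk-based policy idea described here, combining RL-style value estimates with a mean-variance penalty, can be sketched on a toy two-action problem: estimate each action's reward mean and variance and rank actions by mean minus lambda times variance. The actions, reward distributions, and lambda values below are invented; the thesis applies this reasoning over a compressed multi-state MDP rather than a single-state bandit.

```python
import random

# Toy sketch of a risk-sensitive decision rule in the mean-variance
# spirit described above: score each action by mean - lambda * variance
# of its sampled reward. The two actions and their reward distributions
# are invented; the thesis works on a compressed multi-state MDP.

random.seed(0)
ACTIONS = {"safe": lambda: random.gauss(1.0, 0.1),
           "risky": lambda: random.gauss(1.3, 1.5)}

samples = {name: [draw() for _ in range(5000)]
           for name, draw in ACTIONS.items()}

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

for lam in (0.0, 0.2, 1.0):            # increasing risk aversion
    scores = {}
    for name, xs in samples.items():
        m, v = mean_var(xs)
        scores[name] = m - lam * v     # risk-adjusted value of the action
    print(f"lambda={lam}: choose {max(scores, key=scores.get)}")
# Expected: 'risky' wins only at lambda=0; a risk-averse stakeholder
# (larger lambda) switches to 'safe' despite its lower mean reward.
```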

    CONCURRENT MULTI-PART MULTI-EVENT DESIGN REFRESH PLANNING MODELS INCORPORATING SOLUTION REQUIREMENTS AND PART-UNIQUE TEMPORAL CONSTRAINTS

    Technology obsolescence, also known as DMSMS (Diminishing Manufacturing Sources and Material Shortages), is a significant problem for systems whose operational life is much longer than the procurement lifetimes of their constituent components. The most severely affected systems are sustainment-dominated, meaning their long-term sustainment (life-cycle) costs significantly exceed the procurement cost of the system. Unlike high-volume commercial products, these sustainment-dominated systems may require design refreshes simply to remain manufacturable and supportable. A strategic method for reducing the life-cycle cost impact of DMSMS is called refresh planning. The goal of refresh planning is to determine when design refreshes should occur (or what the frequency of refreshes should be) and how to manage the system components that are obsolete, or soon to be obsolete, at the design refreshes. Existing strategic management approaches focus on methods for determining design refresh dates. While creating a set of feasible design refresh plans is achievable using existing design refresh planning methodologies, the generated refresh plans may not satisfy the needs of the designers (sustainers and customers) because they do not conform to the constraints imposed on the system. This dissertation develops a new refresh planning model that satisfies refresh structure requirements (i.e., requirements that constrain the form of the refresh plan to be periodic) and develops and presents the definition, generalization, synthesis, and application of part-unique temporal constraints in the design refresh planning process for systems impacted by DMSMS-type obsolescence. Periodic refresh plans are required by applications that are refresh-deployment constrained, such as ships and submarines (e.g., only a finite number of dry docks are available to refresh systems). The new refresh planning model developed in this dissertation requires 50% less data and runs 50% faster than the existing state-of-the-art discrete event simulation solutions for problems where a periodic refresh solution is required.
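
    The periodic-plan requirement can be illustrated with a small enumeration: try each candidate refresh period, roll forward against part obsolescence dates, and cost the resulting plan. The dates, costs, and cost model below are hypothetical, not the dissertation's model.

```python
# Minimal sketch of periodic design-refresh planning: enumerate candidate
# refresh periods, charge a fixed cost per refresh event plus a penalty
# for each year a part remains in the design past its obsolescence date.
# All dates, costs, and the cost model itself are hypothetical.

PARTS_OBSOLETE = [2027, 2029, 2034, 2038]   # hypothetical obsolescence years
START, END = 2025, 2045                      # planning horizon
REFRESH_COST = 100.0                         # cost per refresh event
GAP_PENALTY = 15.0                           # cost per part-year of exposure

def plan_cost(period):
    refreshes = list(range(START + period, END + 1, period))
    cost = REFRESH_COST * len(refreshes)
    for obs in PARTS_OBSOLETE:
        fix = next((r for r in refreshes if r >= obs), None)
        exposure = (fix - obs) if fix is not None else (END - obs)
        cost += GAP_PENALTY * exposure       # years obsolete before a refresh
    return cost

best = min(range(2, 11), key=plan_cost)      # candidate periods: 2..10 years
print(f"best refresh period: {best} years, total cost {plan_cost(best):.0f}")
```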

    Opportunity Identification for New Product Planning: Ontological Semantic Patent Classification

    Intelligence tools have been developed and applied widely in many different areas in engineering, business, and management, and many commercialized tools for business intelligence are available on the market. However, no practically useful tools for technology intelligence are available at this time, and very little academic research on technology intelligence methods has been conducted to date. Patent databases are the most important data source for technology intelligence tools, but patents inherently contain unstructured data. Consequently, extracting text data from patent databases, converting that data to meaningful information, and generating useful knowledge from this information become complex tasks. These tasks are currently performed very ineffectively, inefficiently, and unreliably by human experts. This deficiency is particularly vexing in product planning, where awareness of market needs and technological capabilities is critical for identifying opportunities for new products and services. Total nescience of the text of patents, as well as inadequate, unreliable, and untimely knowledge derived from these patents, may consequently result in missed opportunities that could lead to severe competitive disadvantage and potentially catastrophic loss of revenue. The research performed in this dissertation tries to correct the abovementioned deficiency with an approach called patent mining. The research is conducted at Finex, an iron casting company that produces traditional kitchen skillets. To 'mine' pertinent patents, experts in new product development at Finex modeled one ontology for the required product features and another for the attributes of the requisite metallurgical enabling technologies; new product opportunities for skillets are then identified by applying natural language processing, information retrieval, and machine learning (classification) to the text of patents in the USPTO database. Three main scenarios are examined in my research. Regular classification (RC) relies on keywords that are extracted directly from a group of USPTO patents. Ontological classification (OC) relies on keywords that result from an ontology developed by Finex experts, which is evaluated and improved by a panel of external experts. Ontological semantic classification (OSC) uses these ontological keywords and their synonyms, which are extracted from the WordNet database. For each scenario, I evaluate the performance of three classifiers: k-Nearest Neighbor (k-NN), random forest, and Support Vector Machine (SVM). My research shows that OSC is the best scenario and SVM is the best classifier for identifying product planning opportunities, because this combination yields the highest score in metrics that are generally used to measure classification performance in machine learning (e.g., ROC-AUC and F-score). My method also significantly outperforms current practice, because I demonstrate in an experiment that neither the experts at Finex nor the panel of external experts are able to search for and judge relevant patents with any degree of effectiveness, efficiency, or reliability. This dissertation provides the rudiments of a theoretical foundation for patent mining, which has yielded a machine learning method that is deployed successfully in a new product planning setting (Finex). Further development of this method could make a significant contribution to management practice by identifying opportunities for new product development that have been missed by the approaches deployed to date.
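
    The pipeline described, ontology terms (plus WordNet synonyms) feeding a text classifier such as an SVM over patent text, can be sketched with scikit-learn. The tiny corpus and labels below are invented stand-ins for USPTO patent text, and the ontology step is reduced to a fixed vocabulary.

```python
# Sketch of the described pipeline: vectorize patent text over ontology
# terms and train an SVM to flag patents relevant to a product
# opportunity. The four "patents" and labels are invented stand-ins for
# USPTO data; ontology/WordNet expansion is reduced to a fixed vocabulary.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = [
    "cast iron skillet with enamel coating for even heat retention",
    "graphite mold casting process for thin-walled iron cookware",
    "semiconductor wafer etching apparatus and plasma chamber",
    "battery electrode slurry mixing and coating method",
]
labels = [1, 1, 0, 0]            # 1 = relevant to the skillet opportunity

# Hand-picked terms stand in for ontology keywords plus WordNet synonyms.
ontology_terms = ["cast", "iron", "skillet", "cookware", "enamel",
                  "mold", "casting", "heat", "coating"]

clf = make_pipeline(TfidfVectorizer(vocabulary=ontology_terms), LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["nonstick iron skillet casting method"]))  # -> [1]
```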

    A methodology for probabilistic aircraft technology assessment and selection under uncertainty

    The high degree of complexity and uncertainty associated with aerospace engineering applications has driven designers and engineers towards the use of probabilistic and statistical analysis tools in order to understand and design for that uncertainty. As a result, probabilistic methods have permeated the aerospace field to the extent that single-point deterministic designs are no longer credible, particularly in systems analysis, performance assessment, and technology impact quantification. However, as statistics theory is not the primary focus of most aerospace practitioners, incorrect assumptions and flawed methods are often unknowingly used in design. A common assumption of probabilistic assessments in the field of aerospace is the independence of random variables. These random variables represent design variables, noise variables, technology impacts, etc., which can be difficult to correlate but do have underlying relationships. The justification for the assumed independence is usually not discussed in the literature, even though it can have a substantial effect on probabilistic assessment and uncertainty quantification results. In other cases the dependence between random variables is acknowledged but intentionally ignored, on the basis of the difficulty of characterizing underlying random variable relationships, a strong bias towards methodological simplicity and low computational expense, and the expectation of only modest strength in random variable dependence. Probabilistic assessments also yield large amounts of data that are not effectively used, due to the sheer volume of data and poor traceability to the drivers of uncertainty. The literature shows that optimization techniques are resorted to in order to select from competing alternatives in multiobjective spaces; however, these techniques generally do not handle uncertainty well. The motivating question is: how can improvements be made to the probabilistic assessment process for aircraft technology assessments that capture technology impact tradeoffs and dependencies, and ultimately enable decision makers to make an axiomatic and rational selection under uncertainty? This question leads to the research objective of this work, which is to develop a methodology "to quantify and characterize aviation's environmental impact, uncertainties, and the trade-offs and interdependencies among various impacts" [Council2010], in order to assess and select future aircraft technologies. Copula theory is suggested to address the problem of assumed independence on the input side of probabilistic assessments in aerospace applications. Copulas are functions that can be used to define probabilistic relationships between random variables. They are well documented in the literature and have been used in many fields, such as statistics, finance, and insurance. They can be used to quantify complex relationships, even when a relationship is understood only qualitatively or notionally. In this way a designer's knowledge regarding uncertainty can be better represented and propagated to system-level metrics through the probabilistic assessment. Utility theory is proposed as a solution to the challenge of effectively using output data from probabilistic assessments. Utility theory is a powerful tool used in economics, marketing, psychiatry, etc., to express preferences among competing alternatives. It can provide a combined valuation of each alternative in a multiobjective design space while incorporating the uncertainty associated with each alternative. This enables designers to rationally and axiomatically make selections, consistent with their preferences, between complex solutions with varying degrees of uncertainty. This work provides an introduction to copula and utility theories for the aerospace audience. It also demonstrates how these theories can be applied in canonical problems to bridge gaps currently found in the literature with regard to probabilistic assessments of aircraft technologies. The key contributions of this research are (1) an Archimedean copula selection tree enabling practitioners to rapidly translate their qualitative understanding of dependence into copula families that can represent it quantitatively; (2) estimation of the quantified effect of using copulas to capture probabilistic dependence in three representative aerospace applications; (3) an expected utility formulation for axiomatically ranking and selecting aircraft technology packages under uncertainty; and (4) a strategic elicitation procedure for multiattribute utility functions that does not require assumptions of independence conditions on preferences between the attributes. The proposed FAAST methodology is presented as an encompassing framework for the aircraft technology assessment and selection problem that fills capability gaps identified in the literature and supports the decision maker in a rational and axiomatic manner.
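
    Two of the building blocks named here, sampling dependent uncertainties through a copula and ranking alternatives by expected utility, can be sketched as follows. The Gaussian copula, the uniform marginals, and the risk-averse utility function are illustrative choices only, not the dissertation's FAAST implementation (which uses, among other things, Archimedean copula families).

```python
import numpy as np
from scipy.stats import norm

# Sketch of two building blocks named above: (1) a Gaussian copula
# inducing dependence between two technology-impact variables, and
# (2) risk-averse expected-utility ranking under that uncertainty.
# Copula family, marginals, and utility are illustrative choices only.

rng = np.random.default_rng(1)

def copula_uniforms(rho, n):
    """Dependent U(0,1) pairs via a Gaussian copula with correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    return norm.cdf(rng.multivariate_normal([0.0, 0.0], cov, size=n))

n = 50_000
for name, rho in [("tech_A (independent impacts)", 0.0),
                  ("tech_B (strongly coupled impacts)", 0.8)]:
    u = copula_uniforms(rho, n)
    fuel = 100 + 10 * u[:, 0]          # uniform marginal on [100, 110]
    noise = 80 + 20 * u[:, 1]          # uniform marginal on [80, 100]
    total = fuel + 0.5 * noise         # combined impact (lower is better)
    eu = -np.exp(0.05 * total).mean()  # risk-averse (concave) utility
    print(f"{name}: expected utility {eu:.1f}")
# Same marginals, different dependence: the coupled case has a higher
# variance in total impact and therefore a lower expected utility.
```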

    Artificial Superintelligence: Coordination & Strategy

    Attention in the AI safety community has increasingly come to include strategic considerations of coordination between relevant actors in the field of AI and AI safety, in addition to the steadily growing work on the technical considerations of building safe AI systems. This shift has several reasons: multiplier effects, pragmatism, and urgency. Given the benefits of coordination between those working towards safe superintelligence, this book surveys promising research in this emerging field of AI safety coordination. On a meta-level, the hope is that this book can serve as a map to inform those working in the field of AI coordination about other promising efforts. While this book focuses on AI safety coordination, coordination is important to most other known existential risks (e.g., biotechnology risks) and to future, human-made existential risks. Thus, while most coordination strategies in this book are specific to superintelligence, we hope that some insights yield “collateral benefits” for the reduction of other existential risks, by creating an overall civilizational framework that increases robustness, resiliency, and antifragility.