2,975 research outputs found

    Computational Tradespace Exploration, Analysis, and Decision-Making: A Proposed Framework for Organizational Self-Assessment

    Get PDF
    The ability to assess technical feasibility, project risk, technical readiness, and realistic performance expectations in early-phase conceptual design is a challenging, mission-critical task for large procurement projects. At present, there is no well-defined framework for evaluating the current practices of organizations performing computational trade studies. One such organization is the US Army Ground Vehicle Systems Center (GVSC). When defining requirements and priorities for the next-generation autonomy-enabled ground vehicle system, GVSC faces an increasingly complex programmatic tradespace driven by the emerging complexity of ground vehicle systems. This thesis aims to document and evaluate tradespace processes, methods, and tools within GVSC. A systematic review of the literature was conducted to investigate existing gaps, limitations, and potential growth opportunities related to tradespace activities, establishing a baseline that reflects the broader body of knowledge. Following this review, an interview-based study was developed: a series of interviews with GVSC personnel was conducted and the findings were benchmarked against the baseline established in the literature. In addition to characterizing current tradespace exploration and analysis practices within GVSC, the analysis of the collected interview data revealed capability gaps, areas of excellence, and potential avenues for improvement. Through this thesis, other organizations can perform similar self-assessments to improve their internal capabilities with respect to tradespace studies.

    Framework for engineering design systems architectures evaluation and selection: case study

    Get PDF
    Engineering companies face the challenge of developing complex Engineering Design Systems. These systems involve huge financial, people, and time investments within an environment characterised by continuously changing technologies and processes. Systems architecture provides the strategies and modelling approaches to ensure that adequate resources are spent in developing the possible To-Be states for a target system. Architecture evaluation and selection involves assessing different architectural alternatives with respect to multiple criteria; an Architecture Evaluation Framework that evaluates and down-selects appropriate architecture solutions is therefore crucial for judging how these systems will deliver value over their lifetime and for deciding where to channel financial and human investment to maximise the benefit delivered to the business's bottom line. In this paper, an architecture evaluation and selection framework is proposed that aims to maximise the alignment of Engineering Design Systems with business goals through a quality-centric architecture evaluation approach. The framework uses software Quality Attributes as well as SWOT (Strength, Weakness, Opportunity, Threat) and PEST (Political, Economic, Social, Technological) analyses to capture different viewpoints related to the technical, political, and business context. It employs AHP (Analytical Hierarchy Process) to quantitatively elicit relationships between Quality Attribute trade-offs and architectural characteristics. The framework was applied to a real case study considering five alternative Engineering Design System architectures, with workshops held with subject matter experts and stakeholders to reach an informed decision that maximises architectural quality whilst maintaining business alignment.
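
    The AHP step above follows the method's standard mechanics, so a brief sketch may help: criteria weights are derived from a reciprocal pairwise comparison matrix and checked with a consistency ratio. The sketch below uses the common geometric-mean approximation of the priority vector; the quality attributes and judgement values are illustrative assumptions, not the paper's case-study data.

```python
import math

criteria = ["maintainability", "performance", "security"]  # assumed quality attributes

# Reciprocal pairwise comparison matrix on Saaty's 1-9 scale (assumed judgements):
# A[i][j] says how much more important criteria[i] is than criteria[j].
A = [
    [1.0, 3.0, 1/2],
    [1/3, 1.0, 1/4],
    [2.0, 4.0, 1.0],
]

# Geometric-mean (row) approximation of the AHP priority vector.
geo = [math.prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(geo) for g in geo]

# Consistency check: lambda_max from A*w, then CI and CR (CR < 0.1 is usually acceptable).
n = len(A)
Aw = [sum(A[i][j] * weights[j] for j in range(n)) for i in range(n)]
lambda_max = sum(Aw[i] / weights[i] for i in range(n)) / n
ci = (lambda_max - n) / (n - 1)
ri = 0.58  # Saaty's random consistency index for n = 3
print({c: round(w, 3) for c, w in zip(criteria, weights)})
print("consistency ratio:", round(ci / ri, 3))
```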

    Multi-Criteria Decision Making in software development:a systematic literature review

    Get PDF
    Abstract. Multiple Criteria Decision Making (MCDM) is a formal approach for assisting decision makers in selecting the best solution among multiple alternatives by assessing criteria that are relatively precise but generally conflicting. The use of MCDM is quite popular and common in the software development process. In this study, a systematic literature review was conducted, including creating a review protocol, selecting primary studies, building a classification schema, extracting data, and other relevant steps. The objectives of this study are to summarize the state of the art of MCDM in the software development process and to identify the MCDM methods and MCDM problems in software development by systematically structuring and analyzing the literature on these issues. A total of 56 primary studies were identified after the review, and 33 types of MCDM methods were extracted from them. Among these, AHP was the most frequently used MCDM method in the software development process, ranked by the number of primary studies applying it, with Pareto optimization in second place. Meanwhile, 33 types of software development problems were identified. Component selection, design concept selection, and performance evaluation were the three most frequently occurring problems addressed by MCDM methods, and most of the MCDM problems were found in the software design phase. Several limitations affect the quality of this study; however, the strictly followed SLR procedures and the large amount of data drawn from thousands of publications still support its validity, and the study can serve as a reference when decision makers want to select an appropriate technique to cope with MCDM problems.
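
    Since Pareto optimization ranks second among the methods surveyed, a minimal sketch of its core operation may be useful: filtering a set of alternatives down to the non-dominated ones. The alternatives and objective values below are hypothetical, and both objectives are assumed to be higher-is-better.

```python
def dominates(a, b):
    """True if a is at least as good as b on every objective and strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(alternatives):
    """Return the names of alternatives not dominated by any other alternative."""
    return [
        name for name, scores in alternatives.items()
        if not any(dominates(other, scores)
                   for other_name, other in alternatives.items()
                   if other_name != name)
    ]

# Hypothetical design alternatives scored on (reliability, performance), both 0-1.
alternatives = {
    "design_a": (0.9, 0.4),
    "design_b": (0.7, 0.8),
    "design_c": (0.6, 0.6),  # dominated by design_b
    "design_d": (0.5, 0.9),
}

print(pareto_front(alternatives))  # ['design_a', 'design_b', 'design_d']
```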

    -ilities Tradespace and Affordability Project – Phase 3

    Get PDF
    One of the key elements of the SERC’s research strategy is transforming the practice of systems engineering and associated management practices – “SE and Management Transformation (SEMT).” The Grand Challenge goal for SEMT is to transform the DoD community’s current systems engineering and management methods, processes, and tools (MPTs) and practices away from sequential, single-stovepipe-system, hardware-first, document-driven, point-solution, acquisition-oriented approaches, and toward concurrent, portfolio- and enterprise-oriented, hardware-software-human engineered, model-driven, set-based, full life cycle approaches. This material is based upon work supported, in whole or in part, by the U.S. Department of Defense through the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) under Contract H98230-08-D-0171 (Task Order 0031, RT 046).

    A Game of Attribute Decomposition for Software Architecture Design

    Full text link
    Attribute-driven software architecture design aims to provide decision support by taking into account the quality attributes of software. A central question in this process is: what architecture design best fulfills the desirable software requirements? To answer this question, a system designer needs to make tradeoffs among several potentially conflicting quality attributes. Such decisions are normally ad hoc and rely heavily on experience. We propose a mathematical approach to tackle this problem. Game theory naturally provides the basic language: players represent requirements, and strategies involve setting up coalitions among the players. In this way we propose a novel model, called the decomposition game, for attribute-driven design. We present its solution concept, based on the notions of cohesion and expansion-freedom, and prove that a solution always exists. We then investigate the computational complexity of obtaining a solution. The game model and the algorithms may serve as a general framework for providing useful guidance for software architecture design. We present our results through running examples and a case study on a real-life software project. Comment: 23 pages, 5 figures; a shorter version to appear at the 12th International Colloquium on Theoretical Aspects of Computing (ICTAC 2015).
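
    The abstract does not spell out the decomposition game's payoff structure, so the sketch below only illustrates the general coalition-formation idea under an assumed pairwise-affinity stand-in for cohesion; it is not the paper's solution concept, and the requirements and affinity values are hypothetical.

```python
from itertools import combinations

# Assumed pairwise compatibility between requirements (symmetric, 0-1).
affinity = {
    ("latency", "throughput"): 0.8,
    ("latency", "auditability"): 0.1,
    ("throughput", "auditability"): 0.2,
}

def pair_affinity(a, b):
    return affinity.get((a, b)) or affinity.get((b, a)) or 0.0

def cohesion(coalition):
    """Sum of pairwise affinities inside one coalition (an assumed cohesion proxy)."""
    return sum(pair_affinity(a, b) for a, b in combinations(coalition, 2))

def score(partition):
    """Total cohesion of a candidate decomposition (a partition of the requirements)."""
    return sum(cohesion(c) for c in partition)

# Two candidate decompositions of the same requirement set.
candidates = [
    [{"latency", "throughput"}, {"auditability"}],
    [{"latency", "auditability"}, {"throughput"}],
]

best = max(candidates, key=score)
print(best, score(best))  # the grouping that keeps compatible requirements together
```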

    Case study in the selection of warehouse location for WFP in Ethiopia

    Get PDF
    Thesis (M. Eng. in Logistics)--Massachusetts Institute of Technology, Engineering Systems Division, 2009. Includes bibliographical references (leaves 96-99). Humanitarian logistics organizations struggle to make strategic and tactical decisions due to their lack of resources, the unpredictability of humanitarian events, and the lack of readily available information; the existing tools that assist optimal decision making require large amounts of precise information. As a consequence of these challenges, most of the work in humanitarian logistics concentrates on the operational level, which can only offer short-term benefits. In contrast, optimal strategic decisions maximize the resources of humanitarian organizations, making them more flexible and effective in the long term; this directly impacts their ability to help the millions of people in need. This thesis presents a model that assists the largest humanitarian organization in the world, the World Food Programme, in making optimal strategic decisions. The model uses the Analytic Hierarchy Process, a multiple-attribute decision tool that provides structure to decisions where quantitative information is limited. The methodology determines and prioritizes multiple criteria using qualitative data and scores each alternative against those criteria; the optimal alternative is the one with the highest weighted score. This model addresses the challenges that the World Food Programme, like any other humanitarian organization, faces when making complex strategic decisions. The model not only works with easily acquired information but is also flexible enough to accommodate the ever-changing dynamics of the humanitarian field. The application of this model focuses on optimizing warehouse locations for the World Food Programme in the Somali region of Ethiopia; however, the model can easily be scaled for use in any other decision-making process in the humanitarian field. by Gina Malaver [and] Colin Regnier. M.Eng. in Logistics
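
    A minimal sketch of the final aggregation step the thesis describes: each candidate location receives a weighted score over the prioritized criteria and the highest score wins. The criteria, weights, and location scores below are hypothetical, not the thesis's data.

```python
criteria_weights = {                    # assumed AHP-derived priorities (sum to 1.0)
    "road_access": 0.40,
    "security": 0.35,
    "proximity_to_beneficiaries": 0.25,
}

location_scores = {                     # assumed normalized scores per criterion (0-1)
    "location_a": {"road_access": 0.9, "security": 0.5, "proximity_to_beneficiaries": 0.6},
    "location_b": {"road_access": 0.6, "security": 0.8, "proximity_to_beneficiaries": 0.7},
}

def weighted_score(scores):
    """Weighted sum of a location's scores over all criteria."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

best = max(location_scores, key=lambda loc: weighted_score(location_scores[loc]))
for loc, scores in location_scores.items():
    print(loc, round(weighted_score(scores), 3))
print("selected:", best)
```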

    BigDataBench: a Big Data Benchmark Suite from Internet Services

    Full text link
    As the architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure to benchmark and evaluate these systems rises. Considering the broad use of big data systems, big data benchmarks must include diverse data and workloads. Most state-of-the-art big data benchmarking efforts target specific types of applications or system software stacks, and hence are not suited to these purposes. This paper presents our joint research efforts on this issue with several industrial partners. Our big data benchmark suite, BigDataBench, not only covers broad application scenarios but also includes diverse and representative data sets. BigDataBench is publicly available from http://prof.ict.ac.cn/BigDataBench . We also comprehensively characterize 19 big data workloads included in BigDataBench with varying data inputs. On a typical state-of-practice processor, the Intel Xeon E5645, we make the following observations. First, in comparison with traditional benchmarks, including PARSEC, HPCC, and SPEC CPU, big data applications have very low operation intensity. Second, the volume of the data input has a non-negligible impact on micro-architecture characteristics, which may pose challenges for simulation-based big data architecture research. Last but not least, corroborating the observations in CloudSuite and DCBench (which use smaller data inputs), we find that the number of L1 instruction cache misses per 1000 instructions of the big data applications is higher than in the traditional benchmarks; we also find that L3 caches are effective for the big data applications, corroborating the observation in DCBench. Comment: 12 pages, 6 figures, The 20th IEEE International Symposium On High Performance Computer Architecture (HPCA-2014), February 15-19, 2014, Orlando, Florida, US
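
    Two of the characterization metrics mentioned above, operation intensity and L1 instruction cache misses per 1000 instructions (MPKI), reduce to simple ratios of hardware-counter values. The sketch below shows the arithmetic with made-up counter readings, not BigDataBench measurements.

```python
def operation_intensity(total_ops, bytes_accessed):
    """Operations executed per byte of memory traffic (low values suggest memory-bound code)."""
    return total_ops / bytes_accessed

def mpki(cache_misses, instructions_retired):
    """Cache misses per 1000 retired instructions."""
    return cache_misses / (instructions_retired / 1000.0)

# Hypothetical hardware-counter readings for one workload run.
ops, mem_bytes = 4.2e9, 9.5e9          # ~0.44 ops/byte: low operation intensity
l1i_misses, instructions = 1.8e7, 6.0e9

print(f"operation intensity: {operation_intensity(ops, mem_bytes):.2f} ops/byte")
print(f"L1i MPKI: {mpki(l1i_misses, instructions):.2f}")
```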

    Decision Making Analysis of Video Streaming Algorithm for Private Cloud Computing Infrastructure

    Get PDF
    This study tackles the issue of how to effectively deliver video streaming content over cloud computing infrastructures. The quality of service of video streaming is strongly influenced by bandwidth, jitter, and data loss. A number of intelligent video streaming algorithms have been proposed that use different techniques to deal with these issues. This study aims to propose and demonstrate a novel decision-making analysis which combines ISO 9126 (the international standard for software engineering product quality) and the Analytic Hierarchy Process to help experts select the best video streaming algorithm for a private cloud computing infrastructure. The case study concluded that the Scalable Streaming algorithm is the best algorithm to implement for delivering high quality of service of video streaming over the private cloud computing infrastructure.
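
    A minimal sketch of how an ISO 9126-style quality hierarchy can feed an AHP-style weighted ranking: sub-characteristic weights are multiplied by their parent characteristic's weight and then applied to each algorithm's scores. The characteristics, weights, and algorithm scores below are illustrative assumptions, not the study's evaluation data.

```python
# Characteristic weight, and sub-characteristic weights within it (assumed values).
hierarchy = {
    "efficiency":  (0.6, {"time_behaviour": 0.7, "resource_utilisation": 0.3}),
    "reliability": (0.4, {"fault_tolerance": 0.5, "recoverability": 0.5}),
}

# Global weight of each sub-characteristic = parent weight * local weight.
global_weights = {
    sub: parent_w * local_w
    for parent_w, subs in hierarchy.values()
    for sub, local_w in subs.items()
}

# Assumed normalized scores (0-1) of each candidate algorithm per sub-characteristic.
algorithms = {
    "scalable_streaming": {"time_behaviour": 0.9, "resource_utilisation": 0.7,
                           "fault_tolerance": 0.6, "recoverability": 0.8},
    "adaptive_bitrate":   {"time_behaviour": 0.7, "resource_utilisation": 0.8,
                           "fault_tolerance": 0.7, "recoverability": 0.6},
}

scores = {name: sum(global_weights[s] * v for s, v in subs.items())
          for name, subs in algorithms.items()}
print(max(scores, key=scores.get), scores)  # highest weighted score wins
```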

    Near-Memory Address Translation

    Full text link
    Memory and logic integration on the same chip is becoming increasingly cost-effective, creating the opportunity to offload data-intensive functionality to processing units placed inside memory chips. The introduction of memory-side processing units (MPUs) into conventional systems faces virtual memory as the first big showstopper: without efficient hardware support for address translation, MPUs have highly limited applicability. Unfortunately, conventional translation mechanisms fall short of providing fast translations as contemporary memories exceed the reach of TLBs, making expensive page walks common. In this paper, we are the first to show that the historically important flexibility to map any virtual page to any page frame is unnecessary in today's servers. We find that limiting the associativity of the virtual-to-physical mapping incurs no penalty and, when combined with careful data placement in the MPU's memory, breaks the translate-then-fetch serialization, allowing translation and data fetch to proceed independently and in parallel. We propose the Distributed Inverted Page Table (DIPTA), a near-memory structure in which the smallest memory partition keeps the translation information for its data share, ensuring that the translation completes together with the data fetch. DIPTA completely eliminates the performance overhead of translation, achieving speedups of up to 3.81x and 2.13x over conventional translation using 4KB and 1GB pages, respectively. Comment: 15 pages, 9 figures
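
    The key enabler described above is restricting where a virtual page may reside. A minimal sketch, assuming 4 KiB pages, 4-way set-associative placement, and a simple contiguous frame-to-partition layout (none of which are DIPTA's actual parameters), shows how the candidate frames, and hence the memory partition holding the data, follow directly from the virtual address before any translation completes.

```python
PAGE_SHIFT = 12                  # assumed 4 KiB pages
WAYS = 4                         # assumed associativity of the virtual-to-physical mapping
FRAMES = 1 << 20                 # assumed number of physical page frames
SETS = FRAMES // WAYS
FRAMES_PER_PARTITION = 1 << 16   # assumed frames per memory partition

def candidate_frames(vaddr):
    """The only page frames a virtual address may map to under set-associative placement."""
    vpn = vaddr >> PAGE_SHIFT
    set_index = vpn % SETS
    return [set_index * WAYS + way for way in range(WAYS)]

def partitions_for(vaddr):
    """Memory partitions that could hold the data -- known without any translation."""
    return {frame // FRAMES_PER_PARTITION for frame in candidate_frames(vaddr)}

vaddr = 0x7f3a_c120_4abc
print("candidate frames:", candidate_frames(vaddr))
# With this contiguous layout, all ways of a set fall in one partition, so the data
# fetch can be steered to that partition in parallel with the translation lookup.
print("possible partitions:", partitions_for(vaddr))
```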