
    DETC2006-99308 A CLASSIFICATION FRAMEWORK FOR PRODUCT DESIGN OPTIMIZATION

    ABSTRACT: Research on design optimization has developed and demonstrated a variety of modeling techniques and solution methods, including techniques for multidisciplinary design optimization, and these approaches are beginning to migrate into product development practice. Software tools are appearing to assist with the optimization task. However, the complexity of the optimization problems being considered continues to increase, because changing business strategies stress concurrent engineering and the simultaneous consideration of multiple disciplines. This paper presents a novel classification framework for design optimization problems. The framework sorts design optimization problems based on the type of variables being considered and the objective functions being optimized; it does not focus on the algorithms used to solve the problems. This classification framework provides a new perspective that can help design engineers use optimization in the most appropriate way.
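
    As a rough illustration of the two axes the abstract says the framework sorts on, the sketch below encodes a problem by its variable type and objective structure. All names and the gearbox example are assumptions for illustration, not taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of two classification axes named in the abstract:
# the type of design variables and the structure of the objectives.
class VariableType(Enum):
    CONTINUOUS = "continuous"
    DISCRETE = "discrete"
    MIXED = "mixed"

class ObjectiveStructure(Enum):
    SINGLE = "single-objective"
    MULTI = "multi-objective"
    MULTIDISCIPLINARY = "multidisciplinary"

@dataclass(frozen=True)
class DesignOptimizationProblem:
    name: str
    variable_type: VariableType
    objective_structure: ObjectiveStructure

    def classification(self) -> str:
        """Return the cell of the framework this problem falls into."""
        return f"{self.variable_type.value} / {self.objective_structure.value}"

# Example: sizing a gearbox with integer tooth counts and continuous shaft
# diameters, traded off against both mass and cost.
gearbox = DesignOptimizationProblem(
    "gearbox sizing", VariableType.MIXED, ObjectiveStructure.MULTI
)
print(gearbox.classification())  # mixed / multi-objective
```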

    A holistic method for improving software product and process quality

    The concept of quality in general is elusive, multi-faceted and is perceived differently by different stakeholders. Quality is difficult to define and extremely difficult to measure. Deficient software systems regularly result in failures which often lead to significant financial losses and, more importantly, to loss of human lives. Such systems need to be either scrapped and replaced by new ones or corrected/improved through maintenance. One of the most serious challenges is how to deal with legacy systems which, even when not failing, inevitably require upgrades, maintenance and improvement because of malfunctioning or changing requirements, or because of changing technologies, languages or platforms. In such cases, the dilemma is whether to develop solutions from scratch or to re-engineer a legacy system. This research addresses this dilemma and seeks to establish a rigorous method for the derivation of indicators which, together with management criteria, can help decide whether restructuring of legacy systems is advisable. As the software engineering community has been moving from corrective to preventive methods, concentrating on both product quality improvement and process quality improvement has become imperative. This research combines Product Quality Improvement, primarily through the re-engineering of legacy systems, with Process Improvement methods, models and practices, and uses a holistic approach to study the interplay of Product and Process Improvement. The re-engineering factor rho, a composite metric, was proposed and validated. The design and execution of formal experiments tested hypotheses on the relationship of internal (code-based) and external (behavioural) metrics. In addition to testing the hypotheses, the insights gained on logistics challenges resulted in the development of a framework for the design and execution of controlled experiments in Software Engineering. The next part of the research resulted in the development of the novel, generic and hence customisable Quality Model GEQUAMO, which observes the principle of orthogonality and combines a top-down analysis for the identification, classification and visualisation of software quality characteristics with a bottom-up method for measurement and evaluation. GEQUAMO II addressed weaknesses that were identified during various GEQUAMO implementations and through expert validation by academics and practitioners. Further work on Process Improvement investigated Process Maturity and its relationship to Knowledge Sharing, and resulted in the I5P Visualisation Framework for Performance Estimation through the Alignment of Process Maturity and Knowledge Sharing. I5P was used in industry and was validated by experts from academia and industry. Using the principles that guided the creation of the GEQUAMO model, the CoFeD visualisation framework was developed for comparative quality evaluation and selection of methods, tools, models and other software artifacts. CoFeD is particularly useful because selecting the wrong methods, tools or even personnel is detrimental to the survival and success of projects and organisations, and even to individuals. Finally, throughout many years of research and teaching in Software Engineering, Information Systems and Methodologies, I observed the ambiguities of terminology and the use of one term to mean different concepts and of one concept expressed in different terms, practices that result in a lack of clarity. Thus my final contribution comes in my reflections on terminology disambiguation for the achievement of clarity, and in the development of a framework for achieving disambiguation of terms as a necessary step towards gaining maturity and justifying the use of the term “Engineering” 50 years after the term Software Engineering was coined. This research resulted in the creation of new knowledge in the form of novel indicators, models and frameworks which can aid quantification and decision making, primarily on the re-engineering of legacy code and on the management of process and its improvement. The thesis also contributes to the broader debate and understanding of problems relating to Software Quality, and establishes the need for a holistic approach to software quality improvement from both the product and the process perspectives.
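
    The abstract names the re-engineering factor rho but does not give its formula, so the following is only a hypothetical sketch of how a composite indicator of this kind might combine normalized code metrics with a management threshold. Every metric name, weight and threshold below is an assumption.

```python
# Hypothetical sketch only: the actual definition of rho is not given in the
# abstract. This illustrates the general shape of a composite indicator: a
# weighted combination of normalized internal (code-based) metrics, compared
# against a management-set threshold to support a restructure-or-not decision.

def composite_indicator(metrics: dict[str, float],
                        weights: dict[str, float]) -> float:
    """Weighted average of metrics already normalized to [0, 1], where higher
    values indicate a stronger case for re-engineering."""
    total_weight = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total_weight

# Illustrative inputs (all names and numbers are assumptions):
legacy_metrics = {
    "cyclomatic_complexity": 0.8,   # normalized against an agreed ceiling
    "duplication_ratio": 0.6,
    "test_coverage_gap": 0.7,       # 1 - coverage
}
weights = {"cyclomatic_complexity": 0.5,
           "duplication_ratio": 0.2,
           "test_coverage_gap": 0.3}

indicator = composite_indicator(legacy_metrics, weights)
THRESHOLD = 0.65  # management criterion, set per organisation
print(f"indicator = {indicator:.2f}: "
      + ("re-engineering advisable" if indicator > THRESHOLD else "maintain as-is"))
```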

    Financing “Green” Projects: Features, Risks and Instruments

    The subject of the paper is the “green” projects of companies whose production activities involve a high level of anthropogenic emissions. The purpose of the paper is to study the features of the analysis and practical application of tools for financing “green” projects (hereinafter, the tools). The relevance of the article stems from the need to put into practice the provisions of Russian legislation on developing the “green” economy, particularly in the context of developing and financing “green” projects by members of the National ESG Alliance. The scientific novelty of the paper lies in developing the theory of the design and practical use of the tools, taking into account the peculiarities of their analysis and application. The paper applies theoretical and practical methods to the analysis of scientific publications and simulation results. The research is based on the provisions of normative and legal acts, monographs and scientific works devoted to the analysis, development and financing of “green” projects. The research yielded the following results: the specifics of the requirements for financing “green” projects were analysed; the classification of climate risks was clarified and an approach to their transformation into corporate credit risks was formulated; the composition of the instruments was determined and their interpretation as controlled aggregates was proposed; and an operator model of the aggregates was developed, with proposals for its practical use. The authors recommend that companies with a commodity product range use the operator model and the cognitive maps developed on its basis to analyse existing tools and develop new ones. Going forward, “green” companies are encouraged to use the tools derived from the operator model and cognitive maps.
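
    The operator model itself is not specified in the abstract. As a loose illustration of the cognitive-map mechanism it mentions, the sketch below propagates a climate-risk shock through a small signed influence graph into a corporate credit-risk score; the nodes and edge weights are invented for illustration.

```python
import numpy as np

# Illustrative-only cognitive map: how a climate shock (e.g. a carbon-price
# rise) can be transformed into corporate credit risk through intermediate
# financial variables. Node set and weights are assumptions, not the paper's.
nodes = ["carbon_price", "transition_cost", "cash_flow", "credit_risk"]

# W[i, j] = signed influence of node i on node j, in [-1, 1]
W = np.array([
    [0.0, 0.8,  0.0,  0.0],   # carbon price raises transition costs
    [0.0, 0.0, -0.7,  0.0],   # transition costs depress cash flow
    [0.0, 0.0,  0.0, -0.9],   # weaker cash flow raises credit risk
    [0.0, 0.0,  0.0,  0.0],
])

def propagate(state: np.ndarray, steps: int = 3) -> np.ndarray:
    """Iterate the map, squashing activations into [-1, 1] with tanh."""
    for _ in range(steps):
        state = np.tanh(state + state @ W)
    return state

shock = np.array([0.9, 0.0, 0.0, 0.0])  # a sharp carbon-price increase
print(dict(zip(nodes, propagate(shock).round(2))))
```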

    Optimization of stand-alone photovoltaic system by implementing fuzzy logic MPPT controller

    A photovoltaic (PV) generator is a nonlinear device with insolation-dependent volt-ampere characteristics. Since the maximum-power point varies with solar insolation, it is difficult to achieve an optimum matching that is valid for all insolation levels. Maximum power point tracking (MPPT) therefore plays an important role in PV power systems: it maximizes the power output from a PV system for a given set of conditions and thereby maximizes array efficiency. This project presents a maximum power point tracker using fuzzy logic theory for a PV system. The work focuses on a comparative study between the most conventional controller, namely the Perturb and Observe (P&O) algorithm, and a designed fuzzy logic controller (FLC). The fuzzy controller gives very good performance regardless of parametric variations in the system.
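
    For reference, here is a minimal sketch of the conventional Perturb and Observe loop the study compares against: keep perturbing the operating voltage, and reverse direction whenever output power falls. The toy PV curve is an illustrative stand-in; a real controller would read voltage and current from the array.

```python
# Minimal P&O MPPT sketch. The PV model is a toy curve with an assumed shape;
# only the hill-climbing logic reflects the classic algorithm.

def pv_power(v: float, insolation: float = 1.0) -> float:
    """Toy PV curve: power peaks somewhere below open-circuit voltage."""
    v_oc = 40.0 * insolation ** 0.1      # open-circuit voltage (assumed shape)
    i = max(0.0, 8.0 * insolation * (1.0 - (v / v_oc) ** 8))
    return v * i

def perturb_and_observe(v: float, step: float = 0.5, iters: int = 200) -> float:
    """Classic P&O: keep perturbing in the direction that increased power."""
    direction = 1.0
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:           # power fell, so reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe(v=20.0)
print(f"P&O settled near v = {v_mpp:.1f} V, p = {pv_power(v_mpp):.1f} W")
```

    The oscillation around the maximum-power point visible in this loop is exactly the weakness a fuzzy logic controller addresses, by shrinking the perturbation step as the operating point approaches the peak.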

    Identifying smart design attributes for Industry 4.0 customization using a clustering Genetic Algorithm

    Industry 4.0 aims at achieving mass customization at a mass production cost. A key component of realizing this is accurate prediction of customer needs and wants, which is a challenging issue due to the lack of smart analytics tools. This paper investigates this issue in depth and then develops a predictive analytics framework integrating cloud computing, big data analysis, business informatics, communication technologies, and digital industrial production systems. Computational intelligence in the form of a k-means clustering approach is used to manage relevant big data for feeding potential customer needs and wants to smart designs for targeted productivity and customized mass production. Patterns are identified from big data with k-means clustering, and optimal attributes are selected using genetic algorithms. A car customization case study shows how the approach may be applied and where to assign new clusters as knowledge of customer needs and wants grows. The approach offers a number of features suited to smart design in realizing Industry 4.0.
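
    A hedged sketch of the pipeline described above: k-means groups customer records, while a simple genetic algorithm searches for the attribute subset that yields the cleanest clusters. The synthetic data, GA settings, and silhouette-based fitness are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
X[:150, :3] += 3.0          # two latent customer groups differ on 3 attributes

def fitness(mask: np.ndarray) -> float:
    """Cluster quality (silhouette) of 2-means on the selected attributes."""
    if mask.sum() == 0:
        return -1.0
    Xs = X[:, mask.astype(bool)]
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xs)
    return silhouette_score(Xs, labels)

def evolve(pop_size: int = 20, n_gen: int = 15, p_mut: float = 0.1) -> np.ndarray:
    """Binary-mask GA: truncation selection, one-point crossover, bit-flip mutation."""
    pop = rng.integers(0, 2, size=(pop_size, X.shape[1]))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]
        cut = X.shape[1] // 2
        children = np.concatenate([parents[:, :cut],
                                   rng.permutation(parents)[:, cut:]], axis=1)
        children ^= (rng.random(children.shape) < p_mut)
        pop = np.concatenate([parents, children])
    return max(pop, key=fitness)

best_mask = evolve()
print("selected attributes:", np.flatnonzero(best_mask))
```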

    A question of trust: can we build an evidence base to gain trust in systematic review automation technologies?

    Background: Although many aspects of systematic reviews use computational tools, systematic reviewers have been reluctant to adopt machine learning tools. Discussion: We argue that the reasons for the slow adoption of machine learning tools into systematic reviews are multifactorial. We focus on the current absence of trust in automation and on set-up challenges as major barriers to adoption. It is important that reviews produced using automation tools are considered non-inferior or superior to current practice. However, this standard alone will likely not be sufficient to lead to widespread adoption. As with many technologies, it is important that reviewers see “others” in the review community using automation tools. Adoption will also be slow if the automation tools are not compatible with the workflows and tasks currently used to produce reviews. Many automation tools being developed for systematic reviews are framed as classification problems. Therefore, the evidence that these automation tools are non-inferior or superior can be presented using methods similar to diagnostic test evaluations, i.e., precision and recall compared to a human reviewer. However, the assessment of automation tools does present unique challenges for investigators and systematic reviewers, including the need to clarify which metrics are of interest to the systematic review community and the unique documentation challenges of reproducible software experiments. Conclusion: We discuss adoption barriers with the goal of providing tool developers with guidance on how to design and report such evaluations, and end users with guidance on how to assess their validity. Further, we discuss approaches to formatting and announcing publicly available datasets suitable for the assessment of automation technologies and tools. Making these resources available will increase trust that tools are non-inferior or superior to current practice. Finally, we note that, even with evidence that automation tools are non-inferior or superior to current practice, substantial set-up challenges remain for mainstream integration of automation into the systematic review process.
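
    A minimal sketch of the diagnostic-style evaluation described above, treating the tool's include/exclude decisions as test results and the human reviewer's decisions as the reference standard. The labels are fabricated for illustration.

```python
# Screening decisions over ten candidate studies (1 = include). The human
# reviewer is the reference standard, as in a diagnostic test evaluation.
human = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
tool  = [1, 0, 0, 0, 1, 1, 1, 0, 0, 1]

tp = sum(h == 1 and t == 1 for h, t in zip(human, tool))  # true positives
fp = sum(h == 0 and t == 1 for h, t in zip(human, tool))  # false positives
fn = sum(h == 1 and t == 0 for h, t in zip(human, tool))  # false negatives

precision = tp / (tp + fp)  # of the tool's inclusions, how many the human kept
recall = tp / (tp + fn)     # of the human's inclusions, how many the tool found
print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```

    For screening tasks, recall is usually the critical metric: an eligible study the tool misses cannot be recovered downstream, whereas a false inclusion only costs reviewer time.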

    Measuring Software Process: A Systematic Mapping Study

    Context: Measurement is essential to reach predictable performance and high-capability processes. It provides support for better understanding, evaluation, management and control of the development process and project, as well as the resulting product. It also enables organizations to improve and predict their processes' performance, which places them in a better position to make appropriate decisions. Objective: This study aims to understand the measurement of the software development process, to identify relevant studies, to create a classification scheme based on the identified studies, and then to map such studies into the scheme to answer the research questions. Method: Systematic mapping is the selected research methodology for this study. Results: A total of 462 studies were included and classified into four topics with respect to their focus and into three groups based on publishing date. Five abstractions and 64 attributes were identified, and 25 methods/models and 17 contexts were distinguished. Conclusion: Capability and performance were the most measured process attributes, while effort and performance were the most measured project attributes. Goal Question Metric and Capability Maturity Model Integration were the main methods and models used in the studies, whereas agile/lean development and small/medium-size enterprises were the most frequently identified research contexts. Funding: Ministerio de Economía y Competitividad TIN2013-46928-C3-3-R; Ministerio de Economía y Competitividad TIN2016-76956-C3-2-R; Ministerio de Economía y Competitividad TIN2015-71938-RED.
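
    To make the most common method concrete, the following small sketch encodes the Goal Question Metric (GQM) structure: a measurement goal is refined into questions, and each question is answered by concrete metrics. The example goal and metrics are assumed, not drawn from the mapped studies.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    metrics: list[str] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: list[Question] = field(default_factory=list)

# Illustrative GQM tree for one of the frequently measured project attributes.
effort_goal = Goal(
    purpose="Characterize estimation accuracy of development effort",
    questions=[
        Question("How far do estimates deviate from actual effort?",
                 ["relative error = |actual - estimated| / actual"]),
        Question("Does accuracy improve across releases?",
                 ["mean relative error per release"]),
    ],
)

for q in effort_goal.questions:
    print(q.text, "->", ", ".join(q.metrics))
```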