
    Integration of multi-lifecycle assessment and design for environment database using relational model concepts

    Multi-lifecycle Assessment (MLCA) systematically considers and quantifies the consumption of resources and the environmental impact associated with a product or process. The design challenges posed by a multi-lifecycle strategy are significantly more complex than those of traditional product design: the designer must look forward in time to maximize the product's end-of-life yield of assemblies, parts and materials, while looking backward to the world of existing products for feedstock sources for the current design. Because MLCA and Design for Environment (DFE) share common data items, such as part geometry, material and manufacturing process, it is advantageous to integrate the MLCA and DFE databases. The integration of the CAD/DFE and MLCA databases enables not only designers but also demanufacturers and MLCA analysts to play an active role in achieving the vision of sustainability. The user of MLCA software otherwise has to provide a significant amount of information about the analyzed product manually, which is an error-prone activity. To avoid this manual work and its associated problems, an MLCA-CAD interface has been developed to programmatically populate the MLCA database using the Bill of Materials (BOM) information in the CAD software. This MLCA-CAD interface provides a flow of information from the design software (DFE/CAD) to the MLCA software.
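    To make the described data flow concrete, the sketch below flattens a CAD-style bill of materials and writes one record per part into a relational table. The BomItem fields, the mlca_parts schema and the in-memory SQLite database are illustrative assumptions for the example, not the thesis's actual interface or schema.

```python
# Minimal sketch of a CAD-to-MLCA data flow under assumed names; the thesis's
# actual MLCA-CAD interface and database schema are not reproduced here.
import sqlite3
from dataclasses import dataclass, field
from typing import List


@dataclass
class BomItem:
    """One node of a CAD bill of materials (hypothetical fields)."""
    part_id: str
    material: str
    process: str
    mass_kg: float
    children: List["BomItem"] = field(default_factory=list)


def populate_mlca_db(conn: sqlite3.Connection, root: BomItem) -> None:
    """Flatten the BOM tree and insert each part into an MLCA parts table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS mlca_parts "
        "(part_id TEXT PRIMARY KEY, material TEXT, process TEXT, mass_kg REAL)"
    )
    stack = [root]
    while stack:
        item = stack.pop()
        conn.execute(
            "INSERT OR REPLACE INTO mlca_parts VALUES (?, ?, ?, ?)",
            (item.part_id, item.material, item.process, item.mass_kg),
        )
        stack.extend(item.children)
    conn.commit()


if __name__ == "__main__":
    assembly = BomItem("housing-asm", "n/a", "assembly", 1.2, [
        BomItem("cover", "ABS", "injection molding", 0.3),
        BomItem("chassis", "aluminum", "die casting", 0.9),
    ])
    db = sqlite3.connect(":memory:")
    populate_mlca_db(db, assembly)
    print(db.execute("SELECT * FROM mlca_parts").fetchall())
```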

    Extraction of objects from legacy systems: an example using COBOL legacy systems

    In the last few years, interest in legacy information systems has increased because of the escalating resources spent on their maintenance. At the same time, the importance of extracting knowledge from business rules is becoming a crucial issue for modern business: sometimes, because of inadequate documentation, this knowledge is essentially stored only in the code. A way to improve the use and maintainability of legacy systems in the present environment is to migrate them to a new hardware/software platform, reusing as much of the embedded experience as possible during this process. This migration process promotes the population of a repository of reusable software components for reuse in the development of new systems in that application domain or in later maintenance processes. The current trend in the migration of a legacy information system is to exploit the potential of object-oriented technology as a natural extension of earlier structured programming techniques. This is done by decomposing the program into several agent-like modules communicating via message passing, and providing the system with some key object-oriented features. The key step is "object isolation", i.e. the isolation of groups of routines and related data items as candidates to implement an abstraction in the application domain. The main idea of the object isolation method presented here is to extract information from the data flow and to cluster the procedures on the basis of their data accesses. The method examines "how" a procedure accesses the data in order to distinguish several types of access and to permit a better understanding of the functionality of the candidate objects. These candidate modules support the population of a repository of reusable software components that might be used as a basis for the process of evolution leading to a new object-oriented system reusing the extracted objects.
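    As a rough illustration of the clustering step described above, the sketch below groups procedures around the data items they access and records the kind of access. The procedure and record names, and the idea of feeding in (procedure, data item, access kind) triples produced by static analysis of the COBOL source, are assumptions for the example rather than the method's actual implementation.

```python
# Sketch of the "object isolation" idea: group procedures by the data items
# they touch, keeping track of how each item is accessed. The triples below
# are illustrative; a real tool would extract them from COBOL source code.
from collections import defaultdict
from typing import Dict, List, Set, Tuple

# (procedure, data item, access kind) triples, e.g. from static analysis.
accesses: List[Tuple[str, str, str]] = [
    ("READ-CUSTOMER", "CUSTOMER-REC", "read"),
    ("UPDATE-CUSTOMER", "CUSTOMER-REC", "write"),
    ("PRINT-INVOICE", "INVOICE-REC", "read"),
    ("ADD-INVOICE-LINE", "INVOICE-REC", "write"),
]


def isolate_candidate_objects(
    triples: List[Tuple[str, str, str]]
) -> Dict[str, Dict[str, Set[str]]]:
    """For each data item, return the procedures that use it, grouped by access kind."""
    candidates: Dict[str, Dict[str, Set[str]]] = defaultdict(lambda: defaultdict(set))
    for proc, item, kind in triples:
        candidates[item][kind].add(proc)
    return candidates


if __name__ == "__main__":
    # Each data item together with its accessing procedures becomes a
    # candidate object for the migrated object-oriented system.
    for item, kinds in isolate_candidate_objects(accesses).items():
        print(item, {k: sorted(v) for k, v in kinds.items()})
```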

    Quality function deployment opportunities in product model supported design

    This thesis describes the development of a QFD information model established in an environment where design information is shared between software applications. The main objectives of the research are to establish a QFD information structure within a Product Data Model and to demonstrate how this enables an intelligent, knowledge-based analysis of QFD information contained in a Product Model. The generic structure of the QFD information has been defined and implemented in prototype software, and its value is demonstrated through experimentation in two case studies. Successful implementation of the case studies proved that the QFD information structure is able to capture QFD information as persistent objects residing in a Product Model. It also demonstrates that an intelligent, knowledge-based QFD expert can be implemented alongside the QFD information model to accomplish useful, consistent, reasoned analysis of QFD information. The research has achieved its aim of providing a new contribution to the product design domain, and to the effectiveness of Concurrent Engineering activities, through better use of Quality Function Deployment.
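    The sketch below gives a minimal idea of what a QFD information structure held as objects might look like, together with a simple weighted-sum analysis of the relationship matrix. The class and attribute names are hypothetical and far simpler than the thesis's actual Product Model implementation.

```python
# Minimal sketch of a QFD (House of Quality) information structure as objects,
# with one simple analysis over it; names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class CustomerRequirement:
    name: str
    importance: int  # e.g. 1 (low) to 5 (high)


@dataclass
class EngineeringCharacteristic:
    name: str
    target: str


@dataclass
class QfdMatrix:
    requirements: List[CustomerRequirement] = field(default_factory=list)
    characteristics: List[EngineeringCharacteristic] = field(default_factory=list)
    # relationship strength keyed by (requirement name, characteristic name)
    relationships: Dict[Tuple[str, str], int] = field(default_factory=dict)

    def characteristic_weights(self) -> Dict[str, int]:
        """Weighted-sum analysis of the relationship matrix."""
        weights = {c.name: 0 for c in self.characteristics}
        for req in self.requirements:
            for ch in self.characteristics:
                strength = self.relationships.get((req.name, ch.name), 0)
                weights[ch.name] += req.importance * strength
        return weights


if __name__ == "__main__":
    m = QfdMatrix(
        requirements=[CustomerRequirement("easy to grip", 4)],
        characteristics=[EngineeringCharacteristic("handle diameter", "30 mm")],
        relationships={("easy to grip", "handle diameter"): 9},
    )
    print(m.characteristic_weights())  # {'handle diameter': 36}
```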

    Collected software engineering papers, volume 8

    A collection of selected technical papers produced by participants in the Software Engineering Laboratory (SEL) during the period November 1989 through October 1990 is presented. The purpose of the document is to make available, in one reference, some results of SEL research that originally appeared in a number of different forums. Although these papers cover several topics related to software engineering, they do not encompass the entire scope of SEL activities and interests. Additional information about the SEL and its research efforts may be obtained from the sources listed in the bibliography. The seven presented papers are grouped into four major categories: (1) experimental research and evaluation of software measurement; (2) studies on models for software reuse; (3) a software tool evaluation; and (4) Ada technology and studies in the areas of reuse and specification.

    An approach to enacting business process models in support of the life cycle of integrated manufacturing systems

    The complexity of enterprise engineering processes requires the application of reference architectures as a means of guiding the achievement of an adequate level of business integration. This research aims to address important aspects of this requirement by associating the formalism of reference architectures with the various life cycle phases of integrated manufacturing systems (IMS) and enabling their use in addressing contemporary system engineering issues. In pursuit of this aim, the following research activities were carried out: (1) devising a framework which supports key phases of the IMS life cycle and (2) populating part of this framework with an initial combination of architectures which can be encapsulated into a computer-aided systems engineering environment. This has led to the creation of a workbench capable of providing support for modelling, analysis, simulation, rapid prototyping, configuration and run-time operation of an IMS, based on a consistent set of models associated with the engineering processes involved. The research effort concentrated on selecting and investigating the use of appropriate formalisms which underpin a selection of architectures and tools (i.e. CIM-OSA, Petri nets, object-oriented methods and CIM-BIOSYS), by designing, implementing, applying and testing the workbench. The main contribution of this research is to demonstrate that it is possible to retain an adequate level of formalism, via computational structures and models, which extends through the IMS life cycle from a conceptual description of the system through to the actions the system performs when operating. The underlying methodology which supports this contribution is based on enacting models of system behaviour which encode important coordination aspects of manufacturing systems. The strategy for demonstrating the incorporation of formalism into the IMS life cycle was to enable the aggregation into a workbench of knowledge of 'what' the system is expected to achieve (i.e. the 'problems' to be addressed) and 'how' the system can achieve it (i.e. the possible 'solutions'). Within the workbench, such knowledge is represented through an amalgamation of business process modelling and object-oriented modelling approaches which, when adequately manipulated, can lead to business integration.
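    The notion of "enacting" a behavioural model can be illustrated with a very small marked Petri net whose transitions fire when their input places hold tokens. The places and transitions below are invented for the example and are far simpler than the CIM-OSA and Petri-net models used in the workbench.

```python
# Tiny illustration of enacting a behavioural model: a marked Petri net whose
# transitions fire while enabled. The net is illustrative only.
from typing import Dict, List, Tuple

# Each transition: (name, input places, output places)
transitions: List[Tuple[str, List[str], List[str]]] = [
    ("start-machining", ["part-arrived", "machine-idle"], ["machining"]),
    ("finish-machining", ["machining"], ["part-done", "machine-idle"]),
]

marking: Dict[str, int] = {"part-arrived": 1, "machine-idle": 1,
                           "machining": 0, "part-done": 0}


def fire_enabled(state: Dict[str, int]) -> bool:
    """Fire the first enabled transition; return False if none is enabled."""
    for name, inputs, outputs in transitions:
        if all(state[p] > 0 for p in inputs):
            for p in inputs:
                state[p] -= 1
            for p in outputs:
                state[p] += 1
            print(f"fired {name}: {state}")
            return True
    return False


if __name__ == "__main__":
    # Enactment loop: keep firing until the coordination model is quiescent.
    while fire_enabled(marking):
        pass
```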

    Discovering data lineage in a data warehouse: methods and techniques for tracing the origins of data in a data warehouse

    A data warehouse enables enterprise-wide analysis and reporting functionality that is usually used to support decision-making. A data warehousing system integrates data from different data sources. Typically, the data are extracted from different data sources, then transformed several times and integrated before they are finally stored in the central repository. The extraction and transformation processes vary widely, both in theory and between solution providers. Some are generic, others are tailored to users' transformation and reporting requirements through hand-coded solutions. Most research related to data integration is focused on this area, i.e., on the transformation of data. Since data in a data warehouse undergo various complex transformation processes, often at many different levels and in many stages, it is very important to be able to ensure the quality of the data that the data warehouse contains. The objective of this thesis is to study and compare existing approaches (methods and techniques) for tracing data lineage, and to propose a data lineage solution specific to a business enterprise data warehouse.
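    One common way to trace lineage, sketched below under assumed field and step names, is to attach a provenance entry to each record as it passes through a transformation, so the row that lands in the warehouse carries its own origin trail. This is only an illustrative approach, not necessarily the solution proposed in the thesis.

```python
# Sketch of record-level lineage capture: each transformation step appends a
# provenance entry to the record it produces. Names are illustrative and do
# not refer to any specific ETL product.
from typing import Any, Callable, Dict

Record = Dict[str, Any]


def with_lineage(step_name: str, source: str,
                 fn: Callable[[Record], Record]) -> Callable[[Record], Record]:
    """Wrap a transformation so its output records where the data came from."""
    def wrapped(rec: Record) -> Record:
        out = fn(rec)
        out["_lineage"] = list(rec.get("_lineage", [])) + [
            {"step": step_name, "source": source}
        ]
        return out
    return wrapped


# Two toy ETL steps: extract from a CRM export, then normalise the amount.
extract = with_lineage("extract", "crm_export.csv", lambda r: dict(r))
normalise = with_lineage("normalise", "etl_job_42",
                         lambda r: {**r, "amount": round(float(r["amount"]), 2)})

if __name__ == "__main__":
    row = {"customer": "ACME", "amount": "199.999"}
    print(normalise(extract(row))["_lineage"])
```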

    Knowledge composition methodology for effective analysis problem formulation in simulation-based design

    In simulation-based design, a key challenge is to formulate and solve analysis problems efficiently in order to evaluate a large variety of design alternatives. The solution of analysis problems has benefited from advancements in commercial off-the-shelf math solvers and computational capabilities. However, the formulation of analysis problems is often a costly and laborious process. Traditional simulation templates used for representing analysis problems are typically brittle with respect to variations in artifact topology and the idealization decisions taken by analysts. These templates often require manual updates and "re-wiring" of the analysis knowledge embodied in them, which makes them ineffective for multi-disciplinary design and optimization problems. Based on these issues, this dissertation defines a special class of problems, known as variable topology multi-body (VTMB) problems, that characterizes the types of variation seen in design-analysis interoperability. This research thus primarily answers the following question: how can we improve the effectiveness of the analysis problem formulation process for VTMB problems? The knowledge composition methodology (KCM) presented in this dissertation answers this question by addressing the following research gaps: (1) the lack of formalization of the knowledge used by analysts in formulating simulation templates, and (2) the inability to leverage this knowledge to define model composition methods for formulating simulation templates. KCM overcomes these gaps by providing: (1) formal representation of analysis knowledge as modular, reusable, analyst-intelligible building blocks; (2) graph transformation-based methods to automatically compose simulation templates from these building blocks based on analyst idealization decisions; and (3) meta-models for representing advanced simulation templates, VTMB design models, analysis models, and the idealization relationships between them. Applications of the KCM to the thermo-mechanical analysis of multi-stratum printed wiring boards and multi-component chip packages demonstrate its effectiveness in handling VTMB and idealization variations with significantly enhanced formulation efficiency (from several hours with existing methods to a few minutes). In addition to enhancing the effectiveness of analysis problem formulation, KCM is envisioned to provide a foundational approach to model formulation for generalized variable topology problems.
    Ph.D. Committee Co-Chair: Dr. Christiaan J. J. Paredis; Committee Co-Chair: Dr. Russell S. Peak; Committee Member: Dr. Charles Eastman; Committee Member: Dr. David McDowell; Committee Member: Dr. David Rosen; Committee Member: Dr. Steven J. Fenve
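    The composition idea can be sketched at toy scale: reusable building blocks with named ports are wired together into an analysis template, and the wiring is checked for consistency. The block names, ports and checks below are assumptions for illustration and are much simpler than the graph-transformation-based composition the dissertation describes.

```python
# Toy sketch of composing an analysis template from reusable building blocks;
# block names and ports are invented and stand in for the dissertation's
# formalized analysis knowledge.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass(frozen=True)
class BuildingBlock:
    """A reusable piece of analysis knowledge with named input/output ports."""
    name: str
    inputs: Tuple[str, ...]
    outputs: Tuple[str, ...]


def compose(blocks: List[BuildingBlock],
            bindings: Dict[Tuple[str, str], Tuple[str, str]]) -> List[str]:
    """Check that each binding connects an existing output to an existing
    input, then return a readable wiring list for the composed template."""
    by_name = {b.name: b for b in blocks}
    wiring = []
    for (src, out_port), (dst, in_port) in bindings.items():
        assert out_port in by_name[src].outputs, f"{src} has no output {out_port}"
        assert in_port in by_name[dst].inputs, f"{dst} has no input {in_port}"
        wiring.append(f"{src}.{out_port} -> {dst}.{in_port}")
    return wiring


if __name__ == "__main__":
    blocks = [
        BuildingBlock("solder_joint_idealization", ("geometry",), ("shell_model",)),
        BuildingBlock("thermal_load_case", ("shell_model",), ("temperature_field",)),
    ]
    bindings = {("solder_joint_idealization", "shell_model"):
                ("thermal_load_case", "shell_model")}
    print(compose(blocks, bindings))
```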

    Formal representation of ambulatory assessment protocols in HTML5 for human readability and computer execution

    Ambulatory assessment (AA) is a research method that aims to collect longitudinal biopsychosocial data in groups of individuals. AA studies are commonly conducted via mobile devices such as smartphones. Researchers tend to communicate their AA protocols to the community in natural language by describing step-by-step procedures operating on a set of materials. However, natural language requires effort to transcribe onto and from the software systems used for data collection, and may be ambiguous, thereby making it harder to reproduce a study. Though AA protocols may also be written as code in a programming language, most programming languages are not easily read by most researchers. Thus, the quality of scientific discourse on AA stands to gain from protocol descriptions that are easy to read, yet remain formal and readily executable by computers. This paper makes the case for using the HyperText Markup Language (HTML) to achieve this. While HTML can suitably describe AA materials, it cannot describe AA procedures. To resolve this, and drawing on lessons from previous efforts with protocol implementations in a system called TEMPEST, we offer a set of custom HTML5 elements that help treat HTML documents as executable programs that can both render AA materials and effect AA procedures on computational platforms.
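    As a rough illustration of the idea, the sketch below parses an HTML protocol document containing hypothetical custom elements (aa-prompt, aa-wait; these are not the element names defined by the paper or by TEMPEST) and interprets the steps in order, showing how one document can serve as both a readable description and an executable procedure.

```python
# Sketch of treating an HTML protocol document as an executable program: a
# parser collects hypothetical custom elements and a tiny interpreter runs
# them in order. The element names are illustrative assumptions.
from html.parser import HTMLParser
from typing import List, Tuple

PROTOCOL = """
<aa-protocol>
  <aa-prompt>How stressed do you feel right now (1-7)?</aa-prompt>
  <aa-wait hours="3"></aa-wait>
  <aa-prompt>How stressed do you feel right now (1-7)?</aa-prompt>
</aa-protocol>
"""


class ProtocolParser(HTMLParser):
    """Collect (tag, attributes, text) steps from the custom elements."""

    def __init__(self) -> None:
        super().__init__()
        self.steps: List[Tuple[str, dict, str]] = []
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag in ("aa-prompt", "aa-wait"):
            self._current = [tag, dict(attrs), ""]

    def handle_data(self, data):
        if self._current is not None:
            self._current[2] += data.strip()

    def handle_endtag(self, tag):
        if self._current is not None and tag == self._current[0]:
            self.steps.append(tuple(self._current))
            self._current = None


if __name__ == "__main__":
    parser = ProtocolParser()
    parser.feed(PROTOCOL)
    for tag, attrs, text in parser.steps:
        if tag == "aa-prompt":
            print(f"ask participant: {text}")
        elif tag == "aa-wait":
            print(f"schedule next step after {attrs.get('hours')} hours")
```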