1,628 research outputs found

    Product to process lifecycle management in assembly automation systems

    Presently, the automotive industry faces enormous pressure from global competition and ever-changing legislative, economic and customer demands. Product and process development in automotive manufacturing is a challenging task for many reasons. Current product lifecycle management (PLM) systems tend to be product-focussed: information about processes and resources exists, but it is mostly linked to the product. Process is an important aspect, especially in assembly automation systems, because it links products to their manufacturing resources. This paper presents a process-centric approach to improve PLM systems in large-scale manufacturing companies, especially in the powertrain sector of the automotive industry. The idea is to integrate the information related to the key engineering chains, i.e. products, processes and resources, based upon the PLM philosophy, and to shift from product-focussed to process-focussed lifecycle management; the outcome is Product, Process and Resource Lifecycle Management rather than PLM alone
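The process-centric integration the abstract describes could be sketched as a minimal data model in which the process, rather than the product, is the first-class entity linking products to resources. This is an illustrative sketch only, not the paper's actual system; all class and function names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Product:
    name: str

@dataclass
class Resource:
    name: str

@dataclass
class Process:
    # The process is the first-class entity: it links a product
    # to the resources that manufacture it.
    name: str
    product: Product
    resources: list = field(default_factory=list)

def affected_resources(processes, product_name):
    # Process-centric query: which resources are affected by a product change?
    return [r.name for p in processes if p.product.name == product_name
            for r in p.resources]

crankcase = Product("crankcase")
station = Resource("assembly station 3")
bolt = Process("bolt tightening", crankcase, [station])
print(affected_resources([bolt], "crankcase"))  # ['assembly station 3']
```

Because products and resources are only reachable through processes, a change to either side is traced through the process records, which is the shift the abstract argues for.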

    The simulation of automated leading edge assembly

    Aircraft manufacturers face fierce worldwide competition, and improving productivity, increasing throughput and reducing costs will shape their future development. To improve competitiveness and deliver sufficient high-quality products, manufacturers must streamline aircraft assembly operations, the majority of which are still manual processes that limit production output. Many of these processes can instead be automated to replace manual operations, and automated applications therefore deserve much more attention. This project proposes a methodology for developing robot-based automated assembly and uses it to develop a new concept of Automated Leading Edge Assembly. The research selects an automated assembly process for further evaluation: the brackets assembled on the front spar of the leading edge are chosen for robot-assisted automated assembly. The software DELMIA is used to develop and simulate the automated bracket assembly process based on 3-D virtual aircraft leading edge models. The research development is divided into three phases: (1) the state of the art in manual leading edge assembly; (2) development of an Automated Leading Edge Assembly framework; (3) evaluation of the framework, including simulation of the automated assembly process in the DELMIA robotics workbench and estimation of automated assembly cost. The research proposes a methodology for developing robot-based automated assembly; a new concept of Automated Leading Edge Assembly, in which robots replace workers for assembly operations on the leading edge; and a new automated bracket assembly process combining laser ablation, adhesive bonding, drilling, riveting and robot application. These applications can attract engineers' attention and provide preliminary knowledge for further, more detailed research

    A knowledge based approach to integration of products, processes and reconfigurable automation resources

    The success of next-generation automotive companies will depend upon their ability to adapt to ever-changing market trends and thus become highly responsive. In the automotive sector, assembly line design and reconfiguration are especially critical and extremely complex tasks. The current research addresses some aspects of this activity under the umbrella of a larger ongoing research project called the Business Driven Automation (BDA) project. The BDA project aims to carry out complete virtual 3-D modelling-based verification of the assembly line for new or revised products, in contrast to the prevalent practice of manually evaluating the effects of product change on physical resources. [Continues.]

    Tax administration in developing countries : strategies and tools of implementation

    Developing nations should adopt less sophisticated taxes (such as taxes on goods and services) to broaden the tax base, and use more efficient administrative techniques. This could be achieved through a system of income withholdings (for all components of income) and through computerization, which could simplify withholding and collection by giving each taxpayer a number in a master file. Computers could also facilitate information gathering, cross-checking and audits. At present, potential tax bases often go unexploited because existing laws cannot be applied. Administrators therefore face major problems: a large portion of the economy operates at a subsistence level and does not keep records, and where records are kept, the accounting is unreliable. Taxpayer cooperation is also low for a variety of reasons: a shortage of trained officials, a tradition of corruption, and the perception that taxes do not produce better government services

    Input variable selection in time-critical knowledge integration applications: A review, analysis, and recommendation paper

    This is the post-print version of the final paper published in Advanced Engineering Informatics. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright © 2013 Elsevier B.V. The purpose of this research is twofold: first, to undertake a thorough appraisal of existing Input Variable Selection (IVS) methods in the context of time-critical, computation-resource-limited dimensionality reduction problems; second, to demonstrate improvements to, and the application of, a recently proposed time-critical sensitivity analysis method called EventTracker in an environmental science industrial use case, i.e. sub-surface drilling. Producing time-critical, accurate knowledge about the state of a system (effect) under computational and data acquisition (cause) constraints is a major challenge, especially when the knowledge required is critical to system operation and the safety of operators or the integrity of costly equipment is at stake. Understanding and interpreting a chain of interrelated events, predicted or unpredicted, that may or may not result in a specific system state is the core challenge of this research. The main objective is to identify which set of input data signals has a significant impact on the set of system state information (i.e. the output). Through a cause-effect analysis technique, the proposed approach supports the filtering of unsolicited data that could otherwise clog the communication and computational capabilities of a standard supervisory control and data acquisition system. The paper analyses the performance of input variable selection techniques from a series of perspectives. It then expands the categorization and assessment of sensitivity analysis methods into a structured framework that takes into account the relationship between inputs and outputs, the nature of their time series, and the computational effort required. The outcome of this analysis is that established methods have limited suitability for time-critical variable selection applications. By way of a geological drilling monitoring scenario, the suitability of the proposed EventTracker sensitivity analysis method for high-volume, time-critical input variable selection problems is demonstrated
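The event-based cause-effect filtering described above can be illustrated with a toy sketch: score each input signal by how often a change (an "event") in that input coincides with a change in the output, then keep only the inputs whose score clears a threshold. This is a simplified illustration of the idea, not the authors' EventTracker implementation; all names and the threshold are hypothetical.

```python
def events(series, eps=0.0):
    # An "event" is a change between consecutive samples larger than eps.
    return [abs(b - a) > eps for a, b in zip(series, series[1:])]

def sensitivity(inp, out):
    # Fraction of input events that coincide with an output event.
    e_in, e_out = events(inp), events(out)
    triggered = sum(1 for i, o in zip(e_in, e_out) if i and o)
    total = sum(e_in)
    return triggered / total if total else 0.0

def select_inputs(inputs, out, threshold=0.5):
    # Keep only signals whose events track the output's events.
    return [name for name, s in inputs.items()
            if sensitivity(s, out) >= threshold]

out = [0, 1, 1, 2, 2, 3]
inputs = {
    "rpm":   [0, 1, 1, 2, 2, 3],   # changes exactly when the output changes
    "noise": [5, 5, 9, 9, 1, 1],   # changes only when the output does not
}
print(select_inputs(inputs, out))  # ['rpm']
```

Signals that fail the test would be dropped before transmission, which is the unsolicited-data filtering role the abstract assigns to the SCADA front end.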

    The needs and benefits of Text Mining applications on Post-Project Reviews

    Post-Project Reviews (PPRs) are a rich source of knowledge and data for organisations, provided organisations have the time and resources to analyse them. Too often these reports are stored, unread, by many who could benefit from them. PPR reports attempt to document the project experience, both good and bad. If these reports were analysed collectively, they might expose important detail, e.g. recurring problems or examples of good practice repeated across a number of projects. However, because most companies do not have the resources to thoroughly examine PPR reports, either individually or collectively, important insights and opportunities to learn from previous projects are missed. This research explores the application of knowledge discovery techniques and text mining to uncover patterns, associations and trends in PPR reports. The results might then be used to address problem areas, enhance processes and improve customer relationships. A case study involving two construction companies is presented, and knowledge discovery techniques are used to analyse 50 PPR reports collected over the last three years. The case study has been examined in six contexts, and the results show that text mining has good potential to improve overall knowledge reuse and exploitation
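A minimal sketch of the collective-analysis idea: surface terms that recur across several reports (measured by document frequency) as candidate recurring problems or good practices. This is a toy illustration, not the paper's text mining pipeline; the stop-word list and all names are hypothetical, and a real system would use stemming and phrase extraction.

```python
from collections import Counter

STOPWORDS = {"the", "a", "was", "on", "in", "of", "to", "and", "with"}

def recurring_themes(reports, min_reports=2):
    # Document frequency: in how many distinct reports does a term occur?
    df = Counter()
    for text in reports:
        terms = {w.lower().strip(".,") for w in text.split()} - STOPWORDS
        df.update(terms)
    # Terms appearing in several reports are candidate recurring themes.
    return sorted(t for t, n in df.items() if n >= min_reports)

reports = [
    "Delivery of steel was delayed on site",
    "Subcontractor steel delivery delayed again",
    "Good communication with the client",
]
print(recurring_themes(reports))  # ['delayed', 'delivery', 'steel']
```

Even this crude count hints that late steel deliveries are a cross-project problem, which is exactly the kind of pattern the abstract says goes unnoticed when PPRs are read one at a time.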

    What broke where for distributed and parallel applications — a whodunit story

    Detection, diagnosis and mitigation of performance problems in today's large-scale distributed and parallel systems is a difficult task. These systems are composed of various complex software and hardware components, and when a performance or correctness problem occurs, developers struggle to understand its root cause and fix it in a timely manner. In my thesis, I address these three components of performance problems in computer systems. First, we focus on diagnosing performance problems in large-scale parallel applications running on supercomputers, and we developed techniques to localize the problem for root-cause analysis. Parallel applications, most of which are complex scientific simulations, can create up to millions of parallel tasks that run on different machines and communicate using the message passing paradigm. We developed a highly scalable and accurate automated debugging tool called PRODOMETER, which uses sophisticated algorithms to, first, create a logical progress dependency graph of the tasks that highlights how the problem spread through the system and manifested as a system-wide performance issue; second, use this graph to identify the task where the problem originated; and finally, pinpoint the code region corresponding to the origin of the bug. Second, we developed a tool-chain that detects performance anomalies using machine-learning techniques with a very low false positive rate. Our input-aware performance anomaly detection system consists of a scalable data collection framework that gathers performance-related metrics from code regions at different granularities, an offline model creation and prediction-error characterization technique, and a threshold-based anomaly detection engine for production runs. The system requires few training runs and can handle unknown inputs and parameter combinations by dynamically calibrating the anomaly detection threshold according to the characteristics of the input data and of the models' prediction error. Third, we developed a performance problem mitigation scheme for erasure-coded distributed storage systems. Repair operations for failed blocks in such systems take a very long time in network-constrained data centres: during a repair, data from multiple nodes is gathered onto a single node, where a mathematical operation reconstructs the missing part, severely congesting the links toward the destination that will host the newly recreated data. We proposed a novel distributed repair technique called Partial-Parallel-Repair (PPR) that performs the reconstruction in parallel on multiple nodes, eliminating the network bottleneck and greatly speeding up the repair process. Fourth, we study how, for a class of applications, performance can be improved (or performance problems mitigated) by selectively approximating some of the computations. For many applications, the main computation happens inside a loop that can be logically divided into a few temporal segments, which we call phases. We found that while approximating the initial phases might severely degrade the quality of the results, approximating the computation in later phases has very little impact on final quality. Based on this observation, we developed an optimization framework that, for a given quality-loss budget, finds the best approximation setting for each phase of the execution
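The repair-traffic contrast behind PPR can be illustrated with a toy XOR parity code: centralized repair funnels every surviving block into the destination, while tree-structured partial repair combines pairs of partial results at intermediate nodes, so the destination receives only the final combination. Real erasure codes use Reed-Solomon arithmetic rather than plain XOR, and this sketch with hypothetical function names only models the inbound traffic at the destination, not the actual PPR protocol.

```python
from functools import reduce
from operator import xor

def centralized_repair(blocks):
    # All k surviving blocks are sent to one node (k transfers inbound),
    # which combines them to rebuild the missing block.
    traffic_at_destination = len(blocks)
    return reduce(xor, blocks), traffic_at_destination

def ppr_style_repair(blocks):
    # Pairs of nodes combine their partial results in a binary tree,
    # so the destination receives only the final partial result.
    traffic_at_destination = 1
    level = list(blocks)
    while len(level) > 1:
        level = [reduce(xor, level[i:i + 2]) for i in range(0, len(level), 2)]
    return level[0], traffic_at_destination

blocks = [0b1010, 0b0110, 0b1111, 0b0001]
print(centralized_repair(blocks))   # same rebuilt block, 4 inbound transfers
print(ppr_style_repair(blocks))     # same rebuilt block, 1 inbound transfer
```

Both schemes reconstruct the same block; the difference is that the tree spreads the transfer load across many links instead of congesting the one link into the destination, which is the bottleneck the thesis identifies.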

    Knowledge discovery for moderating collaborative projects

    In today's global market environment, enterprises are increasingly turning to collaborative projects to leverage their resources, skills and expertise while addressing the challenges posed by diverse and competitive markets. Moderators, which are knowledge-based systems, have successfully been used to support collaborative teams by raising awareness of problems or conflicts. However, a moderator's functioning is limited by the knowledge it has about the team members, and knowledge acquisition, learning and updating of knowledge are the major challenges for a Moderator's implementation. To address these challenges, a Knowledge discOvery And daTa minINg inteGrated (KOATING) framework is presented that enables Moderators to learn continuously from the company's operational databases and to semi-automatically update the corresponding expert module. The architecture for the Universal Knowledge Moderator (UKM) shows how existing moderators can be extended to support global manufacturing. A method for designing and developing the Moderator's knowledge acquisition module, supporting manual and semi-automatic updates of knowledge, is documented using the Unified Modelling Language (UML). UML has been used to explore the static structure and dynamic behaviour of the system and to describe the system analysis, design and development aspects of the proposed KOATING framework. The proof of design is presented through a case study of a collaborative project in the form of a construction project supply chain. It is shown that Moderators can "learn" by extracting various kinds of knowledge from Post-Project Reports (PPRs) using different text mining techniques. Furthermore, it is proposed that knowledge-discovery-integrated moderators can be used to support and enhance collaboration by identifying appropriate business opportunities and the corresponding partners for the creation of a virtual organisation. A case study is presented in the context of a UK-based SME. Finally, the thesis concludes by summarizing its novelties and contributions and recommending future research
