3,684 research outputs found

    Modeling Investment Decisions in Gas Pipeline Networks: A Game-theoretic Mixed Complementarity Problem Formulation and its Application to the Chinese Gas Market

    Rapidly growing natural gas demand in China has motivated the investigation of several proposed pipelines from Russia and other CIS countries that could direct their flows to the Chinese natural gas market. The variety of proposed projects and the differences in their characteristics result in competition; therefore, a game-theoretic approach is used to assess the projects' economic prospects. A two-step procedure for the problem's game-theoretic formulation is proposed. In the instantaneous supply game, the players have the possibility to supply natural gas to the market. Each player determines his or her supply levels so as to maximize profits, taking into account the supplies of all other players. In the game of timing, the players decide when to construct their pipelines. Because of the growing demand for natural gas, the time of entering the market plays an important role. The players seek to maximize the net present value of their investment by choosing the pipelines' start of commercial operation from a discrete set of years. The equilibria obtained from the supply game are used to calculate the payoff matrices for each pipeline in the timing game. Both games are formulated and implemented as mixed complementarity problems, which allows the analysis of the competition of up to 5 pipeline projects simultaneously. Results for the Chinese natural gas market are presented and discussed with an emphasis on sensitivity analysis.
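The instantaneous supply game described in this abstract is essentially a Cournot competition, whose equilibrium conditions can be written as a complementarity problem. A minimal sketch, assuming linear inverse demand P(Q) = a - b*Q and constant marginal costs; all parameter values are hypothetical, not taken from the paper:

```python
# Toy sketch of the instantaneous supply game as a Cournot equilibrium,
# solved by sequential best-response iteration. At the fixed point each
# supply q_i satisfies the complementarity condition
#   0 <= q_i  perp  c_i + b*q_i - (a - b*Q) >= 0.

def cournot_equilibrium(a, b, costs, iters=1000):
    """Return equilibrium supplies for players with marginal costs `costs`."""
    q = [0.0] * len(costs)
    for _ in range(iters):
        for i, c in enumerate(costs):
            others = sum(q) - q[i]
            # Best response: maximize (a - b*(q_i + others) - c) * q_i,
            # truncated at zero (the complementarity part).
            q[i] = max(0.0, (a - c - b * others) / (2 * b))
    return q

# Three hypothetical pipelines with different unit supply costs; the
# high-cost third pipeline is priced out of the market at equilibrium.
q = cournot_equilibrium(a=100.0, b=1.0, costs=[10.0, 20.0, 60.0])
price = 100.0 - sum(q)
```

In the thesis the same logic is scaled up and embedded in a timing game, with the supply-game equilibria filling the payoff matrices.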

    Statistical Methods for Semiconductor Manufacturing

    In this thesis, techniques for non-parametric modeling, machine learning, filtering and prediction, and run-to-run control for semiconductor manufacturing are described. In particular, algorithms have been developed for two major application areas: Virtual Metrology (VM) systems and Predictive Maintenance (PdM) systems. Both technologies have proliferated in recent years in semiconductor fabrication plants (fabs) in order to increase productivity and decrease costs. VM systems aim at predicting quantities on the wafer, the main and basic product of the semiconductor industry, which may or may not be physically measurable. These quantities are usually 'costly' to measure in economic or temporal terms: the prediction is instead based on process variables and/or logistic information on the production, which are always available and can be used for modeling without further costs. PdM systems, on the other hand, aim at predicting when a maintenance action has to be performed. This approach to maintenance management, based like VM on statistical methods and on the availability of process/logistic data, contrasts with other classical approaches: Run-to-Failure (R2F), where no intervention is performed on the machine/process until a breakdown or specification violation occurs in production, and Preventive Maintenance (PvM), where maintenance is scheduled in advance based on time intervals or production iterations. Neither of these approaches is optimal: they do not ensure that breakdowns and wafer waste will not happen and, in the case of PvM, they may lead to unnecessary maintenance interventions that do not fully exploit the lifetime of the machine or the process.
The main goal of this thesis is to show, through several applications and feasibility studies, that statistical modeling algorithms and control systems can improve the efficiency, yield and profits of a manufacturing environment like the semiconductor one, where large amounts of data are recorded and can be employed to build mathematical models. We present several original contributions, both in the form of applications and methods. The introduction of this thesis gives an overview of the semiconductor fabrication process: the most common practices in Advanced Process Control (APC) systems and the major issues for engineers and statisticians working in this area are presented. Furthermore, we illustrate the methods and mathematical models used in the applications. We then discuss in detail the following applications: - A VM system for estimating the thickness deposited on the wafer by the Chemical Vapor Deposition (CVD) process, exploiting Fault Detection and Classification (FDC) data; in this tool a new clustering algorithm based on Information Theory (IT) elements is proposed, and the Least Angle Regression (LARS) algorithm is applied for the first time to VM problems. - A new VM module for a multi-step (CVD, Etching and Lithography) production line, where Multi-Task Learning techniques are employed. - A new machine learning algorithm based on kernel methods for the estimation of scalar outputs from time-series inputs. - Run-to-Run control algorithms that employ both physical measurements and statistical ones (coming from a VM system); this tool is based on IT elements. - A PdM module based on filtering and prediction techniques (Kalman filter, Monte Carlo methods), developed for predicting maintenance interventions in the Epitaxy process. - A PdM system based on Elastic Nets for maintenance prediction in an Ion Implantation tool.
Several of the aforementioned works have been developed in collaboration with major European semiconductor companies in the framework of the European project EU FP7 IMPROVE (Implementing Manufacturing science solutions to increase equiPment pROductiVity and fab pErformance); such collaborations are specified throughout the thesis, underlining the practical aspects of implementing the proposed technologies in a real industrial environment.
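As one concrete illustration of the filtering-based PdM idea in this abstract, a drifting health indicator can be tracked with a constant-velocity Kalman filter and extrapolated to a maintenance threshold. This is only a sketch: the model, noise levels and threshold are hypothetical, not taken from the thesis.

```python
import numpy as np

def kalman_drift(measurements, q=1e-4, r=0.25):
    """Track (level, per-cycle drift) of a noisy health indicator."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # level advances by the drift
    H = np.array([[1.0, 0.0]])               # only the level is observed
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.zeros((2, 1))
    P = np.eye(2)
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x.ravel()  # (estimated level, estimated drift per cycle)

# Synthetic indicator drifting 0.05 per production cycle under noise.
rng = np.random.default_rng(0)
truth = 0.05 * np.arange(200)
meas = truth + rng.normal(0, 0.5, 200)
level, drift = kalman_drift(meas)
cycles_left = (15.0 - level) / drift   # cycles until a threshold of 15.0
```

Scheduling maintenance when `cycles_left` falls below a safety margin is the basic alternative to both R2F and fixed-interval PvM.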

    MDA-Based Reverse Engineering


    Software Evolution for Industrial Automation Systems. Literature Overview


    Uncertainty handling in named entity extraction and disambiguation for informal text

    Social media content represents a large portion of all textual content appearing on the Internet. These streams of user-generated content (UGC) provide an opportunity and a challenge for media analysts to analyze huge amounts of new data and use them to infer and reason with new information. A main challenge of natural language is its ambiguity and vagueness. To automatically resolve ambiguity, the grammatical structure of sentences is used. However, in the informal language widely used in social media, the language becomes more ambiguous and thus more challenging for automatic understanding. Information Extraction (IE) is the research field that enables the use of unstructured text in a structured way. Named Entity Extraction (NEE) is a subtask of IE that aims to locate phrases (mentions) in the text that represent names of entities such as persons, organizations or locations, regardless of their type. Named Entity Disambiguation (NED) is the task of determining which person, place, event, etc. is referred to by a mention. The goal of this paper is to provide an overview of approaches that mimic the human way of recognizing and disambiguating named entities, especially for domains that lack formal sentence structure. The proposed methods open the door to more sophisticated applications based on users' contributions on social media. We propose a robust combined framework for NEE and NED in semi-formal and informal text. The achieved robustness has been proven to be valid across languages and domains and to be independent of the selected extraction and disambiguation techniques. It is also shown to be robust against the informality of the language used. We have discovered a reinforcement effect and exploited it in a technique that improves extraction quality by feeding back disambiguation results. We also present a method of handling the uncertainty involved in extraction to improve the disambiguation results.
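The reinforcement effect this abstract describes, feeding disambiguation results back to the extractor, can be illustrated with a toy pipeline. The knowledge base, matcher and overlap scoring below are hypothetical stand-ins for the statistical models the paper actually uses:

```python
# Toy NEE + NED loop: an extractor proposes candidate mentions, a
# disambiguator links them to a tiny knowledge base, and the link
# confidence is fed back to accept or drop each candidate.

KB = {
    "paris": [("Paris, France", {"france", "eiffel", "city"}),
              ("Paris Hilton", {"actress", "celebrity", "show"})],
    "texas": [("Texas, USA", {"usa", "state", "austin"})],
}

def extract_candidates(tokens):
    # Naive extractor: any token matching a KB surface form is a candidate.
    return [t for t in tokens if t.lower() in KB]

def disambiguate(mention, context):
    # Score each KB entity by context-word overlap (a crude stand-in
    # for the paper's disambiguation models).
    best, score = None, 0
    for entity, ctx_words in KB[mention.lower()]:
        s = len(ctx_words & context)
        if s > score:
            best, score = entity, s
    return best, score

def extract_and_link(tokens, min_score=1):
    context = {t.lower() for t in tokens}
    linked = {}
    for m in extract_candidates(tokens):
        entity, score = disambiguate(m, context)
        if score >= min_score:   # feedback: keep only well-linked mentions
            linked[m] = entity
    return linked

result = extract_and_link("We visited Paris and the Eiffel tower in France".split())
```

Here the context words "eiffel" and "france" let the disambiguator pick the city over the celebrity, and that confidence confirms the extraction.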

    Framework for collaborative knowledge management in organizations

    Nowadays, organizations are pushed to speed up the rate of industrial transformation toward high-value products and services. The capability to respond agilely to new market demands has become a strategic pillar for innovation, and knowledge management can support organizations in achieving that goal. However, current knowledge management approaches tend to be overly complex or too academic, with interfaces that are difficult to manage, even more so when cooperative handling is required. In an ideal framework, both tacit and explicit knowledge management should be addressed to achieve knowledge handling with precise and semantically meaningful definitions. Moreover, with the increase in Internet usage, the amount of available information has exploded, driving progress in the creation of mechanisms to retrieve useful knowledge from the huge number of existing information sources. However, the same knowledge representation of a thing can mean different things to different people and applications. Contributing in this direction, this thesis proposes a framework capable of gathering the knowledge held by domain experts and domain sources through a knowledge management system and transforming it into explicit ontologies. This enables the building of tools with advanced reasoning capacities, with the aim of supporting enterprises' decision-making processes. The author also addresses the problem of knowledge transfer within and among organizations, through a module (part of the proposed framework) for establishing a domain lexicon, whose purpose is to represent and unify the understanding of the semantics used in the domain.
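To make the "explicit ontology with reasoning capacities" idea in this abstract concrete, here is a minimal sketch: subject-predicate-object triples with transitive subclass inference. The class names and the individual are invented for illustration only:

```python
# Tiny explicit ontology as a set of (subject, predicate, object) triples,
# with a simple reasoner that infers all types of an individual by
# following subClassOf links transitively.

TRIPLES = {
    ("Drill", "subClassOf", "MachineTool"),
    ("MachineTool", "subClassOf", "Equipment"),
    ("d42", "instanceOf", "Drill"),
}

def superclasses(cls, triples):
    """All classes reachable from `cls` via subClassOf."""
    out, frontier = set(), [cls]
    while frontier:
        c = frontier.pop()
        for s, p, o in triples:
            if s == c and p == "subClassOf" and o not in out:
                out.add(o)
                frontier.append(o)
    return out

def types_of(individual, triples):
    """Direct types plus everything implied by the class hierarchy."""
    direct = {o for s, p, o in triples if s == individual and p == "instanceOf"}
    return direct | {sup for c in direct for sup in superclasses(c, triples)}
```

A query for `types_of("d42", TRIPLES)` then yields the inferred as well as the asserted classes, which is the kind of reasoning the proposed framework would expose to decision-support tools.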