184 research outputs found

    Advanced Process Monitoring for Industry 4.0

    This book reports recent advances in Process Monitoring (PM) that address the many challenges raised by the new production systems, sensors, and "extreme data" conditions that emerged with Industry 4.0. Concepts such as digital twins and deep learning are brought to the PM arena, pushing forward the capabilities of existing methodologies to handle more complex scenarios. The evolution of classical paradigms such as Latent Variable modeling, Six Sigma, and FMEA is also covered. Applications span a wide range of domains such as microelectronics, semiconductors, chemicals, materials, and agriculture, as well as the monitoring of rotating equipment, combustion systems, and membrane separation processes.

    Virtual metrology for semiconductor manufacturing applications

    To remain competitive, semiconductor manufacturers must achieve high production standards at a reasonable cost. For reasons of both cost and execution time, a quality-control strategy based on complete measurement of the product is not feasible; tests are performed on a small sample of the original data. The goal of this thesis is the study and implementation, using nonlinear modeling methodologies, of a Virtual Metrology algorithm to support process control in semiconductor manufacturing. Indeed, an estimate of the measurements that are not actually performed (virtual measurements) can represent a first step toward building increasingly refined and efficient process-control and quality-control systems. From an operational standpoint, the objective is to provide the most accurate possible estimate of the critical dimensions upstream of the etching step, starting from the available data (including measurements from the lithography and deposition steps and process data, where available). The state-of-the-art statistical techniques analyzed in this work include multilayer feedforward networks. Comparison and validation of the algorithms under examination were made possible by data sets provided by a semiconductor manufacturer. In conclusion, this thesis represents a first step toward the creation of an advanced and flexible process-control and quality-control system whose ultimate aim is to improve production quality.
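The thesis applies multilayer feedforward networks to virtual metrology. As a minimal illustrative sketch only (the thesis's dataset, features, and architecture are not reproduced here), the following pure-Python one-hidden-layer regression network is trained by stochastic gradient descent on an invented, synthetic stand-in for a critical-dimension target:

```python
import math
import random

def mlp_forward(x, params):
    """One-hidden-layer network: y = w2 . tanh(w1 * x + b1) + b2."""
    w1, b1, w2, b2 = params
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(len(w1))]
    y = sum(w2[j] * h[j] for j in range(len(w2))) + b2
    return h, y

def train(data, hidden=4, lr=0.05, epochs=300, seed=0):
    """Fit the network to (input, target) pairs with plain SGD."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, t in data:
            h, y = mlp_forward(x, (w1, b1, w2, b2))
            err = y - t  # derivative of 0.5 * (y - t)**2 w.r.t. y
            for j in range(hidden):
                dpre = err * w2[j] * (1.0 - h[j] ** 2)  # back through tanh
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * dpre * x
                b1[j] -= lr * dpre
            b2 -= lr * err
    return w1, b1, w2, b2

def mse(data, params):
    return sum((mlp_forward(x, params)[1] - t) ** 2 for x, t in data) / len(data)

# Synthetic stand-in: a linear relation between one upstream measurement
# and the post-etch critical dimension (purely illustrative values).
data = [(x / 10.0, 0.5 * (x / 10.0) + 0.2) for x in range(-10, 11)]
params = train(data)
```

In the thesis, such models are trained on lithography, deposition, and process data to estimate dimensions that are never physically measured; this sketch shows only the regression mechanics.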

    30th International Conference on Information Modelling and Knowledge Bases

    Information modelling is becoming an increasingly important topic for researchers, designers, and users of information systems. The amount and complexity of information itself, the number of abstraction levels of information, and the size of databases and knowledge bases are continuously growing. Conceptual modelling is one of the sub-areas of information modelling. The aim of this conference is to bring together experts from different areas of computer science and other disciplines who have a common interest in understanding and solving problems in information modelling and knowledge bases, as well as in applying the results of research to practice. We also aim to recognize and study new areas of modelling and knowledge bases to which more attention should be paid. Therefore philosophy and logic, cognitive science, knowledge management, linguistics, and management science are relevant areas, too. The conference features three categories of presentations: full papers, short papers, and position papers.

    Machine Learning for Cyber Physical Systems

    This open-access proceedings volume presents new approaches to, experiences with, and visions of Machine Learning for Cyber Physical Systems. It contains selected papers from the fifth international conference ML4CPS – Machine Learning for Cyber Physical Systems, held in Berlin, March 12-13, 2020. Cyber Physical Systems are characterized by their ability to adapt and to learn: they analyze their environment and, based on observations, learn patterns, correlations, and predictive models. Typical applications are condition monitoring, predictive maintenance, image processing, and diagnosis. Machine Learning is the key technology for these developments.

    Development of Machine Learning based approach to predict fuel consumption and maintenance cost of Heavy-Duty Vehicles using diesel and alternative fuels

    The transportation sector is one of the major sources of human-made greenhouse gases (GHG), namely carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O), with heavy-duty vehicles (HDV) contributing about 27% of the sector's total. In addition to accelerating the rise in global temperature, airborne pollutants from diesel vehicles also present a risk to human health. Even a small improvement in energy savings for the century-old, mature diesel technology could therefore have a significant impact on reducing greenhouse gas emissions. With the increasing focus on reducing emissions and operating costs, there is a need for efficient and effective methods to predict fuel consumption, maintenance costs, and total cost of ownership for heavy-duty vehicles. Every improvement achieved in this direction directly reduces the total cost of ownership for a fleet owner, bringing economic benefit and reducing oil imports for the economy. Motivated by these goals, the present research integrates data-driven techniques using machine learning algorithms applied to historical data collected from medium- and heavy-duty vehicles. The primary motivation is to address the challenges faced by the medium- and heavy-duty transportation industry in reducing emissions and operating costs. A machine learning-based approach can provide a more accurate and reliable prediction of fuel consumption and maintenance costs for medium- and heavy-duty vehicles. This, in turn, can help fleet owners and operators make informed decisions about fuel type, route planning, and vehicle maintenance, leading to reduced emissions and lower operating costs. Artificial Intelligence (AI) in the automotive industry has witnessed massive growth in the last few years.
Heavy-duty transportation research and commercial fleets are adopting machine learning (ML) techniques for applications such as autonomous driving, fuel economy and emissions, and predictive maintenance. However, to perform well, modern AI methods require a large amount of high-quality, diverse, and well-balanced data, which is still not widely available in the automotive industry, especially for medium- and heavy-duty trucks. The research methodology involves the collection of data at the West Virginia University (WVU) Center for Alternative Fuels, Engines, and Emissions (CAFEE) lab in collaboration with fleet management companies operating medium- and heavy-duty vehicles on diesel and alternative fuels, including compressed natural gas, liquefied propane gas, hydrogen fuel cells, and electric vehicles. The collected data is used to develop machine learning models that can accurately predict fuel consumption and maintenance costs from parameters such as vehicle weight, speed, route, fuel type, and engine type. The expected outcomes of this research are 1) a neural network model that accurately predicts the fuel consumed by a vehicle per trip given parameters such as vehicle speed, engine speed, and engine load, and 2) machine learning models that estimate the average cost-per-mile from the historical maintenance data of goods movement trucks, delivery trucks, school buses, transit buses, refuse trucks, and vocational trucks using fuels such as diesel, natural gas, and propane. Due to large variations in maintenance data across vehicle activities and fuel types, regular machine learning and ensemble models do not generalize well. Hence, a mixed-effect random forest (MERF) is developed to capture the fixed and random effects that arise from the varying duty cycles of vocational heavy-duty trucks performing different tasks.
The developed model predicts the average maintenance cost given the vocation, fuel type, and region of operation, making it easier for fleet companies to make procurement decisions based on their requirements and total cost of ownership. Both models can provide insights into how various parameters and route planning affect the total cost of ownership through fuel, maintenance, and repair costs. In conclusion, a machine learning-based approach can provide a reliable and efficient way to predict the fuel consumption and maintenance costs that drive the total cost of ownership for heavy-duty vehicles. This, in turn, can help the transportation industry reduce emissions and operating costs, contributing to a more sustainable and efficient transportation system. These models can be refined with more training data and deployed in a real-time environment such as a cloud service or an onboard vehicle system, according to company requirements.
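A mixed-effect model alternates between fitting a fixed-effect learner on offset-corrected targets and re-estimating per-group random effects from the residuals. The toy sketch below illustrates only that alternating scheme, substituting a least-squares line for MERF's random forest and using a random intercept per vocation group; the group names and cost numbers are invented for illustration:

```python
def fit_mixed_effects(rows, iters=10):
    """rows: (group, x, y) triples. Alternately (1) fit a fixed-effect
    line to y minus the current group offsets, then (2) re-estimate each
    group's offset (random intercept) from the residuals - the loop
    structure MERF uses, with a line standing in for the random forest."""
    b = {g: 0.0 for g, _, _ in rows}  # random intercept per group
    slope = intercept = 0.0
    for _ in range(iters):
        # 1) fixed-effect fit on offset-corrected targets
        xs = [x for _, x, _ in rows]
        ys = [y - b[g] for g, _, y in rows]
        n = float(len(rows))
        mx = sum(xs) / n
        my = sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (yy - my) for x, yy in zip(xs, ys))
        slope = sxy / sxx
        intercept = my - slope * mx
        # 2) re-estimate each group's random intercept from residuals
        resid = {}
        for g, x, y in rows:
            resid.setdefault(g, []).append(y - (slope * x + intercept))
        b = {g: sum(r) / len(r) for g, r in resid.items()}
    return slope, intercept, b

# Invented example: cost-per-mile vs. mileage with a per-vocation offset.
rows = [("refuse", x, 2.0 * x + 3.0 + 1.0) for x in range(5)]
rows += [("transit", x, 2.0 * x + 3.0 - 1.0) for x in range(5)]
slope, intercept, offsets = fit_mixed_effects(rows)
```

The recovered offsets play the role of MERF's random effects: they absorb the systematic cost differences between vocations that a single shared model would otherwise mistake for noise.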

    Semantic Exploration of Text Documents with Multi-Faceted Metadata Employing Word Embeddings: The Patent Landscaping Use Case

    The volume of publications documenting scientific progress grows continuously, which demands technological tools for the efficient analysis of these works. Such documents are characterized not only by their textual content but also by a set of metadata attributes of various kinds, including relationships between documents. This complexity makes the development of a visualization approach that supports the exploration of written works a necessary and challenging task. Patents exemplify the described problem, because they are examined in large volumes by companies seeking competitive advantage or steering their own research and development. This work proposes an exploratory visualization approach based on metadata and semantic embeddings of patent content. Word embeddings from a pretrained Word2vec model are used to determine similarities between documents. In addition, hierarchical clustering methods help provide several levels of semantic detail through extracted relevant keywords. The presented visualization approach is likely the first to combine semantic embeddings with hierarchical clustering while supporting diverse interaction types based on metadata attributes. The approach employs user-interaction techniques such as brushing and linking, focus plus context, details-on-demand, and semantic zoom. This makes it possible to discover relationships that emerge from the interplay of 1) the distributions of metadata values and 2) positions in the semantic space. The visualization concept was shaped by user interviews and evaluated in a think-aloud study with patent experts.
During the evaluation, the presented approach was compared with a baseline approach based on TF-IDF vectors. The usability study showed that the visualization metaphors and interaction techniques were appropriately chosen. It further showed that the user interface played a considerably larger role in the participants' impressions than the way the patents were placed and clustered. In fact, both approaches yielded very similar extracted cluster keywords; nevertheless, with the semantic approach the clusters were placed more intuitively and separated more clearly. The proposed visualization layout, as well as the interaction techniques and semantic methods, can also be extended to other types of written works, e.g., scientific publications. Other embedding methods such as Paragraph2vec [61] or BERT [32] could additionally be used to exploit contextual dependencies in text beyond the word level.
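The core pipeline described above, document vectors averaged from word embeddings followed by hierarchical (agglomerative) clustering, can be sketched in a few lines of pure Python. The toy two-dimensional word vectors below merely stand in for a pretrained Word2vec model, and the four-document example is invented:

```python
import math

# Toy 2-d word vectors standing in for a pretrained Word2vec model.
EMB = {
    "engine": (1.0, 0.1), "motor": (0.9, 0.2), "turbine": (0.95, 0.15),
    "gene": (0.1, 1.0), "protein": (0.2, 0.9), "dna": (0.15, 0.95),
}

def doc_vector(words):
    """Average the word vectors of a document (ignoring unknown words)."""
    vs = [EMB[w] for w in words if w in EMB]
    return tuple(sum(c) / len(vs) for c in zip(*vs))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def agglomerate(docs, k):
    """Average-linkage agglomerative clustering down to k clusters."""
    vecs = [doc_vector(d) for d in docs]
    clusters = [[i] for i in range(len(docs))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                sim = sum(cosine(vecs[a], vecs[b])
                          for a in clusters[i] for b in clusters[j])
                sim /= len(clusters[i]) * len(clusters[j])
                if best is None or sim > best[0]:
                    best = (sim, i, j)
        _, i, j = best
        clusters[i] += clusters[j]  # merge the most similar pair
        del clusters[j]
    return clusters

docs = [["engine", "motor"], ["turbine", "engine"],
        ["gene", "protein"], ["dna", "gene"]]
```

The dissertation additionally cuts the resulting hierarchy at several levels and extracts keywords per cluster for the semantic-zoom interaction; this sketch stops at the merging step.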