
    Toward a process theory of entrepreneurship: revisiting opportunity identification and entrepreneurial actions

    This dissertation studies the early development of new ventures and small businesses and the entrepreneurship process from initial ideas to viable ventures. I unpack the micro-foundations of entrepreneurial actions and new ventures’ investor communications through quality signals to finance their growth path. This dissertation includes two qualitative papers and one quantitative study. The qualitative papers employ an inductive multiple-case approach and include seven medical equipment manufacturers (new ventures) in a nascent market context (the mobile health industry) across six U.S. states, and a secondary data analysis, to understand the emergence of opportunities and the early development of new ventures. The quantitative research chapter includes 770 IPOs in the manufacturing industries in the U.S. and investigates the legitimation strategies of young ventures to gain resources from targeted resource-holders.

    Prognostic Algorithms for Condition Monitoring and Remaining Useful Life Estimation

    To enable the benefits of a truly condition-based maintenance philosophy to be realised, robust, accurate and reliable algorithms, which provide maintenance personnel with the necessary information to make informed maintenance decisions, will be key. This thesis focuses on the development of such algorithms, with a focus on semiconductor manufacturing and wind turbines. An introduction to condition-based maintenance is presented which reviews different types of maintenance philosophies and describes the potential benefits which a condition-based maintenance philosophy will deliver to operators of critical plant and machinery. The issues and challenges involved in developing condition-based maintenance solutions are discussed, and a review of previous approaches and techniques in fault diagnostics and prognostics is presented. The development of a condition monitoring system for dry vacuum pumps used in semiconductor manufacturing is presented. A notable feature is that upstream process measurements from the wafer processing chamber were incorporated in the development of a solution. In general, semiconductor manufacturers do not make such information available, and this study identifies the benefits of information sharing in the development of condition monitoring solutions within the semiconductor manufacturing domain. The developed solution provides maintenance personnel with the ability to identify, quantify, track and predict the remaining useful life of pumps suffering from degradation caused by pumping large volumes of corrosive fluorine gas. A comprehensive condition monitoring solution for thermal abatement systems is also presented. As part of this work, a multiple model particle filtering algorithm for prognostics is developed and tested. The capabilities of the proposed prognostic solution for addressing the uncertainty challenges in predicting the remaining useful life of abatement systems, subject to uncertain future operating loads and conditions, are demonstrated. Finally, a condition monitoring algorithm for the main bearing on large utility-scale wind turbines is developed. The developed solution exploits data collected by onboard supervisory control and data acquisition (SCADA) systems in wind turbines. As a result, the developed solution can be integrated into existing monitoring systems at no additional cost. The potential for the application of multiple model particle filtering algorithms to wind turbine prognostics is also demonstrated.
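    Particle-filter prognostics of the kind described above maintains a population of hypotheses about the hidden degradation state and extrapolates each to a failure threshold. The single-model sketch below is a hedged illustration only: the linear degradation model, noise levels and threshold are assumed for the example and are not the models from the thesis.

```python
import math
import random

def particle_filter_rul(observations, threshold, n_particles=500, seed=0):
    """Toy particle filter for RUL estimation (illustrative, single model)."""
    rng = random.Random(seed)
    # Each particle carries a degradation state x and a degradation rate r.
    particles = [(0.0, rng.uniform(0.5, 1.5)) for _ in range(n_particles)]
    for z in observations:
        # Propagate: advance each particle by its rate plus process noise.
        particles = [(x + r + rng.gauss(0, 0.1), r) for x, r in particles]
        # Weight by Gaussian likelihood of the noisy observation z.
        weights = [math.exp(-((x - z) ** 2) / 0.5) + 1e-12 for x, _ in particles]
        # Resample proportionally to the weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    # RUL per particle: time until its state crosses the failure threshold.
    ruls = sorted(max(0.0, (threshold - x) / r) for x, r in particles)
    return ruls[len(ruls) // 2]  # median as the point estimate

# Ten noisy-free observations of a unit degrading at ~1.0/step toward threshold 50.
obs = [float(i) for i in range(1, 11)]
rul = particle_filter_rul(obs, threshold=50.0)
```

    The spread of the per-particle RUL values, not just the median, is what carries the uncertainty information that the thesis emphasises; a multiple model variant would additionally maintain particles under several candidate degradation models.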

    Relative Control and Management Philosophy


    Big-Data Solutions for Manufacturing Health Monitoring and Log Analytics

    Modern semiconductor manufacturing is a complex process with a multitude of software applications. This application landscape has to be constantly monitored, since the communication and access patterns provide important insights. Because of the high event rates of the equipment log data stream in modern factories, big-data tools are required for scalable state and history analytics. The choice of suitable big-data solutions and their technical realization remains a challenging task. This thesis compares big-data architectures and explores solutions for log-data ingest, enrichment, analytics and visualization. Based on the use cases and requirements of developers working in this field, a comparison of a custom assembled stack and a complete solution is made. Since the complete stack is the preferable solution, Datadog, Grafana Loki and the Elastic 8 Stack are selected for a more detailed study. These three systems are implemented and compared based on the requirements. All three systems are well suited for big-data logging and fulfill most of the requirements, but show different capabilities when implemented and used.
    Table of contents: 1 Introduction 1.1 Motivation 1.2 Structure 2 Fundamentals and Prerequisites 2.1 Logging 2.1.1 Log level 2.1.2 CSFW log 2.1.3 SECS log 2.2 Existing system and data 2.2.1 Production process 2.2.2 Log data in numbers 2.3 Requirements 2.3.1 Functional requirements 2.3.2 System requirements 2.3.3 Quality requirements 2.4 Use Cases 2.4.1 Finding specific communication sequence 2.4.2 Watching system changes 2.4.3 Comparison with expected production path 2.4.4 Enrichment with metadata 2.4.5 Decoupled log analysis 3 State of the Art and Potential Software Stacks 3.1 State of the art software stacks 3.1.1 IoT flow monitoring system 3.1.2 Big-Data IoT monitoring system 3.1.3 IoT Cloud Computing Stack 3.1.4 Big-Data Logging Architecture 3.1.5 IoT Energy Conservation System 3.1.6 Similarities of the architectures 3.2 Selection of software stack 3.2.1 Components for one layer 3.2.2 Software solutions for the stack 4 Analysis and Implementation 4.1 Full stack vs. a custom assembled stack 4.1.1 Drawbacks of a custom assembled stack 4.1.2 Advantages of a complete solution 4.1.3 Exclusion of a custom assembled stack 4.2 Selection of full stack solutions 4.2.1 Elastic vs. Amazon 4.2.2 Comparison of Cloud-Only-Solutions 4.2.3 Comparison of On-Premise-Solutions 4.3 Implementation of selected solutions 4.3.1 Datadog 4.3.2 Grafana Loki Stack 4.3.3 Elastic 8 Stack 5 Comparison 5.1 Comparison of components 5.1.1 Collection 5.1.2 Analysis 5.1.3 Visualization 5.2 Comparison of requirements 5.2.1 Functional requirements 5.2.2 System requirements 5.2.3 Quality requirements 5.3 Results 6 Conclusion and Future Work 6.1 Conclusion 6.2 Future Work
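    As a toy illustration of the ingest-and-enrichment stage that such logging stacks implement, the sketch below parses one hypothetical equipment log line into structured fields and joins in static tool metadata before it would be shipped to a log store. The line format, field names and metadata are invented for the example; they are not the CSFW or SECS log formats studied in the thesis.

```python
import re

# Hypothetical equipment log line (invented format for illustration).
LINE = "2024-05-01T12:00:00Z EQP7 SEND S6F11 lot=LOT42 step=ETCH"

PATTERN = re.compile(
    r"(?P<ts>\S+) (?P<tool>\S+) (?P<dir>SEND|RECV) (?P<msg>\S+) (?P<rest>.*)"
)

# Static metadata an enrichment stage might join in (assumed values).
TOOL_META = {"EQP7": {"area": "etch", "vendor": "acme"}}

def parse_and_enrich(line):
    # Extract the fixed positional fields.
    event = PATTERN.match(line).groupdict()
    # Split the trailing key=value pairs into structured fields.
    event.update(dict(pair.split("=", 1) for pair in event.pop("rest").split()))
    # Enrich with tool metadata keyed on the tool identifier.
    event.update(TOOL_META.get(event["tool"], {}))
    return event

doc = parse_and_enrich(LINE)
```

    In Datadog, Loki or Elastic, the same parse/enrich step would typically live in the collection agent or an ingest pipeline rather than in application code.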

    Machine Learning-based Predictive Maintenance for Optical Networks

    Optical networks provide the backbone of modern telecommunications by connecting the world faster than ever before. However, such networks are susceptible to several failures (e.g., optical fiber cuts, malfunctioning optical devices), which might result in degradation in the network operation, massive data loss, and network disruption. It is challenging to accurately and quickly detect and localize such failures due to the complexity of such networks, the time required to identify the fault and pinpoint it using conventional approaches, and the lack of proactive efficient fault management mechanisms. Therefore, it is highly beneficial to perform fault management in optical communication systems in order to reduce the mean time to repair, to meet service level agreements more easily, and to enhance the network reliability. In this thesis, the aforementioned challenges and needs are tackled by investigating the use of machine learning (ML) techniques for implementing efficient proactive fault detection, diagnosis, and localization schemes for optical communication systems. In particular, the adoption of ML methods for solving the following problems is explored:
    - Degradation prediction of semiconductor lasers,
    - Lifetime (mean time to failure) prediction of semiconductor lasers,
    - Remaining useful life (the length of time a machine is likely to operate before it requires repair or replacement) prediction of semiconductor lasers,
    - Optical fiber fault detection, localization, characterization, and identification for different optical network architectures,
    - Anomaly detection in optical fiber monitoring.
    Such ML approaches outperform the conventionally employed methods for all the investigated use cases by achieving better prediction accuracy and earlier prediction or detection capability.
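    A simple statistical baseline can illustrate what the anomaly-detection task on a fiber monitoring trace looks like: flag samples that deviate from a rolling-window baseline by more than k standard deviations. This stand-in is deliberately much simpler than the ML detectors investigated in the thesis, and the trace and parameters are invented for the example.

```python
import statistics

def detect_anomalies(trace, window=20, k=3.0):
    """Flag points deviating from the rolling baseline by more than k sigma."""
    flags = []
    for i, x in enumerate(trace):
        # Baseline: the preceding window of samples (or the point itself at i=0).
        base = trace[max(0, i - window):i] or [x]
        mu = statistics.fmean(base)
        sigma = statistics.pstdev(base) or 1e-9  # guard a perfectly flat baseline
        flags.append(abs(x - mu) > k * sigma)
    return flags

# Flat monitoring trace with a sudden spike at index 30 (e.g. a fiber fault).
trace = [1.0] * 30 + [8.0] + [1.0] * 9
flags = detect_anomalies(trace)
```

    An ML detector earns its keep where the healthy baseline drifts or is multivariate, which is exactly the regime the thesis targets.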

    Machine learning and its applications in reliability analysis systems

    In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving RAs' performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both neural network learning and symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches have been discussed in creating the RAs. According to the results of our survey, we suggest that currently the best design of RAs is to embed model-based RAs, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still improvements which can be made through the application of Machine Learning. By implanting a 'learning element', the MORA will become a learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude our thesis, we propose an architecture for La MORA.

    THE DEVELOPMENT OF A MECHATRONICS AND MATERIAL HANDLING COURSE: LABORATORY EXPERIMENTS AND PROJECTS

    Mechatronic systems integrate technologies from a variety of engineering disciplines to create solutions to challenging industrial problems. The material handling industry utilizes mechatronics to move, track, and manipulate items in factories and distribution centers. Material handling systems, because of their use of programmable logic controllers (PLCs), PLC networks, industrial robotics, and other mechatronic elements, are a natural choice for a college instructional environment. This thesis offers insight and guidance for mechatronic activities introduced in a laboratory setting. A series of eight laboratory experiments has been created to introduce PLCs, robotics, electric circuits, and data acquisition fundamentals. In-depth case studies synthesize the technologies and interpersonal skills together to create a flexible material handling system. Student response to the course and laboratory material was exceptional. A pre- and post-course questionnaire was administered which covered topics such as teamwork, human factors, business methods, and various engineering-related questions. Quantitative scores resulting from these questionnaires showed a marked improvement by students, especially with regard to technical/engineering questions. The responses from students generally indicated excitement about the course material and a thorough understanding of the various syllabus topics. In this thesis, the multi-disciplinary mechatronics (and material handling systems) laboratory will be presented. An in-depth examination of each laboratory exercise will be offered, as well as a discussion of two material handling case studies. The appendices contain the PLC and robot code for an order fulfillment case study.

    An Integrated Fuzzy Inference Based Monitoring, Diagnostic, and Prognostic System

    To date, the majority of the research related to the development and application of monitoring, diagnostic, and prognostic systems has been exclusive in the sense that only one of the three areas is the focus of the work. While previous research advances each of the respective fields, the end result is a grab bag of techniques that address each problem independently. Also, the new field of prognostics is lacking in the sense that few methods have been proposed that produce estimates of the remaining useful life (RUL) of a device or can be realistically applied to real-world systems. This work addresses both problems by developing the nonparametric fuzzy inference system (NFIS), which is adapted for monitoring, diagnosis, and prognosis, and then proposing the path classification and estimation (PACE) model, which can be used to predict the RUL of a device that does or does not have a well-defined failure threshold. To test and evaluate the proposed methods, they were applied to detect, diagnose, and prognose faults and failures in the hydraulic steering system of a deep oil exploration drill. The monitoring system implementing an NFIS predictor and sequential probability ratio test (SPRT) detector produced detection rates comparable to a monitoring system implementing an autoassociative kernel regression (AAKR) predictor and SPRT detector, specifically 80% vs. 85% for the NFIS and AAKR monitors respectively. It was also found that the NFIS monitor produced fewer false alarms. Next, the monitoring system outputs were used to generate symptom patterns for k-nearest neighbor (kNN) and NFIS classifiers that were trained to diagnose different fault classes. The NFIS diagnoser was shown to significantly outperform the kNN diagnoser, with overall accuracies of 96% vs. 89% respectively. Finally, the PACE model implementing the NFIS was used to predict the RUL for different failure modes.
    The errors of the RUL estimates produced by the PACE-NFIS prognosers ranged from 1.2-11.4 hours with 95% confidence intervals (CI) from 0.67-32.02 hours, which are significantly better than the population-based prognoser estimates, with errors of ~45 hours and 95% CIs of ~162 hours.
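    The SPRT detector used in the monitoring systems above decides sequentially between a healthy and a faulted residual distribution. A minimal sketch, assuming Gaussian residuals with illustrative means and error rates (not the thesis's actual parameters):

```python
import math

def sprt(residuals, m0=0.0, m1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald SPRT on residuals: H0 mean m0 (healthy) vs H1 mean m1 (fault)."""
    upper = math.log((1 - beta) / alpha)   # declare a fault above this bound
    lower = math.log(beta / (1 - alpha))   # accept healthy below this bound
    llr = 0.0
    for i, r in enumerate(residuals):
        # Log-likelihood ratio increment for one Gaussian observation.
        llr += ((r - m0) ** 2 - (r - m1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "fault", i
        if llr <= lower:
            llr = 0.0  # accept H0 and restart the test, as is common in monitoring
    return "healthy", len(residuals) - 1

# Five healthy residuals followed by a sustained shift toward the fault mean.
decision, at = sprt([0.0] * 5 + [1.2] * 12)
```

    Because the statistic accumulates evidence over samples, the SPRT trades a short detection delay for controlled false- and missed-alarm rates (alpha, beta), which is why it pairs naturally with NFIS or AAKR residual generators.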

    Schedule performance measurement based on statistical process control charts

    In a job-shop manufacturing environment, achieving a schedule that is on target is difficult due to the dynamism of factors affecting the system, and this makes schedule performance measurement systems hard to design and implement. In the present paper, Statistical Process Control (SPC) charts are directly applied to a scheduling process for the purpose of objectively measuring schedule performance. SPC charts provide an objective and timely approach to designing, implementing and monitoring schedule performance. However, the use of SPC charts requires an appreciation of the conditions for applying raw data to them. In the present paper, the Shewhart individuals control chart is applied to monitor the deviations of actual process times from the scheduled process times for each job on a process machine. Individuals control charts are highly sensitive to non-normal data, which increases the rate of false alarms, but this can be avoided using data transformation operations such as the Box-Cox transformation. Statistical Process Control charts have not previously been used to measure schedule performance in a job-shop setting, so this paper uniquely contributes to research in this area. In addition, using our proposed methodology enables a scheduler to monitor how an optimal schedule has performed on the shop floor, study the variations between planned and actual outcomes, seek ways of eliminating these variations, and check whether process improvements have been effective.
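    The individuals-chart limits behind this approach are computed from the average moving range: the center line is the mean, and the control limits sit at mean ± 2.66 × (mean moving range), where 2.66 = 3/d2 with d2 ≈ 1.128 for moving ranges of size two. A short sketch with invented schedule-deviation data:

```python
import statistics

def individuals_limits(data):
    """LCL, center line and UCL for a Shewhart individuals (I-MR) chart."""
    # Moving ranges between consecutive observations.
    mr = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = statistics.fmean(mr)
    center = statistics.fmean(data)
    # 2.66 = 3 / d2, with d2 = 1.128 for moving ranges of size 2.
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

# Illustrative deviations (hours) of actual from scheduled process times.
devs = [0.2, -0.1, 0.3, 0.0, -0.2, 0.1, 0.2, -0.3, 0.1, 0.0]
lcl, cl, ucl = individuals_limits(devs)
```

    A new deviation falling outside (lcl, ucl) would signal that the schedule is drifting for an assignable cause rather than common-cause variation; as the paper notes, a Box-Cox transformation of the raw deviations may be needed first if they are markedly non-normal.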

    Hierarchical Control of the ATLAS Experiment

    Control systems at High Energy Physics (HEP) experiments are becoming increasingly complex, mainly due to the size, complexity and data volume associated with the front-end instrumentation. In particular, this becomes visible for the ATLAS experiment at the LHC accelerator at CERN. ATLAS will be the largest particle detector ever built, the result of an international collaboration of more than 150 institutes. The experiment is composed of 9 different specialized sub-detectors that perform different tasks and have different requirements for operation. The system in charge of the safe and coherent operation of the whole experiment is called the Detector Control System (DCS). This thesis presents the integration of the ATLAS DCS into a global control tree following the natural segmentation of the experiment into sub-detectors and smaller sub-systems. The integration of the many different systems composing the DCS includes issues such as back-end organization, process model identification, fault detection, synchronization with external systems, automation of processes, and supervisory control. Distributed control modeling is applied to the widely distributed devices that coexist in ATLAS. Thus, control is achieved by means of many distributed, autonomous and co-operative entities that are hierarchically organized and follow a finite-state machine logic. The key to the integration of these systems lies in the so-called Finite State Machine tool (FSM), which is based on two main enabling technologies: a SCADA product, and the State Manager Interface (SMI++) toolkit. The SMI++ toolkit has already been used with success in two previous HEP experiments, providing functionality such as an object-oriented language, finite-state machine logic, an interface to develop expert systems, and a platform-independent communication protocol.
This functionality is then used at all levels of the experiment operation process, ranging from overall supervision down to device integration, enabling the overall sequencing and automation of the experiment. Although the experience gained in the past is an important input for the design of the detector's control hierarchy, further requirements arose due to the complexity and size of ATLAS. In total, around 200,000 channels will be supervised by the DCS, and the final control tree will be hundreds of times bigger than any of its antecedents. Thus, in order to apply a hierarchical control model to the ATLAS DCS, a common approach has been proposed to ensure homogeneity between the large-scale distributed software ensembles of the sub-detectors. A standard architecture and a human interface have been defined with emphasis on the early detection, monitoring and diagnosis of faults, based on a dynamic fault-data mechanism. This mechanism relies on two parallel communication paths that manage the faults while providing a clear description of the detector conditions. The DCS information is split and handled by different types of SMI++ objects; while one path of objects manages the operational mode of the system, the other handles any faults that arise. The proposed strategy has been validated through many different tests, with positive results in both functionality and performance. This strategy has been successfully implemented and constitutes the ATLAS standard for building the global control tree. During the operation of the experiment, the DCS, responsible for the detector operation, must be synchronized with the data acquisition system, which is in charge of the physics data-taking process. The interaction between the two systems has so far been limited, but becomes increasingly important as the detector nears completion.
A prototype implementation, ready to be used during sub-detector integration, has achieved data reconciliation by mapping the different segments of the data acquisition system into the DCS control tree. The adopted solution allows the data acquisition control applications to command different DCS sections independently and prevents incorrect physics data taking caused by a failure in a detector part. Finally, the human-machine interface presents and controls the DCS data in the ATLAS control room. The main challenges faced during the design and development phases were: how to support the operator in controlling this large system, how to maintain integration across many displays, and how to provide effective navigation. These issues have been solved by combining the functionalities provided by both the SCADA product and the FSM tool. The control hierarchy provides an intuitive structure for the organization of the many different displays needed for the visualization of the experiment conditions. Each node in the tree represents a workspace that contains the functional information associated with its abstraction level within the hierarchy. By means of effective navigation, any workspace of the control tree is accessible to the operator or detector expert within a common human interface layout. The interface is modular and flexible enough to accommodate new operational scenarios, fulfil the needs of the different kinds of users, and facilitate maintenance during the detector's long lifetime of up to 20 years. The interface has been in use for several months, and the sub-detectors' control hierarchies, together with their associated displays, are currently being integrated into the common human-machine interface.
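    The hierarchical state-propagation idea behind such an FSM control tree can be sketched in a few lines: leaves hold device states, and each parent summarises its children by the most severe state, so a fault anywhere surfaces at the root. The node names and the three-state model below are illustrative, not the actual ATLAS FSM state machine or SMI++ API.

```python
# Severity order used to summarise children: the worst child state wins.
ORDER = {"READY": 0, "NOT_READY": 1, "ERROR": 2}

class Node:
    """One control-tree node; leaves hold a device state, parents derive theirs."""
    def __init__(self, name, children=None, state="READY"):
        self.name, self.children, self._state = name, children or [], state

    @property
    def state(self):
        if not self.children:
            return self._state
        # A parent's state is the most severe state among its children.
        return max((c.state for c in self.children), key=ORDER.__getitem__)

# A toy two-level hierarchy: experiment -> sub-detectors -> devices.
pixel = Node("pixel", [Node("ps1"), Node("ps2")])
muon = Node("muon", [Node("hv1", state="ERROR")])
atlas = Node("ATLAS", [pixel, muon])
```

    In the real system, SMI++ objects also carry commands downward and run transition logic at each level; this sketch shows only the upward state summarisation that makes faults visible at the top of the tree.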