
    Data fusion for system modeling, performance assessment and improvement

    Due to rapid advancements in sensing and computation technology, multiple types of sensors have been embedded in various applications, automatically collecting massive amounts of production information online. Although this data-rich environment provides great opportunities for more effective process control, it also raises new research challenges in data analysis and decision making because of complex data structures, such as heterogeneous data dependency and large-volume, high-dimensional characteristics. This thesis contributes to the area of System Informatics and Control (SIAC) by developing systematic data fusion methodologies for effective quality control and performance improvement in complex systems. These methodologies enable (1) better handling of the rich data environment communicated by complex engineering systems, (2) closer monitoring of the system status, and (3) more accurate forecasting of future trends and behaviors. The research bridges methodological gaps among advanced statistics, engineering domain knowledge, and operations research, and it forms close links to application areas such as manufacturing, health care, energy, and service systems.

    The thesis begins by investigating optimal sensor system design and multi-sensor data fusion analysis for process monitoring and diagnosis in different applications. In Chapter 2, we first studied the couplings between the optimal design of a sensor system in a Bayesian network and the quality management of a manufacturing system, which can improve cost-effectiveness and production yield by considering sensor cost, process change detection speed, and fault diagnosis accuracy in an integrated manner. An algorithm named "Best Allocation Subsets by Intelligent Search" (BASIS), with an optimality proof, is developed to obtain the optimal sensor allocation design at minimum cost under different user-specified detection requirements. Chapter 3 extends this line of research by proposing a novel adaptive sensor allocation framework, which greatly improves the monitoring and diagnosis capabilities of the previous method. A max-min criterion is developed to manage sensor reallocation and process change detection in an integrated manner. The methodology was tested and validated on a hot forming process and a cap alignment process. In Chapter 4, we proposed a Scalable-Robust-Efficient Adaptive (SERA) sensor allocation strategy for online high-dimensional process monitoring in a general network. A monitoring scheme based on the sum of the top-r local detection statistics is developed; it is scalable, effective, and robust in detecting a wide range of possible shifts in all directions. This research provides a generic guideline for practitioners on determining (1) the appropriate sensor layout; (2) the "ON" and "OFF" states of different sensors; and (3) which part of the acquired data should be transmitted to and analyzed at the fusion center when only limited resources are available.

    To improve the accuracy of remaining lifetime prediction, Chapter 5 proposes a data-level fusion methodology for degradation modeling and prognostics. When multiple sensors are available to measure the degradation of the same system, determining which sensors to use and how to combine them becomes a high-dimensional and challenging problem. To address this issue, we first defined two essential properties that, if present in a degradation signal, enhance its effectiveness for prognostics. We then proposed a generic data-level fusion algorithm that constructs a composite health index achieving those two properties. The methodology was tested on degradation signals from aircraft gas turbine engines, where it demonstrated much better prognostic results than relying solely on data from any individual sensor.

    In summary, this thesis draws attention to the area of data fusion for effective employment of the underlying data gathering capabilities for system modeling, performance assessment, and improvement. Fundamental data fusion methodologies are developed and applied to various applications, facilitating resource planning, real-time monitoring, diagnosis, and prognostics.
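    Two of these ideas lend themselves to short illustrations. The first Python sketch shows a "sum of top-r local detection statistics" monitoring step in the spirit of Chapter 4; the two-sided CUSUM used as the local statistic is an illustrative assumption, not necessarily the statistic used in the thesis.

        import numpy as np

        def update_cusum(s_pos, s_neg, x, k=0.5):
            """Two-sided CUSUM update for each data stream (one common
            choice of local detection statistic)."""
            s_pos = np.maximum(0.0, s_pos + x - k)
            s_neg = np.maximum(0.0, s_neg - x - k)
            return s_pos, s_neg

        def top_r_statistic(local_stats, r):
            """Global monitoring statistic: the sum of the r largest local
            statistics, so an alarm is driven by the most suspicious
            streams regardless of the direction of the shift."""
            return np.sort(local_stats)[-r:].sum()

    The second sketch outlines data-level fusion of degradation signals into a composite health index in the spirit of Chapter 5. The two scoring terms (trend monotonicity and dispersion of the index value at failure) are illustrative stand-ins for the two signal properties defined in the thesis.

        import numpy as np
        from scipy.stats import spearmanr

        def fusion_loss(w, runs):
            """Score a candidate weight vector: reward a monotone trend in
            the fused index and penalize spread of its value at failure.
            Each element of `runs` is a (time x sensors) run-to-failure
            history; the fused index is a weighted sum of the sensors."""
            his = [r @ w for r in runs]
            mono = np.mean([abs(spearmanr(np.arange(len(h)), h)[0]) for h in his])
            fail = np.array([h[-1] for h in his])   # index value at failure
            return -mono + fail.std() / (abs(fail.mean()) + 1e-9)

        # Weights could then be fitted with any generic optimizer, e.g.
        # scipy.optimize.minimize(fusion_loss, w0, args=(runs,), method="Nelder-Mead")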

    Impact of wind farm wake steering control on blade root load

    Yaw misalignment is known to affect blade root loads on wind turbines. Most previous studies consider yaw misalignment in the context of wake steering control, which aims to increase the total output power of the wind farm. There, wake steering is compared with greedy control, in which yaw misalignment is assumed to be zero. In reality, yaw misalignment also occurs in greedy control due to changes in wind direction arising from varying inflow conditions (e.g. turbulence). This paper compares these two sources of yaw misalignment, naturally changing wind direction versus active yaw in wake steering, in terms of blade root loads. To this end, SCADA data from a real wind farm is used to obtain yaw misalignment statistics under actual greedy control conditions. FAST.Farm is used to simulate three wind turbines arranged in series, to study the maximum and damage-equivalent loads corresponding to the in-plane and out-of-plane bending moments on the blades. The results show that, compared with actual greedy control, wake steering control reduces the maximum load on the upstream wind turbine but increases it on the other wind turbines. Concerning the damage-equivalent loads across all wind turbines, the blade's in-plane moment is reduced, but the blade's out-of-plane moment is increased.
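    A damage-equivalent load (DEL) compresses a variable-amplitude load history into a single constant-amplitude load range via Miner's rule. Below is a minimal Python sketch, assuming rainflow cycle ranges and counts have already been extracted from the simulated bending-moment time series (a Woehler exponent m around 10 is a common assumption for composite blades; the function name and arguments are illustrative).

        import numpy as np

        def damage_equivalent_load(cycle_ranges, cycle_counts, m, n_ref):
            """Constant-amplitude load range that, applied n_ref times,
            causes the same Miner's-rule fatigue damage as the observed
            rainflow cycles, for an S-N (Woehler) exponent m."""
            ranges = np.asarray(cycle_ranges, dtype=float)
            counts = np.asarray(cycle_counts, dtype=float)
            return (np.sum(counts * ranges ** m) / n_ref) ** (1.0 / m)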

    Speedy Error Reconciliation

    Introducing small errors into lattice-based key exchange protocols makes them resistant to quantum computing attacks, but it also means the two parties obtain only approximately equal secret values, which brings uncertainty to the key agreement. The role of an error reconciliation mechanism is to eliminate this uncertainty and ensure that both parties reach a consensus. This paper designs a new error reconciliation mechanism, Speedy Error Reconciliation (SER), which efficiently completes key negotiation while ensuring key correctness and security. SER exploits the properties of the approximate secret values σ1 and σ2 shared by the two parties and reconciles the most and least significant bits of the secret value simultaneously, so that a two-bit key is obtained in one round of coordination. By sharing g bits of auxiliary information between the two entities, SER expands the fault-tolerance interval during reconciliation and improves the success rate of consensus. To test the actual performance of SER, we integrate it into key exchange protocols based on LWE, RLWE, and MLWE, such as Frodo and NewHope. Comparing parameters such as failure rate, security strength, and the number of CPU rounds, we find that SER performs well in various modes, especially in RLWE-based protocols. Because SER doubles the error in order to reconcile the least significant bit, its overall error is relatively large; the RLWE-based key exchange scheme, which adopts a polynomial ring and selects a large parameter q, is therefore particularly well suited to SER. Compared with Frodo and NewHope, SER improves the per-bit reconciliation efficiency by 61.6% and 797.6%, respectively.
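    SER itself reconciles two bits per coefficient with g bits of auxiliary information; as a simpler point of reference, the Python sketch below implements generic one-bit, hint-based reconciliation in the style of the mechanisms used in Frodo-like protocols. The power-of-two modulus, the rounding functions, and the brute-force nearest-match search are illustrative assumptions, not the SER construction.

        q = 2**15   # illustrative power-of-two modulus
        B = 1       # key bits extracted per coefficient

        def round_key(v):
            """Key bit(s): floor(v * 2^B / q + 1/2) mod 2^B."""
            return ((v * 2**B + q // 2) // q) % 2**B

        def cross_round(v):
            """One hint bit: floor(v * 2^(B+1) / q) mod 2."""
            return v * 2**(B + 1) // q % 2

        def reconcile(w, hint, tol):
            """Derive the key from the value nearest to w (within +/- tol,
            mod q) whose cross-rounding matches the received hint."""
            for e in range(tol + 1):
                for v in ((w + e) % q, (w - e) % q):
                    if cross_round(v) == hint:
                        return round_key(v)
            raise ValueError("secrets differ by more than the tolerated error")

        # Alice: key = round_key(s1); send hint = cross_round(s1)
        # Bob:   key = reconcile(s2, hint, tol=q // 8 - 1)
        # The keys agree whenever |s1 - s2| < q / 2**(B + 2).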