
    Large deviations and averaging for systems of slow–fast stochastic reaction–diffusion equations

    Full text link
    We study a large deviation principle for a system of stochastic reaction–diffusion equations (SRDEs) with a separation of fast and slow components and small noise in the slow component. The derivation of the large deviation principle is based on the weak convergence method in infinite dimensions, which results in studying averaging for controlled SRDEs. By an appropriate choice of the parameters, the fast process and the associated control that arises from the weak convergence method decouple from each other. We show that in this decoupling case one can use the weak convergence method to characterize the limiting process via a "viable pair" that captures the limiting controlled dynamics and the effective invariant measure simultaneously. The characterization of the limit of the controlled slow–fast processes in terms of a viable pair enables us to obtain a variational representation of the large deviation action functional. Due to the infinite-dimensional nature of our setup, the proof of tightness, as well as the analysis of the limit process and in particular the proof of the large deviations lower bound, is considerably more delicate here than in the finite-dimensional situation. Smoothness properties of optimal controls in infinite dimensions (a necessary step for the large deviations lower bound) need to be established. We emphasize that many issues that are present in the infinite-dimensional case are completely absent in finite dimensions.
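    A schematic form of such a slow–fast system with small noise in the slow component is sketched below; the notation is assumed for illustration, not taken from the paper.

```latex
% Schematic slow-fast SRDE system; X^eps is the slow component, Y^eps the
% fast one, eps the noise intensity and delta = delta(eps) -> 0 the
% time-scale separation (all notation assumed for illustration):
\begin{align*}
  dX^{\varepsilon}_t &= \bigl[\mathcal{A}_1 X^{\varepsilon}_t
      + b_1(X^{\varepsilon}_t, Y^{\varepsilon}_t)\bigr]\,dt
      + \sqrt{\varepsilon}\,\sigma_1(X^{\varepsilon}_t)\,dW^1_t,\\
  dY^{\varepsilon}_t &= \frac{1}{\delta}\bigl[\mathcal{A}_2 Y^{\varepsilon}_t
      + b_2(X^{\varepsilon}_t, Y^{\varepsilon}_t)\bigr]\,dt
      + \frac{1}{\sqrt{\delta}}\,\sigma_2(X^{\varepsilon}_t, Y^{\varepsilon}_t)\,dW^2_t.
\end{align*}
```

    As the noise intensity vanishes, the slow component satisfies a large deviation principle whose action functional is expressed variationally over controls and invariant measures of the controlled fast process, i.e. over viable pairs.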

    Top Yukawa coupling measurement with indefinite CP Higgs in $e^+e^- \to t\bar{t}\Phi$

    Full text link
    We consider the issue of the top quark Yukawa coupling measurement in a model-independent and general case with the inclusion of CP violation in the coupling. Arguably the best process to study this coupling is the associated production of a Higgs boson along with a $t\bar t$ pair at a machine like the International Linear Collider (ILC). While detailed analyses of the sensitivity of the measurement assuming a Standard Model (SM)-like coupling are available in the context of the ILC, concluding that the coupling could be pinned down at about the 10\% level with modest luminosity, our investigations show that the scenario could be different in the case of a more general coupling. The modified Lorentz structure, resulting in a changed functional dependence of the cross section on the coupling, along with the difference in the cross section itself, leads to considerable deviation in the sensitivity. Our studies with an ILC at center-of-mass energies of 500 GeV, 800 GeV and 1000 GeV show that moderate CP mixing in the Higgs sector could change the sensitivity to about 20\%, while it could worsen to 75\% in cases that accommodate more dramatic changes in the coupling. Detailed considerations of the decay distributions point to the need for a relook at the analysis strategy followed in the SM case, such as a model-independent analysis of the top quark Yukawa coupling measurement. This study strongly suggests that a joint analysis of the CP properties and the Yukawa coupling measurement would be the way forward at the ILC, and that caution must be exercised in the measurement of the Yukawa couplings and the conclusions drawn from it. Comment: 18 pages, 7 figures, uses revtex
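    The "general coupling" here is commonly parameterized with CP-even and CP-odd parts; a standard parameterization of this kind (the specific notation is an assumption, not taken from the paper) is:

```latex
% Standard parameterization of a CP-indefinite top Yukawa coupling;
% a, b real, with the SM limit a = 1, b = 0 (notation assumed):
\begin{equation*}
  \mathcal{L}_{t\bar{t}\Phi}
    = -\frac{m_t}{v}\,\bar{t}\,\bigl(a + i\,b\,\gamma_5\bigr)\,t\,\Phi .
\end{equation*}
```

    A nonzero pseudoscalar part $b$ modifies the Lorentz structure and hence both the total $e^+e^- \to t\bar{t}\Phi$ cross section and its functional dependence on the coupling, which is what drives the change in sensitivity described above.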

    Do Products Respond to User Desires? A Case Study. Errors and Successes in the Design Process, under the Umbrella of Emotional Design

    Get PDF
    This article introduces a methodological approach to the evaluation of different industrial products according to Norman's approach and dimensions, focusing on a specific case study. The study also shows different possibilities to guide industrial designers during the design process in order to create products with high emotional value. To this end, the case study was conducted with 330 target-specific users, submitting nine prototypes (designed for different targets) to user experience evaluation and product perception analysis. The evaluated proposals were selected from a total of 45. The results show the visceral, behavioural and reflective levels perceived by those users for whom each product is intended, as well as the target deviation within the design process. In this sense, the emotional response reveals the asymmetric character of perception according to Norman's dimensions.

    A Study on the Minimum Number of Ship-Handling Simulation Runs

    Get PDF
    In 2010, the Korean government introduced a mandatory maritime traffic safety assessment act for the purpose of enhancing traffic safety; the assessment is required by law when designing a new port or modifying an existing one. According to the Korea Maritime Safety Act, the assessment of the propriety of a marine traffic system comprises the safety of channel transit and berthing/unberthing maneuvers, the safety of mooring, and the safety of marine traffic flow. The safety of channel transit and berthing/unberthing maneuvers can be evaluated only by ship-handling simulation, which is carried out by sea pilots working with the port concerned. The vessel's proximity measure, composed of the vessel's closest distance to the channel boundary and the probability of grounding/collision, is an important factor for evaluating traffic safety; the probability of grounding therefore cannot be ignored. According to the central limit theorem, a sample mean is approximately normally distributed when the sample size is more than 30. However, more than 30 simulation runs increase the assessment period, which makes it difficult to employ sea pilots. Hence, this paper analyzes the minimum sample size for evaluating a vessel's proximity. The mean and standard deviation of ten cases are obtained from recent maritime traffic safety assessments; the probability of grounding is below 10^-4. For each case, twenty random sample sets are generated for each of the sample sizes 3, 4, 5, 6, 7, 9 and 11, and the h value and confidence interval of each sample set are calculated. Box-plots consisting of the twenty sample boxes are then drawn, with the mean line and confidence interval also shown; the X-axis refers to the sample set and the Y-axis to the CPA to the channel boundary. Based on the size of the confidence interval, the change of the confidence interval span, and the relative positions of the mean line and the boxes, the minimum number of simulation runs should be more than 5: when the number of runs is larger than 5, the mean and standard deviation of the confidence interval span are much smaller than those at 3. In each case, the 20 sample sets are generated randomly from the given mean and standard deviation; actual data are not used, because it would be difficult to ensure that 20 sets of actual data share the same mean and standard deviation, and the analysis would then be meaningless. In conclusion, this paper proposes a minimum sample size of 5, that is, the simulation should be carried out at least five times. It is recommended that future analyses use actual data and apply goodness-of-fit tests other than the KS test to the sample distribution.
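    A minimal sketch of this sampling experiment follows; the mean and standard deviation are the example values (μ=79.35, σ=20.29) used in the thesis figures, and everything else is an assumption for illustration.

```python
# Generate 20 random sample sets per sample size from N(mu, sigma) and
# compare the spread of t-based confidence intervals for the mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma = 79.35, 20.29          # example CPA mean/std from the thesis figures
sizes = [3, 4, 5, 6, 7, 9, 11]    # sample sizes examined in the study

def ci_span(sample, conf=0.95):
    """Span of the t-based confidence interval for the sample mean."""
    n = len(sample)
    sem = np.std(sample, ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.5 + conf / 2, df=n - 1)
    return 2 * t * sem

for n in sizes:
    spans = [ci_span(rng.normal(mu, sigma, n)) for _ in range(20)]
    print(f"n={n:2d}  mean span={np.mean(spans):6.2f}  "
          f"std span={np.std(spans, ddof=1):6.2f}")
```

    Under this setup, both the mean and the standard deviation of the confidence interval span shrink markedly once the number of runs exceeds 5, which is the pattern the thesis uses to justify its recommendation.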

    Uncertainty Analysis for Data-Driven Chance-Constrained Optimization

    Get PDF
    In this contribution, our previously developed framework for data-driven chance-constrained optimization is extended with an uncertainty analysis module. The module quantifies uncertainty in the output variables of rigorous simulations: it chooses the most accurate parametric continuous probability distribution model, minimizing the deviation between model and data, and a constraint is added to favour less complex models with a minimal required quality of fit. The basis of the module is the set of over 100 probability distribution models provided in the SciPy package in Python; a rigorous case study is conducted to select the four most relevant models for the application at hand. The applicability and precision of the uncertainty analysis module are investigated for an impact factor calculation in life cycle impact assessment, quantifying the uncertainty in the results. Furthermore, the extended framework is verified with data from a first-principles process model of a chlor-alkali plant, demonstrating the increased precision of the uncertainty description of the output variables and resulting in a 25% increase in accuracy in the chance-constraint calculation.
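    A minimal sketch of the distribution selection step described above, using SciPy's maximum-likelihood fitting and the Kolmogorov–Smirnov statistic as the deviation measure (the candidate list and the data are assumptions, not the paper's choices):

```python
# Fit several SciPy distributions to simulated output data and select the
# one with the smallest K-S deviation between model and data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.5, sigma=0.3, size=500)  # stand-in simulation output

candidates = ["norm", "lognorm", "gamma", "weibull_min"]  # assumed subset
best_name, best_ks = None, np.inf
for name in candidates:
    dist = getattr(stats, name)
    params = dist.fit(data)                  # maximum-likelihood fit
    ks = stats.kstest(data, name, args=params).statistic
    if ks < best_ks:                         # smaller deviation is better
        best_name, best_ks = name, ks

# A complexity constraint, as in the paper, could instead pick the model
# with the fewest parameters among those below a required K-S threshold.
print(f"selected: {best_name} (KS statistic {best_ks:.4f})")
```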

    Understanding human factors to improve occupational safety in manufacturing: a case study

    Get PDF
    This paper investigates how the deviation of an industrial process from its optimal productivity, maintenance, and quality levels can lead to safety issues. An integrated approach was developed in 2021 to analyze the correlation between safety deficiencies and process inefficiencies. In this study, the proposed approach was adopted with the aim of identifying potential connections between the safety issues that emerged from the previous investigations and the process inefficiencies. A case study describes the application of the proposed approach in an Italian company that is a leader in the production of boilers for domestic and industrial heating and cooling systems. The findings show that the joint analysis of the results from the investigations in the proposed approach makes it possible to understand the human factors in the investigated manufacturing process, i.e. the environmental, organizational, and job factors, and the human and individual characteristics, which influence behavior at work in ways that can affect occupational safety.

    Feature Based Machine Tool Accuracy Analysis Method

    Get PDF
    Machine tool accuracy is one of the most important performance parameters affecting part quality. A systematic machine tool accuracy evaluation method is needed for machine tool selection in process planning and shop-floor scheduling. This paper proposes an efficient feature-based machine tool accuracy analysis method that enables the evaluation of a machine tool's accuracy capability and establishes the mapping from machine tool accuracy to part feature tolerance. The cutter is used as a bridge to transform the machine tool error into feature tolerance: the deviation of the cutter between its actual and nominal position and orientation is derived from the machine tool error according to the rigid body kinematics method, and the feature error, in the form of GD&T, is then calculated from the profile of the feature and the deviation of the cutter. A prototype system has been developed based on this research, and an industrial case study shows that the methodology is effective.
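    A minimal sketch of the rigid-body-kinematics step: compose per-axis homogeneous transforms, with and without small axis errors, to obtain the cutter pose deviation. The axis chain and error values are illustrative assumptions, not the paper's model.

```python
# Compare nominal and error-perturbed kinematic chains to extract the
# cutter's position/orientation deviation.
import numpy as np

def trans(x=0.0, y=0.0, z=0.0):
    """Homogeneous translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def rot_z(theta):
    """Homogeneous rotation about Z by theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

# Nominal kinematic chain: X-slide then C-axis rotation (assumed chain).
nominal = trans(x=100.0) @ rot_z(np.pi / 6)

# Actual chain with small positioning and angular errors (assumed values).
actual = trans(x=100.0 + 0.008) @ rot_z(np.pi / 6 + 1e-4)

# Cutter pose deviation: nominal^-1 @ actual; its translation part gives
# the positional error used to evaluate the feature's GD&T tolerance.
deviation = np.linalg.inv(nominal) @ actual
print("cutter position error (mm):", deviation[:3, 3])
```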

    Improving sub-assembly Productivity, Efficiency and Quality with Lean Six Sigma tools

    Get PDF
    This thesis was written for the company as a case study on improving the productivity, efficiency and quality of a module assembly. The main objective of the study was to investigate the current state of the assembly process so that the necessary improvements could be made. In parallel with this objective, a project called Andon 2.0 was carried out; its purpose was to implement a system that makes it more convenient for operators to report progress and deviations in the process. The methodology used in this case study is based on the DMAIC cycle used in Lean Six Sigma. Prior to the cycle, a theoretical background on the current situation, terminology and methodology is given. In the Define phase, the framework with the objectives and limitations of the study was formed. The cycle continued with the Measure phase, in which data from the internal ERP system were sourced. The results chapter contains the Analyse phase, where the process is analysed statistically with the Pareto principle and by multiple regression analysis; the assembly process is also monitored directly. The Improve phase contains improvement proposals for all three aspects considered in this study. The last phase was customised to include the motivation, evaluation and design of the Andon 2.0 pilot version. The process currently contains many flaws, among which the inconsistent notification of logistics personnel about deviations caused by logistical errors is a clear source of inefficiency. This issue was addressed by proposing a design for the Andon 2.0 pilot version, which reduces the response time and provides process developers with more accurate data for future improvements.
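    A minimal sketch of the Pareto analysis used in the Analyse phase: rank deviation categories by frequency and flag the "vital few" covering roughly 80% of occurrences. The categories and counts below are invented for illustration, not the thesis data.

```python
# Rank deviation categories by count and compute cumulative shares.
import numpy as np

categories = ["logistical error", "missing part", "tooling fault",
              "quality defect", "documentation", "other"]
counts = np.array([120, 75, 40, 25, 15, 10])

order = np.argsort(counts)[::-1]                     # sort descending
cum_share = np.cumsum(counts[order]) / counts.sum()  # cumulative fraction

for name, c, share in zip([categories[i] for i in order],
                          counts[order], cum_share):
    flag = "  <- vital few" if share <= 0.80 else ""
    print(f"{name:18s} {c:4d}  cumulative {share:6.1%}{flag}")
```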