
    Expansion of the whole wheat flour extrusion

    A new model framework is proposed to describe the expansion of extrudates as a function of extruder operating conditions, based on dimensional analysis principles. The Buckingham pi dimensional analysis method is applied to form the basic structure of the model from the operational parameters of the extrusion process. Using the Central Composite Design (CCD) method, whole wheat flour was processed in a twin-screw extruder in 16 trials. The proposed model correlates the expansion of the 16 trials well using three regression parameters; the average deviation of the correlation is 5.9%.
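    The fitted correlation can be sketched as a log-linear least-squares fit over hypothetical dimensionless groups. The actual pi groups and trial data are not given in the abstract, so everything below is synthetic and for illustration only:

```python
import numpy as np

# Synthetic sketch: fit a power-law expansion model E = a * pi1^b * pi2^c,
# linearized as ln E = ln a + b ln pi1 + c ln pi2 (three regression parameters).
# pi1 and pi2 are hypothetical dimensionless groups; the real groups would come
# from the Buckingham pi analysis of the extruder operating conditions.
rng = np.random.default_rng(0)
pi1 = rng.uniform(0.5, 2.0, 16)                 # 16 trials, as in the CCD design
pi2 = rng.uniform(0.5, 2.0, 16)
E = 2.0 * pi1**0.7 * pi2**-0.3 * rng.lognormal(0.0, 0.05, 16)  # synthetic data

X = np.column_stack([np.ones(16), np.log(pi1), np.log(pi2)])
coef, *_ = np.linalg.lstsq(X, np.log(E), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]

pred = a * pi1**b * pi2**c
avg_dev = np.mean(np.abs(pred - E) / E) * 100   # average deviation, percent
```

    The linearization keeps the fit a plain least-squares problem; with only three parameters and 16 trials this mirrors the degrees of freedom described in the abstract.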

    Large Deviations for Stochastic Partial Differential Equations Driven by a Poisson Random Measure

    Stochastic partial differential equations driven by Poisson random measures (PRM) have been proposed as models for many different physical systems, where they are viewed as a refinement of a corresponding noiseless partial differential equation (PDE). A systematic framework for studying the probabilities of deviations of the stochastic PDE from the deterministic PDE is the theory of large deviations. The goal of this work is to develop the large deviation theory for small Poisson noise perturbations of a general class of deterministic infinite dimensional models. Although the analogous questions for finite dimensional systems have been well studied, there are currently no general results in the infinite dimensional setting. This is in part because solutions in this setting may have little spatial regularity, so classical approximation methods for large deviation analysis become intractable. The approach taken here, which is based on a variational representation for nonnegative functionals of general PRM, reduces the proof of the large deviation principle to establishing basic qualitative properties for controlled analogues of the underlying stochastic system. As an illustration of the general theory, we consider a particular system that models the spread of a pollutant in a waterway.
    Comment: To appear in Stochastic Processes and Their Applications.
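    For reference, the large deviation principle invoked here takes the standard form: a family {X^ε} satisfies an LDP with rate function I if, for every Borel set A (with interior A° and closure Ā),

```latex
-\inf_{x \in A^{\circ}} I(x)
  \;\le\; \liminf_{\varepsilon \to 0} \varepsilon \log \mathbb{P}\bigl(X^{\varepsilon} \in A\bigr)
  \;\le\; \limsup_{\varepsilon \to 0} \varepsilon \log \mathbb{P}\bigl(X^{\varepsilon} \in A\bigr)
  \;\le\; -\inf_{x \in \bar{A}} I(x).
```

    Small-noise asymptotics of this type quantify how unlikely a given deviation of the stochastic PDE from its deterministic limit is as the noise intensity ε shrinks.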

    Sequential control of time series by functionals of kernel-weighted empirical processes under local alternatives

    Motivated in part by applications to model selection in statistical genetics and to sequential monitoring of financial data, we study an empirical process framework for a class of stopping rules that rely on kernel-weighted averages of past data. We are interested in the asymptotic distribution for time series data and in an analysis of the joint influence of the smoothing policy and of the alternative defining the deviation from the null model (the in-control state). We employ a certain type of local alternative that provides meaningful insights. Our results hold for short-memory processes satisfying a weak mixing condition. By relying on an empirical process framework, we obtain asymptotic laws for both the classical fixed-sample design and the sequential monitoring design. As a by-product, we establish the asymptotic distribution of the Nadaraya-Watson kernel smoother when the regressors do not become dense as the sample size increases.
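    As a minimal illustration of the Nadaraya-Watson kernel smoother mentioned above, here is a sketch with a Gaussian kernel on synthetic data; the kernel choice, bandwidth, and data are assumptions, not the paper's setup:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel
    and bandwidth h: a kernel-weighted average of the observed y values."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

# Synthetic regression data: y = sin(2*pi*x) + noise.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, 200)

est = nadaraya_watson(0.25, x, y, h=0.05)  # estimate near the true value sin(pi/2)
```

    The same kernel-weighted-average structure underlies the stopping rules studied in the paper, with the weights applied to past observations of the monitored series.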

    Uncertainty Analysis for Data-Driven Chance-Constrained Optimization

    In this contribution, our framework for data-driven chance-constrained optimization is extended with an uncertainty analysis module. The module quantifies uncertainty in the output variables of rigorous simulations. It chooses the most accurate parametric continuous probability distribution model, minimizing the deviation between model and data. A constraint is added to favour less complex models while requiring a minimal fit quality. The module builds on over 100 probability distribution models provided by the SciPy package in Python; a rigorous case study is conducted, selecting the four most relevant models for the application at hand. The applicability and precision of the uncertainty analysis module are investigated for an impact factor calculation in life cycle impact assessment, quantifying the uncertainty in the results. Furthermore, the extended framework is verified with data from a first-principles process model of a chlor-alkali plant, demonstrating the increased precision of the uncertainty description of the output variables and resulting in a 25% increase in accuracy in the chance-constraint calculation.
    Funding: BMWi, 0350013A, ChemEFlex (feasibility analysis for load flexibilization of electrochemical processes in industry; subproject: modeling of chlor-alkali electrolysis and other processes and their evaluation with respect to economic viability and possible barriers); DFG, 414044773, Open Access Publizieren 2019-2020 / Technische Universität Berlin.
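    The distribution-selection step can be sketched as follows, assuming maximum-likelihood fits from scipy.stats, a Kolmogorov-Smirnov statistic as the deviation measure, and a hypothetical per-parameter complexity penalty; the paper's actual deviation measure and penalty weight are not specified in the abstract:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.normal(10.0, 1.5, 1000)  # synthetic simulation output variable

# A small subset of the 100+ continuous families available in scipy.stats.
candidates = {"norm": stats.norm, "lognorm": stats.lognorm,
              "gamma": stats.gamma, "uniform": stats.uniform}

scores = {}
for name, dist in candidates.items():
    params = dist.fit(data)                      # maximum-likelihood fit
    ks = stats.kstest(data, name, args=params)   # deviation between model and data
    # Hypothetical complexity penalty: favour families with fewer parameters.
    scores[name] = ks.statistic + 0.01 * len(params)

best = min(scores, key=scores.get)  # selected distribution family
```

    The additive penalty plays the role of the constraint described above: a more complex family is only selected if its fit improvement outweighs its extra parameters.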

    Optimal Attack against Cyber-Physical Control Systems with Reactive Attack Mitigation

    This paper studies the performance and resilience of a cyber-physical control system (CPCS) with attack detection and reactive attack mitigation. It addresses the problem of deriving an optimal sequence of false data injection attacks that maximizes the state estimation error of the system. The results provide a basic understanding of the limits of the attack impact. The design of the optimal attack is based on a Markov decision process (MDP) formulation, which is solved efficiently using the value iteration method. Using the proposed framework, we quantify the effect of false positives and missed detections on the system performance, which can inform the joint design of attack detection and mitigation. To demonstrate the use of the proposed framework in a real-world CPCS, we consider the voltage control system of power grids and run extensive simulations using PowerWorld, a high-fidelity power system simulator, to validate our analysis. The results show that, by carefully designing the attack sequence using our proposed approach, the attacker can cause a large deviation of the bus voltages from the desired setpoint. Further, the results verify the optimality of the derived attack sequence and show that, to cause maximum impact, the attacker must carefully craft the attack to strike a balance between attack magnitude and stealthiness, due to the simultaneous presence of attack detection and mitigation.
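    The value iteration method used to solve the MDP can be sketched on a toy two-state problem; the transition and reward structure below is illustrative only (in the attack setting, the reward would be the induced state estimation error):

```python
import numpy as np

# Toy MDP: P[a, s, t] = transition probability from state s to t under action a,
# R[s, a] = one-step reward for taking action a in state s.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # action 0
              [[0.5, 0.5], [0.6, 0.4]]])    # action 1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Q[s, a] = R[s, a] + gamma * sum_t P[a, s, t] * V[t]
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V_new = Q.max(axis=1)                   # Bellman optimality update
    if np.max(np.abs(V_new - V)) < 1e-9:    # stop at (numerical) fixed point
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)  # greedy optimal action per state
```

    Because the Bellman operator is a contraction for gamma < 1, the iteration converges to the unique optimal value function, from which the optimal (attack) policy is read off greedily.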

    Process-induced skew reduction in nominal zero-skew clock trees

    This work develops an analytic framework for clock tree analysis under process variations that is shown to correspond well with Monte Carlo results. The analysis framework is used in a new algorithm that constructs deterministic nominal zero-skew clock trees with reduced sensitivity to process variation. The new algorithm uses a sampling approach to perform route embedding during a bottom-up merging phase, but defers selecting the best embedding until the top-down phase. The resulting clock trees exhibit a mean skew reduction of 32.4% on average and a standard deviation reduction of 40.7%, as verified by Monte Carlo simulation. The average increase in total clock tree capacitance is less than 0.02%.
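    The Monte Carlo verification of skew statistics can be sketched as follows; the delay model, sink count, and variation magnitude are assumptions for illustration, not the paper's process model:

```python
import numpy as np

# Hypothetical sketch: skew statistics of a nominal zero-skew tree when path
# delays vary with process. All 8 sinks have equal nominal delay (zero skew);
# per-path delays are perturbed by an assumed 5% (1-sigma) Gaussian variation.
rng = np.random.default_rng(3)
nominal_delays = np.full(8, 100.0)          # 8 sinks, 100 ps nominal path delay

n_trials = 10000
delays = nominal_delays * (1.0 + rng.normal(0.0, 0.05, (n_trials, 8)))
skew = delays.max(axis=1) - delays.min(axis=1)   # per-trial skew (ps)
mean_skew, std_skew = skew.mean(), skew.std()
```

    Even a nominal zero-skew tree shows nonzero mean skew under variation, since skew is a max-minus-min statistic; reducing its mean and standard deviation is exactly what the algorithm's embedding choices target.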

    Interactive Quality Inspection of Measured Deviations in Sheet Metal Assemblies

    We present an exploratory data analysis approach for finite element (FE) simulations to interactively inspect measured deviations in sheet metals arising in automotive applications. Exterior car body parts consist of large visible surfaces that must meet strict tolerances to satisfy both aesthetic and quality performance requirements. To fulfil quality requirements such as gap and flushness, exterior vehicle components have adjustable mechanical boundaries. These boundaries are used to influence the shape and position of a sheet metal part relative to its chassis. We introduce a method that supports an inspection engineer with an interactive framework enabling a detailed analysis of measured sheet metal deviation fields generated from 3D scans. An engineer can interactively change boundary conditions and obtain the resulting deviation field in real time. Thus, viable and desirable adjustments can be determined efficiently, leading to time and cost savings in the assembly process.
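    One common way to achieve such real-time updates is linear superposition of precomputed unit-response fields from offline FE runs; the sketch below assumes this approach, which the abstract does not spell out, and all sizes and fields are synthetic:

```python
import numpy as np

# Hypothetical sketch: real-time deviation updates via linear superposition.
# Assume each adjustable boundary i has a precomputed unit-response field
# unit_responses[i]: the deviation at every mesh node for a unit displacement
# of that boundary (valid under a linear-response assumption).
n_nodes, n_boundaries = 5000, 4
rng = np.random.default_rng(4)
unit_responses = rng.normal(0.0, 0.1, (n_boundaries, n_nodes))  # offline FE runs
measured = rng.normal(0.0, 0.5, n_nodes)                        # scanned deviation field

def adjusted_field(settings):
    """Deviation field for given boundary settings: one cheap matrix-vector
    product instead of a full FE solve, enabling interactive rates."""
    return measured + settings @ unit_responses

field = adjusted_field(np.array([0.2, -0.1, 0.0, 0.3]))
```

    The expensive FE solves happen once per boundary offline; each interactive adjustment then costs only a matrix-vector product over the mesh nodes.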

    Search for Light Higgs Boson in the Yukawa process

    The Yukawa process e+e- --> f fbar h/A, where f is a tau lepton or a b quark, is used to search for light scalar or pseudoscalar Higgs bosons in the framework of general two-Higgs-doublet models. The analysis is based on the data sample collected by the ALEPH experiment at LEP at centre-of-mass energies at and around the Z peak. Since no deviation from the standard model expectations has been observed in the data, this search results in a new 95% confidence-level excluded region of the (mA, tan beta) plane in any two-Higgs-doublet model.
