14,973 research outputs found

    A Unique Hybrid Propulsion System Design for Large Space Boosters

    A study was made of the application of hybrid rocket propulsion technology to large space boosters. Safety, reliability, cost, and performance comprised the evaluation criteria, in order of relative importance. The effort compared the so-called classic hybrid design approach with a novel approach that uses a fuel-rich gas generator as the fuel source. Other trades included various fuel/oxidizer combinations, pressure-fed versus pump-fed oxidizer delivery systems, and reusable versus expendable booster systems. Following this initial trade study, a point design was generated, based on a gas generator-type fuel grain with pump-fed liquid oxygen. This point design provided a mechanism for examining how to implement the gas generator approach and for further defining details of the design. Subsequently, a system trade study was performed that determined the sensitivity of the design to various design parameters and predicted optimum values for those parameters. The study concluded that a gas generator hybrid booster design offers enhanced safety and reliability over current or proposed solid booster designs while providing equal or greater performance. These improvements can be achieved at considerably lower cost than liquid booster designs of equivalent capability.

    Optimal modelling and experimentation for the improved sustainability of microfluidic chemical technology design

    Optimization of the dynamics and control of chemical processes holds the promise of improved sustainability for chemical technology by minimizing resource wastage. Anecdotally, chemical plants may be substantially overdesigned, say by 35-50%, because designers account for uncertainties by providing greater flexibility. Once the plant is commissioned, process systems engineers can use techniques of nonlinear dynamics analysis to recoup some of this overdesign by optimizing plant operation through tighter control. At the design stage, coupling experimentation with data assimilation into the model, whilst using the partially informed, semi-empirical model to predict, via parametric sensitivity studies, which experiments to run, should optimally improve the model. This approach has been demonstrated for optimal experimentation, but only for a differential algebraic model of the process; typically, such models for online monitoring have been limited to low dimensions. Recently it has been demonstrated that inverse methods such as data assimilation can be applied to PDE systems with algebraic constraints, a substantially more complicated parameter estimation problem using finite element multiphysics modelling. Parametric sensitivity from such semi-empirical models can be used to predict the optimum placement of sensors whose data optimally inform the model for a microfluidic sensor system. This coupled optimal modelling and experimentation procedure is ambitious in the scale of the modelling problem, as well as in the scale of the application: a microfluidic device. In general, microfluidic devices are sufficiently easy to fabricate, control, and monitor that they form an ideal platform for developing high-dimensional spatio-temporal models coupled simultaneously with experimentation.
    As chemical microreactors already promise low raw-materials wastage through tight control of reagent contacting, improved design techniques should be able to augment optimal control systems to achieve very low resource wastage. In this paper, we discuss how the paradigm for optimal modelling and experimentation should be developed and foreshadow the exploitation of this methodology for the development of chemical microreactors and microfluidic sensors for online monitoring of chemical processes. Improvement in both of these areas bodes well for the sustainability of chemical processes through innovative technology. (C) 2008 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved.
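    A minimal sketch of the parametric-sensitivity idea behind sensor placement: a toy 1D diffusion profile stands in for the finite element multiphysics model, and the candidate sensor position where the signal is most sensitive to the uncertain parameter is selected. The model, diffusivity value, and candidate grid are all illustrative assumptions, not from the paper:

    ```python
    import numpy as np

    def concentration(x, t, D):
        # Toy 1D diffusion profile from a point release at x=0 (hypothetical
        # stand-in for a finite-element multiphysics model).
        return np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

    def sensitivity_to_D(x, t, D, eps=1e-6):
        # Central finite-difference parametric sensitivity d(signal)/dD at each
        # candidate sensor position.
        return (concentration(x, t, D + eps) - concentration(x, t, D - eps)) / (2 * eps)

    candidates = np.linspace(0.1, 2.0, 20)          # candidate positions (assumed)
    s = np.abs(sensitivity_to_D(candidates, t=1.0, D=0.5))
    best = candidates[np.argmax(s)]                 # place the sensor where data informs D most
    print(best)
    ```

    Data collected at the most sensitive location constrains the parameter estimate most strongly, which is the essence of using the semi-empirical model to choose the next experiment.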

    Regression Monte Carlo for Microgrid Management

    We study an islanded microgrid system designed to supply a small village with the power produced by photovoltaic panels, wind turbines, and a diesel generator. A battery storage device is used to shift power from times of high renewable production to times of high demand. We introduce a methodology to solve the microgrid management problem using different variants of Regression Monte Carlo algorithms and use numerical simulations to infer results about the optimal design of the grid. (CEMRACS 2017 Summer project proceedings.)
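    A compact sketch of the backward-induction Regression Monte Carlo idea on a toy battery-dispatch problem: simulate residual-demand paths forward, then step backward in time, regressing the continuation value on the stochastic state to estimate its conditional expectation. The demand process, cost model, grids, and quadratic regression basis are all illustrative assumptions, not the paper's formulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 24, 500                        # hours, Monte Carlo paths
    levels = np.linspace(0.0, 1.0, 11)    # discretized battery levels (assumed grid)
    actions = np.array([-0.1, 0.0, 0.1])  # discharge / idle / charge (battery fraction)

    # Residual demand (load minus renewables) as a toy mean-reverting process.
    R = np.zeros((T, N))
    R[0] = rng.normal(0.0, 0.3, N)
    for t in range(1, T):
        R[t] = 0.8 * R[t - 1] + rng.normal(0.0, 0.3, N)

    def diesel_cost(residual, battery_flow):
        # Diesel covers whatever demand the battery does not; linear fuel cost (assumed).
        return np.maximum(residual + battery_flow, 0.0)

    # Backward induction: for each battery level, regress the next-step value on
    # residual demand (the stochastic state) to approximate its conditional expectation.
    V = np.zeros((len(levels), N))        # terminal value is zero
    for t in range(T - 1, -1, -1):
        V_new = np.empty_like(V)
        for i, b in enumerate(levels):
            best = np.full(N, np.inf)
            for a in actions:
                flow = np.clip(b + a, 0.0, 1.0) - b          # respect capacity limits
                j = int(np.argmin(np.abs(levels - (b + flow))))
                if t < T - 1:
                    cont = np.polyval(np.polyfit(R[t], V[j], 2), R[t])
                else:
                    cont = 0.0
                best = np.minimum(best, diesel_cost(R[t], flow) + cont)
            V_new[i] = best
        V = V_new

    print(round(float(V[0].mean() - V[-1].mean()), 3))  # cost saved by starting fully charged
    ```

    The regression step replaces a nested Monte Carlo estimate of the continuation value, which is what makes the method tractable over many time steps and paths.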

    Low weight additive manufacturing FBG accelerometer: design, characterization and testing

    Structural Health Monitoring is the process of damage detection and structural characterization by any type of on-board sensor. Fibre Bragg Gratings (FBGs) are increasing in popularity due to their many advantages: easy multiplexing, negligible weight and size, high sensitivity, immunity to electromagnetic fields, etc. FBGs directly measure strain and temperature, and other magnitudes can be measured by adaptation of the Bragg condition. In particular, acceleration is of special importance for dynamic analysis. In this work, a low-weight accelerometer has been developed using an FBG. It consists of a hexagonal-lattice hollow cylinder designed with a resonance frequency above 500 Hz. A Finite Element Model (FEM) was used to analyse the dynamic behaviour of the sensor. It was then modelled in CAD software and exported to additive manufacturing machines. Finally, a characterization test campaign was carried out, obtaining a sensitivity of 19.65 pm/g. As a case study, this paper presents the experimental modal analysis of the wing of an Unmanned Aerial Vehicle, comparing measurements from piezoelectric and MEMS accelerometers, embedded FBG sensors, and the developed FBG accelerometer. (Funding: Ministerio de Economía y Competitividad BIA2013-43085-P and BIA2016-75042-C2-1-)
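    As a back-of-the-envelope illustration of how such a sensor is read out, the Bragg condition fixes the reflected wavelength, and the reported 19.65 pm/g sensitivity converts a measured wavelength shift to acceleration. The effective index and grating period below are hypothetical telecom-band values, not figures from the paper:

    ```python
    # Reported characterization result from the paper; the rest is illustrative.
    SENSITIVITY_PM_PER_G = 19.65

    def bragg_wavelength(n_eff, grating_period_nm):
        """Bragg condition: lambda_B = 2 * n_eff * Lambda (result in nm)."""
        return 2.0 * n_eff * grating_period_nm

    def acceleration_g(delta_lambda_pm):
        """Convert a measured Bragg wavelength shift (pm) to acceleration (g)."""
        return delta_lambda_pm / SENSITIVITY_PM_PER_G

    print(bragg_wavelength(1.447, 535.0))   # hypothetical grating near 1548 nm
    print(acceleration_g(39.3))             # 39.3 pm shift -> 2 g
    ```

    In practice the shift is measured by an interrogator tracking the reflected peak, and temperature cross-sensitivity must be compensated before applying the conversion.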

    Mechanism Deduction from Noisy Chemical Reaction Networks

    We introduce KiNetX, a fully automated meta-algorithm for the kinetic analysis of complex chemical reaction networks derived from semi-accurate but efficient electronic structure calculations. It is designed to (i) accelerate the automated exploration of such networks, and (ii) cope with model-inherent errors in electronic structure calculations on elementary reaction steps. We developed and implemented KiNetX to possess three features. First, KiNetX evaluates the kinetic relevance of every species in a (yet incomplete) reaction network to confine the search for new elementary reaction steps to only those species considered possibly relevant. Second, KiNetX identifies and eliminates all kinetically irrelevant species and elementary reactions to reduce a complex network graph to a comprehensible mechanism. Third, KiNetX estimates the sensitivity of species concentrations toward changes in individual rate constants (derived from relative free energies), which allows us to systematically select the most efficient electronic structure model for each elementary reaction given a predefined accuracy. The novelty of KiNetX lies in the rigorous propagation of correlated free-energy uncertainty through all steps of our kinetic analysis. To examine the performance of KiNetX, we developed AutoNetGen, which semirandomly generates chemistry-mimicking reaction networks by encoding chemical logic into their underlying graph structure. AutoNetGen allows us to consider a vast number of distinct chemistry-like scenarios and, hence, to assess the importance of rigorous uncertainty propagation in a statistical context. Our results reveal that KiNetX reliably supports the deduction of product ratios, dominant reaction pathways, and possibly other network properties from semi-accurate electronic structure data. (36 pages, 4 figures, 2 tables.)
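    A toy illustration of the third feature, the sensitivity of species concentrations to individual rate constants, on an assumed A → B → C network integrated with explicit Euler. This sketches the general idea only; it is not the KiNetX implementation, and the rate constants are arbitrary:

    ```python
    # Toy kinetic model: A -> B -> C with rate constants k1, k2 (assumed values).

    def integrate(k1, k2, dt=1e-3, t_end=5.0):
        # Explicit Euler for dA/dt = -k1*A, dB/dt = k1*A - k2*B, dC/dt = k2*B.
        A, B, C = 1.0, 0.0, 0.0
        for _ in range(int(t_end / dt)):
            dA = -k1 * A
            dB = k1 * A - k2 * B
            dC = k2 * B
            A, B, C = A + dA * dt, B + dB * dt, C + dC * dt
        return C

    def sensitivity(k1, k2, eps=1e-5):
        # d[C]/d(k1) by central differences; a large value flags a rate constant
        # whose underlying free energy deserves a more accurate electronic
        # structure model.
        return (integrate(k1 + eps, k2) - integrate(k1 - eps, k2)) / (2 * eps)

    print(integrate(1.0, 0.5))    # product concentration C at t = 5
    print(sensitivity(1.0, 0.5))  # positive: faster A -> B yields more C
    ```

    In a real network the same finite-difference (or adjoint) sensitivity is computed for every rate constant, and the uncertainty budget is allocated to the most sensitive ones.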

    Vulnerability Analysis of False Data Injection Attacks on Supervisory Control and Data Acquisition and Phasor Measurement Units

    The electric power system is monitored via an extensive network of sensors in tandem with data processing algorithms, i.e., an intelligent cyber layer, that enables continual observation and control of the physical system to ensure reliable operation. This data collection and processing system is vulnerable to cyber-attacks that impact the system operation status and lead to serious physical consequences, including systematic problems and failures. This dissertation studies the physical consequences of unobservable false data injection (FDI) attacks, wherein the attacker maliciously changes supervisory control and data acquisition (SCADA) or phasor measurement unit (PMU) measurements, on the electric power system. In this context, the dissertation is divided into three parts: the first two parts focus on FDI attacks on SCADA and the last part focuses on FDI attacks on PMUs. The first part studies the physical consequences of FDI attacks on SCADA measurements designed with limited system information. The attacker is assumed to have perfect knowledge inside a sub-network of the entire system. Two classes of attacks with different assumptions on the attacker's knowledge outside of the sub-network are introduced. In particular, for the second class of attacks, the attacker is assumed to have no information outside of the attack sub-network, but can perform multiple linear regression to learn the relationship between the external network and the attack sub-network from historical data. To determine the worst possible consequences of both classes of attacks, a bi-level optimization problem is introduced wherein the first level models the attacker's goal and the second level models the system response. The second part of the dissertation analyzes the vulnerability of systems to FDI attacks from the perspective of the system.
    To this end, an off-line vulnerability analysis framework is proposed to identify the subsets of the test system that are more prone to FDI attacks. The third part studies the vulnerability of PMUs to FDI attacks. Two classes of more sophisticated FDI attacks that capture the temporal correlation of PMU data are introduced. Such attacks are designed via a convex optimization problem and can always bypass both the bad data detector and the low-rank decomposition (LD) detector. (Doctoral Dissertation, Electrical Engineering, 201)
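    An illustrative sketch of the multiple-linear-regression step used by the second class of attacks: the attacker fits a linear map from sub-network states to external measurements using historical data, then uses it to craft external values consistent with an injected state change. All dimensions, matrices, and noise levels below are synthetic assumptions, not the dissertation's formulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_hist, n_sub, n_ext = 200, 4, 6
    H = rng.normal(size=(n_ext, n_sub))       # unknown true coupling (used only to generate data)
    X = rng.normal(size=(n_hist, n_sub))      # historical sub-network states
    Y = X @ H.T + 0.01 * rng.normal(size=(n_hist, n_ext))  # noisy external measurements

    # Multiple linear regression: least-squares estimate of the coupling matrix.
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    H_hat = B.T

    # External measurement values predicted for a maliciously crafted sub-network
    # state, i.e., values that keep the injection consistent with the learned model.
    x_attack = rng.normal(size=n_sub)
    y_consistent = H_hat @ x_attack
    print(np.linalg.norm(H_hat - H))          # small residual: the coupling is recovered
    ```

    The point of the sketch is that even without topology information outside the sub-network, historical correlations can substitute for the missing model, which is what makes this attack class dangerous.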