An Assessment of PIER Electric Grid Research 2003-2014 White Paper
This white paper describes the circumstances in California around the turn of the 21st century that led the California Energy Commission (CEC) to direct additional Public Interest Energy Research funds to address critical electric grid issues, especially those arising from integrating high penetrations of variable renewable generation with the electric grid. It contains an assessment of the beneficial science and technology advances of the resultant portfolio of electric grid research projects administered under the direction of the CEC by a competitively selected contractor, the University of California's California Institute for Energy and the Environment, from 2003 to 2014.
Supporting group maintenance through prognostics-enhanced dynamic dependability prediction
Condition-based maintenance strategies adapt maintenance planning through the integration of online condition monitoring of assets. The accuracy and cost-effectiveness of these strategies can be improved by integrating prognostics predictions and by grouping maintenance actions, respectively. In complex industrial systems, however, effective condition-based maintenance is intricate. Such systems are composed of repairable assets which can fail in different ways, with various effects, and are typically governed by dynamics which include time-dependent and conditional events. In this context, system reliability prediction is complex, and effective maintenance planning is virtually impossible prior to system deployment and hard even in the case of condition-based maintenance. Addressing these issues, this paper presents an online system maintenance method that takes the system dynamics into account. The method employs an online predictive diagnosis algorithm to distinguish between critical and non-critical assets. A prognostics-updated method for predicting system health is then employed to yield well-informed, more accurate, condition-based suggestions for the maintenance of critical assets and for the group-based reactive repair of non-critical assets. The cost-effectiveness of the approach is discussed in a case study from the power industry.
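The asset-triage idea the abstract describes can be sketched very simply: assets whose prognostics-predicted remaining useful life (RUL) falls below a criticality horizon receive individual condition-based maintenance, while the rest are pooled for grouped reactive repair. The function, asset names, and threshold below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of prognostics-based asset triage.
# All names and numbers are invented for illustration.

def triage_assets(rul_predictions, criticality_horizon):
    """Split assets into critical and non-critical by predicted RUL (hours)."""
    critical = {a: rul for a, rul in rul_predictions.items()
                if rul <= criticality_horizon}
    non_critical = {a: rul for a, rul in rul_predictions.items()
                    if rul > criticality_horizon}
    return critical, non_critical

# Toy RUL predictions from an (assumed) online prognostics algorithm
rul = {"pump_1": 120.0, "valve_3": 15.0, "motor_2": 400.0}
critical, grouped = triage_assets(rul, criticality_horizon=100.0)
print(sorted(critical))  # assets flagged for individual condition-based maintenance
print(sorted(grouped))   # assets eligible for group-based reactive repair
```

A real implementation would refresh the RUL estimates online and re-run the triage as condition-monitoring data arrives.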
Aging concrete structures: a review of mechanics and concepts
The safe and cost-efficient management of our built infrastructure is a challenging task, considering the expected service life of at least 50 years. In spite of time-dependent changes in material properties, deterioration processes and changing demands from society, structures need to satisfy many technical requirements related to serviceability, durability, sustainability and bearing capacity. This review paper summarizes the challenges associated with the safe design and maintenance of aging concrete structures and gives an overview of some of the concepts and approaches being developed to address these challenges.
Bayesian Updating, Model Class Selection and Robust Stochastic Predictions of Structural Response
A fundamental issue when predicting structural response by using mathematical models is how to treat both modeling and excitation uncertainty. A general framework for this is presented which uses probability as a multi-valued conditional logic for quantitative plausible reasoning in the presence of uncertainty due to incomplete information. The fundamental probability models that represent the structure's uncertain behavior are specified by the choice of a stochastic system model class: a set of input-output probability models for the structure and a prior probability distribution over this set that quantifies the relative plausibility of each model. A model class can be constructed from a parameterized deterministic structural model by stochastic embedding utilizing Jaynes' Principle of Maximum Information Entropy. Robust predictive analyses use the entire model class, with the probabilistic predictions of each model weighted by its prior probability or, if structural response data are available, by its posterior probability from Bayes' Theorem for the model class. Additional robustness to modeling uncertainty comes from combining the robust predictions of each model class in a set of competing candidates, weighted by the prior or posterior probability of the model class, the latter being computed from Bayes' Theorem. This higher-level application of Bayes' Theorem automatically applies a quantitative Ockham's razor that penalizes the data-fit of more complex model classes that extract more information from the data. Robust predictive analyses involve integrals over high-dimensional spaces that usually must be evaluated numerically. Published applications have used Laplace's method of asymptotic approximation or Markov Chain Monte Carlo algorithms.
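The higher-level application of Bayes' Theorem mentioned above can be illustrated numerically: the posterior probability of model class M_j given data D is proportional to p(D|M_j) p(M_j), where the evidence p(D|M_j) is what embodies the quantitative Ockham razor. The log-evidence values below are invented for illustration; only the normalization arithmetic is shown.

```python
import math

# Minimal sketch (not the paper's implementation) of posterior weighting
# over competing model classes: p(M_j|D) ∝ p(D|M_j) p(M_j).

def model_class_posteriors(log_evidences, priors):
    # Work in log space, then shift by the max for numerical stability
    log_unnorm = [le + math.log(p) for le, p in zip(log_evidences, priors)]
    m = max(log_unnorm)
    weights = [math.exp(x - m) for x in log_unnorm]
    total = sum(weights)
    return [w / total for w in weights]

# Three candidate model classes with made-up log-evidences and equal priors;
# the class with the largest evidence dominates the robust prediction.
post = model_class_posteriors([-10.2, -9.5, -12.0], [1/3, 1/3, 1/3])
```

In practice the evidence integrals are exactly the high-dimensional integrals the abstract says must be evaluated by Laplace approximation or Markov Chain Monte Carlo.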
Seismic reliability assessment of classical columns subjected to near-fault ground motions
A methodology for the performance-based seismic risk assessment of classical columns is presented. Despite their apparent instability, classical columns are, in general, earthquake resistant, as proven by the fact that many classical monuments have survived many strong earthquakes over the centuries. Nevertheless, the quantitative assessment of their reliability and the understanding of their dynamic behavior are not easy, because of the fundamentally nonlinear character and the sensitivity of their response. In this paper, a seismic risk assessment is performed for a multidrum column using Monte Carlo simulation with synthetic ground motions. The ground motions adopted contain a high- and a low-frequency component, combining the stochastic method with a simple analytical pulse model to simulate the directivity pulse contained in near-source ground motions. The deterministic model for the numerical analysis of the system is three-dimensional and is based on the Discrete Element Method. Fragility curves are produced conditional on magnitude and distance from the fault, and also on scalar intensity measures, for two engineering demand parameters: one concerning the intensity of the response during the ground shaking and the other the residual deformation of the column. Three performance levels are assigned to each engineering demand parameter. The fragility analysis demonstrated some of the salient features of these spinal systems under near-fault seismic excitations, for example their decreased vulnerability for very strong earthquakes of magnitude 7 or larger. The analysis provides useful results regarding the seismic reliability of classical monuments and decision making during the restoration process.
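The Monte Carlo fragility computation described above reduces, for each intensity-measure level, to counting the fraction of simulated responses whose engineering demand parameter (EDP) exceeds a performance-level threshold. The toy "response model" below is an assumption purely for demonstration; the paper's actual demand comes from 3D Discrete Element simulations.

```python
import random

# Illustrative Monte Carlo fragility estimate. The lognormal demand model
# is an invented stand-in for the paper's Discrete Element analyses.

def fragility_at_im(im, threshold, n_samples=20000, seed=0):
    """Fraction of samples whose EDP exceeds the threshold at a given IM."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_samples):
        # Toy demand: median response grows with IM, fixed dispersion 0.4
        edp = im * rng.lognormvariate(0.0, 0.4)
        if edp > threshold:
            exceed += 1
    return exceed / n_samples

# Fragility curve points for three intensity-measure levels
curve = [fragility_at_im(im, threshold=1.0) for im in (0.5, 1.0, 2.0)]
```

The resulting exceedance probabilities increase with the intensity measure, which is the monotone shape a fragility curve for a single performance level is expected to have.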
Alternative sweetener from curculigo fruits
This study gives an overview on the advantages of Curculigo Latifolia as an alternative sweetener and a health product. The purpose of this research is to provide another option to the people who suffer from diabetes. In this research, Curculigo Latifolia was chosen, due to its unique properties and widely known species in Malaysia. In order to obtain the sweet protein from the fruit, it must go through a couple of procedures. First we harvested the fruits from the Curculigo trees that grow wildly in the garden. Next, the Curculigo fruits were dried in the oven at 50 0C for 3 days. Finally, the dried fruits were blended in order to get a fine powder. Curculin is a sweet protein with a taste-modifying activity of converting sourness to sweetness. The curculin content from the sample shown are directly proportional to the mass of the Curculigo fine powder. While the FTIR result shows that the sample spectrum at peak 1634 cm–1 contains secondary amines. At peak 3307 cm–1 contains alkynes
Cross-layer system reliability assessment framework for hardware faults
System reliability estimation during early design phases facilitates informed decisions on the integration of effective protection mechanisms against different classes of hardware faults. When not all system abstraction layers (technology, circuit, microarchitecture, software) are factored into such an estimation model, the delivered reliability reports must be excessively pessimistic and thus lead to unacceptably expensive, over-designed systems. We propose a scalable, cross-layer methodology and a supporting suite of tools for accurate but fast estimation of computing system reliability. The backbone of the methodology is a component-based Bayesian model, which effectively calculates system reliability based on the masking probabilities of individual hardware and software components, considering their complex interactions. Our detailed experimental evaluation for different technologies, microarchitectures, and benchmarks demonstrates that the proposed model delivers very accurate reliability estimations (FIT rates) compared to statistically significant but slow fault injection campaigns at the microarchitecture level.
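The cross-layer masking idea can be sketched back-of-the-envelope: each component contributes raw faults at some rate, each abstraction layer masks a fraction of them, and the unmasked residue summed over components gives a system FIT rate. The component names, rates, and masking probabilities below are assumptions for illustration, not the paper's measured values, and the simple product ignores the cross-component interactions the Bayesian model captures.

```python
# Toy cross-layer FIT estimate: raw faults attenuated by per-layer masking.
# All numbers are invented for illustration.

def system_fit(components):
    """components: list of (raw_fit, [masking_prob_per_layer])."""
    total = 0.0
    for raw_fit, masks in components:
        survive = raw_fit
        for p_mask in masks:
            survive *= (1.0 - p_mask)  # fraction not masked at this layer
        total += survive
    return total

# e.g. two hardware components with circuit/microarchitecture/software masking
fit = system_fit([
    (1000.0, [0.3, 0.5, 0.8]),  # 1000 * 0.7 * 0.5 * 0.2 -> 70 FIT survive
    (500.0,  [0.1, 0.6, 0.5]),  # 500 * 0.9 * 0.4 * 0.5 -> 90 FIT survive
])
```

Dropping a layer from the masks list (e.g. ignoring software masking) inflates the estimate, which is exactly the over-pessimism the abstract warns about.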
An empirical learning-based validation procedure for simulation workflow
Simulation workflow is a top-level model for the design and control of a simulation process. It connects multiple simulation components with timing and interaction restrictions to form a complete simulation system. Before the construction and evaluation of the component models, the validation of the upper-layer simulation workflow is of the utmost importance in a simulation system. However, methods specifically for validating simulation workflows are very limited. Many of the existing validation techniques are domain-dependent, with cumbersome questionnaire design and expert scoring. Therefore, this paper presents an empirical learning-based validation procedure to implement a semi-automated evaluation of simulation workflows. First, representative features of general simulation workflows and their relations with validation indices are proposed. The calculation of workflow credibility based on the Analytic Hierarchy Process (AHP) is then introduced. To make full use of historical data and implement more efficient validation, four learning algorithms, including back-propagation neural network (BPNN), extreme learning machine (ELM), evolving neo-fuzzy neuron (eNFN) and fast incremental Gaussian mixture model (FIGMN), are introduced for constructing the empirical relation between workflow credibility and workflow features. A case study on a landing-process simulation workflow is established to test the feasibility of the proposed procedure. The experimental results also provide a useful overview of state-of-the-art learning algorithms for the credibility evaluation of simulation models.
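The AHP step the abstract mentions can be illustrated with the standard column-normalization approximation: priority weights for the validation indices are obtained by normalizing each column of a pairwise comparison matrix and averaging across rows. The matrix below is an invented example on Saaty's 1-9 scale, not the paper's actual judgments.

```python
# Minimal AHP weight sketch via normalized-column averaging.
# The pairwise judgments are assumptions for illustration.

def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise comparison matrix."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(row) / n for row in normalized]

# Three hypothetical validation indices compared pairwise (reciprocal matrix)
pairwise = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]
weights = ahp_weights(pairwise)  # weights sum to 1, ordered by importance
```

A full AHP application would also check the consistency ratio of the judgments; the eigenvector method gives slightly different weights but the same ranking for a consistent matrix.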
Impact Assessment of Hypothesized Cyberattacks on Interconnected Bulk Power Systems
The first-ever cyberattack on the Ukrainian power grid proved its devastating potential by hacking into critical cyber assets. With administrative privileges for accessing substation networks and local control centers, one intelligent form of coordinated cyberattack is to execute a series of disruptive switching actions across multiple substations using compromised supervisory control and data acquisition (SCADA) systems. These actions can have significant impacts on an interconnected power grid. Unlike previous power blackouts, such high-impact initiating events can aggravate operating conditions, initiating instability that may lead to system-wide cascading failure. A systematic evaluation of "nightmare" scenarios is highly desirable for asset owners seeking to manage and prioritize the maintenance of, and investment in, protecting their cyberinfrastructure. This survey paper is a conceptual expansion of the real-time monitoring, anomaly detection, impact analyses, and mitigation (RAIM) framework that emphasizes the resulting impacts on both the steady-state and dynamic aspects of power system stability. Hypothetically, we associate the combinatorial analyses of the steady state of substation/component outages with the dynamics of the sequential switching orders as part of the permutation. The expanded framework includes (1) critical/noncritical combination verification, (2) cascade confirmation, and (3) combination re-evaluation. This paper ends with a discussion of open issues for metrics and future designs pertaining to the impact quantification of cyber-related contingencies.
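The combinatorial space the framework screens can be made concrete: every k-subset of substations is a candidate outage combination, and each ordering of a combination is a candidate sequential switching attack, which is why permutations matter for the dynamic assessment. The substation names below are placeholders.

```python
from itertools import combinations, permutations

# Illustrative enumeration of N-k outage combinations and their switching
# orders. Substation identifiers are placeholders, not real assets.

def attack_scenarios(substations, k):
    """All ordered sequences of k distinct substation outages."""
    scenarios = []
    for combo in combinations(substations, k):
        for order in permutations(combo):
            scenarios.append(order)
    return scenarios

subs = ["S1", "S2", "S3", "S4"]
seqs = attack_scenarios(subs, 2)
print(len(seqs))  # C(4,2) * 2! = 12 ordered switching sequences
```

Exhaustive enumeration grows factorially, which is why the framework's verification and re-evaluation stages prune non-critical combinations before the dynamic cascade analysis.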