Energy rating of a water pumping station using multivariate analysis
Among water management policies, preserving energy and limiting the energy demand of water supply and treatment systems play a key role. When focusing on energy, the customary way to measure the performance of water supply systems relies on component-based energy indicators. This approach is unfit to account for interactions occurring among system elements, or between the system and its environment. On the other hand, the development of information technology has made increasingly large amounts of data available, typically gathered from distributed sensor networks in so-called smart grids. In this context, data-intensive methodologies open up complex network modeling approaches and address the interpretation and analysis of the large amounts of data produced by smart sensor networks.
From this perspective, the present work applies data-intensive techniques to the energy analysis of a water management network.
The purpose is to provide new metrics for the energy rating of the system and to offer insight into the dynamics of its operations. The study applies a neural network to predict energy demand, using flowrate and vibration data as predictor variables.
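A minimal sketch of the kind of model the abstract describes: a small feed-forward network regressing energy demand on flowrate and vibration readings. The synthetic data, the assumed demand relation, the architecture, and the training settings are all invented for illustration; the study's actual network and sensor data are not reproduced here.

```python
import numpy as np

# Synthetic stand-in for the station's sensor data: two normalised
# predictors (flowrate, vibration) and an assumed energy-demand relation.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))       # [flowrate, vibration]
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2         # illustrative demand relation

# One hidden layer of 8 tanh units, linear output.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8,));   b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)                   # hidden activations
    return h, h @ W2 + b2                      # predicted energy demand

lr = 0.1
for _ in range(2000):                          # full-batch gradient descent on MSE
    h, pred = forward(X)
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h ** 2)    # backprop through tanh
    gW1 = X.T @ dh / len(X);  gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(round(rmse, 3))                          # small residual on the training data
```

In the study the predictors would come from the smart sensor network rather than a random generator, and the trained model would serve as the energy-rating metric.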
An Overview of the Use of Neural Networks for Data Mining Tasks
In recent years the area of data mining has experienced considerable demand for technologies that extract knowledge from large and complex data sources. There is substantial commercial interest, as well as active research, in developing new and improved approaches for extracting information, relationships, and patterns from datasets. Artificial Neural Networks (NN) are popular biologically inspired intelligent methodologies whose classification, prediction and pattern recognition capabilities have been utilised successfully in many areas, including science, engineering, medicine, business, banking, and telecommunications. This paper highlights, from a data mining perspective, the implementation of NN using supervised and unsupervised learning for pattern recognition, classification, prediction and cluster analysis, and focuses the discussion on their usage in bioinformatics and financial data analysis tasks.
Markers of cognitive function in individuals with metabolic disease: Morquio Syndrome and Tyrosinemia Type III
We characterized cognitive function in two metabolic diseases. MPS–IVa (mucopolysaccharidosis IVa, Morquio) and tyrosinemia type III individuals were assessed using tasks of attention, language and oculomotor function. MPS–IVa individuals were slower in visual search, but the display size effects were normal, and slowing was not due to long reaction times (ruling out slow item processing or distraction). Maintaining gaze in an oculomotor task was difficult. Results implicated sustained attention and task initiation or response processing. Shifting attention, accumulating evidence and selecting targets were unaffected. Visual search was also slowed in tyrosinemia type III, and patterns in visual search and fixation tasks pointed to sustained attention impairments, although there were differences from MPS–IVa. Language was impaired in tyrosinemia type III but not MPS–IVa. Metabolic diseases produced selective cognitive effects. Our results, incorporating new methods for developmental data and model selection, illustrate how cognitive data can contribute to understanding function in biochemical brain systems.
Trust beyond reputation: A computational trust model based on stereotypes
Models of computational trust support users in taking decisions. They are commonly used to guide users' judgements in online auction sites, or to determine the quality of contributions in Web 2.0 sites. However, most existing systems require historical information about the past behavior of the specific agent being judged. In contrast, in real life, to anticipate and predict a stranger's actions in the absence of such behavioral history, we often use our "instinct": essentially, stereotypes developed from our past interactions with other "similar" persons. In this paper, we propose StereoTrust, a computational trust model inspired by stereotypes as used in real life. A stereotype contains certain features of agents and an expected outcome of the transaction. When facing a stranger, an agent derives its trust by aggregating stereotypes matching the stranger's profile. Since stereotypes are formed locally, recommendations stem from the trustor's own personal experiences and perspective. Historical behavioral information, when available, can be used to refine the analysis. In our experiments on the Epinions.com dataset, StereoTrust compares favorably with existing trust models that use different kinds of information and more complete historical information.
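The aggregation step described above can be sketched in a few lines. Everything here is a hypothetical illustration, not the paper's exact formulation: a stereotype pairs a feature with the trustor's locally observed outcome counts, and trust in a stranger is the evidence-weighted mean over the stereotypes its profile matches.

```python
# Hypothetical stereotype store: (feature, positive outcomes, total interactions),
# built from the trustor's own past transactions. Names are illustrative.
stereotypes = [
    ("sells_electronics", 8, 10),
    ("new_account",       1,  5),
    ("verified_email",    9, 12),
]

def stereotype_trust(profile):
    """Trust in a stranger: aggregate the stereotypes matching its profile,
    weighting each by the amount of evidence (interactions) behind it."""
    matched = [(pos, n) for feat, pos, n in stereotypes if feat in profile]
    if not matched:
        return 0.5                    # no stereotype applies: neutral prior
    total = sum(n for _, n in matched)
    return sum(pos for pos, _ in matched) / total

trust = stereotype_trust({"sells_electronics", "new_account"})
print(round(trust, 3))                # (8 + 1) / (10 + 5) = 0.6
```

Refining this estimate with direct behavioral history, when it exists, would amount to mixing in the stranger's own outcome counts alongside the matched stereotypes.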
An Assessment to Benchmark the Seismic Performance of a Code-Conforming Reinforced-Concrete Moment-Frame Building
This report describes a state-of-the-art performance-based earthquake engineering methodology
that is used to assess the seismic performance of a four-story reinforced concrete (RC) office
building that is generally representative of low-rise office buildings constructed in highly seismic
regions of California. This “benchmark” building is assumed to be located at a site in the Los
Angeles basin, with a ductile RC special moment-resisting frame, designed according to
modern building codes and standards, as its seismic lateral system. The
building’s performance is quantified in terms of structural behavior up to collapse, structural and
nonstructural damage and associated repair costs, and the risk of fatalities and their associated
economic costs. To account for different building configurations that may be designed in
practice to meet requirements of building size and use, eight structural design alternatives are
used in the performance assessments.
Our performance assessments account for important sources of uncertainty in the ground
motion hazard, the structural response, structural and nonstructural damage, repair costs, and
life-safety risk. The ground motion hazard characterization employs a site-specific probabilistic
seismic hazard analysis and the evaluation of controlling seismic sources (through
disaggregation) at seven ground motion levels (encompassing return periods ranging from 7 to
2475 years). Innovative procedures for ground motion selection and scaling are used to develop
acceleration time history suites corresponding to each of the seven ground motion levels.
Structural modeling utilizes both “fiber” models and “plastic hinge” models. Structural
modeling uncertainties are investigated through comparison of these two modeling approaches,
and through variations in structural component modeling parameters (stiffness, deformation
capacity, degradation, etc.). Structural and nonstructural damage (fragility) models are based on
a combination of test data, observations from post-earthquake reconnaissance, and expert
opinion. Structural damage and repair costs are modeled for the RC beams, columns, and slab-column connections. Damage and associated repair costs are considered for some nonstructural
building components, including wallboard partitions, interior paint, exterior glazing, ceilings,
sprinkler systems, and elevators. The risk of casualties and the associated economic costs are
evaluated based on the risk of structural collapse, combined with recent models on earthquake
fatalities in collapsed buildings and accepted economic modeling guidelines for the value of
human life in loss and cost-benefit studies.
The principal results of this work pertain to the building collapse risk, damage and repair
cost, and life-safety risk. These are discussed successively as follows.
When accounting for uncertainties in structural modeling and record-to-record variability
(i.e., conditional on a specified ground shaking intensity), the structural collapse probabilities of
the various designs range from 2% to 7% for earthquake ground motions that have a 2%
probability of exceedance in 50 years (2475 years return period). When integrated with the
ground motion hazard for the southern California site, the collapse probabilities result in mean
annual frequencies of collapse in the range of [0.4 to 1.4]×10⁻⁴ for the various benchmark
building designs. In the development of these results, we made the following observations that
are expected to be broadly applicable:
(1) The ground motions selected for performance simulations must consider spectral
shape (e.g., through use of the epsilon parameter) and should appropriately account for
correlations between motions in both horizontal directions;
(2) Lower-bound component models, which are commonly used in performance-based
assessment procedures such as FEMA 356, can significantly bias collapse analysis results; it is
more appropriate to use median component behavior, including all aspects of the component
model (strength, stiffness, deformation capacity, cyclic deterioration, etc.);
(3) Structural modeling uncertainties related to component deformation capacity and
post-peak degrading stiffness can impact the variability of calculated collapse probabilities and
mean annual rates to a similar degree as record-to-record variability of ground motions.
Therefore, including the effects of such structural modeling uncertainties significantly increases
the mean annual collapse rates. We found this increase to be roughly four to eight times relative
to rates evaluated for the median structural model;
(4) Nonlinear response analyses revealed at least six distinct collapse mechanisms, the
most common of which was a story mechanism in the third story (differing from the multi-story
mechanism predicted by nonlinear static pushover analysis);
(5) Soil-foundation-structure interaction effects did not significantly affect the structural
response, which was expected given the relatively flexible superstructure and stiff soils.
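The hazard levels quoted above relate exceedance probability and return period through a Poisson occurrence assumption, which can be checked directly. In the sketch below, the 5% collapse probability is simply the mid-range of the 2%–7% figures quoted above, and the last line is only the collapse-rate contribution at that single shaking level; the report's 0.4–1.4×10⁻⁴ values integrate over the full hazard curve and are therefore larger.

```python
import math

# Under a Poisson occurrence model, an exceedance probability p over an
# exposure window of t years implies a return period T = -t / ln(1 - p).
def return_period(p, t_years):
    return -t_years / math.log(1.0 - p)

T = return_period(0.02, 50.0)
print(round(T))              # ~2475 years, matching the hazard level in the text

# Pairing a conditional collapse probability with the annual exceedance
# frequency 1/T of that level gives the rate contributed at that level alone.
p_collapse = 0.05            # assumed mid-range of the quoted 2%-7%
rate_at_level = p_collapse / T
print(f"{rate_at_level:.1e}")
```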
The potential for financial loss is considerable. Overall, the calculated expected annual
losses (EAL) reach up to $97,000 for the various code-conforming benchmark building
designs, or roughly 1% of the replacement cost of the building. Assuming a monetary value
of $3.5M per human life, the fatality risk translates to an EAL due to fatalities of $5,600
for the code-conforming designs. Relative to repair-cost EALs of up to $66,000, the
monetary value associated with life loss is small, suggesting that the governing factor
in this respect will be the maximum permissible life-safety risk deemed by the public
(or its representative government) to be appropriate for buildings.
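An EAL of the kind reported above comes from weighting the expected loss at each shaking level by that level's annual frequency and summing. A toy discretised version follows; a real assessment integrates continuously over the hazard curve, and the three levels and dollar amounts here are invented for illustration, not the report's values.

```python
# Toy expected-annual-loss (EAL) computation over three illustrative
# shaking levels: (annual occurrence frequency, expected repair loss in $).
levels = [
    (1 / 25,     20_000),   # frequent, light shaking
    (1 / 100,   300_000),   # rare, moderate shaking
    (1 / 2475, 4_000_000),  # very rare, severe shaking
]

# EAL = sum of (frequency x expected loss) over the shaking levels.
eal = sum(freq * loss for freq, loss in levels)
print(round(eal))  # 800 + 3000 + 1616 = 5416 dollars per year
```

Note how the frequent, cheap damage states can dominate the total: here the two lower levels contribute more than twice what the near-collapse level does.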
Although the focus of this report is on one specific building, it can be used as a reference
for other types of structures. This report is organized in such a way that the individual core
chapters (4, 5, and 6) can be read independently. Chapter 1 provides background on the
performance-based earthquake engineering (PBEE) approach. Chapter 2 presents the
implementation of the PBEE methodology of the PEER framework, as applied to the benchmark
building. Chapter 3 sets the stage for the choices of location and basic structural design. The subsequent core chapters focus on the hazard analysis (Chapter 4), the structural analysis
(Chapter 5), and the damage and loss analyses (Chapter 6). Although the report is self-contained,
readers interested in additional details can find them in the appendices.