Rapid gravity filtration operational performance assessment and diagnosis for preventative maintenance from on-line data
Rapid gravity filters, the final particulate barrier in many water treatment systems, are typically monitored using on-line turbidity, flow and head loss instrumentation. Current metrics for assessing filtration performance from on-line turbidity data were critically assessed and found not to summarise effectively and consistently the important properties of a turbidity distribution and the associated water quality risk. In the absence of a consistent risk function for turbidity in treated water, using on-line turbidity as an indicative rather than a quantitative variable appears more practical. Best practice suggests that filtered water turbidity should be maintained below 0.1 NTU; at higher turbidity we can be less confident of an effective particle and pathogen barrier. Based on this simple distinction, filtration performance has been described in terms of reliability and resilience by characterising the likelihood, frequency and duration of turbidity spikes greater than 0.1 NTU. This view of filtration performance is then used to frame operational diagnosis of unsatisfactory performance as a machine learning classification problem. Through calculation of operationally relevant predictor variables and application of the Classification and Regression Tree (CART) algorithm, the conditions associated with the greatest risk of poor filtration performance can be effectively modelled and communicated in operational terms. This provides a method for evidence-based decision support which can be used to efficiently manage individual pathogen barriers in a multi-barrier system.
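A minimal sketch of this kind of diagnosis, assuming hypothetical predictor variables (flow rate, head loss, hours since backwash) and simulated labels; the paper's actual feature set and data are not reproduced here:

```python
# Sketch only: hypothetical predictors and simulated labels, not the
# authors' dataset. Labels mark runs with turbidity spikes > 0.1 NTU.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(50, 120, n),   # flow rate (L/s) -- assumed predictor
    rng.uniform(0.5, 2.5, n),  # head loss (m) -- assumed predictor
    rng.uniform(0, 72, n),     # hours since backwash -- assumed predictor
])
# Simulated spike risk rising with run time and high head loss.
risk = 0.02 * X[:, 2] + 0.5 * (X[:, 1] > 2.0)
y = (rng.random(n) < 1 / (1 + np.exp(3 - risk))).astype(int)

# Shallow CART so the resulting rules stay operationally readable.
cart = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, y)
print(export_text(cart, feature_names=["flow_Ls", "head_loss_m",
                                       "hours_since_backwash"]))
```

The shallow tree matters: a depth-3 CART prints as a handful of if/then rules that operators can act on directly, which is the "communicated in operational terms" aspect the abstract emphasises.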
Energy rating of a water pumping station using multivariate analysis
Among water management policies, preserving and reducing the energy demand of water supply and treatment systems plays a key role. When focusing on energy, the customary metric for determining the performance of water supply systems is linked to the definition of component-based energy indicators. This approach cannot account for interactions occurring among system elements or between the system and its environment. At the same time, developments in information technology have made increasingly large amounts of data available, typically gathered from distributed sensor networks in so-called smart grids. In this context, data-intensive methodologies open the possibility of using complex network modelling approaches and address the issues related to the interpretation and analysis of the large amounts of data produced by smart sensor networks.
From this perspective, the present work aims to use data-intensive techniques in the energy analysis of a water management network. The purpose is to provide new metrics for the energy rating of the system and to offer insights into the dynamics of its operations. The study applies a neural network as a tool to predict energy demand, using flowrate and vibration data as predictor variables.
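A minimal sketch of such a predictor, with simulated flowrate and vibration signals standing in for the pumping-station data (the feature units and the synthetic relationship are assumptions, not the paper's):

```python
# Sketch only: synthetic data stands in for the station's sensor feed.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 1000
flow = rng.uniform(10, 100, n)        # pump flowrate (L/s) -- assumed
vibration = rng.normal(2.0, 0.5, n)   # RMS vibration (mm/s) -- assumed
# Simulated demand: roughly quadratic in flow plus a vibration term.
energy = 0.02 * flow**2 + 5 * vibration + rng.normal(0, 5, n)

X = np.column_stack([flow, vibration])
X_tr, X_te, y_tr, y_te = train_test_split(X, energy, random_state=0)

model = make_pipeline(
    StandardScaler(),  # scale inputs before the network
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                 random_state=0),
)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.2f}")
```

The gap between predicted and measured demand can then serve as an energy-rating signal: sustained deviation suggests the pumping system is drifting away from its expected operating behaviour.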
Expert Elicitation for Reliable System Design
This paper reviews the role of expert judgement in supporting reliability assessments within the systems engineering design process. Generic design processes are described to give the context, and the nature of the reliability assessments required in the different systems engineering phases is discussed. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context which are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential for future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large-scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process.
Comment: This paper is commented in [arXiv:0708.0285], [arXiv:0708.0287] and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
Analyzing the test process using structural coverage
A large, commercially developed FORTRAN program was modified to produce structural coverage metrics. The modified program was executed on a set of functionally generated acceptance tests and a large sample of operational usage cases. The resulting structural coverage metrics were combined with fault and error data to evaluate structural coverage. It was shown that, in this software environment, the functionally generated tests seem to be a good approximation of operational use. The relative proportions of the exercised statement subclasses change as the structural coverage of the program increases. A method was also proposed for evaluating whether two sets of input data exercise a program in a similar manner. Evidence was provided implying that, in this environment, faults revealed in a procedure are independent of the number of times the procedure is executed, and that it may be reasonable to use procedure coverage in software models that use statement coverage. Finally, the evidence suggests that it may be possible to use structural coverage to aid in the management of the acceptance test process.
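The abstract does not detail the proposed similarity method, so as a rough illustration of the underlying idea only, the sketch below compares two hypothetical per-procedure execution profiles using total variation distance:

```python
# Illustration only: invented execution counts, not data from the
# study's FORTRAN program, and not the paper's actual method.
import numpy as np

procedures = ["INIT", "PARSE", "SOLVE", "REPORT"]  # hypothetical names
acceptance = np.array([120, 800, 450, 120])   # counts under test suite
operational = np.array([100, 760, 500, 95])   # counts under field usage

# Normalise counts to proportions so the usage profiles are comparable.
p = acceptance / acceptance.sum()
q = operational / operational.sum()

# Total variation distance between the two execution profiles:
# 0 means identical usage patterns, 1 means completely disjoint.
tvd = 0.5 * np.abs(p - q).sum()
print(f"total variation distance: {tvd:.3f}")

# Procedure coverage as a coarse stand-in for statement coverage.
print("procedures exercised by both input sets:",
      [name for name, a, o in zip(procedures, acceptance, operational)
       if a and o])
```

A small distance would support the paper's observation that functionally generated tests approximate operational use; the procedure-level view reflects its finding that procedure coverage may reasonably substitute for statement coverage in some models.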
The safety case and the lessons learned for the reliability and maintainability case
This paper examines the safety case and the lessons learned for the reliability and maintainability case.
A survey of machine learning techniques applied to self-organizing cellular networks
In this paper, a survey of the past fifteen years of literature on Machine Learning (ML) algorithms applied to self-organizing cellular networks is performed. For future networks to overcome the limitations and address the issues of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self-Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks but also a classification of each paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use-case and discuss how each proposed solution performed. In addition, a comparison between the most commonly found ML algorithms in terms of certain SON metrics is performed, and general guidelines on when to choose each ML algorithm for each SON function are proposed. Lastly, this work also outlines future research directions and new paradigms that the use of more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.
Project Controls and Management Systems: current practice and how it has changed over the past decade
Project Controls and Management Systems (PCMS) refers to the ecosystem of processes, tools and personnel required for the proper planning and execution of capital projects throughout the different phases of design, procurement, construction and startup. It can be divided into different focus areas (functions), including Estimating, Planning, Scheduling, Cost Control, Change Management, Progressing, and Forecasting. Trends such as globalization, contractor specialization and developments in information technology have changed the way PCMS are implemented and have made them the subject of extensive research in recent years into how best to utilize those trends. Replicating the research methodology used in a 2011 report published by the Construction Industry Institute (CII), this work investigates the current status of PCMS implementation and how it has changed over the past decade. It was concluded that while the original PCMS principles are still valid, adoption has changed drastically in terms of efficiency for the majority of the functions. The research also identifies areas of potential concern and provides recommendations for further improvement.
Management issues in systems engineering
When applied to a system, the doctrine of successive refinement is a divide-and-conquer strategy: complex systems are successively divided into pieces that are less complex, until they are simple enough to be conquered. This decomposition results in several structures for describing the product system and the producing system. These structures play important roles in systems engineering and project management, and many of the remaining sections in this chapter are devoted to describing some of these key structures. Structures that describe the product system include, but are not limited to, the requirements tree, the system architecture and certain symbolic information such as system drawings, schematics, and databases. The structures that describe the producing system include the project's work breakdown, schedules, cost accounts and organization.
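As a toy illustration of successive refinement as a tree of decompositions (the node names and structure are invented, not taken from the chapter):

```python
# Toy sketch: each node is refined into less complex children until the
# leaves are simple enough to assign and track. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list["Node"] = field(default_factory=list)

    def refine(self, *parts: str) -> "Node":
        """Decompose this node into named sub-elements."""
        self.children += [Node(p) for p in parts]
        return self

def outline(node: Node, depth: int = 0) -> None:
    """Print the breakdown as an indented outline."""
    print("  " * depth + node.name)
    for child in node.children:
        outline(child, depth + 1)

# One tree can describe the product system (architecture); a parallel
# tree with work packages would describe the producing system.
system = Node("Spacecraft project")
system.refine("Payload", "Bus", "Ground segment")
system.children[1].refine("Power", "Propulsion", "Avionics")
outline(system)
```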