
    A Process to Implement an Artificial Neural Network and Association Rules Techniques to Improve Asset Performance and Energy Efficiency

    In this paper, we address the problem of asset performance monitoring, with the intention of both detecting any potential reliability problem and predicting any loss of energy consumption efficiency. This is an important concern for many industries and utilities with very intensive capitalization in very long-lasting assets. To overcome this problem, in this paper we propose an approach to combine an Artificial Neural Network (ANN) with Data Mining (DM) tools, specifically with Association Rule (AR) Mining. The combination of these two techniques can now be done using software which can handle large volumes of data (big data), but the process still needs to ensure that the required amount of data will be available during the assets' life cycle and that its quality is acceptable. The combination of these two techniques in the proposed sequence differs from previous works found in the literature, giving researchers new options to face the problem. Practical implementation of the proposed approach may lead to novel predictive maintenance models (emerging predictive analytics) that may detect with unprecedented precision any asset's lack of performance and help manage assets' O&M accordingly. The approach is illustrated using specific examples where asset performance monitoring is rather complex under normal operational conditions.
    Ministerio de Economía y Competitividad DPI2015-70842-
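    The Association Rule mining step named above can be sketched in miniature. This is a hedged illustration, not the paper's implementation: the monitoring "transactions", symptom names, and thresholds are all invented, and the rule search is a brute-force single-antecedent enumeration rather than a full Apriori pass.

    ```python
    from itertools import combinations

    # Hypothetical event logs: each "transaction" is the set of discretised
    # symptoms observed in one monitoring window (names are illustrative).
    transactions = [
        {"high_vibration", "high_temp", "efficiency_drop"},
        {"high_vibration", "efficiency_drop"},
        {"high_temp", "normal"},
        {"high_vibration", "high_temp", "efficiency_drop"},
        {"normal"},
    ]

    def support(itemset, transactions):
        """Fraction of windows in which every item of `itemset` occurs."""
        hits = sum(1 for t in transactions if itemset <= t)
        return hits / len(transactions)

    def rules(transactions, min_support=0.4, min_confidence=0.8):
        """Enumerate single-antecedent rules A -> B above both thresholds."""
        items = set().union(*transactions)
        found = []
        for a, b in combinations(sorted(items), 2):
            for ant, cons in ((a, b), (b, a)):
                s = support({ant, cons}, transactions)
                if s < min_support:
                    continue
                conf = s / support({ant}, transactions)
                if conf >= min_confidence:
                    found.append((ant, cons, round(s, 2), round(conf, 2)))
        return found

    for ant, cons, s, c in rules(transactions):
        print(f"{ant} -> {cons}  support={s} confidence={c}")
    ```

    With these toy windows, the only rules surviving both thresholds link high vibration and efficiency drop; in the paper's pipeline, an ANN model of expected performance would supply the discretised deviations that such rules are mined over.
    
    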

    Statistical modelling of software reliability

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety

    AI and OR in management of operations: history and trends

    The last decade has seen considerable growth in the use of Artificial Intelligence (AI) for operations management, with the aim of finding solutions to problems that are increasing in complexity and scale. This paper begins by setting the context for the survey through a historical perspective of OR and AI. An extensive survey of applications of AI techniques for operations management, covering a total of over 1200 papers published from 1995 to 2004, is then presented. The survey utilizes Elsevier's ScienceDirect database as a source. Hence, the survey may not cover all the relevant journals but includes a sufficiently wide range of publications to make it representative of the research in the field. The papers are categorized into four areas of operations management: (a) design, (b) scheduling, (c) process planning and control and (d) quality, maintenance and fault diagnosis. Each of the four areas is categorized in terms of the AI techniques used: genetic algorithms, case-based reasoning, knowledge-based systems, fuzzy logic and hybrid techniques. The trends over the last decade are identified and discussed with respect to expected trends, and directions for future work are suggested

    Grid infrastructures for secure access to and use of bioinformatics data: experiences from the BRIDGES project

    The BRIDGES project was funded by the UK Department of Trade and Industry (DTI) to address the needs of cardiovascular research scientists investigating the genetic causes of hypertension as part of the Wellcome Trust funded (£4.34M) cardiovascular functional genomics (CFG) project. Security was at the heart of the BRIDGES project and an advanced data and compute grid infrastructure incorporating latest grid authorisation technologies was developed and delivered to the scientists. We outline these grid infrastructures and describe the perceived security requirements at the project start including data classifications and how these evolved throughout the lifetime of the project. The uptake and adoption of the project results are also presented along with the challenges that must be overcome to support the secure exchange of life science data sets. We also present how we will use the BRIDGES experiences in future projects at the National e-Science Centre

    Farming profitably in a changing climate: a risk management approach

    Climate science has made enormous progress over the last two decades in understanding the nature of earth's climate and the changes that are taking place. Under climate change projections, we can say with some confidence that the Australian climate will continue to become hotter, and temperature-related extreme events are likely to increase in frequency. However, we cannot yet project with any reasonable level of confidence changes to rainfall and the occurrence of drought. So although there is strong evidence for the reality of climate change, there is still considerable uncertainty associated with projections of precisely how climate change will unfold in the future, particularly at regional and local scales where most farming management decisions are made. Adapting to such an uncertain future demands a flexible approach based on assessing, analysing and responding to the risks posed by a changing climate. This paper examines a risk management approach to farming in a variable and changing climate, based on experience gained in the insurance industry which is one of the first major industries to be impacted by climate change losses. Governments, businesses and individuals must consider the implications of a variable and changing climate as a normal part of decision-making based on risk, just as they would for other risks, such as market price and fuel price movements, labour costs etc. The paper also discusses briefly how advances in information technology have enabled information to be accessed and widely distributed, and showcases four best practice spatial IT website tools developed by the BRS to assist farmers and policy makers to manage risk - the National Agricultural Monitoring System (NAMS), the Meat and Livestock Australia (MLA) Rainfall to Pasture Growth Outlook Tool, the Multi-Criteria Analysis Shell (MCAS-S), and the Rainfall Reliability Wizard. There are also several tools under current development in BRS which continue with this theme. 
These are Water 2010 - National Water Balance and Information for Policy and Planning, the Climate Change Wizard and the Climate Change Impacted Data Sets.
    Keywords: climate change, risk management, Environmental Economics and Policy, Risk and Uncertainty

    Evaluating strategies for implementing industry 4.0: a hybrid expert oriented approach of B.W.M. and interval valued intuitionistic fuzzy T.O.D.I.M.

    Developing and adopting industry 4.0 influences industry structure and customer willingness. For a successful transition to industry 4.0, implementation strategies should be selected with a systematic and comprehensive view so that changes can be responded to flexibly. This research aims to identify and prioritise the strategies for implementing industry 4.0. For this purpose, the evaluation attributes of the strategies, as well as the strategies to put industry 4.0 into practice, are first identified. Then, the attributes are weighted according to the experts' opinions using the Best Worst Method (BWM). Subsequently, the strategies for implementing industry 4.0 in Fara-Sanat Company, as a case study, are ranked using the Interval Valued Intuitionistic Fuzzy (IVIF) form of the TODIM method. The results indicated that the attributes of 'Technology', 'Quality', and 'Operation' have, respectively, the highest importance. Furthermore, the strategies of 'new business model development', 'improving information systems' and 'human resource management' received the highest ranks. Eventually, some research and executive recommendations are provided. Having strategies for implementing industry 4.0 is a very important solution; accordingly, multi-criteria decision-making (MCDM) methods are a useful tool for adopting and selecting appropriate strategies. In this research, a novel hybrid combination of BWM-TODIM is presented under IVIF information
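    As a rough illustration of the attribute-weighting step, the sketch below approximates BWM weights from the best-to-others comparison vector alone (w_j proportional to 1/a_Bj). This is a deliberate simplification of the method's min-max linear programme, and the attribute names and comparison scores are invented, not taken from the paper.

    ```python
    # Simplified Best Worst Method sketch: the best attribute compares 1:1
    # with itself, and larger a_Bj means attribute j matters less.
    # Names and scores below are illustrative assumptions.
    best_to_others = {      # a_Bj: preference of the best attribute over j
        "Technology": 1,
        "Quality": 2,
        "Operation": 3,
        "Cost": 5,
    }

    def bwm_weights(best_to_others):
        """Approximate BWM weights as normalised reciprocals of a_Bj."""
        inv = {k: 1 / v for k, v in best_to_others.items()}
        total = sum(inv.values())
        return {k: round(v / total, 3) for k, v in inv.items()}

    weights = bwm_weights(best_to_others)
    print(weights)  # largest weight for 'Technology', smallest for 'Cost'
    ```

    In the full method these weights would then feed the IVIF-TODIM ranking of strategies; solving the exact min-max model (rather than this reciprocal shortcut) additionally yields a consistency ratio for the expert's comparisons.
    
    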

    Expert Elicitation for Reliable System Design

    This paper reviews the role of expert judgement to support reliability assessments within the systems engineering design process. Generic design processes are described to give the context, and a discussion is given about the nature of the reliability assessments required in the different systems engineering phases. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context which are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential for future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process.
    Comment: This paper commented in: [arXiv:0708.0285], [arXiv:0708.0287], [arXiv:0708.0288]. Rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)

    Are Delayed Issues Harder to Resolve? Revisiting Cost-to-Fix of Defects throughout the Lifecycle

    Many practitioners and academics believe in a delayed issue effect (DIE); i.e. the longer an issue lingers in the system, the more effort it requires to resolve. This belief is often used to justify major investments in new development processes that promise to retire more issues sooner. This paper tests for the delayed issue effect in 171 software projects conducted around the world in the period 2006-2014. To the best of our knowledge, this is the largest study yet published on this effect. We found no evidence for the delayed issue effect; i.e. the effort to resolve issues in a later phase was not consistently or substantially greater than when issues were resolved soon after their introduction. This paper documents the above study and explores reasons for this mismatch between this common rule of thumb and empirical data. In summary, DIE is not some constant across all projects. Rather, DIE might be a historical relic that occurs intermittently only in certain kinds of projects. This is a significant result since it predicts that new development processes that promise to retire more issues faster will not have a guaranteed return on investment (depending on the context where applied), and that a long-held truth in software engineering should not be considered a global truism.
    Comment: 31 pages. Accepted with minor revisions to the Journal of Empirical Software Engineering. Keywords: software economics, phase delay, cost to fix
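    The paper's core comparison can be caricatured in a few lines: group issue-resolution efforts by phase delay (phases between introduction and fix) and compare typical effort. The issue records below are invented purely for illustration; the actual study analysed 171 projects with far richer controls.

    ```python
    from statistics import median

    # Illustrative records: (phase_introduced, phase_resolved, effort_hours).
    # All values are made-up assumptions, not data from the paper.
    issues = [
        ("requirements", "requirements", 2.0),
        ("requirements", "coding", 2.5),
        ("requirements", "testing", 3.0),
        ("design", "design", 1.5),
        ("design", "testing", 2.0),
        ("coding", "coding", 1.0),
        ("coding", "testing", 1.2),
    ]

    PHASES = ["requirements", "design", "coding", "testing"]

    def delay(intro, resolved):
        """Number of lifecycle phases the issue lingered before being fixed."""
        return PHASES.index(resolved) - PHASES.index(intro)

    prompt_fixes = [e for i, r, e in issues if delay(i, r) == 0]
    delayed_fixes = [e for i, r, e in issues if delay(i, r) > 0]

    # Under a strong delayed issue effect this ratio would be large;
    # the paper reports no consistent, substantial inflation.
    ratio = median(delayed_fixes) / median(prompt_fixes)
    print(f"median delayed / median prompt effort = {ratio:.2f}")
    ```

    A ratio near 1 on real data would be evidence against DIE; the paper's finding is essentially that large ratios do not appear consistently across projects.
    
    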