
    Overview of Remaining Useful Life prediction techniques in Through-life Engineering Services

    Through-life Engineering Services (TES) are essential in the manufacture and servicing of complex engineering products. TES improves support services by providing on-demand prognosis of run-to-failure and time-to-failure data for better decision making. The concept of Remaining Useful Life (RUL) is used to predict the life-span of components (of a service system) with the purpose of minimising catastrophic failure events in both the manufacturing and service sectors. The purpose of this paper is to identify failure mechanisms and emphasise the failure-event prediction approaches that can effectively reduce uncertainties. It demonstrates the classification of techniques used in RUL prediction for optimising the future use of products based on current products in service, with regard to predictability, availability and reliability. It presents a mapping of degradation mechanisms against techniques for knowledge acquisition, with the objective of presenting to designers and manufacturers ways to improve the life-span of components.
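The abstract above does not give a specific RUL algorithm; as a minimal illustration of the idea only (not the paper's method), a simple degradation-trend extrapolation can be sketched: fit a straight line to an observed health indicator and extrapolate to the failure threshold.

```python
def estimate_rul(times, health, failure_threshold):
    """Estimate remaining useful life by fitting a straight line to
    observed health-indicator values and extrapolating to the point
    where the indicator crosses the failure threshold.  Returns the
    time remaining after the last observation, or None if the fitted
    trend never reaches the threshold."""
    n = len(times)
    mean_t = sum(times) / n
    mean_h = sum(health) / n
    # Ordinary least-squares slope and intercept.
    cov = sum((t - mean_t) * (h - mean_h) for t, h in zip(times, health))
    var = sum((t - mean_t) ** 2 for t in times)
    slope = cov / var
    intercept = mean_h - slope * mean_t
    if slope >= 0:  # not degrading under a "lower is worse" convention
        return None
    t_fail = (failure_threshold - intercept) / slope
    return max(0.0, t_fail - times[-1])

# Illustrative data: health drops from 1.0 toward a 0.2 failure threshold.
rul = estimate_rul([0, 1, 2, 3], [1.0, 0.9, 0.8, 0.7], 0.2)
print(rul)  # 5.0
```

Real RUL techniques surveyed in such papers are typically probabilistic or model-based; this linear sketch only shows the extrapolate-to-threshold concept.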

    The adoption and use of Through-life Engineering Services within UK Manufacturing Organisations

    Manufacturing organisations seek ever more innovative approaches in order to maintain and improve their competitive position within the global market. One such initiative that is gaining significance is ‘through-life engineering services’. These seek to adopt ‘whole life’ service support through a greater understanding of component and system performance, driven by knowledge gained from maintenance, repair and overhaul activities. This research presents the findings of exploratory research based on a survey of UK manufacturers who provide through-life engineering services. The survey findings illustrate significant issues to be addressed before the concept becomes widely accepted. These include: a more proactive approach to maintenance activities based on real-time responses; standardisation of data content, structure, collection, storage and retrieval protocols in support of maintenance; the development of clear definitions, ontologies and a taxonomy of through-life engineering services in support of the service delivery system; addressing the lack of understanding of component and system performance due to ‘No Fault Found’ events that skew maintenance metrics; and the increased use of radio-frequency identification technology in support of maintenance data acquisition.

    Development of an ontology for aerospace engine components degradation in service

    This paper presents the development of an ontology for component service degradation. Degradation mechanisms in gas turbine metallic components are used as a case study to explain how a taxonomy within an ontology can be validated. The validation method uses an iterative process and sanity checks. Data extracted from on-demand textual information are filtered and grouped into classes of degradation mechanisms. Various concepts are systematically and hierarchically arranged for use in the service maintenance ontology. The allocation of the mechanisms to the AS-IS ontology presents a robust data collection hub. Data integrity is guaranteed when the TO-BE ontology is introduced to analyse processes relative to various failure events. The initial evaluation reveals improvement in the performance of the TO-BE domain ontology based on iterations and updates with recognised mechanisms. The information extracted and collected is required to improve service knowledge and performance feedback, which are important for service engineers. Existing research areas such as natural language processing, knowledge management, and information extraction were also examined.
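The sanity checks mentioned above are not specified in the abstract; a minimal, hypothetical sketch of one such check (class names invented for illustration, not taken from the paper's ontology) is to represent the taxonomy as child-to-parent links and verify that every class reaches the root without cycles or dangling parents.

```python
# Hypothetical slice of a degradation-mechanism taxonomy, child -> parent.
TAXONOMY = {
    "Degradation": None,              # root class
    "Wear": "Degradation",
    "Corrosion": "Degradation",
    "AbrasiveWear": "Wear",
    "FrettingWear": "Wear",
    "HotCorrosion": "Corrosion",
}

def sanity_check(taxonomy):
    """Return True if every class reaches the root with no cycle and
    no parent that is missing from the taxonomy."""
    for cls in taxonomy:
        seen = set()
        node = cls
        while node is not None:
            if node in seen or node not in taxonomy:
                return False          # cycle or dangling parent
            seen.add(node)
            node = taxonomy[node]
    return True

print(sanity_check(TAXONOMY))  # True
```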

    Service Knowledge Capture and Reuse

    The keynote will start with the need for service knowledge capture and reuse for industrial product-service systems. A novel approach to capturing service damage knowledge about individual components will be presented with experimental results. The technique uses active thermography and image-processing approaches for the assessment. The paper will also give an overview of other non-destructive inspection techniques for service damage assessment. A robotic system will be described to automate damage image capture. The keynote will then propose ways to reuse the knowledge to predict the remaining life of the component and feed back to design and manufacturing.

    LISA Framework for Enhancing Gravitational Wave Signal Extraction Techniques

    This paper describes the development of a Framework for benchmarking and comparing signal-extraction and noise-interference-removal methods that are applicable to interferometric Gravitational Wave detector systems. The primary use is towards comparing signal and noise extraction techniques at LISA frequencies from multiple (possibly confused) gravitational wave sources. The Framework includes extensive hybrid learning/classification algorithms, as well as post-processing regularization methods, and is based on a unique plug-and-play (component) architecture. Published methods for signal extraction and interference removal at LISA frequencies are being encoded, as well as multiple source noise models, so that the stiffness of GW Sensitivity Space can be explored under each combination of methods. Furthermore, synthetic datasets and source models can be created and imported into the Framework, and specific degraded numerical experiments can be run to test the flexibility of the analysis methods. The Framework also supports use of full current LISA testbeds, synthetic data systems, and simulators already in existence through plug-ins and wrappers, thus preserving those legacy codes and systems intact. Because of the component-based architecture, all selected procedures can be registered or de-registered at run-time, and are completely reusable, reconfigurable, and modular.
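The run-time register/de-register behaviour described above is a generic plug-and-play pattern; a minimal sketch of that pattern follows (names and API invented for illustration, not the Framework's actual interface).

```python
class ComponentRegistry:
    """Minimal plug-and-play registry: analysis components (e.g. a
    signal-extraction or noise-removal step) can be registered or
    de-registered at run time and invoked by name."""

    def __init__(self):
        self._components = {}

    def register(self, name, func):
        self._components[name] = func

    def deregister(self, name):
        self._components.pop(name, None)

    def run(self, name, data):
        if name not in self._components:
            raise KeyError(f"no component registered under {name!r}")
        return self._components[name](data)

registry = ComponentRegistry()
# A trivial "noise removal" stand-in: subtract the mean of the series.
registry.register("mean_subtract", lambda xs: [x - sum(xs) / len(xs) for x in xs])
print(registry.run("mean_subtract", [1.0, 2.0, 3.0]))  # [-1.0, 0.0, 1.0]
registry.deregister("mean_subtract")
```

A registry of callables like this is one common way to make procedures swappable at run time, which is the property the abstract emphasises.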

    P104 White coat hypertension is associated with increased small vessel disease in the brain

    Objective: Small vessel disease, measured by brain white matter hyperintensity (WMH), is associated with increased stroke risk and cognitive impairment. This study aimed to explore the relationship between WMH on computerised tomography (CT) and white coat hypertension (WCH) in patients with recent transient ischaemic attack (TIA) or lacunar stroke (LS). Methods: Ninety-six patients recruited for the ASIST trial (Arterial Stiffness in Lacunar Stroke and TIA) underwent measurement of clinic blood pressure (BP) and ambulatory BP monitoring (ABPM) within two weeks of TIA or LS. Patients were grouped by BP phenotype. Twenty-three patients had normotension (clinic BP <140/90 mmHg and day-time ABPM <135/85 mmHg). CT brain images were scored for WMH using the four-point Fazekas visual rating scale. Patients were grouped into no-mild WMH (scores 0–1) or moderate-severe WMH (scores 2–3) groups. The relationship between BP and WMH was explored with chi-square tests and logistic regression accounting for known cardiovascular risk factors (age, gender, smoking, diabetes and hyperlipidaemia). Results: 44% of WCH patients had moderate-severe WMH compared to 17% of normotensives (p = 0.047). Logistic regression incorporating WCH and the cardiovascular risk factors as independent variables showed WCH to be the only significant independent factor contributing to WMH (p = 0.024). Conclusion: Patients with WCH were more likely to have moderate-severe WMH on CT brain imaging than normotensives. WCH was associated with increased WMH, independent of other cardiovascular risk factors. This study suggests that WCH is associated with increased small vessel disease in the brain, and such patients may benefit from treatment.
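The chi-square comparison reported above can be illustrated with the closed form for a 2x2 contingency table. The counts below are invented for illustration only (the abstract reports percentages, not the group sizes behind them).

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] (rows = patient groups, columns = outcome yes/no)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # Closed form for 2x2: chi2 = n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: 11 of 25 WCH patients vs 4 of 23 normotensives
# with moderate-severe WMH (percentages roughly as in the abstract).
chi2 = chi_square_2x2([[11, 14], [4, 19]])
print(round(chi2, 2))
```

With one degree of freedom, a statistic above 3.84 corresponds to p < 0.05, which is how a borderline p-value like the reported 0.047 arises.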

    Eu <sup>3+</sup> Sequestration by Biogenic Nano-Hydroxyapatite Synthesized at Neutral and Alkaline pH

    <p>Biogenic hydroxyapatite (bio-HA) has the potential for radionuclide capture and remediation of metal-contaminated environments. Biosynthesis of bio-HA was achieved via the phosphatase activity of a <i>Serratia sp</i>. supplemented with various concentrations of CaCl<sub>2</sub> and glycerol 2-phosphate (G2P) provided at pH 7.0 or 8.6. Presence of hydroxyapatite (HA) was confirmed in the samples by X-ray powder diffraction analysis. When provided with limiting (1 mM) G2P and excess (5 mM) Ca<sup>2+</sup> at pH 8.6, monohydrocalcite was found. This, and bio-HA with less (1 mM) Ca<sup>2+</sup>, accumulated Eu(III) to ∼31% and 20% of the biomineral mass, respectively, as compared to 50% of the mineral mass accumulated by commercial HA. Optimally, with bio-HA made at initial pH 7.0 from 2 mM Ca<sup>2+</sup> and 5 mM G2P, Eu(III) accumulated to ∼74% of the weight of bio-HA, which was equal to the mass of the HA mineral component of the biomaterial. The implications with respect to potential bio-HA-barrier development in situ or as a remediation strategy are discussed.</p>

    A Cloud Based Android System for Reporting Crimes Against Child Sexual Abuse

    A cloud-based android system for reporting crimes against child sexual abuse is a real-time cloud-based system to be used by people to report crimes concerning sexual abuse of children to the relevant organizations. Usually, when crimes of this kind happen, the victims or witnesses go to the police, or call related organizations to report the crimes. The crimes are then processed through a paper-based system where the cases are recorded and then handled accordingly. This approach is usually slow and sometimes leads to dissatisfaction among the victims and their relatives. With the widespread use of android phones, an android system to report the crimes would make crime management easier and faster, as crimes would be reported in real time using an android phone. Management of the cases will also be fast, as the crimes will be reported directly to a cloud database, which will make crime tracing and management faster. The global positioning system already implemented in all android phones will be used to track the location of the person reporting the crime. The Firebase real-time database will be used to store the reported data. All users of the system will be authenticated to make sure they are eligible to use the application and to protect the privacy of user information. Thus, a cloud-based android system will be beneficial to both the public and the acting organizations and thereby improve measures to reduce sex crimes against children.
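The report flow described above (authenticated user, GPS location, timestamp, write to a realtime database) can be sketched language-agnostically as a serialized payload. All field names here are hypothetical, and this Python stand-in is not the app's actual android/Firebase code.

```python
import json
import time

def build_report(user_id, description, latitude, longitude):
    """Assemble a crime report as it might be pushed to a cloud
    realtime database; field names are illustrative, not Firebase's."""
    return {
        "user_id": user_id,          # set only after authentication succeeds
        "description": description,
        "location": {"lat": latitude, "lon": longitude},  # from the phone's GPS
        "reported_at": int(time.time()),
    }

report = build_report("u123", "incident observed near school", -6.77, 39.23)
payload = json.dumps(report)         # serialized for the database write
print(json.loads(payload)["location"]["lat"])  # -6.77
```

Keeping the location and timestamp in the record is what enables the real-time tracing the abstract describes; the actual write would go through the Firebase SDK on the phone.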

    Business Intelligence Modeling in Launch Operations

    This technology project is to advance an integrated Planning and Management Simulation Model for evaluating the risks, costs, and reliability of Earth-to-orbit launch systems for Space Exploration. The approach builds on research done in the NASA ARC/KSC-developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise-level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on the recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and the International Space Station to validate models applied to Exploration. The system-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long-term sustainability. A planning and analysis test bed is needed for evaluation of enterprise-level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation-based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB, integrating operations process models, systems and environment models, and cost models as a comprehensive, disciplined enterprise analysis environment. Significant emphasis is being placed on adapting root-cause analysis from existing Shuttle operations to Exploration. Technical challenges include cost model validation, integration of parametric models with discrete-event process and systems simulations, and large-scale simulation integration. The enterprise architecture is required for coherent integration of systems models. It will also require a plan for evolution over the life of the program.
The proposed technology will produce long-term benefits in support of NASA's objectives for simulation-based acquisition, will improve the ability to assess architectural options versus safety/risk for future exploration systems, and will facilitate the incorporation of operability as a systems design consideration, reducing overall life-cycle cost for future systems. The future of business intelligence in space exploration will focus on the intelligent system-of-systems real-time enterprise. In present business intelligence, a number of the technologies most relevant to space exploration are experiencing the greatest change. Emerging patterns of sets of processes, rather than organizational units, leading to end-to-end automation are becoming a major objective of enterprise information technology. The cost element is a leading factor in future exploration systems.
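The abstract mentions discrete-event process simulations as one modelling layer; a minimal sketch of the discrete-event technique itself follows (event names and durations invented for illustration, not drawn from the VTB).

```python
import heapq

def simulate(events):
    """Minimal discrete-event loop: pop time-stamped events in order,
    run their handlers, and let handlers schedule follow-on events.
    `events` is a list of (time, name, handler) tuples; each handler
    returns a list of new events to schedule."""
    queue = list(events)
    heapq.heapify(queue)
    log = []
    while queue:
        t, name, handler = heapq.heappop(queue)
        log.append((t, name))
        for new_event in handler(t):
            heapq.heappush(queue, new_event)
    return log

# Illustrative launch-processing chain: each step schedules the next.
def done(t):
    return []
def integrate(t):
    return [(t + 5, "launch", done)]
def process(t):
    return [(t + 10, "integrate", integrate)]

print(simulate([(0, "process", process)]))
# [(0, 'process'), (10, 'integrate'), (15, 'launch')]
```

Operations-process simulators of the kind the abstract describes build on exactly this event-queue core, with the handlers carrying cost and reliability models instead of fixed delays.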