
    Microstructure and hardness of WC-Co particle reinforced iron matrix surface composite

    In this study, a high-Cr cast iron surface composite reinforced with WC-Co particles 2-6 mm in size was prepared by pressureless sand-mold infiltration casting. The composition, microstructure and hardness were characterized by energy dispersive spectrometry (EDS), electron probe microanalysis (EPMA), scanning electron microscopy (SEM) and Rockwell hardness measurements. The resulting composite layer is about 15 mm thick, with a WC-Co particle volume fraction of ~38%. During solidification, an interface reaction takes place between the WC-Co particles and the high-chromium cast iron. Melting and dissolution of the prefabricated particles are also observed, suggesting that local Co melting and diffusion play an important role in promoting metallurgical bonding at the interface. The composite layer is composed of ferrite and a series of carbides, such as (Cr,W,Fe)23C6, WC, W2C, M6C and M12C. The hardness of the composite is inhomogeneous, decreasing in a gradient from the particle-reinforced composite layer to the matrix. The maximum hardness of 86.3 HRA (69.5 HRC) is obtained on the particle-reinforced surface, strongly indicating that the composite can be used as a wear-resistant material.

    An Accelerated Stochastic ADMM for Nonconvex and Nonsmooth Finite-Sum Optimization

    The nonconvex and nonsmooth finite-sum optimization problem with a linear constraint has attracted much attention in artificial intelligence, computer science, and mathematics, due to its wide applications in machine learning and the lack of efficient algorithms with convincing convergence theories. A popular approach is the stochastic Alternating Direction Method of Multipliers (ADMM), but most stochastic ADMM-type methods focus on convex models. In addition, variance reduction (VR) and acceleration techniques are useful tools in the development of stochastic methods, owing to their simplicity and practicality in accelerating various machine learning models. However, it remains unclear whether the accelerated SVRG-ADMM algorithm (ASVRG-ADMM), which extends SVRG-ADMM by incorporating momentum techniques, exhibits a comparable acceleration characteristic or convergence rate in the nonconvex setting. To fill this gap, we consider a general nonconvex nonsmooth optimization problem and study the convergence of ASVRG-ADMM. By means of a well-defined potential energy function, we establish its sublinear convergence rate O(1/T), where T denotes the iteration number. Furthermore, under the Kurdyka-Łojasiewicz (KL) property, which is less stringent than the conditions frequently used to establish linear convergence rates, such as strong convexity, we show that the ASVRG-ADMM sequence has finite length and converges to a stationary solution at a linear rate. Experiments on the graph-guided fused lasso and regularized logistic regression problems validate that the proposed ASVRG-ADMM performs better than state-of-the-art methods. (40 pages, 8 figures.)
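    To make the algorithmic idea concrete, here is a minimal Python sketch of a linearized stochastic ADMM iteration with SVRG variance reduction and a momentum (extrapolation) step, in the spirit of the ASVRG-ADMM studied above, applied to a graph-guided fused lasso model with logistic loss. The step size, momentum weight, penalty parameter, and overall structure are illustrative assumptions, not the authors' exact algorithm.

```python
# Sketch of an ASVRG-ADMM-style method for
#   min_w (1/n) sum_i logistic(w; x_i, y_i) + lam * ||A w||_1
# via the ADMM splitting  A w = z  (scaled dual variable u).
# All hyperparameters are illustrative assumptions.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (the z-update)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def logistic_grad(w, X, y):
    """Gradient of the average logistic loss over the rows of X (labels in {-1, +1})."""
    p = 1.0 / (1.0 + np.exp(-y * (X @ w)))
    return -(X * (y * (1.0 - p))[:, None]).mean(axis=0)

def asvrg_admm(X, y, A, lam=0.1, rho=1.0, eta=0.05, beta=0.9,
               epochs=30, batch=8, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = w_prev = np.zeros(d)
    z = np.zeros(A.shape[0])              # auxiliary variable, z ~ A w
    u = np.zeros(A.shape[0])              # scaled dual variable
    for _ in range(epochs):
        w_snap = w.copy()                 # SVRG snapshot point
        full_grad = logistic_grad(w_snap, X, y)
        for _ in range(n // batch):
            yk = w + beta * (w - w_prev)  # momentum extrapolation
            idx = rng.integers(0, n, size=batch)
            # variance-reduced stochastic gradient estimate
            v = (logistic_grad(yk, X[idx], y[idx])
                 - logistic_grad(w_snap, X[idx], y[idx]) + full_grad)
            w_prev = w
            # linearized x-update: gradient step on the augmented Lagrangian
            w = yk - eta * (v + rho * A.T @ (A @ yk - z + u))
            z = soft_threshold(A @ w + u, lam / rho)   # z-update (prox)
            u = u + A @ w - z                          # dual ascent
    return w
```

    For the graph-guided fused lasso, A would be a sparse difference matrix with one row per graph edge, so that ||A w||_1 penalizes differences between coefficients of connected features.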

    A hierarchical decision-making framework for the assessment of the prediction capability of prognostic methods

    In prognostics and health management, the prediction capability of a prognostic method refers to its ability to provide trustworthy predictions of the remaining useful life, with the quality characteristics required by the related maintenance decision making. The prediction capability heavily influences the decision makers' attitude toward taking the risk of using the predicted remaining useful life to inform maintenance decisions. In this article, a four-layer, top-down, hierarchical decision-making framework is proposed to assess the prediction capability of prognostic methods. In the framework, prediction capability is broken down into two criteria (Layer 2), six sub-criteria (Layer 3) and 19 basic sub-criteria (Layer 4). Based on the hierarchical framework, a bottom-up, quantitative approach is developed for the assessment of prediction capability, using the information and data collected at the Layer-4 basic sub-criteria level. The analytic hierarchy process (AHP) is applied for the evaluation and aggregation of the sub-criteria, and a support vector machine is applied to develop a classification-based approach for prediction capability assessment. The framework and quantitative approach are applied to a simulated case study to assess the prediction capabilities of three prognostic methods from the literature: fuzzy similarity, feed-forward neural network and hidden semi-Markov model. The results show the feasibility of the practical application of the framework and its quantitative assessment approach, and that the assessed prediction capability can be used to support the selection of a suitable prognostic method for a given application.
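    As an illustration of the AHP aggregation step, the sketch below computes criterion weights as the principal eigenvector of a pairwise comparison matrix and checks Saaty's consistency ratio. The 3x3 comparison matrix is an invented example, not data from the paper's case study.

```python
# Minimal AHP weight computation: weights are the normalized principal
# eigenvector of a pairwise comparison matrix; the consistency ratio (CR)
# flags incoherent judgments (CR < 0.1 is conventionally acceptable).
import numpy as np

def ahp_weights(P):
    """Return (weights, consistency_ratio) for a pairwise comparison matrix P."""
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = P.shape[0]
    ci = (vals[k].real - n) / (n - 1)        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
    return w, ci / ri

# Invented example: criterion 1 judged 3x as important as criterion 2
# and 5x as important as criterion 3.
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(P)
print(w, cr)   # weights sum to 1; small CR indicates consistent judgments
```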

    An extended method for evaluating assumptions deviations in quantitative risk assessment and its application to external flooding risk assessment of a nuclear power plant

    In quantitative risk assessment, assumptions are typically made based on best, conservative, or (sometimes) optimistic judgments. Best-judgment and optimistic assumptions may result in failing to meet the quantitative safety objectives, whereas conservative assumptions may increase the margins with which the objectives are met, but result in cost-ineffective design or operation. In the present paper, we develop an extended framework for analyzing the criticality of assumptions in risk assessment by evaluating the risk that deviations from the assumptions lead to a reduction of the safety margins. The framework aims to support risk-informed decision making by identifying important assumptions and integrating the assessment of their criticality into the quantitative risk assessment (QRA). The framework is, finally, applied within the quantitative risk assessment of a nuclear power plant (NPP) exposed to external flooding. Compared with previous works on the subject, we also consider conservative assumptions and introduce decision flow diagrams to support the classification of assumption criticality. The decision flow diagrams facilitate a standardized evaluation of the effects of assumption deviations on the risk assessment, making the evaluation of the assumption deviation risk more comprehensive and transparent.

    National survey on intra-laboratory turnaround time for some most common routine and stat laboratory analyses in 479 laboratories in China

    Introduction: To investigate the state of the art of intra-laboratory turnaround time (intra-TAT), provide suggestions for improvement, and determine whether laboratories accredited under International Organization for Standardization (ISO) 15189 or by the College of American Pathologists (CAP) perform better on intra-TAT than non-accredited ones. Materials and methods: 479 Chinese clinical laboratories participating in the external quality assessment programs for chemistry, blood gas, and haematology tests organized by the National Centre for Clinical Laboratories in China were included in our study. General information and the median intra-TAT of routine and stat tests over the preceding week were collected by questionnaire. Results: The response rates for clinical biochemistry, blood gas, and haematology testing were 36% (479/1307), 38% (228/598), and 36% (449/1250), respectively. More than 50% of the laboratories indicated that they had set median intra-TAT goals, and almost 60% declared that they monitored intra-TAT for essentially every analyte they performed. Among the analytes investigated, the intra-TAT of haematology analytes was shorter than that of biochemistry analytes, and the intra-TAT of blood gas analytes was the shortest. There were significant differences between median intra-TAT on different days of the week for routine tests. However, there were no significant differences between the median intra-TAT reported by accredited and non-accredited laboratories. Conclusions: Many laboratories in China are aware of intra-TAT control and are making efforts to reach their targets, but there is still room for improvement. Accredited laboratories perform better on intra-TAT monitoring and target setting than non-accredited ones, but the median intra-TAT they report does not differ significantly.
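    For context on the accreditation comparison, the sketch below shows the kind of nonparametric test one could use to compare intra-TAT distributions between accredited and non-accredited laboratories. The test choice (Mann-Whitney U) and the synthetic data are assumptions for illustration; the abstract does not detail the survey's statistical procedure.

```python
# Comparing intra-TAT between two groups of laboratories with a
# nonparametric test (medians, skewed data). Data here are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
tat_accredited = rng.gamma(shape=4.0, scale=10.0, size=120)       # minutes
tat_non_accredited = rng.gamma(shape=4.0, scale=11.0, size=300)   # minutes

stat, p = mannwhitneyu(tat_accredited, tat_non_accredited,
                       alternative="two-sided")
print(f"median accredited: {np.median(tat_accredited):.0f} min, "
      f"median non-accredited: {np.median(tat_non_accredited):.0f} min, "
      f"p = {p:.3f}")
```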

    The Production and Characteristics Test of Synthetic Rice Made of Maize Flour

    Synthetic rice made of maize flour has a great opportunity to be developed as a staple food. People are accustomed to consuming synthetic rice, but only limited studies have reported on its preferred characteristics. The purposes of this study were 1) to produce synthetic rice and examine its characteristics, including moisture content, particle size, storage time and steaming duration, and 2) to determine the preferred sensory level of synthetic rice based on aroma, texture, flavor and color. The procedure started by making the maize flour used to produce synthetic rice with a granulator machine. The granules were then steamed and dried in sunlight. Seven types of synthetic rice were used in this research: pure maize rice (100% maize flour), three mixtures of maize flour and wheat flour, and three mixtures of maize flour and tapioca flour, at three different ratios (95:5, 85:15, and 75:25). The results showed that the water content of the synthetic rice was between 10.37 and 13.79%, the steaming time was around 46 to 68 minutes, and the rice could be stored for about 24-26 hours. The organoleptic tests showed that the most preferred synthetic rice was the mixture of 95% maize flour and 5% tapioca flour, across all preference levels of the sensory test.

    Modeling dependent competing failure processes with degradation-shock dependence

    In this paper, we develop a new reliability model for dependent competing failure processes (DCFPs) that accounts for degradation-shock dependence, i.e., dependence in which the random shock process is influenced by the degradation process. The degradation-shock dependence is modeled by assuming that the intensity function of the nonhomogeneous Poisson process describing the random shocks depends on the degradation level. The dependence effect is modeled with reference to a classification of the random shocks into three "zones" according to their magnitudes (damage zone, fatal zone, and safety zone), with different effects on the system's failure behavior. To the best of the authors' knowledge, this type of dependence has not yet been considered in reliability models. Monte Carlo simulation is used to calculate the system reliability. A realistic application is presented with regard to the dependent failure behavior of a sliding spool, which is subject to two dependent competing failure processes: wear and clamping stagnation. It is shown that the developed model is capable of describing the dependent competing failure behaviors and their dependence.
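    The following is a minimal Monte Carlo sketch of the degradation-shock dependence idea: a stochastic degradation path whose current level scales the intensity of a nonhomogeneous Poisson shock process, with shocks classified into safety, damage, and fatal zones by magnitude. The Wiener-type degradation path and all parameter values are invented for illustration; this is not the paper's sliding-spool model.

```python
# Monte Carlo estimate of system reliability under degradation-shock
# dependence: the shock intensity lam grows with the degradation level x.
# Failure occurs on a fatal-zone shock (hard failure) or when degradation
# exceeds a threshold (soft failure). Parameters are illustrative.
import numpy as np

def simulate_reliability(t_max=100.0, dt=0.1, n_runs=2000,
                         drift=0.08, sigma=0.02, fail_level=10.0,
                         lam0=0.01, alpha=0.05,
                         damage_thr=1.0, fatal_thr=3.0, seed=0):
    rng = np.random.default_rng(seed)
    survived = 0
    for _ in range(n_runs):
        x, t, failed = 0.0, 0.0, False
        while t < t_max:
            # Wiener-type degradation increment
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            # degradation-dependent NHPP intensity (the dependence effect)
            lam = lam0 * (1.0 + alpha * x)
            if rng.random() < lam * dt:        # shock arrival this step
                w = rng.exponential(1.0)       # shock magnitude
                if w >= fatal_thr:             # fatal zone: hard failure
                    failed = True
                    break
                elif w >= damage_thr:          # damage zone: extra degradation
                    x += 0.5
                # else: safety zone, no effect
            if x >= fail_level:                # soft failure by degradation
                failed = True
                break
            t += dt
        survived += not failed
    return survived / n_runs

print(simulate_reliability())   # estimated reliability at t_max
```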