445 research outputs found

    Fireside corrosion degradation of ferritic alloys at 600°C in oxy-fired conditions

    This paper reports the results of a study investigating the effects of simulated coal/biomass combustion conditions on fireside corrosion. A 1000 h deposit-recoat exposure (5 × 200 h cycles) was carried out at 600 °C on four ferritic alloys: 15Mo3, T22, T23 and T91. Kinetics data were generated for the exposed alloys using both traditional weight-change methods and metal loss measurements. Based on EDX results, the highest rate of corrosion occurred under the D1 deposit, caused mainly by the formation of an alkali iron tri-sulphate phase.
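
    The kinetics analysis mentioned above, weight change tracked over the 5 × 200 h cycles, is typically reduced to a rate constant by fitting a rate law to the cumulative mass-change data. As a minimal illustration only, not the authors' actual analysis, the following Python sketch fits a parabolic rate law, (Δm)² = k·t, to invented cumulative mass-gain data; all numbers are placeholders.

        import numpy as np

        # Hypothetical cumulative exposure times (h) and specific mass changes
        # (mg/cm^2) over the 5 x 200 h deposit-recoat cycles; values invented.
        t = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
        dm = np.array([0.90, 1.30, 1.60, 1.85, 2.05])

        # Parabolic rate law (dm)^2 = k * t; a least-squares fit through the
        # origin gives k = sum(t * dm^2) / sum(t^2).
        k = np.sum(t * dm**2) / np.sum(t**2)
        print(f"parabolic rate constant k ~ {k:.2e} mg^2 cm^-4 h^-1")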

    Challenges of implementing management accounting innovations

    This study investigates the challenges of implementing innovative management accounting techniques, referred to as strategic management accounting (SMA), the interrelationships among the challenges, and the impact of the challenges on SMA usage intensity. Analysis of survey data obtained from listed manufacturing companies in Nigeria supports the conclusion that SMA implementation challenges are interrelated; in particular, lack of top management support and low awareness/lack of knowledge contribute to most of the other implementation challenges. The challenges discouraging intensive use of SMA are the perception that SMA implementation is unnecessary because strategy issues are already integrated into other functions within the organization, high implementation cost, and problems with information flow between departments within the organization. The study contributes to knowledge in that it is the first (to the researchers' knowledge) to examine specifically the interrelationship among SMA implementation challenges in the Nigerian context, thereby drawing attention to the need to consider the challenges of embracing management accounting innovations holistically. Knowledge of SMA implementation challenges could help explain the low adoption rate of SMA in developing countries, and such knowledge might be helpful in providing a robust response to the challenges of implementing management accounting innovations.

    Multiscale Damage Modelling of Notched and Un-Notched 3D Woven Composites With Randomly Distributed Manufacturing Defects

    This work proposes a stochastic multiscale computational framework for damage modelling in 3D woven composite laminates that considers the random distribution of manufacturing-induced imperfections. The proposed method is demonstrated to be accurate, while being simple to implement and requiring modest computational resources. In this approach, a limited number of cross-sectional views obtained from micro-computed tomography (µCT) are used to obtain the stochastic distribution of two key manufacturing-induced defects, namely waviness and voids. This distribution is fed into a multiscale progressive damage model to predict the damage response of three-dimensional (3D) orthogonal woven composites. The accuracy of the proposed model was demonstrated by performing a series of finite element simulations of un-notched and notched tensile tests (with two different hole sizes) for resin-infused thermoplastic (Elium®) 3D woven composites. Excellent correlation was achieved between the experiments and the stochastic finite element simulations, demonstrating the effectiveness of the proposed stochastic multiscale model. The model successfully captured the stochastic nature of the tensile responses (ultimate tensile strength and stiffness), the damage modes (matrix damage and fibre failure), and the initiation and propagation of transverse cracks in thermoplastic 3D woven composites, consistent with experimental observations. The stochastic computational framework presented in this paper can be used to guide the design and optimization of 3D textile composite structures.
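
    To make the stochastic step concrete: it amounts to fitting probability distributions to defect measurements extracted from the µCT cross-sections and then sampling those distributions to drive repeated damage simulations. The sketch below is a schematic stand-in under stated assumptions, not the authors' model; the lognormal/beta distribution choices, every parameter value, and the strength_model placeholder are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        n_realisations = 200  # number of stochastic model evaluations

        # Assumed distributions fitted to uCT defect measurements (parameters
        # invented): tow waviness angle (deg) and void volume fraction.
        waviness = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=n_realisations)
        void_frac = rng.beta(a=2.0, b=60.0, size=n_realisations)

        def strength_model(wav_deg, vf):
            """Placeholder for the multiscale damage simulation: a notional
            tensile strength (MPa) degraded by waviness and voids."""
            return 800.0 * (1.0 - 0.03 * wav_deg) * (1.0 - 2.0 * vf)

        strengths = strength_model(waviness, void_frac)
        print(f"mean {strengths.mean():.0f} MPa, std {strengths.std():.0f} MPa "
              "(schematic values only)")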

    Risk-based reliability allocation at component level in non-repairable systems by using evolutionary algorithm

    The approach for setting system reliability in the risk-based reliability allocation (RBRA) method is driven solely by the amount of 'total losses' (the sum of reliability investment and risk of failure) associated with a non-repairable system failure. For a system consisting of many components, reliability allocation by the RBRA method becomes a very complex combinatorial optimisation problem, particularly if large numbers of alternatives, with different levels of reliability and associated cost, are considered for each component. The complexity is magnified further when the relationship between cost and reliability is assumed to be nonlinear and non-monotonic. An optimisation algorithm (OA) is therefore developed in this research to solve such difficult problems. The core design of the OA originates from the fundamental concepts of basic evolutionary algorithms, which are well known for emulating the natural process of evolution in solving complex optimisation problems through computer simulations of the key genetic operations such as 'reproduction', 'crossover' and 'mutation'. However, the OA has been designed with a significantly different model of evolution (for identifying valuable parent solutions and subsequently turning them into even better child solutions) compared to the classical genetic model, to ensure rapid and efficient convergence of the search towards an optimum solution. The vital features of this OA model are the generation of all populations (samples) with unique chromosomes (solutions), working exclusively with the elite chromosomes in each iteration, and the application of prudently designed genetic operators on the elite chromosomes with extra emphasis on the mutation operation. For each possible combination of alternatives, both system reliability and cost of failure are computed by means of the Monte Carlo simulation technique. For validation purposes, the optimisation algorithm is first applied to an already published reliability optimisation problem with a constraint on a target level of system reliability that must be achieved at minimum system cost. After successful validation, the viability of the OA is demonstrated by applying it to optimise four different non-repairable sample systems under the risk-based reliability allocation method. Each system is assumed to have a discrete choice of component data sets, showing a monotonically increasing cost-reliability relationship among the alternatives, and a fixed cost of failure. While this optimisation process is the main objective of the research, two variations are also introduced for the purpose of parametric studies. To study the effects of changes in the reliability investment on system reliability and total loss, the first variation uses a different discrete data set exhibiting a non-monotonically increasing relationship between cost and reliability among the alternatives. To study the effects of the risk of failure, the second variation uses a different cost-of-failure amount associated with a given non-repairable system failure.
    The optimisation processes show very interesting relationships between system reliability and total loss. For instance, while maximum reliability is generally associated with high total loss and low risk of failure, the minimum observed value of the total loss is not always associated with minimum system reliability. The results therefore exhibit various levels of system reliability and total loss, with both values showing strong sensitivity to the selected combination of component alternatives. The first parametric study shows that the second data set (non-monotonic) creates more opportunities for the optimisation process to produce better values of the loss function, since cheaper components with higher reliabilities can be selected with higher probabilities. The second parametric study shows that reducing the cost-of-failure amount reduces the risk of failure, which in turn increases the chances of using cheaper components with lower levels of reliability, producing lower values of the loss function. The research concludes that the risk-based reliability allocation method, together with the optimisation algorithm, can be used as a powerful tool for highlighting the various levels of system reliability, with associated total losses, achievable for any given system under consideration. This notion can be extended further to selecting an optimal system configuration from various competing topologies. With such information to hand, reliability engineers can streamline complicated system designs in view of the required level of system reliability with the minimum associated total cost of premature failure. In all cases studied, the run time of the optimisation algorithm increases linearly with the complexity of the problem and, due to its unique model of evolution, the algorithm conducts a very detailed multi-directional search across the solution space in few generations, a very important attribute for the kind of problem studied in this research. Consequently, it converges rapidly towards the optimum solution, unlike the classical genetic algorithm, which reaches the optimum only gradually, when successful. The research also identifies key areas for future development, with scope to expand in various other dimensions due to its interdisciplinary applications.
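
    The abstract describes the OA only conceptually, so the sketch below is a generic reconstruction under stated assumptions rather than the thesis' actual algorithm: each component offers a discrete set of (cost, reliability) alternatives, the fitness of a chromosome is its total loss (reliability investment plus risk of failure), and each generation keeps only elite chromosomes and mutates them heavily. For brevity the reliability of an assumed series system is computed in closed form here, where the thesis uses Monte Carlo simulation; all data are invented.

        import random

        # Invented component data: (cost, reliability) alternatives per component.
        ALTERNATIVES = [
            [(10, 0.90), (25, 0.95), (60, 0.99)],
            [(5, 0.85), (15, 0.93), (40, 0.98)],
            [(8, 0.88), (20, 0.94), (55, 0.995)],
        ]
        COST_OF_FAILURE = 1000.0  # fixed amount, assumed

        def total_loss(chromosome):
            """Total loss = reliability investment + risk of failure."""
            investment = sum(ALTERNATIVES[i][g][0] for i, g in enumerate(chromosome))
            r_system = 1.0
            for i, g in enumerate(chromosome):  # series system
                r_system *= ALTERNATIVES[i][g][1]
            return investment + (1.0 - r_system) * COST_OF_FAILURE

        def mutate(chromosome, rate=0.5):
            """Heavy mutation, echoing the OA's emphasis on the mutation operator."""
            return tuple(
                random.randrange(len(ALTERNATIVES[i])) if random.random() < rate else g
                for i, g in enumerate(chromosome)
            )

        # Elite-only evolution over populations of unique chromosomes.
        population = {tuple(random.randrange(len(a)) for a in ALTERNATIVES)
                      for _ in range(20)}
        for _ in range(50):
            elites = sorted(population, key=total_loss)[:5]
            population = set(elites) | {mutate(e) for e in elites for _ in range(4)}

        best = min(population, key=total_loss)
        print("best alternatives:", best, "total loss:", round(total_loss(best), 1))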

    Immunological and molecular epidemiological characteristics of acute and fulminant viral hepatitis A

    Background: Hepatitis A virus causes an infection of the liver and is hyperendemic in vast areas of the world, including India. In most cases it causes an acute, self-limited illness, and only rarely a fulminant one. There is growing concern about a change in pattern from asymptomatic childhood infection to an increased incidence of symptomatic disease in the adult population.
    Objective: In-depth analysis of the immunology, viral load, and genotype of acute and fulminant hepatitis A virus infection.
    Methods: Serum samples obtained from 1009 cases of suspected acute viral hepatitis were subjected to biochemical and serological examination. RNA was extracted from serum, reverse transcribed into cDNA, and amplified using nested PCR for viral quantification, sequencing and genotyping. Immunological cell counts from freshly collected whole blood were carried out by fluorescence-activated cell sorting.
    Results: Fulminant hepatitis A was mostly detected together with other hepatitis viruses. The CD8+ T cell count increased to a significantly high level in fulminant hepatitis (P = 0.005) compared with normal healthy controls, and the immunological helper/suppressor (CD4+/CD8+) ratio in fulminant hepatitis was significantly lower than in acute cases. Serologically positive patients were confirmed by RT-PCR, and a total of 72 (69.2%) were quantified and sequenced. The average viral load of fulminant cases was significantly higher (P < 0.05). The genotypic distribution was similar in the acute and fulminant categories, with genotype IIIA predominating (70%) over IA (30%).
    Conclusions: Immunological factors in combination with viral load define the severity of fulminant hepatitis A. Phylogenetic analysis of acute and fulminant hepatitis A confirmed genotype IIIA as predominant over IA, with no preference for disease severity.

    High-temperature performance of ferritic steels in fireside corrosion regimes: temperature and deposits

    The paper reports the high-temperature resistance of ferritic steels in fireside corrosion regimes, in terms of temperature and deposit aggressiveness. Four candidate power plant steels, 15Mo3, T22, T23 and T91, were exposed to a simulated air-fired combustion environment for 1000 h. The tests were conducted at 600, 650 and 700 °C according to the deposit-recoat test method. Post-exposure samples were examined via dimensional metrology (the main route to quantifying metal loss), and mass change data were recorded to study kinetic behavior at elevated temperatures. Microstructural investigations using ESEM-EDX were performed to investigate corrosion degradation and scale thickness. The ranking of the steels from most to least damage was 15Mo3 > T22 > T23 > T91 at all three temperatures. The highest rate of corrosion at all temperatures occurred under the screening deposit.
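
    Dimensional metrology, the main metal-loss route mentioned above, typically reduces to subtracting post-exposure metal thickness from a pre-exposure reference at many positions around a sample and summarising the resulting distribution. The sketch below illustrates that reduction; the measurements, the gamma-distributed losses, and the median/99th-percentile summary are invented assumptions, not the paper's data.

        import numpy as np

        rng = np.random.default_rng(1)

        # Invented pre-exposure reference thickness (um) and post-exposure
        # thicknesses at 24 positions around the sample cross-section.
        reference = 3000.0
        post = reference - rng.gamma(shape=4.0, scale=20.0, size=24)

        metal_loss = reference - post  # loss per measurement position (um)
        print(f"median metal loss: {np.median(metal_loss):.0f} um")
        print(f"99th-percentile loss: {np.percentile(metal_loss, 99):.0f} um")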

    Multi-watt, multi-octave, mid-infrared femtosecond source

    Spectroscopy in the wavelength range from 2 to 11 µm (900 to 5000 cm⁻¹) has a multitude of applications in fundamental physics and chemistry, as well as in the environmental and life sciences. The associated vibrational transitions, exhibited by all infrared-active small molecules, the most common functional groups, and biomolecules such as proteins, lipids, nucleic acids, and carbohydrates, reveal information about molecular structure and composition. However, light sources and detectors in the mid-infrared have been inferior to those in the visible or near-infrared in terms of power, bandwidth, and sensitivity, severely limiting the performance of infrared experimental techniques. This article demonstrates the generation of femtosecond radiation with up to 5 W at 4.1 µm and 1.3 W at 8.5 µm, corresponding to an order-of-magnitude increase in average power for ultrafast light sources operating at wavelengths longer than 5 µm. The presented concept is based on power-scalable near-infrared lasers emitting at a wavelength near 1 µm, which pump optical parametric amplifiers. In addition, both wavelength tunability and supercontinuum generation are reported, resulting in spectral coverage from 1.6 to 10.2 µm with power densities exceeding state-of-the-art synchrotron sources over the entire range. The flexible frequency conversion scheme is highly attractive for both up-conversion and frequency comb spectroscopy, as well as for a variety of time-domain applications.
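
    The quoted spectral ranges mix wavelength and wavenumber units; the two are related by ν̃ [cm⁻¹] = 10⁴ / λ [µm]. As a quick sanity check of the figures above (not code from the paper), the snippet below converts the stated 2 to 11 µm window.

        # Wavenumber (cm^-1) from wavelength (um): nu = 1e4 / lambda.
        def to_wavenumber(wavelength_um: float) -> float:
            return 1.0e4 / wavelength_um

        # 2-11 um maps to roughly 5000-909 cm^-1, matching "900 to 5000 cm^-1".
        for lam in (2.0, 11.0):
            print(f"{lam:>5.1f} um -> {to_wavenumber(lam):7.0f} cm^-1")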

    Health-Related Quality of Life, Subjective Health Complaints, Psychological Distress and Coping in Pakistani Immigrant Women With and Without the Metabolic Syndrome: The InnvaDiab-DEPLAN Study on Pakistani Immigrant Women Living in Oslo, Norway

    The increasing number of immigrants from South-East Asia with the Metabolic Syndrome (MetS) is an important challenge for the public health sector. Impaired glucose metabolism is a central feature of MetS, and blood glucose concentration is governed not only by diet and physical activity but also by psychological distress, which could contribute to the development of MetS. The aim of this study is to describe health-related quality of life, subjective health complaints (SHC), psychological distress, and coping in Pakistani immigrant women with and without MetS. As part of a randomized controlled intervention study in Oslo, Norway, female Pakistani immigrants (n = 198) answered questionnaires regarding health-related quality of life, SHC, psychological distress, and coping. Blood variables were determined and a standardized oral glucose tolerance test was performed. The participants had high scores on SHC and psychological distress. About 40% of the participants had MetS, and this group showed significantly lower general health, lower physical function, and more bodily pain than those without MetS. Those with MetS also had more SHC, more depressive symptoms, higher levels of somatisation, and scored significantly lower on the coping strategy of active problem solving. Pakistani immigrant women seem to have a high prevalence of SHC and psychological distress, especially those with MetS.