
    Variations In Semen Sample Parameters Among Men In A Fertility Clinic: Implications For Reproducibility In Epidemiologic Studies

    BACKGROUND: In population studies, one semen sample is usually collected per individual, but in clinical settings it is recommended that multiple semen samples are collected per individual for analysis. The goal of this study is to estimate the size of within-person variability in semen quality parameters, in order to determine how many repeat samples are needed to reliably characterise an individual in a semen quality study. We also investigate how accurately semen parameter values for an individual can be predicted using the long-term average as a standard. HYPOTHESIS: We hypothesize that a maximum of 2 semen samples is reliable enough to characterize an individual as fertile or infertile in a clinical or research setting. METHOD: This study consists of 287 men who provided a total of 654 semen samples (range 1 to 9 per man). Semen samples were collected over a period of about 2 years. Within-person and between-person variability was analyzed for the following semen parameters: sperm concentration, total sperm count, ejaculate volume, sperm morphology (% normal) and motility (% motile). PRELIMINARY RESULTS: There were no significant differences in demographics or reproductive history according to the number of samples collected. Variation between individuals was substantial, but variation within individuals ranged from 14% to 28%. Intraclass correlation values ranged from 0.72 to 0.86, signifying high reproducibility of semen parameter values. Correlation did not diminish with time. The first sample given by each individual was highly similar to that individual's long-term within-person average. CONCLUSIONS: Based on the results of this study, semen parameter values are highly reproducible, and so 1 sample can provide a true representation of an individual's long-term average.
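
    A minimal sketch of the kind of reproducibility computation reported above: a one-way random-effects estimate of the intraclass correlation (ICC) from repeated samples per man. This is not the study's actual code; the data and layout are hypothetical.

```python
# Hypothetical sketch: one-way random-effects ANOVA estimate of the
# intraclass correlation (ICC) for repeated semen samples per man.
import numpy as np

def icc_oneway(groups):
    """groups: one 1-D array of repeat measurements per man (sizes may vary)."""
    k = len(groups)                            # number of men
    n = np.array([len(g) for g in groups])     # samples per man
    N = n.sum()
    grand = np.concatenate(groups).mean()
    means = np.array([g.mean() for g in groups])

    ms_between = np.sum(n * (means - grand) ** 2) / (k - 1)
    ms_within = sum(np.sum((g - m) ** 2) for g, m in zip(groups, means)) / (N - k)

    n0 = (N - np.sum(n ** 2) / N) / (k - 1)    # effective group size (unbalanced data)
    var_between = max((ms_between - ms_within) / n0, 0.0)
    return var_between / (var_between + ms_within)

# Toy data: sperm concentration (million/mL) for 3 men, 2-3 samples each
men = [np.array([45.0, 52.0, 48.0]), np.array([80.0, 75.0]), np.array([20.0, 26.0, 23.0])]
print(f"ICC = {icc_oneway(men):.2f}")  # values near 0.72-0.86 indicate high reproducibility
```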

    Quantitative analysis of dynamic safety-critical systems using temporal fault trees

    Emerging technological systems present complexities that pose new risks and hazards. Some of these systems, called safety-critical systems, can have disastrous effects on human life and the environment if they fail. For this reason, such systems may feature multiple modes of operation, which may make use of redundant components, parallel architectures, and the ability to fall back to a degraded state of operation without failing completely. However, the introduction of such features poses new challenges for systems analysts, who need to understand how such systems behave and estimate how reliable and safe they really are. Fault Tree Analysis (FTA) is a technique widely accepted and employed for analysing the reliability of safety-critical systems. With FTA, analysts can perform both qualitative and quantitative analyses of safety-critical systems. Unfortunately, traditional FTA is unable to efficiently capture some of the dynamic features of modern systems. This problem is not new; various efforts have been made to develop techniques to solve it. Pandora is one such technique to enhance FTA. It uses new 'temporal' logic gates, in addition to some existing ones, to model dynamic sequences of events and ultimately produce the combinations of basic events necessary and sufficient to cause a system failure. Until now, however, Pandora has not been able to quantitatively evaluate the probability of a system failure. This is the motivation for this thesis. This thesis proposes and evaluates various techniques for the probabilistic evaluation of the temporal gates in Pandora, enabling quantitative temporal fault tree analysis. It also introduces a new logical gate called the 'parameterised Simultaneous-AND' (pSAND) gate. The proposed techniques include both analytical and simulation-based approaches. The analytical solution supports only component failures with exponential distributions, whilst the simulation approach is not restricted to any specific component failure distribution. Techniques for evaluating higher-order component combinations, which result from the propagation of individual gates towards a system failure, have also been formulated. These mathematical expressions for the evaluation of individual gates and combinations of components enable the evaluation of a total system failure probability and of importance measures, which are of great interest to system analysts.
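
    As a flavour of the simulation-based quantification the thesis describes, the sketch below estimates the probability of one Pandora temporal gate, Priority-AND (X fails strictly before Y, both within the mission time), by Monte Carlo and checks it against the closed form for exponential failures. Rates and mission time are illustrative, not taken from the thesis.

```python
# Hedged sketch: Monte Carlo quantification of a Priority-AND (PAND) gate.
# Top event: X fails before Y and both fail within mission time t.
import numpy as np

rng = np.random.default_rng(0)
lam_x, lam_y, t, n = 2e-4, 5e-4, 1000.0, 1_000_000  # illustrative values

x = rng.exponential(1 / lam_x, n)    # failure times of component X
y = rng.exponential(1 / lam_y, n)    # failure times of component Y
mc = np.mean((x < y) & (y <= t))     # fraction of runs where the sequence occurs

# Closed form for exponentially distributed failures, for comparison:
exact = (1 - np.exp(-lam_y * t)) - lam_y / (lam_x + lam_y) * (
    1 - np.exp(-(lam_x + lam_y) * t))
print(f"Monte Carlo: {mc:.5f}   analytical: {exact:.5f}")
```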

    Quantification of temporal fault trees based on fuzzy set theory

    © Springer International Publishing Switzerland 2014. Fault tree analysis (FTA) has been modified in different ways to make it capable of performing quantitative and qualitative safety analysis with temporal gates, thereby overcoming its limitation in capturing sequential failure behaviour. However, for many systems it is often very difficult to obtain exact failure rates of components, due to the increased complexity of systems, the scarcity of necessary statistical data, and so on. To overcome this problem, this paper presents a methodology based on fuzzy set theory to quantify temporal fault trees. This makes the imprecision in available failure data more explicit and helps to obtain a range of most probable values for the top event probability.
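
    A small illustration of how such a fuzzy quantification can work: triangular fuzzy failure probabilities are propagated through an AND gate with alpha-cut interval arithmetic, giving a range of top-event probabilities at each confidence level. The numbers are hypothetical, not drawn from the paper.

```python
# Hypothetical sketch: alpha-cut propagation of triangular fuzzy probabilities
# through an AND gate to bound the top-event probability.
import numpy as np

def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (low, modal, high) at level alpha."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

def fuzzy_and(inputs, alphas=np.linspace(0.0, 1.0, 5)):
    """AND gate: multiply the interval bounds of each input at every alpha level."""
    for alpha in alphas:
        lo = hi = 1.0
        for tri in inputs:
            l, h = alpha_cut(tri, alpha)
            lo, hi = lo * l, hi * h
        yield alpha, lo, hi

# Two basic events with imprecise failure probabilities (low, modal, high):
e1, e2 = (0.01, 0.02, 0.04), (0.05, 0.08, 0.10)
for alpha, lo, hi in fuzzy_and([e1, e2]):
    print(f"alpha={alpha:.2f}: top-event P in [{lo:.5f}, {hi:.5f}]")
```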

    Quantification of Simultaneous-AND Gates in Temporal Fault Trees

    Fault Tree Analysis has been a cornerstone of safety-critical systems analysis for many years. It has seen various extensions to enable it to analyse dynamic behaviours exhibited by modern systems with redundant components. However, none of these extended FTA approaches provide much support for modelling situations where events have to be "nearly simultaneous", i.e., where events must occur within a certain interval to cause a failure. Although one such extension, Pandora, is unique in providing a "Simultaneous-AND" gate, it does not allow such intervals to be represented. In this work, we extend the Simultaneous-AND gate to include a parameterized interval - referred to as pSAND - such that the output event occurs if the input events occur within a defined period of time. This work then derives an expression for the exact quantification of pSAND for exponentially distributed events and provides an approximation using Monte Carlo simulation which can be used for other distributions.
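
    The Monte Carlo approximation mentioned in the abstract is straightforward to sketch: a pSAND gate fires when both inputs occur within an interval d of each other (and within the mission time t). The parameter values below are illustrative, and the exact expression derived in the paper is not reproduced here.

```python
# Hedged sketch of pSAND quantification by Monte Carlo simulation.
import numpy as np

def psand_mc(lam_x, lam_y, d, t, n=1_000_000, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.exponential(1 / lam_x, n)   # event X times
    y = rng.exponential(1 / lam_y, n)   # event Y times
    # "Nearly simultaneous": both events by t, separated by at most d.
    return np.mean((np.abs(x - y) <= d) & (np.maximum(x, y) <= t))

print(f"P(pSAND) ~= {psand_mc(3e-4, 4e-4, d=50.0, t=5000.0):.5f}")
# Replacing the exponential draws with, e.g., rng.weibull(...) samples shows why
# the simulation route extends to distributions the closed form does not cover.
```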

    Quantification of feedstock characteristics before and during hydrothermal liquefaction under subcritical conditions of water

    Hydrothermal liquefaction (HTL) is a thermal process that converts organics in biomass and waste into renewable crude-like oil. Sewage sludge is a typical waste material that can produce crude-like oil via HTL under subcritical conditions due to its organic-rich content. The composition and properties of sewage sludge significantly influence the feedstock's processability, the conversion of organics, and the crude-like oil yields. HTL feedstock properties, specifically viscosity and density, are important flow properties that affect HTL product formation at different reaction conditions. Knowledge of the feedstock viscosity helps estimate slurry transportation through pipelines, pumping power and heat transfer requirements for design purposes. To determine the flow properties before HTL for pipeline specification, the settling characteristics of biosolid slurries were determined using batch settling experiments. Results of the settling tests were used to assess the stability of the slurry during transportation. The effect of solid concentration and particle size of biosolids on slurry stability and pumpability was evaluated. The stability of biosolid slurries improved with an increase in solid concentration. The rheological properties of sewage sludge obtained from different parts of the wastewater treatment plant were also estimated. Generally, sludge slurries were determined to be non-Newtonian fluids. Rheological parameters of sludge feedstock, including yield stress, flow behaviour and consistency indices, were obtained from rheological models. A comparative study was made of the rheology and pumping power required for different sludge types to determine the transportability of these slurries based on a plant capacity of 1000 tonnes/year. To determine the flow properties of sewage sludge during the HTL process, the real-time viscosity of sewage sludge slurries was quantified using a modified batch reactor. The torque-rotational speed data of the impeller were converted to shear stress-shear rate data to estimate viscosity. The Couette and Metzner-Otto methods were shown to be valid for real-time viscosity measurement under subcritical conditions. Apparent viscosity changes of lipid, protein and carbohydrate model compounds at different reaction conditions were estimated. Model compounds exhibited unique viscosity profiles based on the mass yields and chemical speciation of HTL products. Apparent viscosity changes in reacting sludge slurries were determined at variable solid concentration, temperature and pressure. Significant differences were observed between the apparent viscosity profiles of sludge slurries and those of mixtures of model compounds with similar organic compositions. The effects of lignin, inorganics and the dominance of specific macromolecules on the apparent viscosity of sludge were analysed through a comparative study with the apparent viscosity of microalgae and were determined to have significant effects on the apparent viscosity profile of sludge slurries. The major contributions of this PhD investigation can be applied to determine the real-time viscosity of fluids and slurries under subcritical conditions. The study on the rheology of sludge is vital for pipeline specification and reactor design purposes. The changes in apparent biomass viscosity can be predicted for design purposes and process monitoring during reactor operations in an HTL plant.
    Thesis (Ph.D.) -- University of Adelaide, School of Chemical Engineering and Advanced Materials, 202
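
    The torque-to-viscosity conversion described above can be sketched as follows: the Metzner-Otto rule turns impeller speed into an effective shear rate, a Couette analogy turns torque into shear stress, and a Herschel-Bulkley model is fitted to recover yield stress and the flow-behaviour and consistency indices. All constants and readings below are assumed for illustration, not taken from the thesis.

```python
# Illustrative sketch (constants assumed): impeller torque-speed data ->
# shear stress-shear rate -> Herschel-Bulkley fit  tau = tau_y + K * gd**n.
import numpy as np
from scipy.optimize import curve_fit

K_S = 11.0      # Metzner-Otto impeller constant (assumed)
R_EQ = 0.015    # equivalent inner-cylinder radius, m (assumed)
H = 0.030       # effective impeller height, m (assumed)

def to_rheogram(speed_rps, torque_nm):
    gamma_dot = K_S * speed_rps                      # effective shear rate, 1/s
    tau = torque_nm / (2 * np.pi * R_EQ**2 * H)      # Couette-analogy shear stress, Pa
    return gamma_dot, tau

def herschel_bulkley(gamma_dot, tau_y, K, n):
    return tau_y + K * gamma_dot**n

# Synthetic readings standing in for reactor measurements:
speed = np.array([0.5, 1.0, 2.0, 4.0, 8.0])                  # rev/s
torque = np.array([0.0211, 0.0251, 0.0304, 0.0374, 0.0466])  # N*m
gd, tau = to_rheogram(speed, torque)
(tau_y, K, n), _ = curve_fit(herschel_bulkley, gd, tau, p0=[100.0, 100.0, 0.5])
print(f"yield stress={tau_y:.0f} Pa, consistency K={K:.0f} Pa*s^n, flow index n={n:.2f}")
```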

    Reliability analysis of dynamic systems by translating temporal fault trees into Bayesian networks

    Classical combinatorial fault trees can be used to assess combinations of failures but are unable to capture sequences of faults, which are important in complex dynamic systems. A number of proposed techniques extend fault tree analysis to dynamic systems. One such technique, Pandora, introduces temporal gates to capture the sequencing of events and allows qualitative analysis of temporal fault trees. Pandora can be easily integrated into model-based design and analysis techniques. It is, therefore, useful to explore the possible avenues for quantitative analysis of Pandora temporal fault trees, and we identify Bayesian Networks as a possible framework for such analysis. We describe how Pandora fault trees can be translated into Bayesian Networks for dynamic dependability analysis and demonstrate the process on a simplified fuel system model. The conversion facilitates predictive reliability analysis of Pandora fault trees, but also opens the way for post-hoc diagnostic analysis of failures.
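
    One common way to realise such a translation, sketched below under assumptions of our own (the paper's exact encoding may differ), is a discrete-time Bayesian network in the style of Boudali and Dugan: mission time is split into slices, each basic-event node takes the slice in which the component fails (or a survival state), and each temporal gate becomes a deterministic conditional probability table. Here the PAND case is evaluated by direct enumeration rather than with a BN library.

```python
# Sketch: discrete-time BN encoding of a PAND gate, evaluated by enumeration.
# Rates, mission time and number of slices are illustrative.
import math
from itertools import product

LAM_X, LAM_Y, T, M = 2e-4, 5e-4, 1000.0, 20   # failure rates, mission time, slices

def interval_probs(lam):
    """P(node = i): failure in slice i for i < M; state M means 'survives mission'."""
    edges = [T * i / M for i in range(M + 1)]
    p = [math.exp(-lam * edges[i]) - math.exp(-lam * edges[i + 1]) for i in range(M)]
    p.append(math.exp(-lam * T))
    return p

px, py = interval_probs(LAM_X), interval_probs(LAM_Y)

# Deterministic PAND CPT: top event iff X fails in a strictly earlier slice
# than Y, and Y fails within the mission (same-slice ties are lost to
# discretisation; the error shrinks as M grows).
top = sum(px[i] * py[j]
          for i, j in product(range(M + 1), repeat=2) if i < j < M)
print(f"P(top event by t={T}) ~= {top:.5f}")
```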

    Application of the D3H2 Methodology for the Cost-Effective Design of Dependable Systems

    The use of dedicated components as a means of achieving desirable levels of fault tolerance in a system may result in high costs. A cost-effective way of restoring failed functions is to use heterogeneous redundancies: components that, besides performing their primary intended design function, can also restore compatible functions of other components. In this paper, we apply a novel design methodology called D3H2 (aDaptive Dependable Design for systems with Homogeneous and Heterogeneous redundancies) to assist in the systematic identification of heterogeneous redundancies, the design of hardware/software architectures including fault detection and reconfiguration, and the systematic dependability and cost assessment of the system. D3H2 integrates parameter uncertainty and criticality analyses to model inexact failure data in dependability assessment. The application to a railway case study is presented, with a focus on analysing different reconfiguration strategies as well as types and levels of redundancies.
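
    A toy comparison of the trade-off at the heart of the methodology (numbers hypothetical, not from the railway case study): a dedicated homogeneous spare buys a lower function-loss probability at a hardware cost, while a heterogeneous redundancy reuses an existing component at the price of lower reliability in the restored function.

```python
# Toy sketch: cost vs. dependability of homogeneous and heterogeneous
# redundancy. Rare-event probabilities are combined with the usual
# small-probability approximation (OR ~ sum). All figures are assumed.
P_PRIMARY = 1e-3    # primary component fails during mission
P_SPARE = 1e-3      # dedicated spare fails when called upon
P_HETERO = 5e-3     # heterogeneous substitute fails in the restored role
P_RECONF = 1e-4     # fault detection/reconfiguration fails

COST_SPARE = 1200.0     # extra hardware for the dedicated spare
COST_RECONF_SW = 300.0  # reconfiguration software, needed in both designs

# Function lost if the primary fails AND (the backup OR the reconfiguration fails):
p_homo = P_PRIMARY * (P_SPARE + P_RECONF)
p_hetero = P_PRIMARY * (P_HETERO + P_RECONF)

print(f"homogeneous : P(loss)={p_homo:.2e}, added cost={COST_SPARE + COST_RECONF_SW:.0f}")
print(f"heterogeneous: P(loss)={p_hetero:.2e}, added cost={COST_RECONF_SW:.0f}")
```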

    Non-Adherence Tree Analysis (NATA) - an adherence improvement framework: a COVID-19 case study

    Poor medication adherence is a global phenomenon that has received a significant amount of research attention yet remains largely unsolved. Medication non-adherence can blur drug efficacy results in clinical trials, lead to substantial financial losses, increase the risk of relapse and hospitalisation, or lead to death. The most common methods of measuring adherence are post-treatment measures; that is, adherence is usually measured after the treatment has begun. What the authors propose in this multidisciplinary study is a new technique for predicting the factors that are likely to cause non-adherence before or during medication treatment, illustrated in the context of potential non-adherence to COVID-19 antiviral medication. Fault Tree Analysis (FTA) allows system analysts to determine how combinations of simple faults of a system can propagate to cause a total system failure. Monte Carlo simulation is a mathematical algorithm that depends heavily on repeated random sampling to predict the behaviour of a system. In this study, the authors propose a new technique called Non-Adherence Tree Analysis (NATA), based on the FTA and Monte Carlo simulation techniques, to improve adherence. Firstly, the non-adherence factors of a medication treatment lifecycle are translated into what is referred to as a Non-Adherence Tree (NAT). Secondly, the NAT is coded into a format that is translated into the GoldSim software for performing dynamic system modelling and analysis using Monte Carlo simulation. Finally, the GoldSim model is simulated and analysed to predict the behaviour of the NAT. NATA is dynamic and able to learn from emerging datasets to improve the accuracy of future predictions. It produces a framework for improving adherence by analysing social and non-social adherence barriers. Novel terminologies and mathematical expressions have been developed and applied to real-world scenarios. The results of the application of NATA using data from six previous studies of antiviral medication demonstrate a predictive model which suggests that the biggest factor that could contribute to non-adherence to a COVID-19 antiviral treatment is a therapy-related factor (the side effects of the medication). This is closely followed by a condition-related factor (the asymptomatic nature of the disease) and then patient-related factors (forgetfulness and other causes). From the results, it appears that side effects, asymptomatic factors and forgetfulness contribute 32.44%, 22.67% and 18.22% respectively to discontinuation of COVID-19 antiviral medication treatment. With this information, clinicians can implement relevant interventions and measures and allocate resources appropriately to minimise non-adherence.
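
    The quantification step can be pictured with a plain Monte Carlo stand-in for the GoldSim model (probabilities below are illustrative, not the study's data): basic non-adherence factors feed an OR gate, and each factor's contribution is estimated from the runs in which it fired.

```python
# Hedged sketch of NAT quantification: OR gate over non-adherence factors,
# sampled by Monte Carlo. Factor probabilities are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
factors = {"side_effects": 0.18, "asymptomatic": 0.12, "forgetfulness": 0.10}
n = 500_000

draws = {name: rng.random(n) < p for name, p in factors.items()}
non_adherent = np.logical_or.reduce(list(draws.values()))  # any factor suffices

print(f"P(non-adherence) ~= {non_adherent.mean():.3f}")
for name, fired in draws.items():
    share = (fired & non_adherent).sum() / non_adherent.sum()
    # Shares can exceed 100% in total because several factors may fire together.
    print(f"  {name}: present in {share:.1%} of non-adherent runs")
```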

    Osteoporosis Classification Using Texture Features

    Assessment of osteoporotic disease from radiograph images is a significant challenge. The texture characteristics of the bone microarchitecture of osteoporotic and healthy cases are visually very similar when observed with the naked eye, making this a challenging classification problem. To extract discriminative patterns across all orientations and scales simultaneously, in this study we propose an approach based on a combination of multi-resolution Gabor filters and 1D local binary pattern (1DLBP) features. Gabor filters are used due to their advantages in yielding a scale- and orientation-sensitive analysis, whereas LBPs are useful for quantifying microstructural changes in the images. Our experiments show that the proposed method achieves good classification results, with an overall accuracy of about 72.71%, and outperforms the other methods considered in this paper.
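
    A sketch of the described feature pipeline, with assumed parameter values: a bank of Gabor filters captures scale and orientation structure, and a 1DLBP histogram over each flattened response quantifies microstructural texture. The resulting vector would feed whatever classifier is used downstream.

```python
# Illustrative Gabor + 1DLBP feature extraction (parameters assumed).
import numpy as np
from skimage.filters import gabor

def lbp_1d(signal, p=4):
    """1DLBP: compare p/2 neighbours on each side of every sample to its centre."""
    half = p // 2
    centre = signal[half:len(signal) - half]
    codes = np.zeros(len(centre), dtype=int)
    offsets = list(range(-half, 0)) + list(range(1, half + 1))
    for bit, off in enumerate(offsets):
        neighbour = signal[half + off:len(signal) - half + off]
        codes |= (neighbour >= centre).astype(int) << bit
    return np.bincount(codes, minlength=2 ** p) / len(codes)  # normalised histogram

def gabor_1dlbp_features(image, frequencies=(0.1, 0.2, 0.4), n_orient=4):
    feats = []
    for f in frequencies:                                              # scales
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):  # orientations
            real, _ = gabor(image, frequency=f, theta=theta)
            feats.append(lbp_1d(real.ravel()))
    return np.concatenate(feats)

patch = np.random.default_rng(0).random((64, 64))   # toy stand-in for a radiograph ROI
print(gabor_1dlbp_features(patch).shape)            # 3 freqs * 4 orientations * 16 bins
```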

    A Preliminary Study on Effectiveness of a Standardized Multi-Robot Therapy for Improvement in Collaborative Multi-Human Interaction of Children with ASD

    This research article presents a preliminary longitudinal study to assess the improvement in multi-human communication of children with Autism Spectrum Disorder (ASD) using a standardized multi-robot therapy. The research is based on a 3-step framework: 1) Human-Human Interaction, Stage 1 (HHI-S1), 2) Human-Robot Interaction, Stage 2 (HRI-S2), and 3) Human-Human Interaction, Stage 3 (HHI-S3). All three stages of the therapy consist of two command sets: 1) control commands and 2) evaluation commands (auditory commands, visual commands, and combinations of both). The concept of multiple robots is introduced to support multi-human communication and discourage isolation in children with ASD. The joint attention of a child with ASD is improved by the robotic therapy in stage 2, as joint attention is a key parameter for a multi-human communication scenario. The improvement in joint attention results in better command following in a triadic multi-human communication scenario in stage 3 as compared to stage 1. The proposed intervention was tested on 8 subjects with ASD over 10 sessions spanning two and a half months (10 weeks). Each session of human-human interaction (stages 1 and 3) consisted of 14 cues, whereas 18 cues were presented by each robot for human-robot interaction (stage 2). The results indicate an overall 86% improvement in the social communication skills of the children with ASD in the multi-human scenario. Validation of the results and the effectiveness of the therapy were further established through the use of the Childhood Autism Rating Scale (CARS) score.