
    Calculations of Radiobiological Treatment Outcome in Rhabdomyosarcoma

    Thulani Nyathi, Student no: 0413256X, MSc thesis, Physics, Faculty of Science, 2006. Supervisor: Prof. D. van der Merwe. This study aims to calculate tumour control probabilities (TCP) and normal tissue complication probabilities (NTCP) using radiobiological models and to correlate these probabilities with clinically observed treatment outcomes from follow-up records. The radiobiological calculations were applied retrospectively to thirty-nine paediatric patients with histologically proven rhabdomyosarcoma who were treated with radiation at Johannesburg Hospital between January 1990 and December 2000. Computer software, BIOPLAN, was used to calculate the TCP and NTCP arising from the dose distribution calculated by the treatment planning system and characterized by dose-volume histograms (DVHs). There was a weak correlation between the calculated TCP and the observed 5-year overall survival status. Potential prognostic factors for survival were also examined; statistical analysis was performed using the Cox proportional hazards regression model. The 5-year overall survival rate was 55%. The findings of this study provide a yardstick against which more aggressive radiotherapy fractionation regimes can be compared.
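    The abstract does not detail BIOPLAN's internal models, so purely as an illustration, the sketch below computes a TCP value from a differential DVH using the common Poisson linear-quadratic formulation; the function name and parameter values are assumptions, not figures from the study.

```python
import numpy as np

def tcp_lq_poisson(bin_dose_gy, bin_volume_fraction, alpha=0.3, alpha_beta=10.0,
                   total_clonogens=1e7, dose_per_fraction=2.0):
    """Poisson linear-quadratic TCP from a differential dose-volume histogram.

    bin_dose_gy         : total dose delivered to each DVH bin (Gy)
    bin_volume_fraction : fraction of tumour volume in each bin (sums to 1)
    alpha, alpha_beta   : LQ radiosensitivity parameters (illustrative values)
    total_clonogens     : assumed number of clonogenic cells in the tumour
    """
    d = np.asarray(bin_dose_gy, dtype=float)
    v = np.asarray(bin_volume_fraction, dtype=float)
    # LQ cell survival for a fractionated schedule with dose per fraction d_f:
    # SF = exp(-alpha * D * (1 + d_f / (alpha/beta)))
    sf = np.exp(-alpha * d * (1.0 + dose_per_fraction / alpha_beta))
    expected_survivors = total_clonogens * v * sf
    # Poisson probability that no clonogen survives anywhere in the tumour
    return float(np.exp(-expected_survivors.sum()))

# Example: a near-uniform 50.4 Gy plan summarised by three DVH bins
print(tcp_lq_poisson([48.0, 50.4, 52.0], [0.1, 0.8, 0.1]))
```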

    Promoting Effective Teaching in Multi-Grade Teachers


    Statistical analysis of bioequivalence studies

    A Research Report submitted to the Faculty of Science in partial fulfilment of the requirements for the degree of Master of Science. 26 October 2016. Healthcare has become expensive the world over, and a large share of that cost is spent on drugs. To reduce this cost, manufacturers produce generic drugs, which cost less than brand name drugs. The challenge is establishing how safe, effective and efficient generic drugs are compared with brand name drugs. Bioequivalence studies evolved to address this challenge: they are statistical procedures for assessing whether generic and brand name drugs are similar in treating patients. This study was undertaken to demonstrate bioequivalence in drugs: statistical tests are used to check that the bioavailability of a generic drug is essentially the same as that of the original drug. The United States Food and Drug Administration took the lead in developing statistical methods for certifying generic drugs as bioequivalent to brand name drugs. Pharmacokinetic parameters are obtained from blood samples after dosing study subjects with the generic and brand name drugs. The design used for analysis in this research report is a 2 × 2 crossover design. Average, population and individual bioequivalence are assessed from the pharmacokinetic parameters to ascertain whether the drugs are bioequivalent. Statistical procedures used include confidence intervals and interval hypothesis tests, based on both parametric and nonparametric methods. When presenting results, effect sizes are reported in addition to the hypothesis tests and confidence intervals, which only indicate whether a difference exists; where a difference between the generic and brand name drugs exists, the effect size quantifies its magnitude. KEY WORDS: bioequivalence, bioavailability, generic (test) drugs, brand name (reference) drugs, average bioequivalence, population bioequivalence, individual bioequivalence, pharmacokinetic parameters, therapeutic window, pharmaceutical equivalence, confidence intervals, hypothesis tests, effect sizes.
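    As a minimal sketch of how average bioequivalence is commonly assessed, the code below builds a 90% confidence interval for the geometric mean ratio of test to reference AUC on the log scale and checks it against the usual 80-125% limits; the data are hypothetical, and a full 2 × 2 crossover analysis would additionally adjust for period and sequence effects.

```python
import numpy as np
from scipy import stats

def average_bioequivalence(auc_test, auc_ref, alpha=0.05):
    """90% CI for the geometric mean ratio of test/reference AUC (paired subjects).

    Average bioequivalence is concluded if the CI lies within the 0.80-1.25 limits.
    A full 2x2 crossover analysis would also adjust for period and sequence effects.
    """
    d = np.log(np.asarray(auc_test, float)) - np.log(np.asarray(auc_ref, float))
    n = d.size
    mean, se = d.mean(), d.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(1 - alpha, df=n - 1)   # two one-sided tests at 5% each side
    lower, upper = np.exp(mean - t_crit * se), np.exp(mean + t_crit * se)
    return lower, upper, bool(lower >= 0.80 and upper <= 1.25)

# Hypothetical AUC values for 12 subjects receiving both formulations
rng = np.random.default_rng(0)
auc_ref = rng.lognormal(mean=4.0, sigma=0.2, size=12)
auc_test = auc_ref * rng.lognormal(mean=0.02, sigma=0.1, size=12)
print(average_bioequivalence(auc_test, auc_ref))
```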

    Network analysis of Diagnostic Medical Device Development for Infectious Diseases Prevalent in South Africa

    Infectious diseases are a major health concern in South Africa and many other developing countries. The local development of medical devices for infectious diseases in such settings, utilizing the local knowledge base, has the potential to improve the accuracy of and access to diagnoses and to lead to devices that are more context-appropriate and affordable. The aim of this project was to examine the landscape of diagnostic medical device development targeting infectious diseases prevalent in South Africa for the period 2000-2016, particularly with regard to collaboration between institutions in different sectors and the contributions of different collaborators. Such knowledge would be beneficial to future technological and policy developments aimed at improving access to diagnostic services and treatment in the South African context. Collaboration across four sectors was considered: university, hospital, industry, and science councils and facilities. A bibliometric analysis was performed, and publications documenting medical device development for the diagnosis of infectious diseases were extracted. Co-authorship of the journal and conference articles was used as a proxy for collaboration across organisations. Affiliation data extracted from the publications were used to generate a collaboration network. NetDraw, a network visualisation tool, was used to visualise the network, and network metrics such as degree centrality, betweenness centrality and closeness centrality, as well as group density measures, were produced using UCINET software. The collaboration network and the network metrics were used to determine which organisations have collaborated and which collaborators have played the most active and influential roles in diagnostic device development. The university sector was found to make the largest contribution to the development of diagnostic medical devices in South Africa, and also played a key role in transmitting information throughout the network due to its high frequency of connections to other organisations. The most prevalent type of inter-sectoral collaboration was between universities and science councils and facilities, while universities were found to collaborate most amongst themselves with regard to intra-sectoral collaboration. Foreign organisations played a prominent role in diagnostic device development between 2012 and 2016. Tuberculosis was the most prevalent infectious disease in diagnostic device development research, and computer-aided detection was a common feature of research on diagnostic device development.
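    The study computed its metrics with UCINET and visualised the network in NetDraw; purely to illustrate the same measures, the sketch below computes degree, betweenness and closeness centrality and overall density with networkx on a small hypothetical co-authorship network (the organisation names and ties are invented).

```python
import networkx as nx

# Hypothetical co-authorship ties between organisations (names are illustrative only)
edges = [
    ("University A", "Science Council X"),
    ("University A", "University B"),
    ("University B", "Hospital Y"),
    ("Industry Z", "University A"),
]

G = nx.Graph(edges)

# Centrality measures analogous to those reported from UCINET
degree = nx.degree_centrality(G)            # share of possible direct ties
betweenness = nx.betweenness_centrality(G)  # brokerage between other organisations
closeness = nx.closeness_centrality(G)      # average proximity to all other nodes
density = nx.density(G)                     # overall group density

for node in G:
    print(node, round(degree[node], 2), round(betweenness[node], 2), round(closeness[node], 2))
print("network density:", round(density, 2))
```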

    Automated design of genetic programming classification algorithms.

    Doctoral Degree. University of KwaZulu-Natal, Pietermaritzburg. Over the past decades, there has been an increase in the use of evolutionary algorithms (EAs) for data mining and knowledge discovery in a wide range of application domains. Data classification, a real-world application problem, is one of the areas in which EAs have been widely applied. Data classification has been extensively researched, resulting in the development of a number of EA-based classification algorithms. Genetic programming (GP) in particular has been shown to be one of the most effective EAs at inducing classifiers. It is widely accepted that the effectiveness of a parameterised algorithm like GP depends on its configuration. Currently, the design of GP classification algorithms is predominantly performed manually. Manual design follows an iterative trial-and-error approach which has been shown to be a menial, non-trivial, time-consuming task with a number of vulnerabilities. The research presented in this thesis is part of a large-scale initiative by the machine learning community to automate the design of machine learning techniques. The study investigates the hypothesis that automating the design of GP classification algorithms for data classification can still lead to the induction of effective classifiers. This research proposes using two evolutionary algorithms, namely a genetic algorithm (GA) and grammatical evolution (GE), to automate the design of GP classification algorithms. The proof-by-demonstration research methodology is used in the study to achieve the stated objectives. To that end, two systems, namely a genetic algorithm system and a grammatical evolution system, were implemented for automating the design of GP classification algorithms. The classification performance of the automatically designed GP classifiers, i.e., GA-designed GP classifiers and GE-designed GP classifiers, was compared to that of manually designed GP classifiers on real-world binary-class and multiclass classification problems. The evaluation was performed on multiple domain problems obtained from the UCI machine learning repository and on two specific domains, cybersecurity and financial forecasting. The automatically designed classifiers were found to outperform the manually designed GP classifiers on all the problems considered in this study. GP classifiers evolved by GE were found to be suitable for binary classification problems, while those evolved by a GA were found to be suitable for multiclass classification problems. Furthermore, the automated design time was found to be less than the manual design time. Fitness landscape analysis of the design spaces searched by the GA and GE was carried out on all the classes of problems considered in this study. Grammatical evolution found the search to be smoother on binary classification problems, while the GA found multiclass problems to be less rugged than binary-class problems.
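    As a toy illustration of the idea of automating GP design with a GA (not the thesis's actual GA, grammar, or encoding), the sketch below evolves a vector of GP configuration parameters; the fitness function is a placeholder standing in for the cross-validated accuracy of the GP classifier induced with each configuration.

```python
import random

# Candidate GP configuration: (population size, max tree depth, crossover rate, mutation rate)
BOUNDS = [(100, 500), (2, 8), (0.5, 0.95), (0.01, 0.3)]

def random_config():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def fitness(config):
    """Placeholder: in a real system this would be the validation accuracy of a GP
    classifier trained with this configuration (integer genes rounded before use)."""
    pop, depth, cx, mut = config
    # Toy surrogate rewarding deeper trees and operator rates near 0.8 and 0.1
    return depth / 8.0 + (1 - abs(cx - 0.8)) + (1 - abs(mut - 0.1))

def evolve(generations=30, pop_size=20, tournament=3, mut_sigma=0.1):
    population = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        next_pop = scored[:2]                                   # elitism: keep the best two
        while len(next_pop) < pop_size:
            p1 = max(random.sample(scored, tournament), key=fitness)  # tournament selection
            p2 = max(random.sample(scored, tournament), key=fitness)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]             # arithmetic crossover
            child = [min(max(g + random.gauss(0, mut_sigma * (hi - lo)), lo), hi)
                     for g, (lo, hi) in zip(child, BOUNDS)]           # bounded Gaussian mutation
            next_pop.append(child)
        population = next_pop
    return max(population, key=fitness)

print(evolve())
```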

    Analysis of operational risk in the South African banking sector using the standardised measurement approach

    Abstract: Over the last decade, financial markets across the world have been devastated by operational risk-related incidents. These incidents were caused by a number of factors, such as, inter alia, fraud, improper business practices, natural disasters, and technology failures. As new losses are incurred, they become part of each financial institution's internal loss database. The inclusion of these losses has caused notable upward spikes in the operational risk Pillar I regulatory capital charge for financial institutions across the board. The inherent imperfections in people, processes, and systems, whether by intention or oversight, are exposures that cannot be entirely eliminated from bank operations. Thus, the South African Reserve Bank mandates South African financial institutions to reserve capital to cover their idiosyncratic operational risk exposures. Investors fund the capital reserves held by financial institutions, and these stakeholders demand a viable return on their investment. Consequently, the relationship between risk exposure and capital held should be fully understood, managed, and optimised. This thesis extends the work of Sundmacher (2007) by testing one instance of Standardised Measurement Approach data against that of the Advanced Measurement Approach, the Standardised Approach, and the Basic Indicator Approach, to estimate the potential financial benefit that financial institutions in South Africa could attain or lose should they move from the Basic Indicator Approach to the Standardised Approach, from the Standardised Approach to the Advanced Measurement Approach, or from the Advanced Measurement Approach to the Standardised Measurement Approach. For the Advanced Measurement Approach, a Loss Distribution Approach coupled with a Monte Carlo simulation was used. Parametric models were imposed to generate the annual loss distribution through the convolution of the annual loss severity and frequency distributions. To fit the internal loss data for each class, the mean annual number of losses was calculated and assumed to follow a Poisson distribution. Maximum likelihood estimation was used to fit four severity distributions: the Lognormal, Weibull, Generalized Pareto, and Burr distributions. To determine goodness of fit, the Kolmogorov-Smirnov test at a 5% level of significance was used. To select the best-fitting distribution, the Akaike Information Criterion was used. Robustness and stability tests were then performed, using bootstrapping and stress testing respectively. Overall, we find that the Basel Committee on Banking Supervision's premise that there is value in a financial institution moving from the Basic Indicator Approach to the Standardised Approach, or from the Standardised Approach to the Advanced Measurement Approach, is indeed valid, but fails for the movement from the Advanced Measurement Approach to the Standardised Measurement Approach. The best Pillar I capital reprieve is offered by the Diversified Advanced Measurement Approach, whilst the second best is the Standardised Measurement Approach based on an average total loss threshold of €100k (0.87% higher than the Diversified Advanced Measurement Approach), closely followed by the default Standardised Measurement Approach based on an average total loss threshold of €20k (5.63% higher than the Diversified Advanced Measurement Approach).
    To the best of our knowledge, no existing work is comprehensive enough to include all four available operational risk quantification approaches (Basic Indicator Approach, Standardised Approach, Advanced Measurement Approach, and Standardised Measurement Approach) for the South African market in particular. This work foresees South African financial institutions pushing back on the implementation of the SMA, and potentially lobbying the regulator to remain on the AMA, as the alternative might mean increased capital requirements and hence reduced Economic Value Added to shareholders (more capital would be required at the same level of profitability or business activity). Financial institutions are anticipated to cite advanced modelling techniques as helping management gain a deeper understanding of their exposures, whilst the scenario analysis process gives them a method of identifying and quantifying their key risks (adding to management's toolset). However, if South African financial institutions want to compete on a global stage and to be accepted among 'internationally active' institutions, their adoption of the SMA may not be a choice but an obligation and an entry ticket to the game (global trade). M.Com. (Financial Economics)
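    As a minimal sketch of the Loss Distribution Approach workflow described above (Poisson frequency, maximum likelihood severity fits, Kolmogorov-Smirnov screening at 5%, AIC selection, and Monte Carlo convolution), the code below runs on synthetic losses, since the bank's internal data are proprietary; the 99.9% quantile is the usual Basel confidence level and is assumed here rather than quoted from the thesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
losses = stats.lognorm.rvs(s=1.2, scale=50_000, size=800, random_state=rng)  # synthetic severities
years = 10                                   # observation window for the frequency estimate
lam = len(losses) / years                    # Poisson mean annual number of losses

# Maximum likelihood fits, KS goodness-of-fit, and AIC for each candidate severity model
candidates = {"lognorm": stats.lognorm, "weibull_min": stats.weibull_min,
              "genpareto": stats.genpareto}
fits = {}
for name, dist in candidates.items():
    params = dist.fit(losses)
    ks_p = stats.kstest(losses, name, args=params).pvalue
    aic = 2 * len(params) + 2 * dist.nnlf(params, losses)
    fits[name] = (params, ks_p, aic)

adequate = [n for n in fits if fits[n][1] > 0.05] or list(fits)  # KS test at 5% significance
best = min(adequate, key=lambda n: fits[n][2])                   # lowest AIC among adequate fits

# Monte Carlo convolution of the Poisson frequency with the selected severity distribution
params = fits[best][0]
annual_losses = np.array([
    candidates[best].rvs(*params, size=rng.poisson(lam), random_state=rng).sum()
    for _ in range(20_000)
])
capital = np.quantile(annual_losses, 0.999)  # 99.9% quantile of the annual loss distribution
print(best, round(capital))
```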

    A cost benefit analysis of operational risk quantification methods for regulatory capital

    Operational risk has attracted a sizeable amount of attention in recent years as a result of massive operational losses that headlined financial markets across the world. These operational risk losses came on the back of litigation cases and regulatory fines, some of which originated from the 2008 global financial crisis. As a result, it is compulsory for financial institutions to reserve capital for the operational risk exposures inherent in their business activities. Local financial institutions are free to use any of the following operational risk capital estimation methods: the Advanced Measurement Approach (AMA), the Standardised Approach (TSA) and/or the Basic Indicator Approach (BIA). The BIA and TSA are predetermined by the Reserve Bank, whilst the AMA relies on internally generated methodologies. The estimation approaches employed in this study were initially introduced by the BCBS and are premised on increasing sophistication, intended to incentivise banks to continually advance their management and measurement methods while benefiting from a lower capital charge as they graduate from the least to the most sophisticated measurement tool. However, in contrast to the BCBS's premise, Sundmacher (2007), using a hypothetical example, finds that, depending on the distribution of a financial institution's gross income, the incentive to move from the BIA to TSA is non-existent or marginal at best. In this thesis I extend the work of Sundmacher (2007) and test one instance of AMA regulatory capital (RegCap) against that of TSA, in a bid to crystallise the rand benefit that financial institutions stand to attain (if any) should they move from TSA to AMA. A Loss Distribution Approach (LDA), coupled with a Monte Carlo simulation, was used in modelling the AMA. In modelling the loss severities, the Lognormal, Weibull, Burr, Generalized Pareto, Pareto and Gamma distributions were considered, whilst the Poisson distribution was used for modelling operational loss frequency. The Kolmogorov-Smirnov test and the Akaike information criterion were used for assessing goodness of fit and for model selection, respectively. The robustness and stability of the model were gauged using stress testing and bootstrapping. The TSA modelling design used the predetermined beta values for the different business lines specified by the BCBS. The findings show that the Lognormal and Burr distributions best describe the empirical data. Additionally, there is a substantial incentive, in terms of the rand benefit, to migrate from TSA to AMA in estimating operational risk capital. The initial benefit could be directed towards changes in information technology systems in order to effect the change from TSA to AMA. Notwithstanding that the data set used in this thesis is restricted to just one of the "big four banks" (owing to proprietary restrictions), the methodology is generalisable to the other big banks within South Africa. The scope of this study could further be extended to cover Extreme Value Theory, non-parametric empirical sampling, Markov Chain Monte Carlo, and Bayesian approaches in estimating operational risk capital.
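    Since the AMA modelling mirrors the Loss Distribution Approach sketched under the previous abstract, the short sketch below instead illustrates the formula-based BIA and TSA charges referred to here, using the Basel II alpha of 15% and the published business-line betas; the gross income figures are hypothetical, and the thesis's exact regulatory parameterisation is not reproduced.

```python
# Illustrative BIA and TSA capital charges (Basel II formulas); gross income figures are hypothetical.
BETAS = {  # business-line betas under the Standardised Approach
    "corporate_finance": 0.18, "trading_sales": 0.18, "retail_banking": 0.12,
    "commercial_banking": 0.15, "payment_settlement": 0.18, "agency_services": 0.15,
    "asset_management": 0.12, "retail_brokerage": 0.12,
}
ALPHA = 0.15  # Basic Indicator Approach alpha

def bia_capital(annual_gross_income):
    """15% of average annual gross income, ignoring years with non-positive GI."""
    positive = [gi for gi in annual_gross_income if gi > 0]
    return ALPHA * sum(positive) / len(positive) if positive else 0.0

def tsa_capital(gi_by_line_per_year):
    """Three-year average of the (floored-at-zero) sum of beta-weighted gross income."""
    yearly = [max(sum(BETAS[line] * gi for line, gi in year.items()), 0.0)
              for year in gi_by_line_per_year]
    return sum(yearly) / len(yearly)

# Hypothetical three-year gross income by business line (in R millions)
gi_years = [
    {"retail_banking": 900, "trading_sales": 300, "commercial_banking": 400},
    {"retail_banking": 950, "trading_sales": 250, "commercial_banking": 420},
    {"retail_banking": 1000, "trading_sales": 280, "commercial_banking": 450},
]
print(bia_capital([sum(y.values()) for y in gi_years]), tsa_capital(gi_years))
```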

    SINGLE AVIATION MARKETS AND CONTESTABILITY THEORY: GETTING THE POLICY BEARINGS RIGHT

    This paper looks at the theory of contestable markets as it relates to the aviation industry, particularly its contribution to the deregulation debate and its subsequent extension to the evaluation of welfare benefits in single aviation markets. The conclusion reached is that the theory has limited usefulness as far as policy formulation in aviation markets is concerned.

    Supported Cobalt Oxide Catalysts for the Preferential Oxidation of Carbon Monoxide: An in situ Investigation

    The study presented in this thesis focuses on Co3O4-based catalysts for producing CO-free, H2-rich gas streams for power generation using proton-exchange or polymer electrolyte membrane fuel cells (PEMFCs). The removal of CO (0.5 – 2%) is essential as it negatively affects the performance of the Pt-based anode catalyst of PEMFCs. Among the various CO removal processes reported, the preferential oxidation of CO (CO-PrOx) to CO2 is a very attractive catalytic process for decreasing the CO content to acceptable levels (i.e., < 10 ppm) for operating the PEMFC. Co3O4-based catalysts have shown very good catalytic activity for the total oxidation of CO in the absence of H2, H2O and CO2. More specifically, the performance of Co3O4 is known to be influenced by numerous factors such as particle size, particle shape, and the preparation method. As a result, there has also been growing interest in Co3O4 as a cheaper alternative to noble metals for the CO-PrOx reaction. However, the H2 (40 – 75%) in the CO-PrOx feed can also react with O2 (0.5 – 4%) to produce H2O, which consequently decreases the selectivity towards CO2 (based on the total O2 conversion). Aside from H2, the CO-PrOx feed also contains H2O and CO2, which may affect the CO oxidation process as well. The use of Co3O4 as the active catalyst for CO-PrOx can have shortcomings, the main one being its relatively high susceptibility to reduction by H2, forming less active and selective Co-based phases (viz., CoO and metallic Co). Particularly over metallic Co, the conversion pathway of CO can change from oxidation to hydrogenation, forming CH4 instead of CO2. Therefore, the first objective of the work carried out was to investigate the effect of the gas feed components (viz., H2, H2O and CO2; co-fed individually and simultaneously) on the progress of the CO oxidation reaction and the phase stability of Co3O4 over a wide temperature range (50 – 450 °C). It should be noted that the presence of these three gases can also introduce further side reactions, viz., the forward and reverse water-gas shift reactions, as well as CO and CO2 methanation. In the supported state, the choice of support, as well as the nature and/or strength of the interaction between the Co3O4 nanoparticles and the support, can influence catalytic performance and phase stability. CO oxidation over metal oxides such as Co3O4 is believed to proceed via the Mars-van Krevelen mechanism, which depends on the surface of the catalyst being reducible in order to release lattice oxygen species. Generally, strong metal-support interactions (MSIs) or nanoparticle-support interactions (NPSIs) can hinder the removal of surface (and bulk) oxygen species, which can negatively affect catalytic performance. Strong interactions can also promote a solid-state reaction between species from the nanoparticle and those from the support, leading to the formation of metal-support compounds (MSCs). The supports SiO2, TiO2 and Al2O3 are well known for this phenomenon and consequently allow for the formation of silicates, titanates and aluminates, respectively. Support materials such as CeO2, ZrO2 and SiC are not known for interacting strongly with nanoparticles and often do not react to form MSCs. Therefore, the second objective of this Ph.D. study was to investigate the effect of different support materials (viz., CeO2, ZrO2, SiC, SiO2, TiO2 and Al2O3) on the catalytic performance and phase stability of Co3O4 under different CO-PrOx reaction gas environments.
    Before carrying out the lab-based experiments, theoretical evaluations were performed by means of thermodynamic calculations based on the Gibbs-Helmholtz equation. The calculations helped determine the equilibrium conversions of each gas-phase reaction, revealing the extent to which a given reaction can be expected to take place between 0 and 500 °C. Thermodynamic calculations were also performed to predict the stability of Co3O4, CoO and metallic Co at different temperatures and partial pressure ratios of H2 to H2O. In the case of supported nanoparticles, the formation of the Co-support compounds Co2SiO4, CoTiO3 and CoAl2O4 (from SiO2, TiO2 and Al2O3, respectively) was shown to be thermodynamically feasible in H2-H2O mixtures. Unsupported Co3O4 nanoparticles were synthesised using the reverse microemulsion technique, while supported Co3O4 nanoparticles were prepared using incipient wetness impregnation. In situ PXRD- and magnetometry-based CO-PrOx catalytic testing was performed in different gas environments as depicted in Figure S.1. The conditions chosen allowed the effect of H2, H2O and CO2 on the progress of the CO oxidation reaction and on the reducibility of Co3O4 to be studied. For the first time, this work has identified all the possible gas-phase side reactions (in addition to CO oxidation) that can take place under CO-PrOx conditions. Each reaction could be linked to a specific Co-based phase responsible for its occurrence. Furthermore, the temperatures and the extent to which these reactions take place were in line with the predictions from the thermodynamic calculations. The presence of a support does stabilise the Co3O4 (and CoO) phase over a wide temperature range. Over the weakly interacting supports (i.e., ZrO2 and SiC), high CO conversions (91.5% and 80.8%, respectively) and O2 selectivities to CO2 (55.2% and 55.9%, respectively) could be obtained, in addition to the improved phase stability of Co3O4. In agreement with the thermodynamic predictions, the presence of Co2SiO4 (7.7%), CoTiO3 (13.8% from TiO2-anatase and 8.9% from TiO2-rutile) and CoAl2O4 (26.6%) was confirmed using ex situ X-ray Absorption Spectroscopy in the spent samples of Co3O4/SiO2, Co3O4/TiO2-anatase, Co3O4/TiO2-rutile and Co3O4/Al2O3, respectively, after CO-PrOx. These samples also exhibited relatively low CO oxidation activities and selectivities, as well as low Co3O4 reducibility.
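    The thesis's thermodynamic workings are not reproduced in the abstract; the relations below are the standard ones on which such equilibrium and phase-stability estimates are usually built (an assumption about the method, given for orientation only): the Gibbs-Helmholtz equation for the temperature dependence of the reaction Gibbs energy, and its link to the equilibrium constant from which equilibrium conversions follow.

```latex
% Standard relations (assumed, not quoted from the thesis) behind equilibrium-conversion
% and phase-stability estimates as a function of temperature T:
\[
  \left( \frac{\partial\, (\Delta_r G^{\circ}/T)}{\partial T} \right)_{p}
    = -\frac{\Delta_r H^{\circ}}{T^{2}},
  \qquad
  \Delta_r G^{\circ}(T) = -R\,T \ln K(T),
  \qquad
  K(T) = \prod_i a_i^{\nu_i}\Big|_{\mathrm{eq}} .
\]
% For example, for CO + 1/2 O2 -> CO2 a very large K(T) at low temperature corresponds to
% near-complete equilibrium CO conversion, while the K(T) of the Co-oxide/H2/H2O equilibria
% fixes the H2-to-H2O partial pressure ratio at which Co3O4, CoO or metallic Co is stable.
```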