
    A closer look at ARSA activity in a patient with metachromatic leukodystrophy.

    Metachromatic leukodystrophy (MLD) is an autosomal recessive lysosomal storage disease mainly caused by a deficiency of arylsulfatase A activity. The typical clinical course of patients with the late infantile form includes a regression in motor skills with progression to dysphagia, seizures, hypotonia and death. We present the case of a 4-year-old female with rapidly progressive developmental regression, with loss of motor milestones, spasticity and dysphagia. MRI showed volume loss and markedly abnormal deep white matter. Enzymatic testing in one laboratory showed arylsulfatase A activity within that laboratory's normal range. However, urine analysis in a second laboratory showed a large increase in sulfatide excretion. Measurement of arylsulfatase A in that laboratory showed a partial decrease in arylsulfatase A activity measured under typical conditions (about 37% of the normal mean). When the concentration of substrate in the assay was lowered to one quarter of that normally used, this individual had activity <10% of controls. The patient was found to be homozygous for an unusual missense mutation in the arylsulfatase A gene, confirming the diagnosis of MLD. This case illustrates the importance of careful biochemical and molecular testing for MLD when there is suspicion of this diagnosis.
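    A plausible kinetic reading of this finding (an assumption on our part; the abstract does not state the mutation's mechanism) is a mutant enzyme with reduced substrate affinity, i.e. an elevated Michaelis constant Km: at the high substrate concentration of the standard assay the variant appears only partially deficient, while at low substrate concentration its activity collapses. The sketch below illustrates this with hypothetical Michaelis–Menten parameters.

```python
# Illustrative only: why lowering the assay's substrate concentration can
# unmask a kinetic arylsulfatase A variant. The Vmax/Km values are
# hypothetical, not measurements from this case.

def velocity(s, vmax, km):
    """Michaelis-Menten reaction velocity at substrate concentration s."""
    return vmax * s / (km + s)

S_STANDARD = 1.0                  # arbitrary units: standard assay level
S_REDUCED = S_STANDARD / 4        # one quarter of the usual concentration

NORMAL = dict(vmax=1.0, km=0.1)   # high-affinity wild-type enzyme
VARIANT = dict(vmax=0.8, km=2.0)  # hypothetical low-affinity mutant

for s in (S_STANDARD, S_REDUCED):
    ratio = velocity(s, **VARIANT) / velocity(s, **NORMAL)
    print(f"[S] = {s:.2f}: variant activity = {100 * ratio:.0f}% of normal")
```

    At the standard substrate level the variant retains a substantial fraction of normal activity, but at a quarter of that level it falls much further, mirroring the discrepancy between the two laboratories' assays.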

    On the Use of Mutation Faults in Empirical Assessments of Test Case Prioritization Techniques

    Regression testing is an important activity in the software life cycle, but it can also be very expensive. To reduce the cost of regression testing, software testers may prioritize their test cases so that those which are more important, by some measure, are run earlier in the regression testing process. One potential goal of test case prioritization techniques is to increase a test suite’s rate of fault detection (how quickly, in a run of its test cases, that test suite can detect faults). Previous work has shown that prioritization can improve a test suite’s rate of fault detection, but the assessment of prioritization techniques has been limited primarily to hand-seeded faults, largely due to the belief that such faults are more realistic than automatically generated (mutation) faults. A recent empirical study, however, suggests that mutation faults can be representative of real faults and that the use of hand-seeded faults can be problematic for the validity of empirical results focusing on fault detection. We have therefore designed and performed two controlled experiments assessing the ability of prioritization techniques to improve the rate of fault detection of test suites, measured relative to mutation faults. Our results show that prioritization can be effective relative to the faults considered, and they expose ways in which that effectiveness can vary with characteristics of faults and test suites. More importantly, a comparison of our results with those collected using hand-seeded faults reveals several implications for researchers performing empirical studies of test case prioritization techniques in particular, and of testing techniques in general.
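    In this line of work, the rate of fault detection is conventionally quantified by the APFD metric (Average Percentage of Faults Detected); the abstract does not name its metric, so taking APFD as the measure here is an assumption. A minimal sketch:

```python
# APFD (Average Percentage of Faults Detected): for n tests and m faults,
# APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where TF_i is the position
# of the first test that detects fault i.

def apfd(ordering, fault_matrix):
    """ordering: list of test ids in execution order.
    fault_matrix: dict mapping test id -> set of faults it detects."""
    faults = set().union(*fault_matrix.values())
    n, m = len(ordering), len(faults)
    first_detect = {}
    for pos, test in enumerate(ordering, start=1):
        for fault in fault_matrix.get(test, ()):
            first_detect.setdefault(fault, pos)
    # Convention: a fault never detected counts as position n + 1.
    total = sum(first_detect.get(f, n + 1) for f in faults)
    return 1 - total / (n * m) + 1 / (2 * n)

# Hypothetical data: reordering the same tests changes APFD.
detects = {"t1": {"f1"}, "t2": {"f1", "f2"}, "t3": {"f3"}}
print(apfd(["t1", "t2", "t3"], detects))  # 0.50: slower fault detection
print(apfd(["t2", "t3", "t1"], detects))  # 0.72: faster fault detection
```

    An ordering that detects faults earlier yields a higher APFD, which is what the prioritization techniques are assessed on.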

    Improved detection of Probe Request Attacks : Using Neural Networks and Genetic Algorithm

    The Media Access Control (MAC) layer of the Institute of Electrical and Electronics Engineers (IEEE) 802.11 wireless protocol is based on the exchange of request and response messages. Probe Request Flooding Attacks (PRFA) exploit this design flaw to reduce network performance or prevent legitimate users from accessing network resources. The vulnerability is amplified because beacon, probe request and probe response frames are transmitted in the clear. This research detects PRFA in Wireless Local Area Networks (WLAN) using a supervised feedforward Neural Network (NN). The NN converged well with a 70/15/15 train/validation/test split and 20 hidden neurons. The effectiveness of an Intrusion Detection System depends on its prediction accuracy. This paper presents optimisation of the NN using Genetic Algorithms (GA). The GA sought to maximise the performance of the model as measured by the linear regression coefficient (R), and achieved R > 0.95. The novelty of this research lies in the fact that the NN accepts user and attacker training data captured separately; hence, security administrators do not have to perform the painstaking task of manually identifying and labelling individual frames prior to training. The GA provides a reliable NN model and characterises the behaviour of the NN across diverse configurations.
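    A minimal sketch of the described setup, under assumptions: the per-frame features below are synthetic stand-ins (real inputs would be extracted from captured 802.11 probe-request traffic, and the paper's exact feature set is not given here), and scikit-learn's MLPClassifier stands in for the paper's NN implementation.

```python
# A feedforward NN with 20 hidden neurons trained on a 70/15/15
# train/validation/test split, as described in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))           # hypothetical per-frame features
y = (X[:, 0] + X[:, 3] > 1).astype(int)  # 1 = attacker traffic (synthetic)

# 70% train, then split the remaining 30% evenly into validation and test.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.70, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=0)

nn = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
nn.fit(X_train, y_train)
print("validation accuracy:", nn.score(X_val, y_val))
print("test accuracy:", nn.score(X_test, y_test))
```

    A GA would then search over configurations (e.g. hidden-neuron count or learning rate) to maximise the validation measure, which is the role it plays in the paper.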

    HYBRID DATA APPROACH FOR SELECTING EFFECTIVE TEST CASES DURING THE REGRESSION TESTING

    In the software industry, software testing is one of the most important activities in the entire software development life cycle, and it is a fundamental component of software quality assurance. The Software Testing Life Cycle (STLC) is the process of testing the complete software, and includes regression testing, unit testing, smoke testing, integration testing, interface testing, system testing, etc. Within regression testing, test case selection is one of the most important concerns, both for testing effectiveness and for the cost of the testing process. During regression testing, executing all the test cases in the existing test suite is impractical because it takes too long to test the modified software. This paper proposes a new hybrid approach that combines a modified greedy approach for test case selection with a Genetic Algorithm that uses parameters such as the initial population, fitness value, test case combination, test case crossover and test case mutation to optimize the selected test suite. In this way, effective test cases are selected and the test suite is minimized, reducing the cost of the testing process. Finally, the results of the proposed approach are compared with the conventional greedy approach, showing that it is more effective than the existing approaches.
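    A minimal sketch of the GA half of such a hybrid, using the parameters the abstract names (initial population, fitness value, crossover, mutation). The coverage data and the fitness function (coverage reward minus a size penalty) are assumptions for illustration; the modified greedy selection step is not shown.

```python
# GA for test-suite minimization: individuals are bitstrings selecting
# test cases; fitness rewards requirement coverage and penalizes size.
import random

random.seed(0)

# Hypothetical data: test case index -> set of requirements it covers.
COVERAGE = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}, 4: {5}}
ALL_REQS = set().union(*COVERAGE.values())
N = len(COVERAGE)

def fitness(ind):
    """Reward requirement coverage, penalize suite size."""
    covered = set()
    for i, bit in enumerate(ind):
        if bit:
            covered |= COVERAGE[i]
    return len(covered) / len(ALL_REQS) - 0.05 * sum(ind)

def crossover(a, b):
    """One-point test case crossover."""
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    """Bit-flip test case mutation."""
    return [bit ^ (random.random() < rate) for bit in ind]

# Initial population of candidate suites (bitstrings over test cases).
pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(10)]

best = max(pop, key=fitness)
print("selected test cases:", [i for i in range(N) if best[i]])
```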

    Predicting drug response of tumors from integrated genomic profiles by deep neural networks

    The study of high-throughput genomic profiles from a pharmacogenomics viewpoint has provided unprecedented insights into the oncogenic features modulating drug response. A recent screening of ~1,000 cancer cell lines against a collection of anti-cancer drugs illuminated the link between genotypes and vulnerability. However, due to essential differences between cell lines and tumors, translation into predictions of drug response in tumors remains challenging. Here we propose a deep neural network (DNN) model to predict drug response based on the mutation and expression profiles of a cancer cell line or a tumor. The model contains a mutation encoder and an expression encoder, pre-trained using a large pan-cancer dataset to abstract core representations of high-dimensional data, followed by a drug-response predictor network. Given a pair of mutation and expression profiles, the model predicts IC50 values for 265 drugs. We trained and tested the model on a dataset of 622 cancer cell lines and achieved an overall prediction performance of 1.96 mean squared error (on log-scale IC50 values). The performance was superior, in prediction error and stability, to two classical methods and to four analogous DNN variants of our model. We then applied the model to predict the drug response of 9,059 tumors of 33 cancer types. The model predicted both known drug targets, including EGFR inhibitors in non-small cell lung cancer and tamoxifen in ER+ breast cancer, and novel ones. The comprehensive analysis further revealed the molecular mechanisms underlying resistance to the chemotherapeutic drug docetaxel in a pan-cancer setting, and the anti-cancer potential of a novel agent, CX-5461, in treating gliomas and hematopoietic malignancies. Overall, our model and findings improve the prediction of drug response and the identification of novel therapeutic options. Comment: Accepted for presentation at the International Conference on Intelligent Biology and Medicine (ICIBM 2018) in Los Angeles, CA, USA. Currently under consideration for publication in a Supplement Issue of BMC Genomics.
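    A minimal Keras sketch of the described architecture, with hypothetical input dimensions and layer sizes: a mutation encoder and an expression encoder feed a predictor that outputs log-scale IC50 values for 265 drugs. In the paper the encoders are first pre-trained on a large pan-cancer dataset; that step is omitted here.

```python
# Two-encoder DNN for drug-response prediction (dimensions are assumed).
from tensorflow import keras
from tensorflow.keras import layers

N_MUT, N_EXPR, N_DRUGS = 3000, 6000, 265  # hypothetical feature dimensions

mut_in = keras.Input(shape=(N_MUT,), name="mutations")
expr_in = keras.Input(shape=(N_EXPR,), name="expression")

# Encoders (pre-trained in the paper, then reused here).
mut_code = layers.Dense(256, activation="relu")(
    layers.Dense(1024, activation="relu")(mut_in))
expr_code = layers.Dense(256, activation="relu")(
    layers.Dense(1024, activation="relu")(expr_in))

# Drug-response predictor on the concatenated representations.
h = layers.Dense(512, activation="relu")(layers.concatenate([mut_code, expr_code]))
ic50 = layers.Dense(N_DRUGS, name="log_ic50")(h)  # linear output for regression

model = keras.Model([mut_in, expr_in], ic50)
model.compile(optimizer="adam", loss="mse")  # matches the reported MSE metric
model.summary()
```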

    A Survey on Software Testing Techniques using Genetic Algorithm

    The overall aim of the software industry is to ensure the delivery of high-quality software to the end user. To ensure high quality, software must be tested; testing ensures that software meets user specifications and requirements. However, the field of software testing has a number of underlying issues, such as the effective generation of test cases and the prioritisation of test cases, which need to be tackled. These issues increase the effort, time and cost of testing. Different techniques and methodologies have been proposed for addressing them. The use of evolutionary algorithms for automatic test generation has been an area of interest for many researchers, and the Genetic Algorithm (GA) is one such evolutionary algorithm. In this paper, we present a survey of GA approaches for addressing the various issues encountered during software testing. Comment: 13 pages.
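    As one concrete example of how a GA is applied to test generation (a standard search-based-testing formulation, not a technique attributed to this survey): a "branch distance" fitness function measures how far an input is from taking a target branch, giving the search a gradient to follow.

```python
# GA-style test-data generation guided by branch distance (illustrative).
import random

random.seed(1)

def branch_distance(x, y):
    """How far input (x, y) is from taking the target branch `x == y`
    in the program under test; 0 means the branch is covered."""
    return abs(x - y)

# Evolve integer input pairs toward covering the target branch.
pop = [(random.randint(-100, 100), random.randint(-100, 100)) for _ in range(30)]
for gen in range(200):
    pop.sort(key=lambda ind: branch_distance(*ind))
    if branch_distance(*pop[0]) == 0:
        print(f"generation {gen}: branch covered by input {pop[0]}")
        break
    survivors = pop[:10]
    pop = survivors + [
        (random.choice(survivors)[0] + random.randint(-5, 5),
         random.choice(survivors)[1] + random.randint(-5, 5))
        for _ in range(20)
    ]
else:
    print("best input found:", min(pop, key=lambda ind: branch_distance(*ind)))
```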

    A Model to Estimate First-Order Mutation Coverage from Higher-Order Mutation Coverage

    The test suite is essential for fault detection during software development. First-order mutation coverage is an accurate metric for quantifying the quality of a test suite, but it is computationally expensive, and so its adoption has been limited. In this study, we address this issue by proposing a realistic model able to estimate first-order mutation coverage using only higher-order mutation coverage. Our study shows how the estimate evolves with the order of mutation. We validate the model with an empirical study of 17 open-source projects. Comment: 2016 IEEE International Conference on Software Quality, Reliability, and Security. 9 pages.
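    To make the model's inputs and output concrete: mutation coverage (the mutation score) is the fraction of mutants a test suite kills, where a first-order mutant applies one change to the program and a higher-order mutant combines several. The sketch below computes both scores for a toy program; the paper's estimation model itself is not specified in the abstract and is not reproduced here.

```python
# First-order vs. higher-order mutation score for a toy program
# f(x) = a*x + b, with mutants expressed as coefficient changes.
import itertools

ORIGINAL = (2, 1)   # (a, b) for f(x) = 2*x + 1
TESTS = [0, 1, 3]   # hypothetical test inputs

def run(coeffs, x):
    a, b = coeffs
    return a * x + b

def killed(mutant):
    """A mutant is killed if some test observes a different output."""
    return any(run(mutant, x) != run(ORIGINAL, x) for x in TESTS)

def score(mutants):
    return sum(map(killed, mutants)) / len(mutants)

# First-order mutants: change exactly one coefficient.
deltas = [(1, 0), (-1, 0), (0, 1), (0, -1)]
first_order = [(ORIGINAL[0] + da, ORIGINAL[1] + db) for da, db in deltas]

# Second-order mutants: combine two distinct first-order changes
# (note some combinations cancel out, yielding equivalent mutants).
second_order = [(ORIGINAL[0] + d1[0] + d2[0], ORIGINAL[1] + d1[1] + d2[1])
                for d1, d2 in itertools.combinations(deltas, 2)]

print("first-order mutation score:", score(first_order))    # 1.0
print("second-order mutation score:", score(second_order))  # ~0.67
```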