Selection Criteria for the Honors Program in Azerbaijan
Designing effective selection procedures for honors programs is always a challenging task. In Azerbaijan, selection is based on three main criteria: (i) student performance in the centralized university admission test; (ii) student performance in the first year of studies; and (iii) student performance in the honors program selection test. This research identifies the criteria most crucial in predicting student success in honors programs. An analysis was first conducted for all honors students. Results indicate that all three criteria used in the selection process are highly significant predictors of student success in the program. The same analysis was then applied separately to each degree program, demonstrating that not all criteria are significant for some programs. These results suggest that creating differentiated selection procedures for different degree programs might be more efficient.
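The kind of significance analysis the abstract describes can be sketched as a logistic regression of program success on the three criteria. Everything below is illustrative: the data are simulated and the effect sizes are invented, not taken from the study.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n = 2000

# Three hypothetical predictors standing in for the criteria in the abstract:
# admission-test score, first-year performance, honors selection-test score.
X = rng.normal(size=(n, 3))
beta_true = np.array([0.8, 1.0, 0.6])            # invented effect sizes
prob = 1 / (1 + np.exp(-(X @ beta_true)))
y = (rng.random(n) < prob).astype(float)         # 1 = "success in the program"

# Newton-Raphson fit of a logistic regression with an intercept
Xd = np.column_stack([np.ones(n), X])
beta = np.zeros(4)
for _ in range(25):
    mu = 1 / (1 + np.exp(-(Xd @ beta)))
    W = mu * (1 - mu)
    beta += np.linalg.solve(Xd.T @ (Xd * W[:, None]), Xd.T @ (y - mu))

# Wald z-statistics and two-sided p-values for each criterion
mu = 1 / (1 + np.exp(-(Xd @ beta)))
W = mu * (1 - mu)
cov = np.linalg.inv(Xd.T @ (Xd * W[:, None]))
se = np.sqrt(np.diag(cov))
z = beta / se
pvals = np.array([2 * (1 - 0.5 * (1 + erf(abs(zi) / sqrt(2)))) for zi in z])
print(dict(zip(["intercept", "admission", "first_year", "selection"],
               pvals.round(6))))
```

With all three simulated effects nonzero, each criterion comes out as a significant predictor; running the same fit per degree program, as the study does, would reveal where some predictors lose significance.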
Topics In Forward Stepwise Logistic Regression
In this dissertation, five topics related to the process and prediction of forward stepwise logistic regression are investigated. Forward stepwise logistic regression involves selection and stopping criteria. Seven selection criteria are used: the likelihood ratio statistic, Lawless and Singhal's (1978) statistic, the Wald statistic, the score statistic, Peduzzi, Hardy, and Holford's (1980) statistic, Lee and Koval's statistic (LK), and a sweep operator's statistic (SW). Five stopping criteria are used: a $\chi^2$ test based on a fixed $\alpha$ level, the minimum value of ERR, the minimum value of the $C_p$ statistic (Hosmer, 1989), the minimum value of the Akaike information criterion (Akaike, 1974), and the minimum value of Schwarz's criterion (Schwarz, 1978). The apparent error rate (ARR) tends to underestimate the true error rate (ERR). In our study, the estimated true error rate (ERR) is obtained by ERR = ARR + $\omega$, where $\omega$ is Efron's (1986) parametric estimate of the bias of ARR. We use Monte Carlo simulation with both multivariate normal and multivariate binary independent variables; we implement the simulation with SAS/IML programs. We then analyze the experimental design to see which factors of the distribution of independent variables affect various outcomes. As a result, we first recommend the best $\alpha$ level for the $\chi^2_{(\alpha)}$ stopping criterion. Second, we compare the order of variables selected by different selection criteria. Third, we investigate the effects of different structures of predictor variables on ARR, $\omega$, and ERR. Fourth, we compare the sizes of subset models determined by different stopping criteria. Finally, we compare the performances of selection and stopping criteria in terms of ERR.
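The forward stepwise procedure above can be sketched in a few lines. This is a minimal illustration, not the dissertation's SAS/IML implementation: entry is decided by the AIC (one of the five stopping criteria listed), the likelihood-based fit stands in for the seven entry statistics compared, and the data are simulated.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Newton-Raphson logistic fit; returns (beta, log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-(X @ beta)))
        W = mu * (1 - mu) + 1e-9                 # guard against singularity
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - mu))
    mu = np.clip(1 / (1 + np.exp(-(X @ beta))), 1e-12, 1 - 1e-12)
    return beta, np.sum(y * np.log(mu) + (1 - y) * np.log(1 - mu))

def forward_stepwise_aic(X, y):
    """Add the variable that most lowers AIC; stop when none lowers it."""
    n, k = X.shape
    selected, remaining = [], list(range(k))
    ones = np.ones((n, 1))
    _, ll = fit_logistic(ones, y)
    best_aic = -2 * ll + 2 * 1                   # intercept-only model
    while remaining:
        scores = []
        for j in remaining:
            Xj = np.column_stack([ones, X[:, selected + [j]]])
            _, ll = fit_logistic(Xj, y)
            scores.append((-2 * ll + 2 * (len(selected) + 2), j))
        aic, j = min(scores)
        if aic >= best_aic:                      # stopping criterion
            break
        best_aic = aic
        selected.append(j)
        remaining.remove(j)
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 6))
# columns 0 and 3 carry signal and should enter early; the rest are noise
y = (rng.random(1000) < 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 3])))).astype(float)
sel = forward_stepwise_aic(X, y)
print("selected columns:", sel)
```

Swapping the AIC penalty of 2 for `log(n)` gives Schwarz's criterion, which typically stops earlier and selects smaller subset models, one of the comparisons the dissertation makes.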
Search algorithms for regression test case prioritization
Regression testing is an expensive, but important, process. Unfortunately, there may be insufficient resources to allow for the re-execution of all test cases during regression testing. In this situation, test case prioritisation techniques aim to improve the effectiveness of regression testing, by ordering the test cases so that the most beneficial are executed first. Previous work on regression test case prioritisation has focused on Greedy Algorithms. However, it is known that these algorithms may produce sub-optimal results, because they may construct results that denote only local minima within the search space. By contrast, meta-heuristic and evolutionary search algorithms aim to avoid such problems. This paper presents results from an empirical study of the application of several greedy, meta-heuristic and evolutionary search algorithms to six programs, ranging from 374 to 11,148 lines of code, for three choices of fitness metric. The paper addresses the problems of choice of fitness metric, characterisation of landscape modality and determination of the most suitable search technique to apply. The empirical results replicate previous results concerning Greedy Algorithms. They shed light on the nature of the regression testing search space, indicating that it is multi-modal. The results also show that Genetic Algorithms perform well, although Greedy approaches are surprisingly effective, given the multi-modal nature of the landscape.
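The "additional greedy" strategy that papers in this area typically compare against can be sketched as follows, together with APFD (average percentage of faults detected), a common prioritisation fitness metric. The coverage and fault-detection data here are invented for illustration; the paper's actual programs and metrics differ.

```python
def additional_greedy(coverage):
    """Repeatedly pick the test covering the most not-yet-covered statements.

    coverage: test name -> set of covered statements (assumed non-empty).
    When no remaining test adds coverage, reset and continue (the standard
    'additional greedy' restart), so every test ends up in the ordering.
    """
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not (remaining[best] - covered):
            covered = set()                      # restart on saturation
            continue
        order.append(best)
        covered |= remaining.pop(best)
    return order

def apfd(order, detects):
    """APFD for an ordering; detects: fault -> set of tests revealing it."""
    n, m = len(order), len(detects)
    pos = {t: i for i, t in enumerate(order, 1)}
    tf = [min(pos[t] for t in tests) for tests in detects.values()]
    return 1 - sum(tf) / (n * m) + 1 / (2 * n)

# invented example data
coverage = {"t1": {1, 2, 3}, "t2": {1, 4}, "t3": {5}, "t4": {2, 3}}
detects = {"f1": {"t3"}, "f2": {"t1", "t4"}}

order = additional_greedy(coverage)
print(order, apfd(order, detects))
```

A greedy ordering like this can be locally optimal yet globally sub-optimal, which is exactly the multi-modal-landscape concern that motivates comparing it against meta-heuristic and evolutionary search.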
Predicting regression test failures using genetic algorithm-selected dynamic performance analysis metrics
A novel framework for predicting regression test failures is proposed. The basic principle embodied in the framework is to use performance analysis tools to capture the runtime behaviour of a program as it executes each test in a regression suite. The performance information is then used to build a dynamically predictive model of test outcomes. Our framework is evaluated using a genetic algorithm for dynamic metric selection in combination with state-of-the-art machine learning classifiers. We show that if a program is modified and some tests subsequently fail, then it is possible to predict with considerable accuracy which of the remaining tests will also fail; this can be used to help prioritise tests in time-constrained testing environments.
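The genetic-algorithm-driven metric selection described above can be sketched as a GA over bitmasks of metric columns, with classifier accuracy as the fitness. This is a toy version under stated assumptions: the "performance metrics" are synthetic, and a nearest-centroid classifier stands in for the state-of-the-art classifiers the framework is evaluated with.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "dynamic performance metrics" per test: only the first three of
# ten columns actually carry signal about the pass/fail outcome.
n, d = 300, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)   # 1 = test fails

def accuracy(mask):
    """Fitness: hold-out accuracy of a nearest-centroid classifier
    restricted to the metric columns selected by the bitmask."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    Xtr, ytr, Xte, yte = X[:150, cols], y[:150], X[150:, cols], y[150:]
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    pred = np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)
    return float((pred == yte).mean())

# tiny generational GA over metric masks
pop = rng.integers(0, 2, size=(30, d))
for _ in range(40):
    fit = np.array([accuracy(m) for m in pop])
    parents = pop[np.argsort(fit)[-10:]]            # truncation selection
    children = [parents[-1]]                        # elitism: keep best mask
    while len(children) < 30:
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, d)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        child = np.where(rng.random(d) < 0.1, 1 - child, child)  # mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=accuracy)
print("selected metrics:", np.flatnonzero(best), "accuracy:", accuracy(best))
```

The GA tends to converge on a mask dominated by the informative columns, illustrating why selecting a small subset of dynamic metrics can be enough to predict the remaining failures.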
An empirical investigation into the impact of refactoring on regression testing
It is widely believed that refactoring improves software quality and developers' productivity by making software systems easier to maintain and understand. On the other hand, some believe that refactoring carries the risk of functionality regression and increased testing cost. This paper investigates the impact of refactoring edits on regression tests using the version history of Java open source projects: (1) Are there adequate regression tests for refactoring in practice? (2) How many existing regression tests are relevant to refactoring edits and thus need to be re-run for the new version? (3) What proportion of failure-inducing changes are relevant to refactorings? By using a refactoring reconstruction analysis and a change impact analysis in tandem, we investigate the relationship between the types and locations of refactoring edits identified by RefFinder and the affecting changes and affected tests identified by the FaultTracer change impact analysis. The results on three open source projects, JMeter, XMLSecurity, and ANT, show that only 22% of refactored methods and fields are tested by existing regression tests. While refactorings constitute only 8% of atomic changes, 38% of affected tests are relevant to refactorings. Furthermore, refactorings are involved in almost half of the failed test cases. These results call for new automated regression test augmentation and selection techniques for validating refactoring edits.
Selection of Statistical Software for Solving Big Data Problems for Teaching
The need for analysts with expertise in big data software is becoming more apparent in today's society. Unfortunately, the demand for these analysts far exceeds the number available. A potential way to combat this shortage is to identify the software sought by employers and to align this with the software taught by universities. This paper will examine multiple data analysis software packages – Excel add-ins, SPSS, SAS, Minitab, and R – and it will outline the cost, training, statistical methods/tests/uses, and specific uses within industry for each of these packages. It will further explain implications for universities and students.