On the Effectiveness of Unit Tests in Test-driven Development
Background: Writing unit tests is one of the primary activities
in test-driven development. Yet, existing reviews report little
evidence supporting or refuting the effect of this development approach
on test case quality. Developers' lack of ability and skill to
produce sufficiently good test cases is also reported as a limitation
of applying test-driven development in industrial practice.
Objective: We investigate the impact of test-driven development
on the effectiveness of unit test cases compared to incremental
test-last development in an industrial context.
Method: We conducted an experiment in an industrial setting
with 24 professionals, who implemented tasks following both
development approaches. We measured unit test effectiveness
in terms of mutation score. We also measured branch and
method coverage of the test suites to compare our results with the
literature.
Results: In terms of mutation score, we found that test
cases written for the test-driven development task have a higher
defect detection ability than test cases written for the incremental
test-last development task. Subjects also wrote test cases covering
more branches in the test-driven development task. However, test cases
written for the incremental test-last development task cover more
methods than those written for the test-driven development task.
Conclusion: Our findings differ from previous studies
conducted in academic settings. Professionals were able to perform
more effective unit testing with test-driven development. Furthermore,
we observe that the coverage measures preferred in academic
studies reveal different aspects of a development approach. Our
results need to be validated in larger industrial contexts.
Funding: Istanbul Technical University Scientific Research Projects
(MGA-2017-40712) and the Academy of Finland (Decision No. 278354).
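Mutation score, the effectiveness measure used in this study, is the fraction of seeded faults (mutants) that a test suite detects. A minimal sketch of the idea, with a hypothetical unit under test and hand-made mutants (not the study's tooling):

```python
from typing import Callable, List, Tuple

def mutation_score(original: Callable[[int, int], int],
                   mutants: List[Callable[[int, int], int]],
                   inputs: List[Tuple[int, int]]) -> float:
    """Fraction of mutants 'killed' by the test inputs.

    A mutant is killed when at least one input makes its output
    differ from the original program's output.
    """
    killed = sum(
        1 for mutant in mutants
        if any(mutant(*args) != original(*args) for args in inputs)
    )
    return killed / len(mutants)

# Hypothetical unit under test and two hand-made mutants.
def add(a, b): return a + b
def mutant_sub(a, b): return a - b            # operator replaced: + -> -
def mutant_cond(a, b): return a + b if a else b  # boundary tweak on a == 0

inputs = [(2, 3), (0, 5)]
score = mutation_score(add, [mutant_sub, mutant_cond], inputs)
# mutant_sub is killed by (2, 3); mutant_cond survives both inputs,
# so the score is 0.5.
```

A surviving mutant like `mutant_cond` is exactly the signal the measure provides: the suite lacks an input that distinguishes the mutated behaviour from the original.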
An Investigation into Improving Test Effectiveness for Embedded Software
This thesis reports on an investigation into the effectiveness of software testing on
embedded systems. The aim was to improve confidence in the current methods
employed, or to find new methods that could improve the hit rate of defects found
before software is shipped to a customer. We review previous work on software
testing effectiveness and various black-box testing methods, which can be employed
to detect errors in systems with varying degrees of success. In this thesis we
investigate transforming the white-box testing technique of Definition-Use (DU)
path testing, using a RESOLVE-like specification, so that it can be applied as a
black-box test method. We do not use RESOLVE itself; instead, we define our own
method of automatic test generation based on the principles of RESOLVE. We then
compare this method to the more commonly used requirements-driven test selection
and pure boundary value analysis (BVA) testing techniques. The results reported in
this thesis indicate that the BVA and DU test selection methods create tests that are
already covered by unit and integration tests. The current requirements-driven test
cases exercise combinations of features working in tandem. Such feature combinations
were found to be more likely to find defects because developers' tests had a lesser
focus on this area. The tests generated by the BVA and DU test selection methods did
not find any defects that their respective methods were intended to find, because the
development team already had tests covering these areas and the corresponding
defects had been fixed before system tests could be run. Given that the current test
selection methods find defects while the methods we investigated do not, this adds
confidence that the system test approach is effective. An investigation of the defects
found showed that timing-related errors are common, and that a test selection method
designed to find timing-related defects would be worth investigating. The
investigation also revealed a useful method for automatic generation of test cases:
the RESOLVE-like specification was used to apply DU testing as a black-box test
method. This method proved more time-efficient at generating test cases than the
existing requirements-driven approach. Although the test cases did not reveal
significant defects, due to the overlap with integration testing, it could be a
useful method for developers to generate test cases.
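Boundary value analysis, one of the techniques compared above, derives test inputs at and just around the edges of an input's valid range. A minimal sketch (the range and the particular value-selection scheme are illustrative, not the thesis's exact procedure):

```python
def bva_values(lo: int, hi: int) -> list:
    """Classic boundary value analysis for an integer range [lo, hi]:
    values at, just inside, and just outside each boundary, plus a
    nominal mid-range value."""
    nominal = (lo + hi) // 2
    candidates = [lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1]
    # Deduplicate while preserving order (narrow ranges produce repeats).
    seen, out = set(), []
    for v in candidates:
        if v not in seen:
            seen.add(v)
            out.append(v)
    return out

# e.g. a sensor reading specified as valid for 0..100:
values = bva_values(0, 100)  # [-1, 0, 1, 50, 99, 100, 101]
```

The off-by-one values outside the range (`-1`, `101`) probe exactly the comparison errors (`<` vs `<=`) that boundary analysis targets.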
Opportunities for improving the efficiency of paediatric HIV treatment programmes
Objectives: To conduct two economic analyses addressing whether to: routinely monitor HIV-infected children on antiretroviral therapy (ART) clinically or with laboratory tests; continue or stop cotrimoxazole prophylaxis when children become stabilized on ART.
Design and methods: The ARROW randomized trial investigated alternative strategies to deliver paediatric ART and cotrimoxazole prophylaxis in 1206 Ugandan/Zimbabwean children. Incremental cost-effectiveness and value of implementation analyses were undertaken. Scenario analyses investigated whether laboratory monitoring (CD4 tests for efficacy monitoring; haematology/biochemistry for toxicity) could be tailored and targeted to be delivered cost-effectively. Cotrimoxazole use was examined in malaria-endemic and non-endemic settings.
Results: Using all trial data, clinical monitoring delivered similar health outcomes to routine laboratory monitoring, but at a reduced cost, so was cost-effective. Continuing cotrimoxazole improved health outcomes at reduced costs. Restricting routine CD4+ monitoring to after 52 weeks following ART initiation and removing toxicity testing was associated with an incremental cost-effectiveness ratio of 769/QALY. Committing resources to improve cotrimoxazole implementation appears cost-effective: a healthcare system could pay 12.0 per patient-year to ensure continued provision of cotrimoxazole.
Conclusion: Clinically driven monitoring of ART is cost-effective in most circumstances. Routine laboratory monitoring is generally not cost-effective at current prices, except possibly CD4 testing amongst adolescents initiating ART. Committing resources to ensure continued provision of cotrimoxazole in health facilities is more likely to represent an efficient use of resources.
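The incremental cost-effectiveness ratios reported in these abstracts follow the standard ICER formula: extra cost divided by extra health effect. A minimal sketch with purely illustrative numbers (none taken from the ARROW trial):

```python
def icer(cost_new: float, cost_old: float,
         effect_new: float, effect_old: float) -> float:
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of effect (e.g. per QALY gained)."""
    d_effect = effect_new - effect_old
    if d_effect <= 0:
        # More costly and no more effective: the new strategy is dominated.
        raise ValueError("new strategy is not more effective")
    return (cost_new - cost_old) / d_effect

# Illustrative only: routine lab monitoring costs 120 more per patient
# and yields 0.05 extra QALYs, giving roughly 2400 per QALY.
ratio = icer(1120.0, 1000.0, 4.05, 4.00)
```

A strategy is then judged cost-effective if this ratio falls below the decision-maker's willingness-to-pay threshold, such as the per-capita-GDP benchmarks mentioned below.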
Foreign capital in a growth model
Within an endogenous growth framework, this paper empirically investigates the impact of foreign capital on economic growth for a panel of 60 developing countries, through the channel of domestic capital formation. By estimating the model for different income groups, it is found that while private FDI flows exert beneficial complementarity effects on domestic capital formation across all income groups, official financial flows contribute to increasing investment in the middle-income economies but not in the low-income countries. The latter appears to demonstrate that the aid-growth nexus is supported in the middle-income countries, whereas misallocation of official inflows is more likely in the low-income countries, suggesting that aid effectiveness remains conditional on the domestic policy environment.
Cost effectiveness analysis of clinically driven versus routine laboratory monitoring of antiretroviral therapy in Uganda and Zimbabwe.
BACKGROUND: Despite funding constraints for treatment programmes in Africa, the costs and economic consequences of routine laboratory monitoring for efficacy and toxicity of antiretroviral therapy (ART) have rarely been evaluated.
METHODS: Cost-effectiveness analysis was conducted in the DART trial (ISRCTN13968779). Adults in Uganda/Zimbabwe starting ART were randomised to clinically-driven monitoring (CDM) or laboratory and clinical monitoring (LCM); individual patient data on healthcare resource utilisation and outcomes were valued with primary economic costs and utilities. Total costs of first/second-line ART, routine 12-weekly CD4 and biochemistry/haematology tests, additional diagnostic investigations, clinic visits, concomitant medications and hospitalisations were considered from the public healthcare sector perspective. A Markov model was used to extrapolate costs and benefits 20 years beyond the trial.
RESULTS: 3316 (1660 LCM; 1656 CDM) symptomatic, immunosuppressed ART-naive adults (median (IQR) age 37 (32, 42); CD4 86 (31, 139) cells/mm(3)) were followed for a median of 4.9 years. LCM had a mean 0.112 year (41 days) survival benefit at an additional mean cost of 7386 [3277, dominated] per life-year gained and 3.78 to become cost-effective (<3x per-capita GDP, following WHO benchmarks). CD4 monitoring at current costs as undertaken in DART was not cost-effective in the long term.
CONCLUSIONS: There is no rationale for routine toxicity monitoring, which did not affect outcomes and was costly. Even though beneficial, there is little justification for routine 12-weekly CD4 monitoring of ART at current test costs in low-income African countries. CD4 monitoring, restricted to the second year on ART onwards, could be cost-effective with lower-cost second-line therapy and the development of a cheaper, ideally point-of-care, CD4 test.
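The 20-year extrapolation beyond the trial horizon uses a Markov model. A minimal two-state (alive/dead) cohort sketch of that general technique, with illustrative parameters rather than DART estimates:

```python
def markov_extrapolate(p_die: float, annual_cost: float,
                       years: int = 20, discount: float = 0.03):
    """Toy two-state Markov cohort model: each yearly cycle, a fraction
    of the surviving cohort dies; survivors accrue (discounted)
    life-years and costs. Parameters are illustrative assumptions,
    not trial estimates."""
    alive = 1.0                 # fraction of the cohort still alive
    life_years = cost = 0.0
    for t in range(years):
        disc = 1.0 / (1.0 + discount) ** t
        life_years += alive * disc
        cost += alive * annual_cost * disc
        alive *= (1.0 - p_die)  # transition: alive -> dead
    return life_years, cost

# e.g. 5% annual mortality, 100 per year in costs, 3% discount rate:
ly, total_cost = markov_extrapolate(p_die=0.05, annual_cost=100.0)
```

Running the model once per monitoring strategy (with strategy-specific mortality and costs) yields the inputs for the ICER comparison described in the trial analysis.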
Evaluating the Impact of Critical Factors in Agile Continuous Delivery Process: A System Dynamics Approach
Continuous Delivery aims at the frequent delivery of good-quality software in a speedy, reliable and efficient fashion, with a strong emphasis on automation and team collaboration. However, even with this new paradigm, repeatability of project outcomes is still not guaranteed: project performance varies due to the various interacting and inter-related factors in the Continuous Delivery 'system'. This paper presents results from an investigation of the effect of various factors, in particular agile practices, on the quality of the software developed in the Continuous Delivery process. Results show that customer involvement and the cognitive ability of the QA have the most significant individual effects on the quality of software in Continuous Delivery.
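A system dynamics model of the kind used in this paper simulates interacting stocks and flows over time. A toy sketch of the technique, with an invented structure and coefficients (not the paper's calibrated model): a latent-defect stock that drains faster as customer involvement and QA ability increase.

```python
def simulate_defects(customer_involvement: float, qa_ability: float,
                     steps: int = 50, dt: float = 0.1) -> float:
    """Toy stock-and-flow simulation (Euler integration).

    Stock:  latent defects in the product.
    Inflow: defects injected at a constant rate.
    Outflow: detection, proportional to the stock and to the two
             factors under study. All numbers are illustrative.
    """
    defects = 100.0   # initial stock of latent defects
    injection = 5.0   # defects added per unit time
    for _ in range(steps):
        detection = defects * 0.1 * (customer_involvement + qa_ability)
        defects += (injection - detection) * dt
    return defects

# Stronger customer involvement and QA ability drain the stock faster:
low = simulate_defects(0.1, 0.1)
high = simulate_defects(0.9, 0.9)
```

Even this toy version reproduces the qualitative finding above: raising either factor lowers the simulated defect stock, i.e. improves quality.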