
    Detecting Coordination Problems in Collaborative Software Development Environments

    Software development is rarely an individual effort; it generally involves teams of developers collaborating to produce good, reliable code. Within the code there exist technical dependencies that arise from software components using services of other components. The different ways of assigning the design, development, and testing of these software modules to people can cause various coordination problems among them. We claim that the collaboration of developers, designers, and testers must be related to and governed by the technical task structure. These collaboration practices are captured in what we call Socio-Technical Patterns. The TESNA project (Technical Social Network Analysis) we report on in this paper addresses this issue. We propose a method and a tool that a project manager can use to detect socio-technical coordination problems. We test the method and tool in a case study of a small, innovative software product company.
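The core idea, comparing required coordination (derived from technical dependencies and module ownership) against observed developer communication, can be sketched as follows. All data here are invented for illustration; the actual TESNA tool mines repositories and communication records.

```python
# Toy sketch of a socio-technical coordination-gap check
# (module names, owners, and communication pairs are hypothetical).

# Technical dependencies: module -> modules whose services it uses.
deps = {"ui": {"core"}, "core": {"db"}, "db": set()}

# Ownership: module -> developer responsible for it.
owner = {"ui": "alice", "core": "bob", "db": "carol"}

# Observed communication pairs among developers.
talks = {frozenset({"alice", "bob"})}

# Required coordination: owners of dependent modules should communicate.
required = {
    frozenset({owner[m], owner[d]})
    for m, ds in deps.items() for d in ds
    if owner[m] != owner[d]
}

# Coordination gaps: required pairs with no observed communication.
gaps = required - talks
print(sorted(sorted(p) for p in gaps))  # → [['bob', 'carol']]
```

Each gap is a pair of people whose modules depend on each other but who are not talking, which is the kind of mismatch a Socio-Technical Pattern would flag.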

    Evidence-based decision support for pediatric rheumatology reduces diagnostic errors.

    BACKGROUND: The number of trained specialists worldwide is insufficient to serve all children with pediatric rheumatologic disorders, even in countries with robust medical resources. We evaluated the potential of diagnostic decision support software (DDSS) to alleviate this shortage by assessing the ability of such software to improve the diagnostic accuracy of non-specialists. METHODS: Using vignettes of actual clinical cases, clinician testers generated a differential diagnosis before and after using diagnostic decision support software. The evaluation used the SimulConsult® DDSS tool, based on Bayesian pattern matching with the temporal onset of each finding in each disease. The tool covered 5405 diseases (averaging 22 findings per disease). Rheumatology content in the database was developed using both primary references and textbooks. The frequency, timing, age of onset, and age of disappearance of findings, as well as their incidence, treatability, and heritability, were taken into account to guide diagnostic decision making. These capabilities allowed key information such as pertinent negatives and evolution over time to be used in the computations. Efficacy was measured by comparing whether the correct condition was included in the differential diagnosis generated by clinicians before using the software ("unaided") versus after use of the DDSS ("aided"). RESULTS: The 26 clinicians demonstrated a significant reduction in diagnostic errors following introduction of the software, from 28% errors while unaided to 15% using decision support (p < 0.0001). Improvement was greatest for emergency medicine physicians (p = 0.013) and clinicians in practice for less than 10 years (p = 0.012). This error reduction occurred despite the fact that testers employed an open-book approach to generate their initial lists of potential diagnoses, spending an average of 8.6 min consulting printed and electronic sources of medical information before using the diagnostic software. CONCLUSIONS: These findings suggest that decision support can reduce diagnostic errors and improve the use of relevant information by generalists. Such assistance could potentially help relieve the shortage of experts in pediatric rheumatology and similarly underserved specialties by improving generalists' ability to evaluate and diagnose patients presenting with musculoskeletal complaints. TRIAL REGISTRATION: ClinicalTrials.gov ID: NCT02205086.
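The Bayesian pattern matching described above, including the use of pertinent negatives, can be illustrated with a toy scorer. The disease profiles, priors, and findings below are invented for illustration only and are not clinical data or the SimulConsult database.

```python
# Toy Bayesian pattern matcher over findings: score each disease by a
# naive log-posterior; pertinent negatives (absent findings) penalize
# diseases that strongly predict them. All numbers are hypothetical.
import math

# P(finding present | disease) for a tiny invented disease set.
profiles = {
    "disease_A": {"fever": 0.9, "rash": 0.8, "joint_pain": 0.7},
    "disease_B": {"fever": 0.6, "rash": 0.1, "joint_pain": 0.9},
}
priors = {"disease_A": 0.01, "disease_B": 0.02}

def score(present, absent):
    """Log-posterior (up to a constant) per disease, assuming
    conditionally independent findings."""
    out = {}
    for d, prof in profiles.items():
        s = math.log(priors[d])
        for f in present:
            s += math.log(prof.get(f, 0.05))      # small floor for unlisted findings
        for f in absent:
            s += math.log(1 - prof.get(f, 0.05))  # pertinent negative
        out[d] = s
    return out

# Fever and joint pain present, rash explicitly absent:
ranked = sorted(score({"fever", "joint_pain"}, {"rash"}).items(),
                key=lambda kv: -kv[1])
```

Here the absent rash counts against disease_A (which predicts rash strongly), so disease_B ranks first despite disease_A matching the present findings well.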

    Visualizing test diversity to support test optimisation

    Diversity has been used as an effective criterion to optimise test suites for cost-effective testing. In particular, diversity-based (alternatively referred to as similarity-based) techniques have the benefit of being generic and applicable across different Systems Under Test (SUT), and have been used to automatically select or prioritise large sets of test cases. However, it is a challenge to feed diversity information back to developers and testers, since the results are typically many-dimensional. Furthermore, the generality of diversity-based approaches makes it harder to choose when and where to apply them. In this paper we address these challenges by investigating: i) the trade-offs in using different sources of diversity (e.g., diversity of test requirements or test scripts) to optimise large test suites, and ii) how visualisation of test diversity data can assist testers in test optimisation and improvement. We perform a case study on three industrial projects and present quantitative results on the fault detection capabilities and redundancy levels of different sets of test cases. Our key result is that test similarity maps, based on pair-wise diversity calculations, helped industrial practitioners identify issues with their test repositories and decide on actions for improvement. We conclude that the visualisation of diversity information can assist testers in their maintenance and optimisation activities.
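A minimal sketch of the pair-wise diversity calculation underlying such similarity maps, using Jaccard distance over test-step sets. The test names and contents are hypothetical; the paper's similarity measures and data sources may differ.

```python
# Pair-wise diversity between test cases via Jaccard distance over
# their step sets (test contents are invented for illustration).

tests = {
    "t1": {"login", "click_submit", "assert_ok"},
    "t2": {"login", "click_submit", "assert_error"},
    "t3": {"search", "filter", "assert_count"},
}

def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B|: 0 means identical, 1 means disjoint."""
    return 1 - len(a & b) / len(a | b)

# Pair-wise distance matrix; near-zero entries flag redundant tests.
pairs = {
    (i, j): jaccard_distance(tests[i], tests[j])
    for i in tests for j in tests if i < j
}
redundant = [p for p, d in pairs.items() if d < 0.6]
```

In a similarity map, pairs like `("t1", "t2")` above would cluster together, visually flagging candidates for removal or merging.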

    What Causes My Test Alarm? Automatic Cause Analysis for Test Alarms in System and Integration Testing

    Driven by new software development processes and testing in clouds, system and integration testing nowadays tends to produce an enormous number of alarms. Such test alarms place an almost unbearable burden on software testing engineers, who have to manually analyze the causes of these alarms. The causes are critical because they determine which stakeholders are responsible for fixing the bugs detected during testing. In this paper, we present a novel approach that aims to relieve this burden by automating the procedure. Our approach, called Cause Analysis Model, exploits information retrieval techniques to efficiently infer test alarm causes from test logs. We have developed a prototype and evaluated our tool on two industrial datasets with more than 14,000 test alarms. Experiments on the two datasets show that our tool achieves an accuracy of 58.3% and 65.8%, respectively, outperforming the baseline algorithms by up to 13.3%. Our algorithm is also extremely efficient, spending about 0.1 s per cause analysis. Encouraged by these experimental results, our industrial partner, a leading information and communication technology company, has deployed the tool; it achieves an average accuracy of 72% after two months of running, nearly three times more accurate than a previous strategy based on regular expressions.
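The flavour of information-retrieval-based cause inference can be sketched with TF-IDF vectors over historical test logs and a nearest-neighbour lookup. The log texts and cause labels below are invented, and the paper's actual model is more sophisticated than this nearest-neighbour sketch.

```python
# Toy IR-based cause analysis: represent past test logs as TF-IDF
# vectors and assign a new alarm the cause of its most similar log
# (log texts and cause labels are hypothetical).
import math
from collections import Counter

history = [
    ("connection timed out while fetching artifact", "environment"),
    ("assertion failed: expected 200 got 500", "product bug"),
    ("null pointer exception in test setup", "test script defect"),
]

def tfidf(doc, docs):
    """Term frequency * inverse document frequency for one document."""
    tf = Counter(doc.split())
    n = len(docs)
    return {
        w: c * math.log(n / sum(1 for d in docs if w in d.split()))
        for w, c in tf.items()
    }

def cosine(u, v):
    num = sum(u[w] * v.get(w, 0.0) for w in u)
    den = math.sqrt(sum(x * x for x in u.values())) * \
          math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def classify(alarm_log):
    """Cause label of the most similar historical log."""
    docs = [log for log, _ in history] + [alarm_log]
    vec = tfidf(alarm_log, docs)
    best = max(history, key=lambda h: cosine(vec, tfidf(h[0], docs)))
    return best[1]
```

For example, `classify("connection timed out to build server")` lands on the first historical log and returns its label, routing the alarm to the environment team rather than to developers.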

    Automated Web Application Testing Using Improvement Selenium

    The basis of this project is to develop an automated web application testing tool by enhancing Selenium, an open-source testing tool. The tool should be able to handle automated tests run by a group of testers. Current testing tools are proprietary, and not all test engineers are expert in, or have deep knowledge of, the programming required to create automated test cases. A main problem in testing is that it is time-consuming, as some testers still rely on manual testing. The scope of the study is to develop a tool for software testers that focuses on functional testing and can execute test cases on different platforms: Windows, Linux, and Mac. Each platform supports different browsers: Firefox, Internet Explorer, and Chrome. Qualitative research was conducted by interviewing several software test engineers familiar with automated testing, and a framework was designed to fulfil the requirements and address the problems currently faced. The framework was developed and coded using Eclipse IDE, Apache ANT, and TestNG. As a result, the testing tool combines Selenium Grid and TestNG to create a template that software testers can use easily. The framework is compatible with all browsers and platforms commonly used by Internet users.