Inspection and Test Process Integration Based on Explicit Test Prioritization Strategies
Today's software quality assurance techniques are often applied in isolation.
Consequently, synergies resulting from systematically integrating different
quality assurance activities are often not exploited. Such combinations promise
benefits, such as a reduction in quality assurance effort or higher defect
detection rates. The integration of inspection and testing, for instance, can
be used to guide testing activities. For example, testing activities can be
focused on defect-prone parts based upon inspection results. Existing
approaches for predicting defect-prone parts do not make systematic use of the
results from inspections. This article gives an overview of an integrated
inspection and testing approach, and presents a preliminary case study aiming
at verifying a study design for evaluating the approach. First results from
this preliminary case study indicate that synergies resulting from the
integration of inspection and testing might exist, and show a trend that
testing activities could be guided based on inspection results.
Comment: 12 pages. The final publication is available at
http://link.springer.com/chapter/10.1007%2F978-3-642-27213-4_1
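The abstract does not specify how inspection results guide testing; the sketch below illustrates one plausible reading under assumed data, where modules with a higher inspection defect density are tested first. The module names, defect counts, and density measure are all hypothetical, not the paper's actual method.

```python
# Hypothetical sketch: focusing test effort using inspection results.
# All module names and numbers below are invented for illustration.

def prioritize_for_testing(inspection_findings, loc):
    """Rank modules by inspection defect density (defects per KLOC),
    so testing effort can be focused on the most defect-prone parts."""
    density = {
        module: inspection_findings[module] / (loc[module] / 1000)
        for module in inspection_findings
    }
    return sorted(density, key=density.get, reverse=True)

findings = {"parser": 12, "ui": 3, "network": 7}       # defects found in inspection
kloc = {"parser": 4000, "ui": 6000, "network": 3500}   # lines of code per module
print(prioritize_for_testing(findings, kloc))          # most defect-prone first
```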
Evaluating the strategic plans of public libraries : an inspection-based approach
For public libraries, as with most organisations, effective strategic planning is critical to longevity, facilitating cohesive and coordinated responses to the ever-present and ever-changing political, economic, social, and technological (PEST) forces which shape and influence direction. However, strategic planning is widely recognised as a challenging activity, which can be both time-consuming and unproductive, and there exists limited guidance on how to evaluate documented and disseminated strategic plans, particularly within the not-for-profit sector. In response, this research proposes and tests an inspection-based approach to the evaluation of strategic plans, based upon a rubric specifying the key attributes of each of the core components of a plan, combined with an appropriate assessment scale. The rubric provides a method to identify and assess the completeness of a strategic plan, extending to qualitative assessment of communication aspects such as specification and terminology, and synergistic aspects such as cohesion and integration. The method is successfully trialled across the devolved Scottish public library sector, with the strategic plans of 28 of the 32 regional networks evaluated. 17 of the 28 plans (61%) were found to be incomplete and/or to contain contradictory or uncoordinated components; it is therefore recommended that Scottish public libraries improve not only the completeness of their plans, but also their precision, specificity, explicitness, coordination, and consistency, and their overall mapping to library services. Recommendations are also made for further widespread application of the rubric.
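The paper's actual rubric attributes and assessment scale are not reproduced in the abstract; the sketch below only illustrates the general shape of an inspection-based completeness check. The component names and the 0–3 scale are assumptions, not the published rubric.

```python
# Illustrative only: component names and the 0..3 scale are assumed,
# not taken from the paper's rubric.

RUBRIC_COMPONENTS = ["mission", "vision", "values", "goals", "actions"]

def evaluate_plan(plan, scale_max=3):
    """Score each core component of a strategic plan and flag the plan
    as incomplete if any component is absent (score 0)."""
    scores = {c: min(plan.get(c, 0), scale_max) for c in RUBRIC_COMPONENTS}
    complete = all(s > 0 for s in scores.values())
    return scores, complete

plan = {"mission": 3, "vision": 2, "goals": 1}   # 'values', 'actions' missing
scores, complete = evaluate_plan(plan)
print(complete)  # False: the plan is incomplete
```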
Technology for the Future: In-Space Technology Experiments Program, part 2
The purpose of the Office of Aeronautics and Space Technology (OAST) In-Space Technology Experiments Program (In-STEP) 1988 Workshop was to identify and prioritize technologies that are critical for future national space programs and require validation in the space environment, and to review current NASA (In-Reach) and industry/university (Out-Reach) experiments. A prioritized list of the critical technology needs was developed for the following eight disciplines: structures; environmental effects; power systems and thermal management; fluid management and propulsion systems; automation and robotics; sensors and information systems; in-space systems; and humans in space. This is part two of two parts and contains the critical technology presentations for the eight theme elements and a summary listing of critical space technology needs for each theme.
A Hierarchical, Fuzzy Inference Approach to Data Filtration and Feature Prioritization in the Connected Manufacturing Enterprise
The current big data landscape is one in which the technology and capability to capture and store data have preceded and outpaced the corresponding capability to analyze and interpret it. This has led naturally to the development of elegant and powerful algorithms for data mining, machine learning, and artificial intelligence to harness the potential of the big data environment. A competing reality, however, is that limitations exist in how and to what extent human beings can process complex information. The convergence of these realities is a tension between the technical sophistication or elegance of a solution and its transparency or interpretability by the human data scientist or decision maker. This dissertation, contextualized in the connected manufacturing enterprise, presents an original Fuzzy Approach to Feature Reduction and Prioritization (FAFRAP) that is designed to assist the data scientist in filtering and prioritizing data for inclusion in supervised machine learning models. A set of sequential filters reduces the initial set of independent variables, and a fuzzy inference system outputs a crisp numeric value associated with each feature to rank-order and prioritize features for inclusion in model training. Additionally, the fuzzy inference system outputs a descriptive label to assist in the interpretation of each feature's usefulness with respect to the problem of interest. Model testing is performed using three publicly available datasets from an online machine learning data repository and later applied to a case study in electronic assembly manufacture. Consistency of model results is experimentally verified using Fisher's Exact Test, and results of filtered models are compared to results obtained by the unfiltered sets of features using a proposed novel metric, the performance-size ratio (PSR).
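The abstract names a performance-size ratio (PSR) but does not define it; a minimal sketch, assuming PSR simply divides a model's performance score by the number of features it uses, shows how a smaller filtered model can win on PSR despite slightly lower raw accuracy. The exact definition in the dissertation may differ.

```python
# Assumed definition for illustration: PSR = performance / feature count.
# The dissertation's actual formula is not given in the abstract.

def performance_size_ratio(performance, n_features):
    """Reward models that achieve good performance with few features."""
    return performance / n_features

full = performance_size_ratio(0.90, 30)      # hypothetical unfiltered model
filtered = performance_size_ratio(0.88, 10)  # hypothetical FAFRAP-filtered model
print(filtered > full)  # smaller model wins on PSR despite lower accuracy
```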
SCRUM-PSP: Embracing Process Agility and Discipline
Abstract—Through research and debate on software process, mainstream software processes have come to be grouped into two categories: plan-driven (disciplined) processes and agile processes. In terms of this classification, the Personal Software Process (PSP) is a typical plan-driven process, while SCRUM is an agile-style instance. Although they are distinct from each other, our research found that PSP and SCRUM can complement each other: SCRUM provides an agile process management framework, while PSP provides the skills and disciplines that a qualified team member needs to estimate, plan, and manage his or her work. This paper proposes an integrated process model, SCRUM-PSP, which combines the strengths of each. We also verified this integrated process by adopting it in a real project environment where typical agile processes are favored, i.e., change-prone requirements, rapid development, fast delivery, etc. As a result, the manageability and predictability from which traditional plan-driven processes usually benefit can also be achieved. The work described in this paper is a worthwhile attempt to embrace both process agility and discipline. Keywords—PSP, SCRUM, Integration
Integrating Computational Biology and Forward Genetics in Drosophila
Genetic screens are powerful methods for the discovery of gene–phenotype associations. However, a systems biology approach to genetics must leverage the massive amount of “omics” data to enhance the power and speed of functional gene discovery in vivo. Thus far, few computational methods for gene function prediction have been rigorously tested for their performance on a genome-wide scale in vivo. In this work, we demonstrate that integrating genome-wide computational gene prioritization with large-scale genetic screening is a powerful tool for functional gene discovery. To discover genes involved in neural development in Drosophila, we extend our strategy for the prioritization of human candidate disease genes to functional prioritization in Drosophila. We then integrate this prioritization strategy with a large-scale genetic screen for interactors of the proneural transcription factor Atonal using genomic deficiencies and mutant and RNAi collections. Using the prioritized genes validated in our genetic screen, we describe a novel genetic interaction network for Atonal. Lastly, we prioritize the whole Drosophila genome and identify candidate gene associations for ten receptor-signaling pathways. This novel database of prioritized pathway candidates, as well as a web application for functional prioritization in Drosophila, called Endeavour-HighFly, and the Atonal network, are publicly available resources. A systems genetics approach that combines the power of computational predictions with in vivo genetic screens strongly enhances the process of gene function and gene–gene association discovery
Leveraging the Power of Crowds: Automated Test Report Processing for The Maintenance of Mobile Applications
Crowdsourcing is an emerging distributed problem-solving model combining human and machine computation. It collects intelligence and knowledge from a large and diverse workforce to complete complex tasks. In the software engineering domain, crowdsourced techniques have been adopted to facilitate various tasks, such as design, testing, debugging, and development. Specifically, in crowdsourced testing, crowd workers are given testing tasks to perform and submit their feedback in the form of test reports. One of the key advantages of crowdsourced testing is that it is capable of providing software engineers with domain knowledge and feedback from a large number of real users. Based on the diverse software and hardware settings of these users, engineers can detect bugs that are not caught by traditional quality assurance techniques. Such benefits are particularly valuable for mobile application testing, which requires rapid development-and-deployment iterations and must support diverse execution environments. However, crowdsourced testing naturally generates an overwhelming number of test reports, and inspecting such a large number of reports becomes a time-consuming yet inevitable task. This dissertation presents a series of techniques, tools, and experiments to assist in crowdsourced report processing. These techniques are designed to improve this task in multiple aspects: 1. prioritizing crowdsourced reports to assist engineers in finding as many unique bugs as possible, as quickly as possible; 2. grouping crowdsourced reports to assist engineers in identifying the representative ones in a short time; 3. summarizing duplicate reports to provide engineers with a concise and accurate understanding of a group of reports. In the first step, I present a text-analysis-based technique to prioritize test reports for manual inspection.
This technique leverages two key strategies: (1) a diversity strategy to help developers inspect a wide variety of test reports and to avoid duplicates and wasted effort on falsely classified faulty behavior, and (2) a risk-assessment strategy to help developers identify test reports that may be more likely to be fault-revealing based on past observations. Together, these two strategies form our technique to prioritize test reports in crowdsourced testing. Moreover, in the mobile testing domain, test reports often consist of more screenshots and shorter descriptive text, and thus text-analysis-based techniques may be ineffective or inapplicable. The shortage and ambiguity of natural-language text information, together with the well-defined screenshots of activity views within mobile applications, motivate me to propose a novel technique based on image understanding for multi-objective test-report prioritization. This technique employs the Spatial Pyramid Matching (SPM) technique to measure the similarity of screenshots, and applies natural-language processing to measure the distance between the text of test reports. Next, I design and implement CTRAS, a novel approach to leveraging duplicates to enrich the content of bug descriptions and improve the efficiency of inspecting these reports. CTRAS is capable of automatically aggregating duplicates based on both textual information and screenshots, and further summarizes the duplicate test reports into a comprehensive and comprehensible report. I validate all of these techniques on industrial data by collaborating with several companies. The results show that my techniques can improve both the efficiency and effectiveness of crowdsourced test report processing. Also, I suggest settings for different usage scenarios and discuss future research directions.
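The diversity strategy above can be sketched as a greedy farthest-first ordering over text distance: always inspect next the report least similar to those already seen. This is a minimal illustration using Jaccard distance over word sets, not the dissertation's actual algorithm or distance measure; the sample report texts are invented.

```python
# Minimal sketch of the diversity idea (not the paper's implementation):
# greedily pick the report farthest, in text distance, from those already
# ordered, so engineers see a wide variety of reports early.

def jaccard_distance(a, b):
    """1 - |words(a) & words(b)| / |words(a) | words(b)|."""
    wa, wb = set(a.split()), set(b.split())
    return 1 - len(wa & wb) / len(wa | wb)

def diversity_order(reports):
    remaining = list(reports)
    ordered = [remaining.pop(0)]  # seed with the first report
    while remaining:
        # farthest-first: maximize the minimum distance to reports seen so far
        nxt = max(remaining,
                  key=lambda r: min(jaccard_distance(r, s) for s in ordered))
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered

reports = ["app crashes on login", "crash when login fails",
           "battery drains quickly overnight"]
print(diversity_order(reports))  # the unrelated battery report surfaces second
```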
Loo.py: transformation-based code generation for GPUs and CPUs
Today's highly heterogeneous computing landscape places a burden on
programmers wanting to achieve high performance on a reasonably broad
cross-section of machines. To do so, computations need to be expressed in many
different but mathematically equivalent ways, with, in the worst case, one
variant per target machine.
Loo.py, a programming system embedded in Python, meets this challenge by
defining a data model for array-style computations and a library of
transformations that operate on this model. Offering transformations such as
loop tiling, vectorization, storage management, unrolling, instruction-level
parallelism, change of data layout, and many more, it provides a convenient way
to capture, parametrize, and re-unify the growth among code variants. Optional,
deep integration with numpy and PyOpenCL provides a convenient computing
environment where the transition from prototype to high-performance
implementation can occur in a gradual, machine-assisted form.
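The core idea, one abstract computation plus a library of transformations yielding many code variants, can be illustrated with a toy generator. This is not Loo.py's actual API; it is a plain-Python sketch of the loop-tiling transformation named above, emitting code as strings.

```python
# Toy illustration of transformation-based code generation.
# Not Loo.py's API: a deliberately simplified stand-in.

def make_loop(var, n, body):
    """The untransformed baseline variant: one flat loop."""
    return f"for {var} in range({n}): {body}"

def tile(var, n, tile_size, body):
    """Loop tiling: split one loop into an outer/inner pair, so the
    inner loop works on a cache- or vector-friendly block at a time."""
    outer, inner = f"{var}_outer", f"{var}_inner"
    inner_body = f"{var} = {outer}*{tile_size} + {inner}; {body}"
    return (f"for {outer} in range({n} // {tile_size}): "
            f"for {inner} in range({tile_size}): {inner_body}")

base = make_loop("i", 1024, "out[i] = 2*a[i]")     # one variant
variant = tile("i", 1024, 16, "out[i] = 2*a[i]")   # a tiled variant of the same computation
print(variant)
```

Each transformation maps one variant to another while preserving the mathematical meaning, which is what lets a single source description target many machines.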