10,563 research outputs found

    Cancer gene prioritization by integrative analysis of mRNA expression and DNA copy number data: a comparative review

    A variety of genome-wide profiling techniques are available to probe complementary aspects of genome structure and function. Integrative analysis of heterogeneous data sources can reveal higher-level interactions that cannot be detected based on individual observations. A standard integration task in cancer studies is to identify altered genomic regions that induce changes in the expression of the associated genes based on joint analysis of genome-wide gene expression and copy number profiling measurements. In this review, we provide a comparison among various modeling procedures for integrating genome-wide profiling data of gene copy number and transcriptional alterations and highlight common approaches to genomic data integration. A transparent benchmarking procedure is introduced to quantitatively compare the cancer gene prioritization performance of the alternative methods. The benchmarking algorithms and data sets are available at http://intcomp.r-forge.r-project.org. Comment: PDF file including supplementary material. 9 pages. Preprint.
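The core integration task described above, detecting genes whose expression changes track their copy number alterations, can be illustrated with a simple correlation-based ranking. The sketch below is not any particular method from the review; the gene names and measurements are made up, and Spearman correlation is only one of many possible association scores.

```python
import numpy as np
from scipy import stats

def prioritize_genes(expression, copy_number, gene_names):
    """Rank genes by the association between DNA copy number and mRNA expression.

    expression, copy_number: arrays of shape (n_genes, n_samples) with matched
    rows (genes) and columns (samples). Genes whose expression tracks their
    copy number most closely are ranked first.
    """
    ranked = []
    for name, expr_row, cn_row in zip(gene_names, expression, copy_number):
        rho, p_value = stats.spearmanr(cn_row, expr_row)
        ranked.append((name, rho, p_value))
    # Sort by descending correlation: strongest dosage effect first.
    return sorted(ranked, key=lambda item: -item[1])

# Toy data: three genes measured in six matched tumour samples (made up).
genes = ["GENE_A", "GENE_B", "GENE_C"]
copy_number = np.array([[2, 2, 3, 4, 4, 5],
                        [2, 2, 2, 3, 2, 2],
                        [1, 2, 2, 3, 2, 2]])
expression = np.array([[1.0, 1.2, 2.1, 3.0, 3.2, 4.1],
                       [0.5, 0.4, 0.6, 0.5, 0.4, 0.6],
                       [0.9, 1.1, 1.0, 1.3, 1.0, 1.1]])

for name, rho, p in prioritize_genes(expression, copy_number, genes):
    print(f"{name}: rho={rho:.2f}, p={p:.3f}")
```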

    What attracts vehicle consumers’ buying: A Saaty scale-based VIKOR (SSC-VIKOR) approach from after-sales textual perspective?

    Purpose: The increasingly booming e-commerce development has stimulated vehicle consumers to express individual reviews through online forums. The purpose of this paper is to probe into vehicle consumer consumption behavior and make recommendations for potential consumers from a textual-comment viewpoint. Design/methodology/approach: A big data analytics-based approach is designed to discover vehicle consumer consumption behavior from an online perspective. To reduce the subjectivity of expert-based approaches, a parallel Naïve Bayes approach is designed to perform the sentiment analysis, and the Saaty scale-based (SSC) scoring rule is employed to obtain a specific sentiment value for each attribute class, contributing to multi-grade sentiment classification. To achieve intelligent recommendation for potential vehicle customers, a novel SSC-VIKOR approach is developed to prioritize vehicle brand candidates from a big data analytical viewpoint. Findings: The big data analytics show that the “cost-effectiveness” characteristic is the most important factor vehicle consumers care about, and the data mining results enable automakers to better understand consumer consumption behavior. Research limitations/implications: The case study illustrates the effectiveness of the integrated method, contributing to much more precise operations management in marketing strategy, quality improvement and intelligent recommendation. Originality/value: Research on consumer consumption behavior is usually based on survey methods, and most previous studies of comment analysis focus on binary classification. The hybrid SSC-VIKOR approach is developed to fill this gap from the big data perspective.
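The VIKOR step of the approach, ranking brand candidates over several weighted attribute classes, follows the classic compromise-ranking procedure. The sketch below is a generic VIKOR implementation, not the paper's SSC-VIKOR pipeline: the brand names, sentiment scores, and attribute weights are hypothetical placeholders for values that would come from the SSC-scored sentiment analysis.

```python
import numpy as np

def vikor_rank(scores, weights, v=0.5):
    """Rank alternatives with the classic VIKOR compromise-ranking procedure.

    scores:  (n_alternatives, n_criteria) matrix, higher = better here.
    weights: criterion weights summing to 1.
    v:       weight of the group-utility strategy (0.5 is the usual compromise).
    Returns indices of alternatives from best to worst (smallest Q first) and Q.
    """
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)

    best = scores.max(axis=0)                  # ideal value per criterion
    worst = scores.min(axis=0)                 # anti-ideal value per criterion
    norm = (best - scores) / (best - worst)    # normalised distance to the ideal

    S = (weights * norm).sum(axis=1)           # group utility (weighted regret sum)
    R = (weights * norm).max(axis=1)           # individual regret (worst criterion)

    Q = v * (S - S.min()) / (S.max() - S.min()) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return np.argsort(Q), Q

# Hypothetical sentiment scores for three brands on three attribute classes
# (cost-effectiveness, comfort, after-sales service); weights are made up.
brands = ["Brand_X", "Brand_Y", "Brand_Z"]
sentiment = [[0.82, 0.60, 0.71],
             [0.65, 0.78, 0.69],
             [0.74, 0.55, 0.80]]
weights = [0.5, 0.2, 0.3]                      # cost-effectiveness weighted highest

order, Q = vikor_rank(sentiment, weights)
for rank, idx in enumerate(order, start=1):
    print(f"{rank}. {brands[idx]} (Q={Q[idx]:.3f})")
```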

    Test Case Prioritization Using Swarm Intelligence Algorithm to Improve Fault Detection and Time for Web Application

    Prioritizing test cases based on several parameters, where the important ones are executed first, is known as test case prioritization (TCP). Code coverage, functionality, and features are all possible factors of TCP for detecting bugs in software as early as possible. This research was carried out to test and compare the effectiveness of Swarm Intelligence algorithms, where the Artificial Bee Colony (ABC) and Ant Colony Optimization (ACO) algorithms were implemented to measure fault detection and execution time, as these are crucial aspects in software testing to ensure good-quality products are produced within the timeline. As web applications are commonly used by a broad population, this research was carried out on an Online Shopping application, represented as Case Study One, and an Education Administrative application, known as Case Study Two. In recent years, TCP has been implemented widely, but little work has applied it to web applications; this research was conducted to fill that gap and produce a new contribution in this area. The outcomes were compared using the Average Percentage of Faults Detected (APFD) and execution time. For Case Study One, the APFD values were 0.80 and 0.71, while the execution times were 8.64 seconds and 0.69 seconds, respectively, for ABC and ACO. For Case Study Two, the APFD values were 0.81 and 0.64, while the execution times were 8.83 seconds and 1.22 seconds for ABC and ACO. Both algorithms performed well in their respective ways: ABC gave a higher APFD value, while ACO converged faster in execution time.
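The APFD measure used to compare the two orderings has a simple closed form: APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where n is the number of test cases, m the number of faults, and TF_i the position of the first test that exposes fault i. The sketch below computes it for a hypothetical suite and fault matrix; it illustrates the standard metric only, not the ABC or ACO prioritization itself.

```python
def apfd(order, faults_detected, total_faults):
    """Average Percentage of Faults Detected for a prioritized test suite.

    order:           list of test case IDs in execution order.
    faults_detected: mapping test case ID -> set of fault IDs it reveals.
    total_faults:    number of distinct faults (m).
    """
    n = len(order)
    first_position = {}
    for position, test in enumerate(order, start=1):
        for fault in faults_detected.get(test, set()):
            # Remember only the first (earliest) test that exposes each fault.
            first_position.setdefault(fault, position)
    tf_sum = sum(first_position.values())
    return 1 - tf_sum / (n * total_faults) + 1 / (2 * n)

# Hypothetical suite of 5 test cases covering 4 seeded faults.
detects = {"t1": {1}, "t2": {1, 2}, "t3": set(), "t4": {3}, "t5": {4}}
print(apfd(["t2", "t4", "t5", "t1", "t3"], detects, total_faults=4))  # 0.75
print(apfd(["t3", "t1", "t2", "t4", "t5"], detects, total_faults=4))  # 0.40
```

The two calls show the point of the metric: the same suite scores higher when fault-revealing tests are pushed to the front of the ordering.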

    A test case generation framework based on UML statechart diagram

    Early software fault detection offers more flexibility to correct errors in the early development stages. Unfortunately, existing studies in this domain are not sufficiently comprehensive in describing the major processes of automated test case generation. Furthermore, the algorithms used for test case generation are not provided or well described. Current studies also hardly address loop and parallel-path issues, and achieve low coverage criteria. Therefore, this study proposes a test case generation framework that generates minimized and prioritized test cases from a UML statechart diagram with higher coverage criteria. This study conducted a review of previous research to identify the issues and gaps related to test case generation, model-based testing, and coverage criteria. The proposed framework was designed from the information gathered in the reviews and consists of eight components that represent a comprehensive test case generation process: relation table, relation graph, consistency checking, test path minimization, test path prioritization, path pruning, test path generation, and test case generation. In addition, a prototype implementing the framework was developed. The evaluation of the framework was conducted in three phases: prototyping, comparison with previous studies, and expert review. The results reveal that the most suitable coverage criteria for UML statechart diagrams are all-states coverage, all-transitions coverage, all-transition-pairs coverage, and all-loop-free-paths coverage. Furthermore, this study achieves higher coverage on all criteria, except all-states coverage, when compared with previous studies. The results of the experts’ review show that the framework is practical and easy to implement owing to its suitability for generating test cases, the proposed algorithms provide correct results, and the prototype is able to generate test cases effectively. Generally, the proposed system is well accepted by experts owing to its usefulness, usability, and accuracy. This study contributes to both theory and practice by providing an early alternative test case generation framework that achieves high coverage and can effectively generate test cases from UML statechart diagrams. This research adds new knowledge to the software testing field, especially for testing processes in model-based techniques, testing activity, and testing tool support.
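One of the framework's central steps, deriving test paths from the statechart's relation graph, can be pictured with a small graph search. The sketch below is not the study's algorithm: it only enumerates loop-free paths (a crude stand-in for all-loop-free-paths coverage) over a hypothetical ATM-style statechart, without the minimization, prioritization, or pruning components.

```python
from collections import defaultdict

def loop_free_paths(transitions, start, finals):
    """Enumerate loop-free test paths through a statechart's transition graph.

    transitions: iterable of (source_state, event, target_state) triples,
                 i.e. a flattened relation table for the statechart.
    start:       initial state; finals: set of final states.
    Returns every event sequence from start to a final state that visits
    no state twice.
    """
    graph = defaultdict(list)
    for src, event, dst in transitions:
        graph[src].append((event, dst))

    paths = []

    def dfs(state, visited, events):
        if state in finals and events:
            paths.append(list(events))
        for event, nxt in graph[state]:
            if nxt not in visited:        # skip transitions that would close a loop
                visited.add(nxt)
                events.append(event)
                dfs(nxt, visited, events)
                events.pop()
                visited.remove(nxt)

    dfs(start, {start}, [])
    return paths

# Hypothetical ATM-like statechart (states and events are made up).
transitions = [
    ("Idle", "insertCard", "CardInserted"),
    ("CardInserted", "enterValidPin", "Authenticated"),
    ("CardInserted", "enterInvalidPin", "Idle"),
    ("Authenticated", "withdrawCash", "Exit"),
    ("Authenticated", "cancel", "Exit"),
]
for path in loop_free_paths(transitions, "Idle", {"Exit"}):
    print(" -> ".join(path))
```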

    Understanding requirements dependency in requirements prioritization: a systematic literature review

    Requirements prioritization (RP) is a crucial task in managing requirements, as it determines the order of implementation and, thus, the delivery of a software system. Improper RP may cause software project failures due to budget and schedule overruns as well as a low-quality product. Several factors influence RP, one of which is requirements dependency. Inappropriate handling of requirements dependencies can lead to software development failures: if a requirement that serves as a prerequisite for other requirements is given low priority, it affects the overall project completion time. Despite its importance, little is known about requirements dependency in RP, particularly its impacts, types, and techniques. This study, therefore, aims to understand the phenomenon by analyzing the existing literature. It addresses three objectives, namely, to investigate the impacts of requirements dependency on RP, to identify different types of requirements dependency, and to discover the techniques used for requirements dependency problems in RP. To fulfill these objectives, this study adopts the Systematic Literature Review (SLR) method. Applying the SLR protocol, this study selected forty primary articles, comprising 58% journal papers, 32% conference proceedings, and 10% book sections. The results of the data synthesis indicate that requirements dependency has significant impacts on RP, and there are a number of requirements dependency types as well as techniques for addressing requirements dependency problems in RP. The review identified various techniques, including graphs for dependency visualization, machine learning for handling large-scale RP, decision-making methods for handling multiple criteria, and optimization techniques based on evolutionary algorithms. The study also reveals that the existing techniques have serious limitations in terms of scalability, time consumption, interdependencies of requirements, and the limited types of requirement dependencies covered.
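The prerequisite effect described above, where a low-priority requirement that others depend on delays the whole project, is commonly handled with the graph-based techniques the review identifies. The sketch below is a minimal illustration, not a method from any of the surveyed papers: it orders a hypothetical backlog with a topological sort so that every prerequisite precedes its dependents, breaking ties by stakeholder priority.

```python
from graphlib import TopologicalSorter

def dependency_aware_order(priorities, requires):
    """Order requirements so every prerequisite precedes its dependents.

    priorities: mapping requirement -> stakeholder priority (higher = more urgent).
    requires:   mapping requirement -> set of requirements it depends on.
    Ties between independently schedulable requirements are broken by priority,
    but a low-priority prerequisite is still pulled forward when needed.
    """
    sorter = TopologicalSorter(requires)
    sorter.prepare()
    ordered = []
    while sorter.is_active():
        # Among the requirements whose prerequisites are done, take the most urgent first.
        for req in sorted(sorter.get_ready(), key=lambda r: -priorities[r]):
            ordered.append(req)
            sorter.done(req)
    return ordered

# Hypothetical backlog: R3 is low priority but a prerequisite of high-priority R1.
priorities = {"R1": 9, "R2": 7, "R3": 2, "R4": 5}
requires = {"R1": {"R3"}, "R2": set(), "R3": set(), "R4": {"R2"}}
print(dependency_aware_order(priorities, requires))   # ['R2', 'R3', 'R1', 'R4']
```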

    How Much Are Machine Assistants Worth? Willingness to Pay for Machine Learning-Based Software Testing

    Machine Learning (ML) technologies have become the foundation of a plethora of products and services. While the economic potential of such ML-infused solutions has become irrefutable, there is still uncertainty about pricing. Software testing is currently one area poised to benefit from ML services that assist in the creation of test cases, a task that is both complex and demands human-like output. Yet, little is known about users' willingness to pay, inhibiting suppliers' incentive to develop suitable tools. To provide insights into desired features and willingness to pay for such ML-based tools, we perform a choice-based conjoint analysis with 119 participants in Germany. Our results show that a high level of accuracy is particularly important for users, followed by ease of use and integration into existing environments. Thus, we not only guide future developers on which attributes to prioritize but also show which characteristics of ML-based services are relevant for future research.
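In a choice-based conjoint study, the relative importance of an attribute is typically read off the estimated part-worth utilities: the wider the utility range an attribute spans, the more it drives choices. The sketch below shows that calculation with hypothetical part-worths (they are not the study's estimates); the attribute names only loosely mirror those discussed above.

```python
def attribute_importance(part_worths):
    """Relative attribute importance from conjoint part-worth utilities.

    part_worths: mapping attribute -> {level: utility} as estimated from a
    choice-based conjoint model. Importance of an attribute is the range of
    its level utilities divided by the sum of all attribute ranges.
    """
    ranges = {attr: max(levels.values()) - min(levels.values())
              for attr, levels in part_worths.items()}
    total = sum(ranges.values())
    return {attr: rng / total for attr, rng in ranges.items()}

# Hypothetical part-worths (NOT the study's estimates) for an ML test-case assistant.
part_worths = {
    "accuracy":    {"70%": -0.8, "85%": 0.1, "95%": 0.7},
    "ease_of_use": {"low": -0.4, "high": 0.4},
    "integration": {"standalone": -0.3, "IDE plug-in": 0.3},
    "price":       {"10 EUR/mo": 0.2, "50 EUR/mo": -0.2},
}
for attr, imp in sorted(attribute_importance(part_worths).items(),
                        key=lambda kv: -kv[1]):
    print(f"{attr}: {imp:.0%}")
```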