Overcoming Language Dichotomies: Toward Effective Program Comprehension for Mobile App Development
Mobile devices and platforms have become an established target for modern
software developers due to performant hardware and a large and growing user
base numbering in the billions. Despite their popularity, the software
development process for mobile apps comes with a set of unique, domain-specific
challenges rooted in program comprehension. Many of these challenges stem from
developer difficulties in reasoning about different representations of a
program, a phenomenon we define as a "language dichotomy". In this paper, we
reflect upon the various language dichotomies that contribute to open problems
in program comprehension and development for mobile apps. Furthermore, to help
guide the research community towards effective solutions for these problems, we
provide a roadmap of directions for future work. Comment: Invited Keynote Paper for the 26th IEEE/ACM International Conference on Program Comprehension (ICPC'18).
Comparative evaluation of genetic algorithm-based test case optimization
Software testing is a crucial phase in the software development process, although it consumes a large share of development time and cost. Researchers have proposed several approaches to help software testers reduce the execution time and cost of the testing process. Test case optimization is a multi-objective approach that has become one of the best solutions to these problems. It focuses on reducing the number of test cases in the test suite, which can reduce the overall testing time, cost and effort of software testers, especially in regression testing. This paper presents a comparative evaluation of test case optimization techniques that are based on a Genetic Algorithm (GA). The evaluation is based on the following criteria: technique objectives, applied fitness function, contributions, the percentage of reduced test cases, fault detection capability, and technique limitations. The evaluation results identify the gaps in existing GA-based test case optimization approaches and provide insight for determining potential research directions in this area. Keywords: Test case optimization, regression testing, multi-objectives, genetic algorithm, software testing
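As a rough illustration of the kind of GA-based test suite reduction these techniques share, the sketch below evolves a binary selection vector with a coverage-versus-size fitness. The coverage data, weights, and operators are illustrative assumptions, not taken from any of the surveyed approaches.

```python
# Minimal GA sketch for test-suite reduction (illustrative only).
# Each individual is a bit vector: bit i == 1 means test case i is selected.
# Fitness rewards requirement coverage and penalises suite size, a simple
# stand-in for the multi-objective fitness functions compared in the paper.
import random

# Hypothetical coverage data: test case index -> set of covered requirements.
COVERAGE = {0: {"r1", "r2"}, 1: {"r2", "r3"}, 2: {"r3"}, 3: {"r1", "r4"}}
ALL_REQS = set().union(*COVERAGE.values())
N_TESTS = len(COVERAGE)

def fitness(bits, size_weight=0.3):
    selected = [i for i, b in enumerate(bits) if b]
    covered = set().union(*(COVERAGE[i] for i in selected)) if selected else set()
    coverage_score = len(covered) / len(ALL_REQS)
    size_penalty = len(selected) / N_TESTS
    return coverage_score - size_weight * size_penalty

def evolve(pop_size=20, generations=50, mutation_rate=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_TESTS)] for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection of parents.
        parents = [max(random.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            point = random.randrange(1, N_TESTS)              # single-point crossover
            child = a[:point] + b[point:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = sorted(parents + children, key=fitness, reverse=True)[:pop_size]  # elitist survival
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("selected test cases:", [i for i, b in enumerate(best) if b])
```

The fitness function here is the main point of variation among the surveyed techniques; swapping in fault-detection history or execution cost in place of the size penalty yields the other objectives the paper compares.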
Leveraging the Power of Crowds: Automated Test Report Processing for the Maintenance of Mobile Applications
Crowdsourcing is an emerging distributed problem-solving model combining human and machine computation. It collects intelligence and knowledge from a large and diverse workforce to complete complex tasks. In the software engineering domain, crowdsourced techniques have been adopted to facilitate various tasks, such as design, testing, debugging, and development. Specifically, in crowdsourced testing, crowdsourced workers are given testing tasks to perform and submit their feedback in the form of test reports. One of the key advantages of crowdsourced testing is that it provides software engineers with domain knowledge and feedback from a large number of real users. Based on the diverse software and hardware settings of these users, engineers can catch bugs that are not caught by traditional quality assurance techniques. Such benefits are particularly valuable for mobile application testing, which requires rapid development-and-deployment iterations and must support diverse execution environments. However, crowdsourced testing naturally generates an overwhelming number of crowdsourced test reports, and inspecting such a large number of reports becomes a time-consuming yet inevitable task. This dissertation presents a series of techniques, tools and experiments to assist in crowdsourced report processing. These techniques improve this task in multiple aspects: 1. prioritizing crowdsourced reports to assist engineers in finding as many unique bugs as possible, as quickly as possible; 2. grouping crowdsourced reports to assist engineers in identifying the representative ones in a short time; 3. summarizing duplicate reports to provide engineers with a concise and accurate understanding of a group of reports. In the first step, I present a text-analysis-based technique to prioritize test reports for manual inspection. This technique leverages two key strategies: (1) a diversity strategy to help developers inspect a wide variety of test reports and to avoid duplicates and wasted effort on falsely classified faulty behavior, and (2) a risk-assessment strategy to help developers identify test reports that may be more likely to be fault-revealing based on past observations. Together, these two strategies form our technique to prioritize test reports in crowdsourced testing. Moreover, in the mobile testing domain, test reports often consist of more screenshots and shorter descriptive text, and thus text-analysis-based techniques may be ineffective or inapplicable. The shortage and ambiguity of natural-language text information and the well-defined screenshots of activity views within mobile applications motivate me to propose a novel technique based on image understanding for multi-objective test-report prioritization. This technique employs Spatial Pyramid Matching (SPM) to measure the similarity of screenshots, and applies natural-language processing techniques to measure the distance between the text of test reports. Next, I design and implement CTRAS: a novel approach that leverages duplicates to enrich the content of bug descriptions and improve the efficiency of inspecting these reports. CTRAS automatically aggregates duplicates based on both textual information and screenshots, and further summarizes the duplicate test reports into a comprehensive and comprehensible report. I validate all of these techniques on industrial data by collaborating with several companies.
The results show that my techniques can improve both the efficiency and effectiveness of crowdsourced test report processing. I also suggest settings for different usage scenarios and discuss future research directions.
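The sketch below illustrates the general idea of diversity-driven report prioritization described above; it is not the dissertation's implementation. Jaccard word overlap stands in for the NLP text distance, histogram intersection stands in for Spatial Pyramid Matching, and all report data is invented.

```python
# Illustrative sketch of diversity-based crowdsourced test-report prioritization.
# Text distance uses a simple Jaccard measure as a stand-in for the NLP distance;
# screenshot distance uses histogram intersection as a crude stand-in for SPM.

def text_distance(a_words, b_words):
    a, b = set(a_words), set(b_words)
    return 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0

def screenshot_distance(hist_a, hist_b):
    # Histogram intersection similarity turned into a distance in [0, 1].
    inter = sum(min(x, y) for x, y in zip(hist_a, hist_b))
    total = min(sum(hist_a), sum(hist_b)) or 1
    return 1.0 - inter / total

def report_distance(r1, r2, text_weight=0.5):
    return (text_weight * text_distance(r1["words"], r2["words"])
            + (1 - text_weight) * screenshot_distance(r1["hist"], r2["hist"]))

def prioritize(reports):
    """Greedy farthest-first ordering: always pick the report most different
    from everything already inspected, so unique bugs surface early."""
    order = [0]                          # start from the first report (arbitrary seed)
    remaining = set(range(1, len(reports)))
    while remaining:
        nxt = max(remaining, key=lambda i: min(
            report_distance(reports[i], reports[j]) for j in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order

reports = [
    {"words": "app crashes on login screen".split(), "hist": [3, 1, 0, 2]},
    {"words": "login crash after tapping submit".split(), "hist": [3, 1, 1, 2]},
    {"words": "images not loading in gallery".split(), "hist": [0, 4, 2, 1]},
]
print(prioritize(reports))  # inspection order for the engineer
```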
Artificial intelligence applied to software testing: a tertiary study
Context: Artificial intelligence (AI) methods and models have been applied extensively to support different phases of the software development lifecycle, including software testing (ST). Several secondary studies investigated the interplay between AI and ST but restricted the scope of the research to specific domains or sub-domains within either area.
Objective: This research aims to explore the overall contribution of AI to ST, while identifying the most popular applications and potential paths for future research directions.
Method: We executed a tertiary study following well-established guidelines for conducting systematic literature mappings in software engineering, answering nine research questions.
Results: We identified and analyzed 20 relevant secondary studies. The analysis was performed by drawing from well-recognized AI and ST taxonomies and mapping the selected studies according to them. The resulting mapping and discussions provide extensive and detailed information on the interplay between AI and ST.
Conclusion: The application of AI to support ST is a well-consolidated research topic of growing interest. The mapping resulting from our study can be used by researchers to identify opportunities for future research, and by practitioners looking for evidence-based information on which AI-supported technology to possibly adopt in their testing processes.
What attracts vehicle consumers’ buying: A Saaty scale-based VIKOR (SSC-VIKOR) approach from after-sales textual perspective?
Purpose:
The booming development of e-commerce has stimulated vehicle consumers to express individual reviews through online forums. The purpose of this paper is to probe into vehicle consumers' consumption behavior and make recommendations for potential consumers from a textual-comments viewpoint.
Design/methodology/approach:
A big data analytics-based approach is designed to discover vehicle consumer consumption behavior from an online perspective. To reduce the subjectivity of expert-based approaches, a parallel Naïve Bayes approach is designed to perform sentiment analysis, and the Saaty scale-based (SSC) scoring rule is employed to obtain the specific sentiment value of each attribute class, contributing to multi-grade sentiment classification. To achieve intelligent recommendation for potential vehicle customers, a novel SSC-VIKOR approach is developed to prioritize vehicle brand candidates from a big data analytical viewpoint.
Findings:
The big data analytics indicate that the “cost-effectiveness” characteristic is the most important factor that vehicle consumers care about, and the data mining results enable automakers to better understand consumer consumption behavior.
Research limitations/implications:
The case study illustrates the effectiveness of the integrated method, contributing to more precise operations management in marketing strategy, quality improvement and intelligent recommendation.
Originality/value:
Research on consumer consumption behavior is usually based on survey methods, and most previous studies of comment analysis focus on binary analysis. The hybrid SSC-VIKOR approach is developed to fill this gap from the big data perspective.
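For readers unfamiliar with VIKOR, the sketch below applies the standard VIKOR compromise ranking to hypothetical brand-level sentiment scores. The scores, weights, and the v parameter are illustrative and do not reproduce the paper's SSC-VIKOR pipeline.

```python
# Minimal VIKOR ranking sketch (illustrative, not the paper's SSC-VIKOR pipeline).
# Rows are hypothetical vehicle brands, columns are attribute classes whose
# values would come from the Saaty scale-based sentiment scoring step.
scores = {                      # higher is better on every criterion here
    "brand_A": [0.82, 0.61, 0.74],
    "brand_B": [0.70, 0.80, 0.66],
    "brand_C": [0.65, 0.72, 0.81],
}
weights = [0.5, 0.3, 0.2]       # e.g. cost-effectiveness weighted highest
v = 0.5                         # compromise between group utility and regret

n_criteria = len(weights)
best = [max(row[j] for row in scores.values()) for j in range(n_criteria)]
worst = [min(row[j] for row in scores.values()) for j in range(n_criteria)]

S, R = {}, {}
for name, row in scores.items():
    terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
             for j in range(n_criteria)]
    S[name] = sum(terms)        # group utility (lower is better)
    R[name] = max(terms)        # individual regret (lower is better)

s_min, s_max = min(S.values()), max(S.values())
r_min, r_max = min(R.values()), max(R.values())
Q = {name: v * (S[name] - s_min) / (s_max - s_min)
           + (1 - v) * (R[name] - r_min) / (r_max - r_min)
     for name in scores}

for name in sorted(Q, key=Q.get):   # smallest Q is the best compromise solution
    print(f"{name}: Q = {Q[name]:.3f}")
```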
Objectives, criteria and methods for using molecular genetic data in priority setting for conservation of animal genetic resources
The genetic diversity of the world's livestock populations is decreasing, both within and
across breeds. A wide variety of factors has contributed to the loss, replacement or genetic
dilution of many local breeds. Genetic variability within the more common commercial
breeds has been greatly decreased by intense selective breeding programmes. Conservation
of livestock genetic variability is thus important, especially when considering possible future
changes in production environments. The world has more than 7500 livestock breeds and
conservation of all of them is not feasible. Therefore, prioritization is needed. The objective of
this article is to review the state of the art in approaches for prioritization of breeds for
conservation, particularly those approaches that consider molecular genetic information,
and to identify any shortcomings that may restrict their application. The Weitzman method
was among the first and most well-known approaches for utilization of molecular genetic
information in conservation prioritization. This approach balances diversity and extinction
probability to yield an objective measure of conservation potential. However, this approach
was designed for decision making across species and measures diversity as distinctiveness.
For livestock, prioritization will most commonly be performed among breeds within species,
so alternatives that measure diversity as co-ancestry (i.e. also within-breed variability) have
been proposed. Although these methods are technically sound, their application has generally
been limited to research studies; most existing conservation programmes have
in effect based decisions primarily on extinction risk. The development of user-friendly
software incorporating these approaches may increase their rate of utilization.
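As a rough illustration of the Weitzman approach mentioned above, the sketch below computes the recursive Weitzman diversity of a small set of breeds from a made-up pairwise distance matrix, plus the marginal diversity lost when each breed is dropped. Breed names and distances are hypothetical, and the extinction-probability weighting used in full prioritization is omitted.

```python
# Illustrative sketch of Weitzman's recursive diversity measure on a small,
# made-up pairwise genetic distance matrix. V of a singleton is taken as 0;
# d(j, S) is the distance from breed j to its closest relative in S.
from functools import lru_cache

BREEDS = ["A", "B", "C", "D"]
DIST = {                         # symmetric pairwise distances (hypothetical)
    ("A", "B"): 0.10, ("A", "C"): 0.25, ("A", "D"): 0.30,
    ("B", "C"): 0.22, ("B", "D"): 0.28, ("C", "D"): 0.15,
}

def d(x, y):
    return 0.0 if x == y else DIST.get((x, y), DIST.get((y, x)))

@lru_cache(maxsize=None)
def weitzman(subset):
    """V(S) = max over j in S of [ d(j, S minus j) + V(S minus j) ]."""
    s = set(subset)
    if len(s) <= 1:
        return 0.0
    best = 0.0
    for j in s:
        rest = s - {j}
        link = min(d(j, k) for k in rest)      # distance to closest relative
        best = max(best, link + weitzman(frozenset(rest)))
    return best

print("total diversity:", weitzman(frozenset(BREEDS)))
# Marginal loss from dropping one breed: the kind of quantity that, combined
# with extinction probabilities, drives conservation priorities.
for b in BREEDS:
    loss = weitzman(frozenset(BREEDS)) - weitzman(frozenset(set(BREEDS) - {b}))
    print(f"marginal diversity of {b}: {loss:.3f}")
```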
Viewfinder: final activity report
The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources.
The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces and semi-autonomous robot navigation.
The VIEW-FINDER system is semi-autonomous; the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the entire system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground and of the robots and human rescue workers therein.
Mapping the Structure and Evolution of Software Testing Research Over the Past Three Decades
Background: The field of software testing is growing and rapidly evolving.
Aims: Based on keywords assigned to publications, we seek to identify
predominant research topics and understand how they are connected and have
evolved.
Method: We apply co-word analysis to map the topology of testing research as
a network where author-assigned keywords are connected by edges indicating
co-occurrence in publications. Keywords are clustered based on edge density and
frequency of connection. We examine the most popular keywords, summarize
clusters into high-level research topics, examine how topics connect, and
examine how the field is changing.
Results: Testing research can be divided into 16 high-level topics and 18
subtopics. Creation guidance, automated test generation, evolution and
maintenance, and test oracles have particularly strong connections to other
topics, highlighting their multidisciplinary nature. Emerging keywords relate
to web and mobile apps, machine learning, energy consumption, automated program
repair and test generation, while emerging connections have formed between web
apps, test oracles, and machine learning with many topics. Random and
requirements-based testing show potential decline.
Conclusions: Our observations, advice, and map data offer a deeper
understanding of the field and inspiration regarding challenges and connections
to explore. Comment: To appear, Journal of Systems and Software.
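A minimal sketch of the co-word analysis idea, assuming made-up keyword lists and using an off-the-shelf modularity-based community detection rather than the paper's exact clustering procedure:

```python
# Illustrative co-word analysis sketch: build a keyword co-occurrence network
# from author-assigned keywords and cluster it into candidate research topics.
# The publication data below is invented.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

publications = [
    ["test generation", "search-based testing", "genetic algorithm"],
    ["test oracle", "machine learning", "web applications"],
    ["regression testing", "test prioritization", "genetic algorithm"],
    ["test generation", "machine learning", "mobile apps"],
]

G = nx.Graph()
for keywords in publications:
    for a, b in combinations(sorted(set(keywords)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1          # co-occurrence frequency
        else:
            G.add_edge(a, b, weight=1)

# Cluster keywords by edge density into high-level topics.
for i, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"topic {i}: {sorted(community)}")
```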
Development of a context-specific search engine, an executive information system, and a novel www ready external cost model
NJPIES is associated with Information Ecology and Sustainability, a holistic approach to environmental data collection, compilation, integration and provision that puts people, not technology, at the center of the environmental information world.
The first main goal of this project was to develop an algorithm and associated computer-based tool that could perform a lifecycle cost analysis for a model system. The application developed solved the primary problem associated with the lifecycle cost analysis of a product: it accounted for all costs (e.g., environmental costs such as ecological costs and health costs associated with emissions) of the activity. A lifecycle cost analysis attempts to identify, measure, and quantify the social costs of human activities such as manufacturing that are not considered with traditional accounting systems. The application developed will quantify, monetize, and rank the damage or external costs to the environment of certain types of emissions. We developed a preliminary algorithm and software and implemented it at two plants: load assembly pack operation at Iowa Army Ammunition Plant (IAAAP) and Armtec, a manufacturer of combustible cartridge cases.
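A toy sketch of the monetize-and-rank step described above, with hypothetical emission quantities and unit damage costs (none of these figures come from the IAAAP or Armtec implementations):

```python
# Toy external-cost calculation: monetize emissions and rank them by damage.
emissions_kg = {"NOx": 1200.0, "SO2": 800.0, "PM10": 150.0, "VOC": 400.0}   # hypothetical
unit_damage_cost = {"NOx": 4.2, "SO2": 5.1, "PM10": 12.7, "VOC": 2.3}        # $/kg, hypothetical

external_cost = {p: emissions_kg[p] * unit_damage_cost[p] for p in emissions_kg}
for pollutant, cost in sorted(external_cost.items(), key=lambda kv: -kv[1]):
    print(f"{pollutant}: ${cost:,.0f} external cost")
print(f"total external cost: ${sum(external_cost.values()):,.0f}")
```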
The second main goal of this project is to act as a credible information clearinghouse in pollution prevention (P2) and related environmental matters, and to educate the public and keep them aware of developments taking place in the environmental/manufacturing world. Intelligent search engines have been built to access these huge databases in human-readable format and correlate the data to various reports providing information on environmentally hazardous chemicals, releases, and facilities in different regions.
The third main goal is the enhancement of EnviroDaemon with a hierarchical information search interface. This project describes some approaches that locate information according to syntactic criteria, augmented by pragmatic aspects such as the utilization of information in a certain context. The main emphasis of this project lies in the treatment of structured knowledge, where essential aspects of the topic of interest are encoded not only by the individual items, but also by their relationships to each other. Benefits of this approach are enhanced precision and approximate search in an already focused, context-specific search engine for the environment.