SourcererCC: Scaling Code Clone Detection to Big Code
Despite a decade of active research, there is a marked lack of clone
detectors that scale to very large repositories of source code, in particular
for detecting near-miss clones where significant editing activities may take
place in the cloned code. We present SourcererCC, a token-based clone detector
that targets three clone types, and exploits an index to achieve scalability to
large inter-project repositories using a standard workstation. SourcererCC uses
an optimized inverted-index to quickly query the potential clones of a given
code block. Filtering heuristics based on token ordering are used to
significantly reduce the size of the index, the number of code-block
comparisons needed to detect the clones, and the number of token comparisons
needed to judge a potential clone.
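To make the indexing and filtering idea concrete, here is a minimal sketch of inverted-index-based clone candidate retrieval with a prefix-style filter over a global token ordering; the threshold, the filter formula, and all names are illustrative assumptions and do not reproduce SourcererCC's actual implementation.

```python
from collections import Counter, defaultdict

# Illustrative sketch of token-based clone candidate retrieval with an
# inverted index and a prefix filter. The threshold and the filter formula
# are simplified assumptions, not SourcererCC's actual implementation.

THETA = 0.8  # required overlap ratio between two code blocks' token bags

def prefix(tokens):
    """Tokens to index/query: under a fixed global ordering, two blocks with
    disjoint prefixes cannot reach the overlap threshold."""
    ordered = sorted(set(tokens))  # fixed global token order (alphabetical here)
    keep = len(ordered) - int(THETA * len(ordered)) + 1
    return ordered[:keep]

def build_index(blocks):
    """Inverted index: token -> ids of code blocks whose prefix contains it."""
    index = defaultdict(set)
    for block_id, tokens in blocks.items():
        for tok in prefix(tokens):
            index[tok].add(block_id)
    return index

def candidates(query_tokens, index):
    """Blocks sharing at least one prefix token with the query block."""
    found = set()
    for tok in prefix(query_tokens):
        found |= index.get(tok, set())
    return found

def is_clone(a_tokens, b_tokens):
    """Verification: compare token bags against the overlap threshold."""
    a, b = Counter(a_tokens), Counter(b_tokens)
    overlap = sum((a & b).values())
    return overlap >= THETA * max(len(a_tokens), len(b_tokens))
```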
We evaluate the scalability, execution time, recall and precision of
SourcererCC, and compare it to four publicly available and state-of-the-art
tools. To measure recall, we use two recent benchmarks, (1) a large benchmark
of real clones, BigCloneBench, and (2) a Mutation/Injection-based framework of
thousands of fine-grained artificial clones. We find SourcererCC has both high
recall and precision, and is able to scale to a large inter-project repository
(250 MLOC) using a standard workstation. Comment: Accepted for publication at ICSE'16 (preprint, unrevised)
Statically Checking Web API Requests in JavaScript
Many JavaScript applications perform HTTP requests to web APIs, relying on
the request URL, HTTP method, and request data to be constructed correctly by
string operations. Traditional compile-time error checking, such as detecting a
call to a non-existent method in Java, is not available for checking whether such
requests comply with the requirements of a web API. In this paper, we propose
an approach to statically check web API requests in JavaScript. Our approach
first extracts a request's URL string, HTTP method, and the corresponding
request data using an inter-procedural string analysis, and then checks whether
the request conforms to given web API specifications. We evaluated our approach
by checking whether web API requests in JavaScript files mined from GitHub are
consistent or inconsistent with publicly available API specifications. From the
6575 requests in scope, our approach determined whether the request's URL and
HTTP method were consistent or inconsistent with web API specifications with a
precision of 96.0%. Our approach also correctly determined whether extracted
request data was consistent or inconsistent with the data requirements with a
precision of 87.9% for payload data and 99.9% for query data. In a systematic
analysis of the inconsistent cases, we found that many of them were due to
errors in the client code. The proposed checker can be integrated with
code editors or with continuous integration tools to warn programmers about
code containing potentially erroneous requests. Comment: International Conference on Software Engineering, 201
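As an illustration of the checking step, the following sketch assumes the request's URL path, HTTP method, and query parameters have already been extracted by the string analysis, and compares them against a simplified specification table; the spec structure, endpoint names, and checks are illustrative assumptions, not the paper's actual specification format.

```python
import re

# Simplified stand-in for a web API specification: path templates mapped to
# allowed HTTP methods and their required query parameters. The structure
# and the example endpoints are illustrative assumptions.
SPEC = {
    "/users/{id}": {"GET": set(), "DELETE": set()},
    "/search": {"GET": {"q"}},
}

def template_to_regex(template):
    """Turn a path template such as /users/{id} into a matching regex."""
    return re.compile("^" + re.sub(r"\{[^/]+\}", r"[^/]+", template) + "$")

def check_request(url_path, method, query_params):
    """Compare an extracted request against the specification and report problems."""
    for template, methods in SPEC.items():
        if template_to_regex(template).match(url_path):
            if method not in methods:
                return [f"method {method} not allowed for {template}"]
            missing = methods[method] - set(query_params)
            return [f"missing required query parameter '{p}'" for p in sorted(missing)]
    return [f"no endpoint in the specification matches {url_path}"]

# Example: a request whose parts were extracted by the string analysis.
print(check_request("/search", "GET", {"query": "clone detection"}))
# -> ["missing required query parameter 'q'"]
```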
An introduction to Graph Data Management
A graph database is a database where the data structures for the schema
and/or instances are modeled as a (labeled)(directed) graph or generalizations
of it, and where querying is expressed by graph-oriented operations and type
constructors. In this article, we present the basic notions of graph databases,
give a historical overview of their main developments, and study the main current
systems that implement them.
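As a concrete illustration of the data model, the following is a minimal sketch of a labeled, directed graph with a simple graph-oriented query (follow edges with a given label); the class and the example instance are illustrative and not tied to any particular graph database system.

```python
from collections import defaultdict

# Minimal labeled, directed graph: nodes carry a label and edges are
# (source, edge label, target) triples. Purely illustrative of the data
# model; not modeled on any particular graph database system.

class LabeledDigraph:
    def __init__(self):
        self.node_labels = {}               # node id -> node label
        self.out_edges = defaultdict(list)  # node id -> [(edge label, target)]

    def add_node(self, node, label):
        self.node_labels[node] = label

    def add_edge(self, src, label, dst):
        self.out_edges[src].append((label, dst))

    def neighbors(self, node, edge_label):
        """A basic graph-oriented query: nodes reachable via one labeled edge."""
        return [dst for lab, dst in self.out_edges[node] if lab == edge_label]

# Example instance: a person node connected to a paper node it authored.
g = LabeledDigraph()
g.add_node("alice", "Person")
g.add_node("p1", "Paper")
g.add_edge("alice", "authored", "p1")
print(g.neighbors("alice", "authored"))  # ['p1']
```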
An approach for testing the extract-transform-load process in data warehouse systems
Enterprises use data warehouses to accumulate data from multiple sources for data analysis and research. Since organizational decisions are often made based on the data stored in a data warehouse, all its components must be rigorously tested. In this thesis, we first present a comprehensive survey of data warehouse testing approaches, and then develop and evaluate an automated testing approach for validating the Extract-Transform-Load (ETL) process, which is a common activity in data warehousing.

In the survey we present a classification framework that categorizes the testing and evaluation activities applied to the different components of data warehouses. These approaches include dynamic analysis as well as static evaluation and manual inspections. The classification framework uses information related to what is tested in terms of the data warehouse component that is validated, and how it is tested in terms of various types of testing and evaluation approaches. We discuss the specific challenges and open problems for each component and propose research directions.

The ETL process involves extracting data from source databases, transforming it into a form suitable for research and analysis, and loading it into a data warehouse. ETL processes can use complex one-to-one, many-to-one, and many-to-many transformations involving sources and targets that use different schemas, databases, and technologies. Since faulty implementations in any of the ETL steps can result in incorrect information in the target data warehouse, ETL processes must be thoroughly validated. In this thesis, we propose automated balancing tests that check for discrepancies between the data in the source databases and that in the target warehouse. Balancing tests ensure that the data obtained from the source databases is not lost or incorrectly modified by the ETL process.

First, we categorize and define a set of properties to be checked in balancing tests. We identify various types of discrepancies that may exist between the source and the target data, and formalize three categories of properties, namely, completeness, consistency, and syntactic validity, that must be checked during testing. Next, we automatically identify source-to-target mappings from ETL transformation rules provided in the specifications. We identify one-to-one, many-to-one, and many-to-many mappings for tables, records, and attributes involved in the ETL transformations. We then automatically generate test assertions to verify the properties for balancing tests, using the source-to-target mappings to generate assertions corresponding to each property. The assertions compare the data in the target data warehouse with the corresponding data in the sources to verify the properties.

We evaluate our approach on a health data warehouse that uses data sources with different data models running on different platforms. We demonstrate that our approach can find previously undetected real faults in the ETL implementation. We also provide an automatic mutation testing approach to evaluate the fault-finding ability of our balancing tests. Using mutation analysis, we demonstrate that our auto-generated assertions can detect faults in the data inside the target data warehouse when faulty ETL scripts execute on mock source data.
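To illustrate what a balancing-test assertion can look like, the following sketch checks completeness (record counts) and consistency (an aggregated attribute) between a source database and the target warehouse over an assumed one-to-one table mapping; the DB-API connections, table and column names, and SQL are illustrative assumptions, not the assertions generated by the approach in the thesis.

```python
# Sketch of two balancing-test assertions for an assumed one-to-one
# source-to-target table mapping. The connections, table and column names,
# and SQL below are illustrative assumptions only.

def fetch_scalar(conn, sql):
    """Run a query through a DB-API connection and return its single value."""
    cur = conn.cursor()
    cur.execute(sql)
    return cur.fetchone()[0]

def assert_completeness(src_conn, tgt_conn):
    """Completeness: no source records are lost by the ETL process."""
    src_count = fetch_scalar(src_conn, "SELECT COUNT(*) FROM patients")
    tgt_count = fetch_scalar(tgt_conn, "SELECT COUNT(*) FROM dim_patient")
    assert src_count == tgt_count, f"record counts differ: {src_count} vs {tgt_count}"

def assert_consistency(src_conn, tgt_conn):
    """Consistency: an attribute aggregated in the target matches its source."""
    src_sum = fetch_scalar(src_conn, "SELECT COALESCE(SUM(charge), 0) FROM visits")
    tgt_sum = fetch_scalar(tgt_conn, "SELECT COALESCE(SUM(charge), 0) FROM fact_visit")
    assert src_sum == tgt_sum, f"charge totals differ: {src_sum} vs {tgt_sum}"
```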
Enhancing recall and precision of web search using genetic algorithm
Due to the rapid growth of the number of Web pages, web users encounter two main problems: many of the retrieved documents are not related to the user query (low precision), and many relevant documents are not retrieved (low recall). Information Retrieval (IR) is an essential and useful technique for Web search; thus, different approaches and techniques have been developed. Because of its parallel mechanism in high-dimensional spaces, the Genetic Algorithm (GA) has been adopted to solve many optimization problems, IR among them. This thesis proposes a search model based on GA to retrieve HTML documents, called IR Using GA, or IRUGA. It is composed of two main units. The first unit is the document indexing unit, which indexes the HTML documents. The second unit is the GA mechanism, which applies selection, crossover, and mutation operators to produce the final result, while a specially designed fitness function is applied to evaluate the documents. The performance of IRUGA is investigated using the speed of convergence of the retrieval process, precision at rank N, recall at rank N, and precision at recall N. In addition, the proposed fitness function is compared experimentally with the Okapi-BM25 function and the Bayesian inference network model function. Moreover, IRUGA is compared with traditional IR using the same fitness function to examine the performance in terms of the time required by each technique to retrieve the documents. The new techniques developed for document representation, the GA operators, and the fitness function achieve an improvement of over 90% in the recall and precision measures, and the relevance of the retrieved documents is much higher than that of documents retrieved by the other models. Moreover, an extensive comparison of techniques applied to GA operators is performed, highlighting the strengths and weaknesses of each existing technique. Overall, IRUGA is a promising technique in the Web search domain that provides high-quality search results in terms of recall and precision.
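As a rough illustration of the GA mechanism described above (selection, crossover, mutation, and a fitness function over documents), here is a toy sketch; the bit-vector encoding, the Jaccard-based fitness, and all parameters are illustrative assumptions and do not reproduce IRUGA's actual design.

```python
import random

# Toy GA for ranking documents against a query: an individual is a bit
# vector selecting documents, and fitness rewards selected documents that
# overlap with the query terms. The encoding, fitness function, and all
# parameters are illustrative assumptions, not IRUGA's actual design.

DOCS = ["genetic algorithm web search", "database index structures",
        "web search precision and recall"]
QUERY = {"web", "search", "recall"}
POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 0.1

def fitness(individual):
    score = 0.0
    for selected, doc in zip(individual, DOCS):
        if selected:
            terms = set(doc.split())
            score += len(terms & QUERY) / len(terms | QUERY)  # Jaccard overlap
    return score

def evolve():
    population = [[random.randint(0, 1) for _ in DOCS] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < POP_SIZE:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(DOCS))       # one-point crossover
            child = [bit ^ (random.random() < MUTATION_RATE)  # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # best document-selection bit vector found
```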
A Framework for Genetic Algorithms Based on Hadoop
Genetic Algorithms (GAs) are powerful metaheuristic techniques used in many real-world applications. The sequential execution of GAs requires considerable computational power, in both time and resources. Nevertheless, GAs are naturally parallel, and accessing a parallel platform such as the Cloud is easy and cheap. Apache Hadoop is one of the common services that can be used for parallel applications. However, using Hadoop to develop a parallel version of GAs is not simple without dealing with its inner workings. Even though some sequential frameworks for GAs already exist, there is no framework supporting the development of GA applications that can be executed in parallel. This paper describes a framework for parallel GAs on the Hadoop platform, following the MapReduce paradigm. The main purpose of this framework is to allow the user to focus on the aspects of the GA that are specific to the problem to be addressed, while being sure that this task is executed correctly on the Cloud with good performance. The framework has also been used to develop an application for the Feature Subset Selection problem. A preliminary analysis of the performance of the developed GA application has been carried out on three datasets and shows very promising results.
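To illustrate the MapReduce decomposition commonly used for parallel GAs (mappers evaluate fitness independently, reducers perform selection and breeding per partition), here is a plain-Python sketch that simulates one round; it does not use the Hadoop API, and the OneMax fitness, island partitioning, and operators are illustrative assumptions rather than the framework's actual interface.

```python
import random

# Plain-Python sketch of the MapReduce decomposition often used for
# parallel GAs: map evaluates each individual's fitness independently, and
# each reduce call breeds the next generation within one partition
# ("island"). Hadoop itself is not used here; the OneMax fitness,
# round-robin partitioning, and operators are illustrative assumptions.

NUM_ISLANDS = 4

def fitness(individual):
    return sum(individual)  # toy OneMax fitness: count of 1-bits

def map_phase(index, individual):
    """Map: key each individual by island and emit its fitness."""
    return index % NUM_ISLANDS, (fitness(individual), individual)

def reduce_phase(scored):
    """Reduce: selection, one-point crossover, and mutation within one island."""
    scored.sort(reverse=True)
    parents = [ind for _, ind in scored[: max(2, len(scored) // 2)]]
    children = []
    for _ in range(len(scored)):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]
        if random.random() < 0.05:                    # occasional bit-flip mutation
            pos = random.randrange(len(child))
            child[pos] ^= 1
        children.append(child)
    return children

# One simulated MapReduce round over a random bit-string population.
population = [[random.randint(0, 1) for _ in range(16)] for _ in range(40)]
islands = {}
for idx, ind in enumerate(population):
    key, value = map_phase(idx, ind)
    islands.setdefault(key, []).append(value)
next_generation = [child for values in islands.values() for child in reduce_phase(values)]
```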