22 research outputs found

    Generic code clone detection model for Java applications

    Code clone is a common term for code that is repeated multiple times in a program. Code clones are classified into Type-1, Type-2, Type-3 and Type-4. Various approaches and models have been used to detect code clones; however, a major challenge faced by these models is a lack of generality in detecting all clone types. To address this problem, a Generic Code Clone Detection (GCCD) model is proposed, consisting of five processes: Pre-processing, Transformation, Parameterization, Categorization and Match Detection. Initially, the pre-processing process produces source units through the application of five combinatorial rules. This is followed by the transformation process, which produces transformed source units based on a letter-to-number substitution concept. Next, the parameterization process produces the parameters used in the categorization and match detection processes, and the categorization process groups the source units into pools. Finally, the match detection process uses hybrid exact matching with Euclidean distance to detect the clones. Based on these processes, a prototype of the GCCD was developed using NetBeans 8.0. The model was compared with the Generic Pipeline Model (GPM): the comparison showed that the GCCD was able to detect clone pairs of Type-1 through Type-4, while the GPM was able to detect clone pairs of Type-1 only. Furthermore, the GCCD prototype was empirically tested with Bellon's benchmark data and was able to detect clones in Java applications with up to 203,000 lines of code. In conclusion, the GCCD model overcomes the lack of generality by detecting Type-1, Type-2, Type-3 and Type-4 clones.
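
    The abstract names the five processes but not their internals; the following is a minimal sketch of the two most distinctive steps, the letter-to-number transformation and the hybrid exact matching with Euclidean distance. The substitution table, the digit-frequency vectors and the threshold are illustrative assumptions, not the published implementation.

    ```python
    import math

    def transform(source_unit: str) -> str:
        # Illustrative letter-to-number substitution (a=1, b=2, ...);
        # the model's exact substitution table is not given in the abstract.
        return "".join(str(ord(c) - ord("a") + 1) if c.isalpha() else c
                       for c in source_unit.lower())

    def to_vector(transformed: str) -> list[int]:
        # Assumed parameterization: digit-frequency vector of a source unit.
        return [transformed.count(str(d)) for d in range(10)]

    def is_clone(unit_a: str, unit_b: str, threshold: float = 3.0) -> bool:
        # Hybrid match detection: exact matching catches units that are
        # identical after transformation (Type-1/Type-2), while Euclidean
        # distance over the vectors approximates near-miss (Type-3/Type-4) clones.
        ta, tb = transform(unit_a), transform(unit_b)
        if ta == tb:
            return True
        dist = math.sqrt(sum((x - y) ** 2
                             for x, y in zip(to_vector(ta), to_vector(tb))))
        return dist <= threshold

    print(is_clone("int sum = a + b;", "int total = x + y;"))
    ```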

    Enhancing a hybrid pre-processing and transformation process for code clone detection in .Net application

    Pre-processing and transformation are the first two common processes in code clone detection. Their purpose is to transform the source code into a more representable form that can later be used as input for clone detection. The main issue that arises in both processes is that applying the pre-processing and transformation rules might cause a loss of critical information, thus affecting the code clone detection results. Therefore, this work proposes a combined pre-processing and transformation process that can produce a better source unit representation of .Net platform source code, namely C#.Net and VB.Net, by enhancing an existing work done for the Java language without affecting the critical information in the source code. The proposed enhancement was tested, and the results showed that it was able to produce the expected source units for both .Net platform languages.
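
    The abstract does not list the combined rules themselves; the sketch below only illustrates the general shape of such a pass, with the specific rules (comment removal and whitespace normalization for C#.Net and VB.Net) chosen as plausible assumptions rather than the paper's actual rule set.

    ```python
    import re

    def preprocess_and_transform(source: str) -> str:
        # Assumed rules, for illustration: strip block and line comments
        # (// for C#, ' for VB.Net) and blank lines, then normalize
        # whitespace, leaving statements, and hence the critical
        # information, untouched.
        source = re.sub(r"/\*.*?\*/", "", source, flags=re.S)  # C# block comments
        source = re.sub(r"//.*|'.*", "", source)  # line comments (naive: ignores string/char literals)
        lines = [" ".join(line.split()) for line in source.splitlines()]
        return "\n".join(line for line in lines if line)

    print(preprocess_and_transform("Dim x As Integer ' counter\nx = x + 1"))
    ```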

    A Comprehensive Framework for Fire Evacuation Modelling

    Strategic planning for the evacuation of occupants from buildings becomes crucial during disaster response operations, especially in the context of fire emergencies that pose a direct threat to human lives. This study addresses the specific challenges associated with fire evacuation in smart buildings within the Internet of Things environment. Despite their enhanced connectivity and accessibility, these buildings are still vulnerable to crises; therefore, efficient and swift occupant evacuation planning is necessary. Developing a successful evacuation plan for these situations calls for in-depth knowledge of smart building characteristics, evacuation factors, and skilful modelling. To overcome this problem, the research proposes a metamodel approach that serves as a modelling grammar and syntax for systematic design. The metamodel is constructed based on common evacuation model terminology and fire emergency variables. Through the use of a graphical editor and model transformation, the metamodel undergoes validation using a model-checking technique. In abstract modelling, this validation technique provides crucial insights into the accuracy and completeness of the metamodel, enhancing the resilience of smart building evacuation systems.
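
    As a rough illustration of what instantiating such a metamodel might look like, the sketch below renders a few plausible evacuation concepts as Python classes; the element names (Room, Exit, FireEvent) and the checked property are hypothetical, since the paper defines its own vocabulary.

    ```python
    from dataclasses import dataclass, field

    # Hypothetical metamodel elements, for illustration only; the paper's
    # metamodel defines its own terminology for these concepts.

    @dataclass
    class Exit:
        name: str
        capacity_per_min: int          # throughput constraint of the exit

    @dataclass
    class Room:
        name: str
        occupants: int
        exits: list[Exit] = field(default_factory=list)

    @dataclass
    class FireEvent:
        origin_room: str
        detected_by: str               # e.g. an IoT smoke-sensor identifier

    def usable_exits(room: Room, fire: FireEvent) -> list[Exit]:
        # A toy model-checking-style property: a valid evacuation plan must
        # never route occupants through exits in the fire's room of origin.
        return [] if room.name == fire.origin_room else room.exits

    lobby = Room("lobby", occupants=40, exits=[Exit("main-door", 30)])
    print(usable_exits(lobby, FireEvent(origin_room="lobby", detected_by="s-12")))
    ```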

    A Systematic Survey on the Research of AI-predictive Models for Wastewater Treatment Processes

    Context: To increase the efficiency of wastewater treatment, modeling and optimization of pollutant removal processes are the best solutions. The relationship between input and output parameters in wastewater treatment processes (WWTP) is complicated, and it is difficult to design models for it using statistics. Artificial Intelligence (AI) models are generally more flexible than statistical models when modeling complex datasets with nonlinearity and missing data. Objective: Studies on AI-based WWTP modeling are increasing day by day. Therefore, it is crucial to systematically review the AI techniques that have been implemented for WWTP. Such a review helps classify the techniques that have been invented, identify challenges and gaps for future studies, and, lastly, sort out the best AI technique for designing predictive models for WWTP. Method: With the help of the most relevant digital libraries, 1222 papers on AI modeling of WWTP were collected. The papers were then filtered, mainly based on inclusion and exclusion criteria; snowballing was also applied to identify further relevant papers. Results: In the end, 76 primary papers, published between 2004 and 2020, were selected. Conclusion: An ANN with an MLP architecture trained by the BP algorithm, a supervised neural network known as BPNN, is the most used AI model for WWTP; around 40% of the experimental research was done with BPNN. There remain, however, limitations in AI modeling of WWTP using photoreforming, a current line of WWTP study that represents a promising path for generating renewable and sustainable energy resources such as chemicals and fuels.
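
    As a concrete illustration of the BPNN approach the survey identifies as dominant, here is a minimal sketch using scikit-learn's MLPRegressor, an MLP trained by backpropagation; the influent features, the target variable and the synthetic data are hypothetical placeholders, not drawn from the reviewed studies.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # Hypothetical influent features [COD, BOD, pH, flow rate] and a
    # hypothetical target: effluent pollutant removal efficiency (%).
    rng = np.random.default_rng(0)
    X = rng.uniform([200, 100, 6.0, 50], [800, 400, 8.5, 200], size=(500, 4))
    y = 60 + 0.02 * X[:, 0] - 2 * np.abs(X[:, 2] - 7.2) + rng.normal(0, 1, 500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An MLP trained by backpropagation: the "BPNN" configuration the
    # survey reports as the most used AI model for WWTP.
    model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                         random_state=0).fit(X_train, y_train)
    print("R^2 on held-out data:", model.score(X_test, y_test))
    ```

    The nonlinearity in the synthetic target (the pH term) is exactly the kind of input-output relationship the survey argues plain statistical models struggle with.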

    Bibliometrical analysis of workers quality on crowdsourcing based on VOS viewer

    Data collection activities are used for various needs in human life, such as business, education, health, transportation, and various other services. Data collection through human assistance is also called crowdsourcing. Crowdsourcing is a distributed problem-solving mechanism that is available to the general public over the Internet, and it is one way to collect and analyze data in big data settings. One of the problems in collecting data from workers is that the data received have a high potential for noise, because there has been no selection and validation of worker quality. However, it is not certain which factors affect the quality of workers in crowdsourcing. This paper explores the criteria that determine worker quality in crowdsourcing activities through bibliometric mapping, using tools such as VOSviewer.

    A review on predictive models designed from artificial intelligence techniques in the wastewater treatment process

    Modeling and optimization of pollutant removal processes are the best solutions for increasing the efficiency of wastewater treatment. The relationships between input and output parameters in wastewater treatment processes (WWTP) are complicated. Artificial intelligence (AI) models are generally more flexible than statistical models when modeling complex datasets with nonlinearity and missing data. Studies on AI-based WWTP modeling are increasing day by day. Therefore, it is crucial to review the AI techniques that have been implemented for WWTP. Such a review helps classify the techniques that have been invented and identify challenges and gaps for future studies. Lastly, it can sort out the best AI technique for designing predictive models for WWTPs.

    A survey on artificial intelligence techniques for various wastewater treatment processes

    Pollutant removal percentage is a key parameter for every WWTP, and it is crucial to predict pollutant removal efficiency. The efficiency of pollutant removal processes can be increased with the help of modeling and optimization. Statistical models are not practical enough for wastewater treatment due to the complicated relationships among input and output parameters. AI models are generally more flexible when modeling complex datasets with missing data and nonlinearities. Many AI techniques are available, and the aim is to sort out the best AI technique for designing predictive models for WWTPs. Deep Learning and Ensemble Learning are the main techniques reviewed in this work. The Ensemble Learning models showed the most successful performance among the reviewed techniques in terms of accuracy and efficiency.
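
    To make the Ensemble Learning finding concrete, here is a minimal sketch of one common ensemble pattern, averaging several base regressors with scikit-learn's VotingRegressor; the data and the choice of base models are illustrative assumptions, not taken from the surveyed work.

    ```python
    import numpy as np
    from sklearn.ensemble import (GradientBoostingRegressor,
                                  RandomForestRegressor, VotingRegressor)
    from sklearn.linear_model import LinearRegression

    # Hypothetical WWTP data: influent measurements -> removal efficiency (%).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 4))
    y = X @ np.array([2.0, -1.0, 0.5, 3.0]) + rng.normal(0, 0.3, 300)

    # An averaging ensemble: the base models' predictions are combined,
    # which typically reduces variance relative to any single model.
    ensemble = VotingRegressor([
        ("rf", RandomForestRegressor(n_estimators=100, random_state=1)),
        ("gb", GradientBoostingRegressor(random_state=1)),
        ("lr", LinearRegression()),
    ]).fit(X, y)
    print(ensemble.predict(X[:3]))
    ```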

    Determining the best weightage feature in parameterization process of GCCD model for clone detection in C-based applications

    The term 'code clone' refers to code that has been replicated many times in a program. Type-1, Type-2, Type-3, and Type-4 serve as the four distinct categories for the classification of code clones. Distinct code clone approaches and tools have been implemented for identifying code clones over the years. To overcome the limitation of generality in recognizing all types of clones, the Generic Code Clone Detection (GCCD) model was developed. The five procedures that make up the GCCD model's foundational structure are Pre-processing, Transformation, Parameterization, Categorization, and Match Detection. However, the preceding GCCD model can only detect all types of code clones in Java applications. In light of this limitation, this study proposes a code clone detection model, based on the GCCD model, that has the capability to support other programming languages in various applications. The primary objective of this research is to enhance the processes in the GCCD model so as to improve the code clone detection results, specifically in C-based applications. To achieve this objective, enhancements to the GCCD model are recommended, namely proposing a constant and a weightage for the Pre-processing and Parameterization processes. The proposed work was tested in a case study involving four C applications. The code clone detection results from the proposed enhancement determined that void with its weightage is the preeminent constant and weightage for the Generic Code Clone Detection model in C-based applications.
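
    The abstract reports void as the best constant but does not show how a constant and its weightage enter the parameterization; the sketch below is one plausible reading, scoring each source unit by a weighted count of the chosen constant keyword. The scoring formula is an assumption for illustration only.

    ```python
    def parameter_score(source_unit: str, constant: str = "void",
                        weightage: float = 2.0) -> float:
        # Illustrative parameterization: weight occurrences of the chosen
        # constant keyword against the unit's total token count.
        tokens = source_unit.split()
        hits = sum(1 for t in tokens if t == constant)
        return weightage * hits + len(tokens)   # assumed combination rule

    # Units with equal (or near-equal) scores would fall into the same
    # categorization pool before match detection.
    unit = "void swap ( int * a , int * b ) { int t = * a ; }"
    print(parameter_score(unit))
    ```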

    Enhancement of generic code clone detection model for python application

    Identical code fragments in different locations are recognized as code clones. Code clones are classified into four types: Type-1, Type-2, Type-3 and Type-4. Code clones can be identified using various approaches and models. The Generic Code Clone Detection (GCCD) model was created to detect all four types of code clones through five processes. A prototype was developed to detect code clones in the Java programming language, starting with Pre-processing, followed by Transformation, Parameterization and Categorization, and ending with the Match Detection process. This work targets enhancing that prototype so that the GCCD model identifies all clone types in the Python language. Enhancements are made to the Pre-processing and Parameterization processes of the GCCD model to fit the Python language criteria. Results are improved by finding the best constant value and a suitable weightage for the Python language. The results of the proposed enhancement for Python clone detection in the GCCD model indicate Public as the weightage indicator and def as the best constant value.
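
    To make the Python-specific enhancement concrete: one plausible way the constant def can anchor pre-processing is to split a file into function-level source units, as in the sketch below; this splitting rule is an illustrative assumption, not the paper's exact rule set.

    ```python
    import ast

    def source_units(python_source: str) -> list[str]:
        # Illustrative pre-processing for Python: each function definition
        # (anchored on the 'def' keyword) becomes one source unit.
        tree = ast.parse(python_source)
        return [ast.get_source_segment(python_source, node)
                for node in ast.walk(tree)
                if isinstance(node, ast.FunctionDef)]

    sample = ("def add(a, b):\n    return a + b\n\n"
              "def sub(a, b):\n    return a - b\n")
    for unit in source_units(sample):
        print(unit, "\n---")
    ```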

    Selection of prospective workers using profile matching algorithm on crowdsourcing platform

    The use of a crowdsourcing platform is an option for finding workers who will help complete a piece of work. Crowdsourcing is the process of gathering work, information, or opinions from a large number of individuals via the Internet, social media, or smartphone apps. Whether crowdsourcing is used for programming, design, content creation, or any other task, requesters are putting their trust in individuals whose backgrounds, histories, and skills are unknown to them. Unlike when employing full-time personnel, requesters do not have the time or resources to screen all of the crowd's qualities. In this study, we try to minimize the risks faced by requesters when using a crowdsourcing platform to complete their work, namely by increasing the match between the profiles of workers and the jobs offered on the crowdsourcing platform. The researchers implemented the profile matching method using a dataset consisting of several fields that became the criteria for finding a match. The criteria used to find a match between workers and the work offered consist of two parts: core factors and secondary factors. The core factor criteria are skill, designation, and location, and the secondary factor is the number of years of work experience. These criteria become the variables used in the profile matching algorithm to find the workers who best match the profiles offered, as sketched below. The algorithm was able to narrow 10,000 worker profiles down to the 1,148 people most suitable for the tasks offered, and the results indicate an increase in the match between workers and the needs of the work offered by the requester.
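
    A minimal sketch of the profile matching (gap analysis) computation follows; the gap-to-weight table and the 60/40 core/secondary split are common conventions in the profile matching literature, assumed here because the abstract does not state the paper's exact values.

    ```python
    # Gap-to-weight table commonly used in profile matching (assumed values):
    # a gap of 0 (worker meets the target exactly) scores highest.
    GAP_WEIGHT = {0: 5.0, 1: 4.5, -1: 4.0, 2: 3.5, -2: 3.0,
                  3: 2.5, -3: 2.0, 4: 1.5, -4: 1.0}

    def weight(worker_value: int, target_value: int) -> float:
        return GAP_WEIGHT.get(worker_value - target_value, 1.0)

    def profile_match(worker: dict, job: dict,
                      core=("skill", "designation", "location"),
                      secondary=("experience_years",),
                      core_share=0.6) -> float:
        # Core factors (skill, designation, location) and the secondary
        # factor (years of experience) are averaged separately, then
        # combined with an assumed 60/40 split.
        ncf = sum(weight(worker[c], job[c]) for c in core) / len(core)
        nsf = sum(weight(worker[s], job[s]) for s in secondary) / len(secondary)
        return core_share * ncf + (1 - core_share) * nsf

    worker = {"skill": 4, "designation": 3, "location": 5, "experience_years": 3}
    job = {"skill": 5, "designation": 3, "location": 4, "experience_years": 4}
    print(round(profile_match(worker, job), 2))   # higher = better match
    ```

    Ranking all workers by this score and keeping those above a cut-off is how such a dataset of 10,000 profiles could be narrowed to the best-matching candidates.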