
    Introducing Energy Efficiency into SQALE

    Energy Efficiency is becoming a key factor in software development, given the sharp growth of IT systems and their impact on worldwide energy consumption. We believe that a quality process infrastructure should be able to consider the Energy Efficiency of a system from its early development: for this reason we propose to introduce Energy Efficiency into existing quality models. We selected the SQALE model and tailored it by inserting Energy Efficiency as a sub-characteristic of Efficiency. We also propose a set of six source-code requirements specific to the Java language, derived from guidelines currently suggested in the literature. We encountered two major challenges: the identification of measurable, automatically detectable requirements, and the lack of empirical validation of the guidelines currently found in the literature and in industrial practice. We describe an experiment plan to validate the six requirements and evaluate the impact of their violation on Energy Efficiency, which preliminary results on C code partially confirm. Having Energy Efficiency in a quality model, together with well-verified code requirements to measure it, will enable a quality process that precisely assesses and monitors the impact of software on energy consumption.
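    To make the idea of a measurable, automatically detectable requirement concrete, here is a minimal sketch of a checker for one commonly cited Java guideline (avoid String concatenation with '+=' inside loops, where StringBuilder is usually preferred). This is our illustration, not one of the paper's six requirements; the rule, patterns, and example source are assumptions.

```python
import re

# Hypothetical rule, for illustration only: flag '+=' concatenation onto
# String variables inside loops; StringBuilder is usually recommended instead.
STRING_DECL = re.compile(r"\bString\s+(\w+)")
LOOP_START = re.compile(r"\b(for|while)\s*\(")

def find_violations(java_source: str) -> list[int]:
    """Return 1-based line numbers where the rule appears to be violated."""
    lines = java_source.splitlines()
    string_vars = {m.group(1) for line in lines for m in STRING_DECL.finditer(line)}
    violations, loop_brace_depth = [], 0
    for lineno, line in enumerate(lines, start=1):
        if loop_brace_depth > 0 and any(
            re.search(rf"\b{re.escape(v)}\s*\+=", line) for v in string_vars
        ):
            violations.append(lineno)
        if LOOP_START.search(line) and "{" in line:
            loop_brace_depth += 1          # entering a loop body (crude heuristic)
        elif loop_brace_depth > 0:
            loop_brace_depth += line.count("{") - line.count("}")
    return violations

example = (
    'String report = "";\n'
    'for (Item item : items) {\n'
    '    report += item.name();\n'        # this line would be flagged
    '}\n'
)
print(find_violations(example))  # [3]
```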

    Comparative Analysis of Fullstack Development Technologies: Frontend, Backend and Database

    Accessing websites from various devices has brought changes to the field of application development, and the choice of cross-platform, reusable frameworks is crucial in this era. This thesis embarks on an evaluation of front-end, back-end, and database technologies to address the status quo. Study A explores front-end development, focusing on Angular.js and React.js. Comparable web applications were built with each framework and evaluated locally; important insights were obtained through benchmark tests, Lighthouse metrics, and architectural evaluations. React.js proves to be the performance leader in spite of the possible influence of a virtual machine, opening the door for additional research. Study B delves into back-end scripting by contrasting Node.js with PHP. The efficiency of sorting algorithms (binary, bubble, quick, and heap) is the main subject of the research; the performance measurement tool is Apache JMeter, and the most important indicator is latency. Study C sheds light on database systems by comparing the performance of NoSQL and SQL, with a particular emphasis on MongoDB for NoSQL. In a time of enormous data volumes, reliable technologies are necessary for data management. The five basic database operations that Apache JMeter examines are insert, select, update, delete, and aggregate, with elapsed time as the performance indicator. The results showed that the elapsed time for insert operations was significantly lower in NoSQL than in SQL, and that the elapsed times of update, delete, select, and aggregate operations were also lower in NoSQL than in SQL. The p-value for each operation was less than 0.05, indicating that the performance difference between SQL and NoSQL is statistically significant. These studies are combined in this thesis to provide a comprehensive understanding of database management, back-end programming, and development frameworks. The results provide developers and organisations with the information they need to make wise decisions in this constantly changing environment and to satisfy the expectations of a dynamic and diverse technology landscape. INDEX WORDS: Framework, JavaScript, frontend, React.js, Angular.js, Node.js, PHP, Backend, technology, Algorithms, Performance, Apache JMeter, T-test, SQL, NoSQL, Database management systems, Performance comparison, Data operations, Decision-making
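    As an illustration of the statistical comparison the thesis describes, the sketch below applies an independent two-sample t-test to elapsed-time samples for an insert workload. The figures are invented placeholders, not the thesis data, and SciPy is assumed as the tooling.

```python
from scipy import stats

# Invented elapsed-time samples (ms) for an 'insert' workload, for demonstration only.
sql_insert_ms = [42.1, 39.8, 44.3, 41.0, 43.5, 40.2, 42.9, 41.7]
nosql_insert_ms = [18.4, 17.9, 19.2, 18.8, 17.5, 18.1, 19.0, 18.6]

# Welch's two-sample t-test on the two groups of elapsed times.
t_stat, p_value = stats.ttest_ind(sql_insert_ms, nosql_insert_ms, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# With p < 0.05 the null hypothesis of equal mean elapsed time is rejected,
# i.e. the observed SQL vs NoSQL difference is statistically significant.
if p_value < 0.05:
    print("Difference in mean elapsed time is statistically significant")
```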

    Mining Knowledge Bases for Question & Answers Websites

    We studied the problem of searching for answers to questions on a Question-and-Answer website using knowledge bases. A number of research efforts have been developed using Stack Overflow data, which is publicly available. Surprisingly, only a few papers have tried to improve the search for better answers. Furthermore, current approaches for searching a Question-and-Answer website are usually limited to the question database, i.e., the website's own content. We showed that it is feasible to use knowledge bases as sources for answers. We implemented both vector-space and topic-space representations for our datasets and compared these distinct techniques. Finally, we proposed a hybrid ranking approach that took advantage of a machine-learned classifier to incorporate tag information into the ranking, and showed that it was able to improve retrieval performance.
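    A minimal sketch of the vector-space side of such a system, assuming TF-IDF weighting and cosine similarity (scikit-learn is used here for illustration; the knowledge-base passages and the question are placeholders, not the paper's data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder knowledge-base passages and a placeholder question.
kb_passages = [
    "Use a try-with-resources statement to close streams automatically.",
    "A NullPointerException is thrown when a reference is dereferenced while null.",
    "Indexes speed up SELECT queries at the cost of slower INSERTs.",
]
question = "Why does my Java code throw a NullPointerException?"

# Represent passages and question in the same TF-IDF vector space.
vectorizer = TfidfVectorizer(stop_words="english")
passage_vectors = vectorizer.fit_transform(kb_passages)
question_vector = vectorizer.transform([question])

# Rank passages by cosine similarity to the question.
scores = cosine_similarity(question_vector, passage_vectors).ravel()
for score, passage in sorted(zip(scores, kb_passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```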

    Modelling of Information Flow and Resource Utilization in the EDGE Distributed Web System

    The adoption of Distributed Web Systems (DWS) into the modern engineering design process has increased dramatically in recent years. The Engineering Design Guide and Environment (EDGE) is one such DWS, intended to provide an integrated set of tools for use in the development of new products and services. Previous attempts to improve the efficiency and scalability of DWS focused largely on hardware utilization (e.g. multithreading and virtualization) and software scalability (e.g. load balancing and cloud services). However, these techniques are often limited to analysis of the computational complexity of the algorithms implemented. This work seeks to improve the understanding of the efficiency and scalability of DWS by modelling the dynamics of information flow and resource utilization, characterizing DWS workloads through historical usage data (e.g. request type, frequency, and access time). The design and implementation of EDGE are described. A DWS model of an EDGE system is developed and validated against theoretical limiting cases. The DWS model is used to predict the throughput of an EDGE system given a resource allocation and workflow. Results of the simulation suggest that proposed DWS designs can be evaluated according to the usage requirements of an engineering firm, ultimately guiding an informed decision on the selection and deployment of a DWS in an enterprise environment. Recommendations for future work related to the continued development of EDGE, DWS modelling of EDGE installation environments, and the extension of DWS modelling to new product development processes are presented.
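    As a rough illustration of the kind of workload-driven estimate such a model can provide, the sketch below computes offered load and utilization for a fixed worker pool from a workload characterized by request type, rate, and mean service time. This is a generic back-of-the-envelope calculation, not the EDGE model itself, and the request types and figures are invented.

```python
# Hypothetical workload characterization: request type -> (requests/s, mean service time in s).
workload = {
    "page_view":   (40.0, 0.020),
    "file_upload": ( 2.0, 0.500),
    "search":      ( 8.0, 0.120),
}
workers = 4  # concurrent request handlers available

# Offered load in "busy workers": sum of arrival rate * mean service time.
offered_load = sum(rate * service for rate, service in workload.values())
utilization = offered_load / workers
total_rate = sum(rate for rate, _ in workload.values())

print(f"offered load = {offered_load:.2f} busy workers, utilization = {utilization:.0%}")
if utilization < 1.0:
    print(f"throughput ~ offered {total_rate:.1f} req/s (system keeps up)")
else:
    # Saturated: throughput is capped by worker capacity and requests queue up.
    capacity = total_rate / utilization
    print(f"throughput capped at ~ {capacity:.1f} req/s; requests will queue")
```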

    Citizen Science for Citizen Access to Law

    This paper sits at the intersection of citizen access to law, legal informatics, and plain language. The paper reports the results of a joint project of the Cornell University Legal Information Institute and the Australian National University, which collected thousands of crowdsourced assessments of the readability of law through the Cornell LII site. The aim of the project is to enhance accuracy in the prediction of the readability of legal sentences. The study asked readers on legislative pages of the LII site to rate passages from the United States Code, the Code of Federal Regulations, and other texts for readability and other characteristics. The research provides insight into who uses legal rules and how they do so. The study enables conclusions to be drawn as to the current readability of law and the spread of readability among legal rules. The research is intended to enable the creation of a dataset of legal rules labelled by human judges as to readability. Such a dataset, in combination with machine learning, will assist in identifying factors in legal language which impede readability and access for citizens. As far as we are aware, this research is the largest ever study of the readability and usability of legal language and the first to apply crowdsourcing to such an investigation. The research is an example of the possibilities open for enhancing access to law through engagement of end users in the online legal publishing environment and through collaboration between legal publishers and researchers.

    Method-Level Bug Severity Prediction using Source Code Metrics and LLMs

    In the past couple of decades, significant research effort has been devoted to the prediction of software bugs. However, most existing work in this domain treats all bugs the same, which is not the case in practice. It is important for a defect prediction method to estimate the severity of the identified bugs so that the higher-severity ones receive immediate attention. In this study, we investigate source code metrics, source code representation using large language models (LLMs), and their combination for predicting bug severity labels in two prominent datasets. We leverage several source code metrics at method-level granularity to train eight different machine-learning models. Our results suggest that the Decision Tree and Random Forest models outperform the others on several evaluation metrics. We then use the pre-trained CodeBERT LLM to study the effectiveness of source code representations in predicting bug severity. Fine-tuning CodeBERT improves bug severity prediction significantly, in the range of 29%-140% across several evaluation metrics, compared to the best classic prediction model trained on source code metrics. Finally, we integrate source code metrics into CodeBERT as an additional input, using our two proposed architectures, both of which enhance the effectiveness of the CodeBERT model.
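    One plausible way to feed method-level source code metrics into CodeBERT is to concatenate the [CLS] embedding with a projected metrics vector before a classification head, sketched below with PyTorch and Hugging Face Transformers. This is a generic illustration under those assumptions, not necessarily either of the paper's two architectures, and the metric values are placeholders.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class CodeBertWithMetrics(nn.Module):
    """Sketch: CodeBERT [CLS] embedding concatenated with projected code metrics."""
    def __init__(self, n_metrics: int, n_severity_classes: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("microsoft/codebert-base")
        hidden = self.encoder.config.hidden_size          # 768 for codebert-base
        self.metrics_proj = nn.Linear(n_metrics, 32)      # small metrics embedding
        self.classifier = nn.Linear(hidden + 32, n_severity_classes)

    def forward(self, input_ids, attention_mask, metrics):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                 # [CLS] token embedding
        combined = torch.cat([cls, torch.relu(self.metrics_proj(metrics))], dim=-1)
        return self.classifier(combined)

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
batch = tokenizer(["int add(int a, int b) { return a + b; }"],
                  return_tensors="pt", padding=True, truncation=True)
metrics = torch.tensor([[12.0, 1.0, 3.0]])  # placeholder metrics, e.g. LOC, complexity, fan-out
model = CodeBertWithMetrics(n_metrics=3, n_severity_classes=4)
logits = model(batch["input_ids"], batch["attention_mask"], metrics)
print(logits.shape)  # torch.Size([1, 4])
```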