3 research outputs found

    Analysis of Different Websites' Cross-Browser Compatibility as a Design Issue

    Websites are a crucial part of communication in the modern era of information technology. Institutions and organizations invest considerable effort in presenting comprehensive information on attractive websites, which act as online agents that let users complete their tasks without physically visiting the organization. The designer of a website therefore inspects it critically so that users can access all of the relevant institution's or organization's services online, and the responsibility of the designer and the organization grows further when the website must behave consistently across the many browsers used by different kinds of visitors. To explore cross-browser compatibility as a design issue in several types of websites, namely job portals and government, educational, commercial, and social networking sites, the author of this research paper created an online tool using the .NET Framework and C#. The automated tool operates in accordance with the standards outlined in the W3C guidelines document UAAG 2.0: it acts as a parser over the entire website's source code and generates results based on how the website behaves in the five most popular and widely used browsers, including Internet Explorer, Chrome, Safari, and Firefox. Each browser is tested against the five parameters included in the parser: blinking content, ActiveX controls, site resolution, image formats, and HTML tag errors. The results obtained after testing the five categories of websites show that educational and social networking sites exhibit the least compatibility across browsers, whereas job portals, commercial sites, and government websites show 100 percent compliance with the W3C-recommended web design principles with respect to cross-browser compatibility on different browsing platforms.
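    The paper does not include the tool's source code; purely to illustrate the kind of parser-based checks described above, the Python sketch below flags four of the five named parameters (blinking content, ActiveX controls, image formats, and HTML tag errors) in a page's markup. All names and heuristics are assumptions made for illustration, not the authors' implementation; the site-resolution check is omitted because it would require actual rendering.

        # Illustrative sketch (not the authors' tool): scan a page's HTML for a
        # few of the cross-browser compatibility parameters named in the abstract.
        from html.parser import HTMLParser

        VOID_TAGS = {"img", "br", "hr", "meta", "link", "input", "area", "base",
                     "col", "embed", "source", "track", "wbr"}
        COMMON_IMAGE_FORMATS = {".png", ".jpg", ".jpeg", ".gif", ".svg"}

        class CompatibilityChecker(HTMLParser):
            def __init__(self):
                super().__init__()
                self.issues = []
                self.open_tags = []  # stack for a rough tag-balance check

            def handle_starttag(self, tag, attrs):
                attrs = dict(attrs)
                if tag in ("blink", "marquee"):
                    self.issues.append(f"blinking/scrolling element <{tag}> (non-standard)")
                if tag == "object" and "classid" in attrs:
                    self.issues.append("ActiveX control via <object classid=...> (IE-only)")
                if tag == "img":
                    src = (attrs.get("src") or "").lower()
                    if not any(src.endswith(ext) for ext in COMMON_IMAGE_FORMATS):
                        self.issues.append(f"image format may not render in all browsers: {src}")
                if tag not in VOID_TAGS:
                    self.open_tags.append(tag)

            def handle_endtag(self, tag):
                if self.open_tags and self.open_tags[-1] == tag:
                    self.open_tags.pop()
                else:
                    self.issues.append(f"HTML tag mismatch: unexpected </{tag}>")

        def check_page(html: str) -> list:
            checker = CompatibilityChecker()
            checker.feed(html)
            checker.issues.extend(f"unclosed <{t}>" for t in checker.open_tags)
            return checker.issues

        print(check_page("<html><body><blink>Sale!</blink><img src='photo.bmp'></body></html>"))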

    An Empirical Approach to Evaluating Web Application Compliance across Diverse Client Platform Configurations

    Abstract: Web applications are the most widely used class of software today. Increased diversity of web-client platform configurations causes the execution of web applications to vary unpredictably, creating a myriad of challenges for quality assurance during development. This paper presents a novel technique and an inductive model that leverages empirical data from fielded systems to evaluate web application correctness across multiple client configurations. The inductive model is based on HTML tags and represents how web applications are expected to execute in each client configuration, based on the fielded systems observed. End users and developers update this model by providing empirical data in the form of positive (correctly executing) and negative (incorrectly executing) instances of fielded web applications. Results of an empirical study show that the approach is useful and that popular web applications have serious client-configuration-specific flaws.
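    The abstract does not spell out the model's internals; as a loose illustration of how a tag-based inductive model could accumulate positive and negative fielded instances per client configuration, the Python sketch below keeps per-tag counts and flags tags whose negative evidence outweighs the positive for a given configuration. The data structures, scoring rule, and configuration names are assumptions for illustration, not the paper's technique.

        # Illustrative sketch (not the paper's model): per-configuration, per-tag
        # counts of correctly vs. incorrectly executing fielded instances.
        from collections import defaultdict
        from html.parser import HTMLParser

        class TagExtractor(HTMLParser):
            def __init__(self):
                super().__init__()
                self.tags = set()

            def handle_starttag(self, tag, attrs):
                self.tags.add(tag)

        def extract_tags(html: str) -> set:
            parser = TagExtractor()
            parser.feed(html)
            return parser.tags

        class InductiveModel:
            def __init__(self):
                # config -> tag -> [positive_count, negative_count]
                self.counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))

            def update(self, config: str, html: str, correct: bool):
                # Record a fielded instance reported as correct (positive) or not (negative).
                for tag in extract_tags(html):
                    self.counts[config][tag][0 if correct else 1] += 1

            def suspicious_tags(self, config: str, html: str) -> list:
                # Tags whose negative evidence outweighs positive evidence in this configuration.
                return [t for t in extract_tags(html)
                        if self.counts[config][t][1] > self.counts[config][t][0]]

        model = InductiveModel()
        page = "<html><body><canvas></canvas></body></html>"
        model.update("old-browser/winxp", page, correct=False)  # hypothetical configuration names
        model.update("firefox/linux", page, correct=True)
        print(model.suspicious_tags("old-browser/winxp", "<p>Hi</p><canvas></canvas>"))  # ['canvas']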

    Machine Learning for Software Dependability

    Dependability is an important quality of modern software but is challenging to achieve. Many software dependability techniques have been proposed to help developers improve software reliability and dependability, such as defect prediction [83, 96, 249], bug detection [6, 17, 146], program repair [51, 127, 150, 209, 261, 263], test case prioritization [152, 250], or software architecture recovery [13, 42, 67, 111, 164, 240]. In this thesis, we consider how machine learning (ML) and deep learning (DL) can be used to enhance software dependability through three examples in three different domains: automatic program repair, bug detection in electronic document readers, and software architecture recovery. In the first work, we propose a new G&V technique, CoCoNuT, which uses ensemble learning on the combination of convolutional neural networks (CNNs) and a new context-aware neural machine translation (NMT) architecture to automatically fix bugs in multiple programming languages. To better represent the context of a bug, we introduce a new context-aware NMT architecture that represents the buggy source code and its surrounding context separately. CoCoNuT uses CNNs instead of recurrent neural networks (RNNs) since CNN layers can be stacked to extract hierarchical features and better model source code at different granularity levels (e.g., statements and functions). In addition, CoCoNuT takes advantage of the randomness in hyperparameter tuning to build multiple models that fix different bugs and combines these models using ensemble learning to fix more bugs. CoCoNuT fixes 493 bugs, including 307 bugs that are fixed by none of the 27 techniques with which we compare. In the second work, we present a study on the correctness of PDF documents and readers and propose an approach to automatically detect and localize the source of inconsistencies across readers. We evaluate our automatic approach on a large corpus of over 230K documents using 11 popular readers, and our experiments have detected 30 unique bugs in these readers and files. In the third work, we compare software architecture recovery techniques to understand their effectiveness and applicability. Specifically, we study the impact of leveraging accurate symbol dependencies on the accuracy of architecture recovery techniques. In addition, we evaluate other factors of the input dependencies, such as the level of granularity and the dynamic-bindings graph construction. The results of our evaluation of nine architecture recovery techniques and their variants suggest that (1) using accurate symbol dependencies has a major influence on recovery quality, and (2) more accurate recovery techniques are needed. Our results show that some of the studied architecture recovery techniques scale to very large systems, whereas others do not.
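    CoCoNuT's full architecture is given in the thesis; purely as a framework-level illustration of the stated idea of encoding the buggy source code and its surrounding context with separate stacked convolutional encoders, the PyTorch sketch below builds two CNN branches over token embeddings and concatenates their pooled outputs into a joint representation. The framework choice, layer sizes, pooling, and all names are assumptions for illustration, not CoCoNuT's actual implementation.

        # Illustrative sketch (not CoCoNuT): separate CNN encoders for the buggy
        # line and its surrounding context, merged into one representation that
        # a downstream NMT-style decoder could consume.
        import torch
        import torch.nn as nn

        class ContextAwareEncoder(nn.Module):
            def __init__(self, vocab_size=5000, embed_dim=128, hidden=256):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, embed_dim)
                # Stacked convolutions extract hierarchical features from each input.
                self.buggy_cnn = nn.Sequential(
                    nn.Conv1d(embed_dim, hidden, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
                )
                self.context_cnn = nn.Sequential(
                    nn.Conv1d(embed_dim, hidden, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
                )

            def forward(self, buggy_tokens, context_tokens):
                # (batch, seq) -> (batch, embed_dim, seq), the layout Conv1d expects.
                b = self.embed(buggy_tokens).transpose(1, 2)
                c = self.embed(context_tokens).transpose(1, 2)
                b = self.buggy_cnn(b).max(dim=2).values   # max-pool over sequence length
                c = self.context_cnn(c).max(dim=2).values
                return torch.cat([b, c], dim=1)           # joint buggy+context representation

        encoder = ContextAwareEncoder()
        buggy = torch.randint(0, 5000, (2, 20))      # batch of 2 buggy lines, 20 tokens each
        context = torch.randint(0, 5000, (2, 120))   # surrounding context, 120 tokens each
        print(encoder(buggy, context).shape)         # torch.Size([2, 512])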