    Browser-based Analysis of Web Framework Applications

    Although web applications have evolved into mature solutions providing a sophisticated user experience, they have also become complex for the same reason. Complexity primarily affects the server-side generation of dynamic pages, as these are aggregated from multiple sources and as there are many possible processing paths depending on parameters. Browser-based tests are an adequate instrument for detecting errors within generated web pages while treating the server-side process and path complexity as a black box. However, these tests do not detect the cause of an error, which has to be located manually instead. This paper proposes generating metadata on the paths and parts involved during server-side processing to facilitate backtracking the origins of detected errors at development time. While there are several possible points of interest to observe for backtracking, this paper focuses on user interface components of web frameworks. Comment: In Proceedings TAV-WEB 2010, arXiv:1009.330
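
    Since the abstract names the mechanism only in prose, a minimal sketch may help make it concrete. The attribute names and helper functions below are assumptions for illustration, not the paper's actual implementation: the server wraps each component's output in traceable metadata, and a failing browser test walks the DOM upwards to name the server-side origin of the broken markup.

```typescript
// Server side: wrap each UI component's rendered output in metadata
// identifying which component and template produced it. (Names are
// hypothetical; the paper does not prescribe a concrete format.)
function renderWithTrace(componentId: string, templatePath: string, html: string): string {
  return `<div data-origin-component="${componentId}" data-origin-template="${templatePath}">${html}</div>`;
}

// Browser-test side: given a DOM node that failed an assertion, walk up
// to the nearest annotated ancestor to backtrack the error's origin.
function backtrackOrigin(node: Element): { component: string; template: string } | null {
  const host = node.closest("[data-origin-component]");
  if (host === null) return null;
  return {
    component: host.getAttribute("data-origin-component") ?? "",
    template: host.getAttribute("data-origin-template") ?? "",
  };
}
```

    The design point is that the metadata travels inside the generated page itself, so the browser-based test can report an error's server-side origin while still treating the server as a black box.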

    Building Robust E-learning Software Systems Using Web Technologies

    Building a robust e-learning software platform represents a major challenge for both the project manager and the development team. Since the functionality of these software systems improves and grows by the day, several aspects must be taken into consideration – e.g. workflows, use-cases or alternative scenarios – in order to create a well-standardized and fully functional integrated learning management system. The paper focuses on a model of implementation for an e-learning software system, analyzing its features and functional mechanisms and exemplifying an implementation algorithm. A list of some of the most widely used web technologies (both server-side and client-side) is analyzed, and major security leaks of web applications are also discussed. Keywords: E-learning, E-testing, Web Technology, Software System, Web Platform

    A Practical T-P3R2 Model to Test Dynamic Websites

    Present-day web applications are very complex, as they employ more objects (controls) on a web page than traditional web applications. This results in more memory leaks, higher CPU utilization and longer test executions. Furthermore, today's websites are dynamic, meaning that web pages are loaded according to the user's input. Higher complexity of web software also means a less secure website, as it increases the attack surface. In this paper, it is proposed to use both Test-Driven Development (TDD) and white-box testing together to handle the dynamic aspects of web applications. It also proposes a new practical T-P3R2 model to cope with the dynamism of websites. Keywords: Dynamic website testing, TDD, Web Application Trees (WAT), Path testing
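
    The abstract does not define the T-P3R2 model in detail, but its keywords point to path testing over a Web Application Tree (WAT). A minimal sketch of that idea, with an assumed WAT shape, could look like this:

```typescript
// Assumed minimal WAT: pages as nodes, navigations as edges.
interface PageNode {
  url: string;
  children: PageNode[]; // pages reachable from this one
}

// Enumerate every root-to-leaf navigation path; under a path-testing
// strategy, each path becomes one test case against the dynamic site.
function enumeratePaths(node: PageNode, prefix: string[] = []): string[][] {
  const path = [...prefix, node.url];
  if (node.children.length === 0) return [path];
  return node.children.flatMap((child) => enumeratePaths(child, path));
}

const wat: PageNode = {
  url: "/",
  children: [
    { url: "/login", children: [{ url: "/dashboard", children: [] }] },
    { url: "/search", children: [] },
  ],
};

console.log(enumeratePaths(wat));
// [["/", "/login", "/dashboard"], ["/", "/search"]]
```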

    Verifying Web Applications: From Business Level Specifications to Automated Model-Based Testing

    One of the reasons preventing a wider uptake of model-based testing in industry is the difficulty developers encounter when trying to think in terms of properties rather than linear specifications. A disparity has traditionally been perceived between the language spoken by the customers who specify the system and the language required to construct models of that system. The dynamic nature of the specifications for commercial systems further aggravates this problem, in that models would need to be rechecked after every specification change. In this paper, we propose an approach for converting specifications written in the commonly used quasi-natural language Gherkin into models for use with a model-based testing tool. We have instantiated this approach using QuickCheck and demonstrate its applicability via a case study on the eHealth system, the national health portal for Maltese residents. Comment: In Proceedings MBT 2014, arXiv:1403.704
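
    To make the conversion idea concrete: a Gherkin scenario has a fixed Given/When/Then shape, which maps naturally onto the setup, transition, and postcondition of a state-machine model. The sketch below is a hand-rolled TypeScript illustration of that mapping, not the paper's actual QuickCheck instantiation, and the scenario text is invented:

```typescript
// A sample Gherkin scenario (hypothetical).
const scenario = `
  Given a logged-out user
  When the user submits valid credentials
  Then the user sees the dashboard
`;

type Step = { keyword: "Given" | "When" | "Then"; text: string };

// Parse the quasi-natural language into structured steps.
function parseScenario(src: string): Step[] {
  return src
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => /^(Given|When|Then)\b/.test(line))
    .map((line) => {
      const [keyword, ...rest] = line.split(" ");
      return { keyword: keyword as Step["keyword"], text: rest.join(" ") };
    });
}

// Each keyword plays a fixed role in the model: Given establishes state,
// When is a transition, Then is a postcondition for the tester to check.
for (const step of parseScenario(scenario)) {
  console.log(step.keyword.padEnd(5), "->", step.text);
}
```

    Because the model is derived from the Gherkin text, a specification change only requires re-running the conversion rather than manually rebuilding the model.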

    Statically Checking Web API Requests in JavaScript

    Many JavaScript applications perform HTTP requests to web APIs, relying on the request URL, HTTP method, and request data to be constructed correctly by string operations. Traditional compile-time error checking, such as detecting a call to a non-existent method in Java, is not available for checking whether such requests comply with the requirements of a web API. In this paper, we propose an approach to statically check web API requests in JavaScript. Our approach first extracts a request's URL string, HTTP method, and the corresponding request data using an inter-procedural string analysis, and then checks whether the request conforms to given web API specifications. We evaluated our approach by checking whether web API requests in JavaScript files mined from GitHub are consistent or inconsistent with publicly available API specifications. From the 6575 requests in scope, our approach determined whether a request's URL and HTTP method were consistent or inconsistent with web API specifications with a precision of 96.0%. Our approach also correctly determined whether extracted request data was consistent or inconsistent with the data requirements with a precision of 87.9% for payload data and 99.9% for query data. In a systematic analysis of the inconsistent cases, we found that many of them were due to errors in the client code. The proposed checker can be integrated with code editors or with continuous integration tools to warn programmers about code containing potentially erroneous requests. Comment: International Conference on Software Engineering, 201
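
    The checking phase lends itself to a toy illustration. In the sketch below, the string-analysis extraction step from the paper is assumed to have already produced the method and URL, and the specification format is a simplified stand-in for real API specifications such as OpenAPI:

```typescript
// Simplified stand-in for a web API specification.
interface EndpointSpec {
  method: string;
  pathPattern: RegExp;
  requiredQuery: string[];
}

const spec: EndpointSpec[] = [
  { method: "GET", pathPattern: /^\/users\/\d+$/, requiredQuery: [] },
  { method: "GET", pathPattern: /^\/search$/, requiredQuery: ["q"] },
];

// Check an extracted (method, URL) pair against the specification and
// report any inconsistencies found.
function checkRequest(method: string, url: string): string[] {
  const { pathname, searchParams } = new URL(url, "https://api.example.com");
  const endpoint = spec.find((e) => e.method === method && e.pathPattern.test(pathname));
  if (!endpoint) return [`no endpoint matches ${method} ${pathname}`];
  return endpoint.requiredQuery
    .filter((param) => !searchParams.has(param))
    .map((param) => `missing required query parameter "${param}"`);
}

console.log(checkRequest("GET", "/users/42")); // [] (consistent)
console.log(checkRequest("GET", "/search"));   // missing required query parameter "q"
```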

    Web Data Extraction, Applications and Techniques: A Survey

    Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool for performing data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather the large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users, which offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential for cross-fertilization, i.e., the possibility of reusing Web Data Extraction techniques originally designed to work in a given domain in other domains. Comment: Knowledge-based System
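
    For readers new to the field, a deliberately tiny wrapper-style example shows the kind of task the surveyed techniques automate: turning semi-structured HTML into structured records. The hand-written rule below is an assumption for illustration; real systems induce such rules automatically or apply DOM-based and Information Extraction techniques:

```typescript
// Semi-structured input, as it might appear on a product listing page.
const html = `
  <li class="product"><span class="name">Lamp</span><span class="price">9.99</span></li>
  <li class="product"><span class="name">Desk</span><span class="price">120.00</span></li>
`;

// A hand-written extraction rule (a "wrapper" in the field's terminology).
const rule = /<span class="name">(.*?)<\/span><span class="price">(.*?)<\/span>/g;

// Apply the rule to obtain structured records.
const records = [...html.matchAll(rule)].map((m) => ({ name: m[1], price: Number(m[2]) }));
console.log(records);
// [ { name: "Lamp", price: 9.99 }, { name: "Desk", price: 120 } ]
```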

    Ten Years of Rich Internet Applications: A Systematic Mapping Study, and Beyond

    BACKGROUND: The term Rich Internet Applications (RIAs) is generally associated with Web applications that provide the features and functionality of traditional desktop applications. Ten years after the introduction of the term, an ample amount of research has been carried out to study various aspects of RIAs. It has thus become essential to summarize this research and provide an adequate overview.
    OBJECTIVE: The objective of our study is to assemble, classify and analyze all RIA research performed in the scientific community, thus providing a consolidated overview thereof, and to identify well-established topics, trends and open research issues. Additionally, we provide a qualitative discussion of the most interesting findings. This work therefore serves as a reference work for beginning and established RIA researchers alike, as well as for industrial actors that need an introduction to the field, or seek pointers to (a specific subset of) the state of the art.
    METHOD: A systematic mapping study is performed in order to identify all RIA-related publications, define a classification scheme, and categorize, analyze, and discuss the identified research according to it.
    RESULTS: Our source identification phase resulted in 133 relevant, peer-reviewed publications, published between 2002 and 2011 in a wide variety of venues. They were subsequently classified according to four facets: development activity, research topic, contribution type and research type. Pie, stacked bar and bubble charts were used to visualize and analyze the results. A deeper analysis is provided for the most interesting and/or remarkable results.
    CONCLUSION: Analysis of the results shows that, although the RIA term was coined in 2002, the first RIA-related research appeared in 2004. From 2007 there was a significant increase in research activity, peaking in 2009 and decreasing to pre-2009 levels afterwards. All development phases are covered in the identified research, with emphasis on "design" (33%) and "implementation" (29%). The majority of research proposes a "method" (44%), followed by "model" (22%), "methodology" (18%) and "tools" (16%); no publications in the category "metrics" were found. The preponderant research topic is "models, methods and methodologies" (23%) and, to a lesser extent, "usability & accessibility" and "user interface" (11% each). On the other hand, the topic "localization, internationalization & multi-linguality" received no attention at all, and topics such as "deep web" (under 1%), "business processing", "usage analysis", "data management", "quality & metrics" (all under 2%), "semantics" and "performance" (slightly above 2%) received very little attention. Finally, there is a large majority of "solution proposals" (66%), little "evaluation research" (14%) and even less "validation" (6%), although the latter are increasing in recent years.