
    Total Recall for AJAX applications – Firefox extension

    Ajax, or AJAX (Asynchronous JavaScript and XML), is a group of interrelated web development techniques used to create interactive web applications or rich Internet applications [9]. Web applications can retrieve data from the server asynchronously in the background without interfering with the display and behavior of the existing web page [9]. One of the biggest problems with Ajax applications is saving state and supporting the browser's history controls (Back/Forward buttons). Ajax allows documents to become stateful, but when the user intuitively reaches for the history controls in the browser window, things go wrong: the user expects to see the previous state of the document and is instead shown a web page they were on twenty minutes ago, before they arrived at the Ajax application. Our project aims to solve this problem. We have implemented an extension to the Mozilla Firefox browser that caches the different states of a web page at regular intervals and displays them as the user navigates through the history.
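    As a loose page-side illustration of the state problem this abstract describes (not the extension's actual caching mechanism, which runs inside Firefox), the sketch below records Ajax states with the HTML5 History API so the Back/Forward buttons restore them; the AppState shape and render function are assumed for illustration.

```typescript
// Sketch: page-side state saving with the HTML5 History API, a later
// alternative to the browser-extension approach described in the abstract.
// The state shape and render() body are illustrative assumptions.

interface AppState {
  view: string;  // which "screen" of the Ajax application is shown
  data: unknown; // data loaded asynchronously for that view
}

function render(state: AppState): void {
  // Redraw the document from a saved state (application-specific).
  document.title = `App - ${state.view}`;
}

// Whenever the Ajax application changes what the user sees, record that state.
function recordState(state: AppState): void {
  history.pushState(state, "", `#${state.view}`);
}

// When the user presses Back/Forward, restore the matching state instead of
// leaving the page (the surprise described in the abstract).
window.addEventListener("popstate", (event: PopStateEvent) => {
  if (event.state) {
    render(event.state as AppState);
  }
});
```

    The extension described in the abstract tackles the same problem from the browser side, without requiring the page author's cooperation.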

    A Brief History of Web Crawlers

    Web crawlers visit internet applications, collect data, and learn about new web pages from visited pages. Web crawlers have a long and interesting history. Early web crawlers collected statistics about the web. In addition to collecting statistics about the web and indexing applications for search engines, modern crawlers can be used to perform accessibility and vulnerability checks on an application. The rapid expansion of the web and the complexity added to web applications have made crawling a very challenging process. Throughout the history of web crawling, many researchers and industrial groups have addressed the different issues and challenges that web crawlers face, and various solutions have been proposed to reduce the time and cost of crawling. Performing an exhaustive crawl remains a challenging problem, and capturing the model of a modern web application and extracting data from it automatically is another open question. What follows is a brief history of the different techniques and algorithms used from the early days of crawling up to the present day. We introduce criteria to evaluate the relative performance of web crawlers, and based on these criteria we plot the evolution of web crawlers and compare their performance.
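    For orientation, here is a minimal sketch of the basic crawl loop the opening sentence alludes to: fetch a page, extract links, and queue newly discovered URLs. It deliberately omits everything a production crawler needs (politeness delays, robots.txt handling, URL normalization, script execution for dynamic pages), and the regex-based link extraction is a simplification.

```typescript
// Minimal breadth-first crawl loop: visit pages, collect links, and learn
// about new URLs from the pages already visited.
async function crawl(seed: string, maxPages = 50): Promise<Set<string>> {
  const visited = new Set<string>();
  const frontier: string[] = [seed];

  while (frontier.length > 0 && visited.size < maxPages) {
    const url = frontier.shift()!;
    if (visited.has(url)) continue;
    visited.add(url);

    try {
      const html = await (await fetch(url)).text();
      // Naive link extraction; a real crawler would use an HTML parser.
      for (const match of html.matchAll(/href="(https?:\/\/[^"]+)"/g)) {
        if (!visited.has(match[1])) frontier.push(match[1]);
      }
    } catch {
      // Unreachable or non-HTML pages are simply skipped in this sketch.
    }
  }
  return visited;
}
```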

    Browser-based Analysis of Web Framework Applications

    Although web applications have evolved into mature solutions providing a sophisticated user experience, they have also become complex for the same reason. Complexity primarily affects the server-side generation of dynamic pages, as they are aggregated from multiple sources and there are many possible processing paths depending on parameters. Browser-based tests are an adequate instrument for detecting errors within generated web pages while treating the server-side process and path complexity as a black box. However, these tests do not detect the cause of an error, which instead has to be located manually. This paper proposes generating metadata on the paths and parts involved during server-side processing to facilitate backtracking the origins of detected errors at development time. While there are several possible points of interest to observe for backtracking, this paper focuses on user interface components of web frameworks. Comment: In Proceedings TAV-WEB 2010, arXiv:1009.330
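    The paper's actual metadata format is not given in this abstract, so the following is only a hedged sketch of the general idea: the framework annotates each rendered UI component with its server-side origin, and a failing browser test walks up the DOM to report the nearest annotated ancestor. The data-origin attribute and the ComponentInfo shape are assumptions, not the paper's format.

```typescript
// Hypothetical origin metadata for a server-side UI component.
interface ComponentInfo {
  id: string;       // server-side component identifier
  template: string; // template or processing path that produced it
}

// Server side: wrap a component's rendered HTML with origin metadata.
function renderWithMetadata(info: ComponentInfo, innerHtml: string): string {
  return `<div data-origin="${info.id}" data-template="${info.template}">` +
         `${innerHtml}</div>`;
}

// Test side: when an assertion fails inside some element, report the nearest
// annotated ancestor as the likely server-side origin of the error.
function locateOrigin(failedElement: Element): string | null {
  const annotated = failedElement.closest("[data-origin]");
  return annotated ? annotated.getAttribute("data-origin") : null;
}
```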

    Building Robust E-learning Software Systems Using Web Technologies

    Building a robust e-learning software platform represents a major challenge for both the project manager and the development team. Since the functionality of these software systems improves and grows by the day, several aspects must be taken into consideration – e.g. workflows, use-cases or alternative scenarios – in order to create a well-standardized and fully functional integrated learning management system. The paper focuses on a model of implementation for an e-learning software system, analyzing its features and functional mechanisms as well as exemplifying an implementation algorithm. A list of some of the most widely used web technologies (both server-side and client-side) is analyzed, and major security leaks of web applications are also discussed. Keywords: E-learning, E-testing, Web Technology, Software System, Web Platform

    Proposing a secure component-based-application logic and system’s integration testing approach

    Software engineering has moved from traditional methods for enterprise applications to component-based development for distributed systems' applications. This new era has grown over the last few years, with component-based methods for the design and rapid development of systems, but the fact is that practical e-commerce distributed systems deploying all of the technology's secure software features remain a highly rated target for intruders. Although most research has been conducted on web application services, which account for a large share of present software, Component Based Software in the middle tier, which rapidly develops application logic, also opens security-breaching opportunities. This research paper focuses on a burning issue for researchers and scientists and a weak link in component-based distributed systems: logical attacks, which cannot be detected by any intrusion detection system within middle-tier e-commerce distributed applications. We propose an approach to securely designing application logic for distributed systems while dealing with this logical vulnerability issue.
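    To make the notion of a logical attack concrete, here is a hedged sketch (not the paper's proposed approach) of a classic application-logic flaw: an endpoint that trusts a client-supplied total is exploitable with requests that look perfectly valid to an intrusion detection system, and the usual fix is to recompute the value from trusted server-side data. All names and prices are invented for illustration.

```typescript
interface OrderItem { productId: string; quantity: number; }

// Illustrative server-side price list (trusted data).
const PRICE_LIST: Record<string, number> = { "book-1": 20, "pen-7": 2 };

// Vulnerable: every request is syntactically valid, so no IDS flags it,
// yet a tampered request can claim any total it likes.
function checkoutTrustingClient(_items: OrderItem[], claimedTotal: number): number {
  return claimedTotal;
}

// Safer application logic: derive the total only from server-side state.
function checkoutServerSide(items: OrderItem[]): number {
  return items.reduce(
    (sum, item) => sum + (PRICE_LIST[item.productId] ?? 0) * item.quantity,
    0,
  );
}
```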

    On the Change in Archivability of Websites Over Time

    As web technologies evolve, web archivists work to keep up so that our digital history is preserved. Recent advances in web technologies have introduced client-side executed scripts that load data without a referential identifier or that require user interaction (e.g., content loading when the page has scrolled). These advances have made automated methods for capturing web pages more difficult. Because of the evolving schemes for publishing web pages, along with the progressive capability of web preservation tools, the archivability of pages on the web has varied over time. In this paper we show that the archivability of a web page can be deduced from the type of page being archived, which aligns with that page's accessibility with respect to dynamic content. We show concrete examples of when these technologies were introduced by referencing mementos of pages that have persisted through a long evolution of available technologies. Identifying the reasons these web pages could not be archived in the past, with respect to accessibility, serves as a guide for ensuring that content with longevity is published using good-practice methods that make it available for preservation. Comment: 12 pages, 8 figures, Theory and Practice of Digital Libraries (TPDL) 2013, Valletta, Malta
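    The scroll-triggered loading the abstract mentions can be pictured with the sketch below: the additional content has no referential identifier in the initial HTML and is only fetched after user interaction, so an archiving crawler that neither executes scripts nor simulates scrolling never captures it. The /api/next-page endpoint and the feed element are assumptions.

```typescript
// Content fetched only after the user scrolls near the bottom of the page.
// Nothing in the initial HTML points at this data, which is what makes the
// page hard to archive automatically.
async function loadMore(): Promise<void> {
  const response = await fetch("/api/next-page"); // no stable URL in the markup
  const fragment = await response.text();
  document.getElementById("feed")?.insertAdjacentHTML("beforeend", fragment);
}

window.addEventListener("scroll", () => {
  const nearBottom =
    window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
  if (nearBottom) {
    void loadMore();
  }
});
```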