
    Reverse Engineering and Testing of Rich Internet Applications

    The World Wide Web experiences continuous evolution: new initiatives, standards, approaches and technologies are constantly proposed for developing more effective and higher-quality Web applications. To satisfy the growing market demand for Web applications, new technologies, frameworks, tools and environments that allow developers to build Web and mobile applications with minimal effort and in very little time have been introduced in recent years. These new technologies have enabled the dawn of a new generation of Web applications, named Rich Internet Applications (RIAs), which offer greater usability and interactivity than traditional ones. This evolution has been accompanied by some drawbacks, mostly due to the failure to apply well-known software engineering practices and approaches. As a consequence, new research questions and challenges have emerged in the field of Web and mobile application maintenance and testing. The research activity described in this thesis has addressed some of these topics, with the specific aim of proposing new and effective solutions to the problems of modelling, reverse engineering, comprehending, re-documenting and testing existing RIAs. Given the growing relevance of mobile applications in the renewed Web scenario, the problem of testing mobile applications developed for the Android operating system has been addressed too, in an attempt to explore and propose new test-automation techniques for this type of application.

    Managing big data experiments on smartphones

    The explosive number of smartphones with ever-growing sensing and computing capabilities has brought a paradigm shift to many traditional domains of computing. Re-programming smartphones and instrumenting them for application testing and data gathering at scale is currently a tedious, time-consuming process that poses significant logistical challenges. Next-generation smartphone applications are expected to be much larger in scale and more complex, demanding that they undergo evaluation and testing under different real-world datasets, devices and conditions. In this paper, we present an architecture for managing such large-scale data management experiments on real smartphones. We present the building blocks of our architecture, which encompass smartphone sensor data collected by the crowd and organized in our big data repository. The given datasets can then be replayed on our testbed, comprising real and simulated smartphones accessible to developers through a web-based interface. We demonstrate the applicability of our architecture through a case study involving the evaluation of individual components of a complex indoor positioning system for smartphones, coined Anyplace, which we have developed over the years. The study shows how our architecture allows us to derive novel insights into the performance of our algorithms and applications by simplifying the management of large-scale data on smartphones.

    JSClassFinder: A Tool to Detect Class-like Structures in JavaScript

    With the increasing usage of JavaScript in web applications, there is great demand for JavaScript code that is reliable and maintainable. To achieve these goals, classes can be emulated in the current JavaScript standard version. In this paper, we propose a reengineering tool to identify such class-like structures and to create an object-oriented model based on JavaScript source code. The tool has a parser that loads the AST (Abstract Syntax Tree) of a JavaScript application to model its structure. It is also integrated with the Moose platform to provide powerful visualizations, e.g., UML diagrams and Distribution Maps, and well-known metric values for software analysis. We also provide some examples with real JavaScript applications to evaluate the tool. Comment: VI Brazilian Conference on Software: Theory and Practice (Tools Track), p. 1-8, 201
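The class emulation this abstract refers to typically takes the form of a constructor function with methods attached to its prototype. A minimal sketch of the kind of pattern such a tool would flag as a class-like structure (the names here are invented for illustration, not taken from JSClassFinder's evaluation corpus):

```javascript
// Pre-ES6 "class" emulation: a constructor function plus methods
// on its prototype. AST-based tools detect this shape (a function
// used with `new`, with properties assigned to `Fn.prototype`)
// and map it to a class with attributes and methods.
function Point(x, y) {
  this.x = x;
  this.y = y;
}
Point.prototype.distanceTo = function (other) {
  var dx = this.x - other.x;
  var dy = this.y - other.y;
  return Math.sqrt(dx * dx + dy * dy);
};

var a = new Point(0, 0);
var b = new Point(3, 4);
console.log(a.distanceTo(b)); // 5
```

In an object-oriented model, `Point` would be reported as a class with attributes `x` and `y` and method `distanceTo`.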

    Intrusion detection and management over the world wide web

    As the Internet and society become ever more integrated, the number of Internet users continues to grow. Today there are 1.6 billion Internet users. They use its services to work from home, shop for gifts, socialise with friends, research the family holiday and manage their finances. Through generating both wealth and employment, the Internet and our economies have become interwoven. The growth of the Internet has attracted hackers and organised criminals. Users are targeted for financial gain through malware and social engineering attacks. Industry has responded to the growing threat by developing a range of defences: antivirus software, firewalls and intrusion detection systems are all readily available. Yet the Internet security problem continues to grow and Internet crime continues to thrive. Warnings on the latest application vulnerabilities, phishing scams and malware epidemics are announced regularly and serve to heighten user anxiety. Not only are users targeted for attack, but so too are businesses, corporations, public utilities and even states. Implementing network security remains an error-prone task for the modern Internet user. In response, this thesis explores whether intrusion detection and management can be effectively offered as a web service to users in order to better protect them and heighten their awareness of the Internet security threat.

    Hybrid DOM-Sensitive Change Impact Analysis for JavaScript

    JavaScript has grown to be among the most popular programming languages. However, performing change impact analysis on JavaScript applications is challenging due to features such as the seamless interplay with the DOM, event-driven and dynamic function calls, and asynchronous client/server communication. We first perform an empirical study of change propagation, the results of which show that the DOM-related and dynamic features of JavaScript need to be taken into consideration in the analysis, since they affect change impact propagation. We propose a DOM-sensitive hybrid change impact analysis technique for JavaScript through a combination of static and dynamic analysis. The proposed approach incorporates a novel ranking algorithm for indicating the importance of each entity in the impact set. Our approach is implemented in a tool called Tochal. The results of our evaluation reveal that Tochal provides a more complete analysis compared to static or dynamic methods. Moreover, through an industrial controlled experiment, we find that Tochal helps developers, improving their task completion time by 78% and their accuracy by 223%.
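The core idea of an impact set can be illustrated as reachability over a dependency graph whose edges include not only direct calls but also DOM-mediated links (one function writes a DOM node that another reads). The toy sketch below is only an illustration of that idea, not Tochal's actual algorithm, and all names in it are invented:

```javascript
// Toy impact-set computation: breadth-first reachability over a
// dependency graph. Edges may be direct calls or DOM-mediated
// (writer of a DOM node -> reader of that node).
function impactSet(edges, changed) {
  var impacted = new Set([changed]);
  var queue = [changed];
  while (queue.length > 0) {
    var fn = queue.shift();
    (edges[fn] || []).forEach(function (dep) {
      if (!impacted.has(dep)) {
        impacted.add(dep);
        queue.push(dep);
      }
    });
  }
  return Array.from(impacted).sort();
}

// "renderCart" writes #total, which "checkout" reads: a DOM edge
// that a purely static call graph would miss.
var edges = {
  updatePrice: ["renderCart"], // direct call
  renderCart: ["checkout"],    // DOM-mediated via #total
  checkout: []
};
console.log(impactSet(edges, "updatePrice"));
// → [ "checkout", "renderCart", "updatePrice" ]
```

Dropping the DOM edge would shrink the impact set to just `updatePrice` and `renderCart`, which is exactly the kind of incompleteness a DOM-sensitive analysis avoids.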

    MT-WAVE: Profiling multi-tier web applications

    The web is evolving: what was once primarily used for sharing static content has become a platform for rich client-side applications. These applications do not run exclusively on the client; while the client is responsible for presentation and some processing, a significant amount of processing and persistence happens server-side. This has advantages and disadvantages. The biggest advantage is that the user's data is accessible from anywhere: it doesn't matter which device you sign into a web application from, everything you've been working on is instantly accessible. The largest disadvantage is that large numbers of servers are required to support a growing user base; unlike traditional client applications, an organization offering a web application needs to provision compute and storage resources for each expected user. This infrastructure is designed in tiers that are responsible for different aspects of the application, and these tiers may not even be run by the same organization. As these systems grow in complexity, it becomes progressively more challenging to identify and solve performance problems. While there are many measures of software system performance, web application users only care about response latency. This "fingertip-to-eyeball performance" is the only metric that users directly perceive: when a button is clicked in a web application, how long does it take for the desired action to complete? MT-WAVE is a system for solving fingertip-to-eyeball performance problems in web applications. The system is designed for multi-tier tracing: each piece of the application is instrumented, execution traces are collected, and the system merges these traces into a single coherent snapshot of system latency at every tier. To ensure that user-perceived latency is accurately captured, the tracing begins in the web browser.
The application developer then uses the MT-WAVE Visualization System to explore the execution traces, first identifying which tier contributes the largest amount of latency and then zooming in on the specific function calls in that tier to find optimization candidates. After fixing an identified problem, the system is used to verify that the changes had the intended effect. This optimization methodology and toolset are explained through a series of case studies that identify and solve performance problems in open-source and commercial applications. These case studies demonstrate both the utility of the MT-WAVE system and the unintuitive nature of system optimization.
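The merging step described above can be pictured as aggregating per-tier trace spans into a latency breakdown. The sketch below is a minimal illustration under assumed field names (`tier`, `start`, `end`); MT-WAVE's real trace format and merge logic are not shown in the abstract:

```javascript
// Toy merge of per-tier trace spans into a per-tier latency total.
// Spans from different tiers may overlap in time (e.g. the app
// tier is waiting on the database tier).
function latencyByTier(spans) {
  var totals = {};
  spans.forEach(function (s) {
    totals[s.tier] = (totals[s.tier] || 0) + (s.end - s.start);
  });
  return totals;
}

// Timestamps in milliseconds, already aligned to one clock.
var spans = [
  { tier: "browser", start: 0,  end: 40  },
  { tier: "app",     start: 40, end: 160 },
  { tier: "db",      start: 60, end: 150 } // nested inside the app span
];
console.log(latencyByTier(spans)); // { browser: 40, app: 120, db: 90 }
```

A breakdown like this answers the first question the visualization targets: which tier is the optimization candidate (here, the app tier, much of whose time is actually spent waiting on the database).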

    Revista Economica


    Reverse engineering of web applications

    The MAP-i Doctoral Program of the Universities of Minho, Aveiro and Porto. Even so many years after its genesis, the Internet is still growing. Not only is the number of users increasing, so is the number of different programming languages and frameworks for building Web applications. However, this plethora of technologies makes Web applications' source code hard to comprehend and understand, increasing both their debugging and their maintenance costs. In this context, a number of proposals have been put forward to solve this problem. On the one hand, there are techniques that analyze the entire source code of Web applications, but the diversity of available implementation technologies makes these techniques return unsatisfactory results. On the other hand, there are techniques that dynamically (but blindly) explore applications by running them and analyzing the results of random exploration. In this case the results are better, but there is always the chance that some part of the application is left unexplored. This thesis investigates whether a hybrid approach combining static analysis and dynamic exploration of the user interface can provide better results. FREIA, a framework developed in the context of this thesis, is capable of analyzing Web applications automatically, deriving structural and behavioral interface models from them. This research was funded by Fundação para a Ciência e Tecnologia through a doctoral grant (SFRH/BD/71136/2010) under the Programa Operacional Potencial Humano (POPH), co-funded by the European Social Fund and national QREN funds.
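The hybrid idea the abstract describes can be sketched as a dynamic crawl seeded with targets found by static analysis: statically known pages are never missed, while dynamic exploration can still discover the rest. This is only a generic illustration of that combination, not FREIA's implementation, and all names are invented:

```javascript
// Toy hybrid exploration: static analysis provides seed targets,
// dynamic exploration (running each page) discovers further ones.
function explore(staticTargets, dynamicDiscover) {
  var visited = new Set();
  var frontier = staticTargets.slice(); // seeds from static analysis
  while (frontier.length > 0) {
    var page = frontier.shift();
    if (visited.has(page)) continue;
    visited.add(page);
    // Dynamic step: "run" the page and collect outgoing targets.
    dynamicDiscover(page).forEach(function (next) {
      if (!visited.has(next)) frontier.push(next);
    });
  }
  return Array.from(visited).sort();
}

// "/help" is statically reachable but never linked dynamically:
// a blind dynamic crawl starting at "/" would miss it.
var links = { "/": ["/cart"], "/cart": ["/checkout"], "/checkout": [], "/help": [] };
var found = explore(["/", "/help"], function (p) { return links[p] || []; });
console.log(found); // [ "/", "/cart", "/checkout", "/help" ]
```

The union of the two sources is what gives the hybrid approach its coverage advantage over either technique alone.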

    Services approach & overview general tools and resources

    The contents of this deliverable are split into three groups. Following an introduction, a concept and vision are sketched for how to establish the necessary natural language processing (NLP) services, including the integration of existing resources. To that end, an overview of the state of the art is given, incorporating technologies developed by the consortium partners and beyond, followed by the service approach and a practical example. Second, a concept and vision for creating interoperability among the envisioned learning tools, allowing quick and painless integration into existing learning environments, is elaborated. Third, generic paradigms and guidelines for service integration are provided. The work on this publication has been sponsored by the LTfLL STREP, funded by the European Commission's 7th Framework Programme, Contract 212578 (http://www.ltfll-project.org).

    Web attack risk awareness with lessons learned from high interaction honeypots

    Master's thesis, Information Security, Universidade de Lisboa, Faculdade de Ciências, 2009. With the evolution of Web 2.0, the majority of enterprises deploy their business over the Internet using web applications. These applications carry important data with crucial requirements such as confidentiality, integrity and availability. The loss of these properties directly affects the business, putting it at risk. Risk awareness provides the necessary know-how on how to act to achieve its mitigation.
In this thesis a collection of high-interaction web honeypots is deployed, using multiple applications and diverse operating systems, in order to analyse attacker behaviour. The use of virtualization environments, along with widely used honeypot monitoring tools, provides the forensic information that helps the research community to study the modus operandi of attackers, gathering the latest exploits and malicious tools, and to develop adequate safeguards that deal with the majority of attacking techniques. Using the detailed attack information gathered with the web honeypots, attacker behaviour is classified into different attacking profiles in order to analyse the risk mitigation safeguards needed to deal with business losses. Different security frameworks commonly used by enterprises are analysed to evaluate the benefits of the honeypots' security concepts in responding to each framework's requirements and consequently mitigating the risk.