
    SELENIUM FRAMEWORK FOR WEB AUTOMATION TESTING: A SYSTEMATIC LITERATURE REVIEW

    Software testing plays a crucial role in delivering high-quality products. Manual testing is often inaccurate and unreliable, and it requires more effort than automated testing. Selenium, one of the most widely used tools, is an open-source framework that can be combined with different programming languages (Python, Ruby, Java, PHP, C#, etc.) to automate test cases for web applications. The purpose of this study is to summarize the research on Selenium automation testing so that readers can benefit when designing and delivering automated software testing with Selenium. We conducted a standard systematic literature review, manually searching 2408 papers and applying a set of inclusion/exclusion criteria; the final set comprised 16 papers published between 2009 and 2020. The results show that when Selenium is used to automate web UIs, not only can all of the application's functionality be tested, but Selenium can also be combined with other methods and algorithms such as data mining, artificial intelligence, and machine learning, and it can be applied to security testing. Future research on the Selenium framework should focus on the effectiveness and maintainability of applying Selenium within other methodologies and on improvements better suited to web automation testing
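
    For orientation, the sketch below shows what a minimal Selenium WebDriver test of the kind surveyed here can look like in Java; the URL, locator, and expected title are placeholders rather than examples taken from the reviewed papers, and a local Chrome installation with a matching driver is assumed.

```java
// Minimal illustrative Selenium UI test (URL, locator and expected title are placeholders).
// Requires the selenium-java dependency and a working Chrome/chromedriver setup.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class BasicSeleniumTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();          // launches a local Chrome instance
        try {
            driver.get("https://example.com");          // navigate to the application under test
            String title = driver.getTitle();           // read the page title as a simple check
            if (!title.contains("Example")) {
                throw new AssertionError("Unexpected title: " + title);
            }
            // Locate an element and read its text, as a functional test step would
            String heading = driver.findElement(By.tagName("h1")).getText();
            System.out.println("Page heading: " + heading);
        } finally {
            driver.quit();                              // always release the browser session
        }
    }
}
```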

    Abmash: Mashing Up Legacy Web Applications by Automated Imitation of Human Actions

    Many business web-based applications do not offer application programming interfaces (APIs) to enable other applications to access their data and functions in a programmatic manner. This makes their composition difficult (for instance to synchronize data between two applications). To address this challenge, this paper presents Abmash, an approach to facilitate the integration of such legacy web applications by automatically imitating human interactions with them. By automatically interacting with the graphical user interface (GUI) of web applications, the system supports all forms of integrations including bi-directional interactions and is able to interact with AJAX-based applications. Furthermore, the integration programs are easy to write since they deal with end-user, visual user-interface elements. The integration code is simple enough to be called a "mashup". Comment: Software: Practice and Experience (2013)
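
    The sketch below illustrates the kind of GUI-level integration the paper automates, but written with plain Selenium WebDriver rather than Abmash's own API (which is not reproduced here); the application URLs and element locators are hypothetical.

```java
// Sketch of GUI-level data transfer between two legacy web apps, in the spirit of the
// approach above but using plain Selenium WebDriver (this is NOT Abmash's actual API;
// URLs and locators are hypothetical).
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class GuiLevelMashup {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Read a value from application A through its user interface
            driver.get("https://app-a.example/customers/42");
            String email = driver.findElement(By.id("email")).getText();

            // Write the same value into application B, imitating a human user
            driver.get("https://app-b.example/contacts/new");
            driver.findElement(By.name("contact_email")).sendKeys(email);
            driver.findElement(By.cssSelector("button[type=submit]")).click();
        } finally {
            driver.quit();
        }
    }
}
```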

    Selenium-Based Multithreading Functional Testing

    In a software development project, testing is an activity that can consume up to 35% of the time, effort, or cost. To reduce this, developers can choose automated testing. Automated testing of web applications, especially functional testing, can be done with tools such as Selenium. By default, Selenium tests run sequentially and do not exploit multithreading, which results in fairly long execution times. In this study, a platform was developed that allows Selenium users to run tests and exploit multithreading with the Ruby language to speed up testing. The results show that Ruby's multithreading is capable of speeding up functional testing on various web applications. The gains vary depending on the functionality being tested, the testing approach, and the type of browser used
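
    As an illustration of the idea (in Java rather than the Ruby used in the study), the sketch below runs independent Selenium sessions in parallel threads; each worker owns its own browser instance because a WebDriver is not thread-safe. The URLs are placeholders.

```java
// Illustrative parallel functional testing: independent Selenium sessions run in a
// fixed thread pool to shorten total test time. Each thread gets its own WebDriver.
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelFunctionalTests {
    public static void main(String[] args) throws InterruptedException {
        List<String> pagesUnderTest = List.of(
                "https://app.example/login",
                "https://app.example/search",
                "https://app.example/checkout");

        ExecutorService pool = Executors.newFixedThreadPool(3);
        for (String url : pagesUnderTest) {
            pool.submit(() -> {
                WebDriver driver = new ChromeDriver();   // one browser per worker thread
                try {
                    driver.get(url);
                    System.out.println(Thread.currentThread().getName()
                            + " tested " + url + " -> " + driver.getTitle());
                } finally {
                    driver.quit();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}
```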

    Selection of heterogeneous test environments for the execution of automated tests

    As software complexity grows, so does the size of the automated test suites that enable us to validate the expected behavior of the system under test. When that occurs, problems emerge for developers in the form of increased effort to manage the test process and longer execution times of the test suites. Manually managing automated tests is especially problematic, as the recurring cost of guaranteeing that the automated tests (e.g., thousands) are correctly configured to execute on the available test environments (e.g., dozens or hundreds), on a regular basis and throughout the product's lifetime, may become huge, with unbearable human effort involved. This problem increases substantially when the system under test is a highly configurable product that must be validated in heterogeneous environments, especially when these target test environments also evolve frequently (e.g., new operating systems, new browsers, new mobile devices, ...). The execution time of these test suites also becomes a major problem, since it is not feasible to run every suite on every possible configuration. Being an integral part of software development, testing needs to evolve and break free from conventional methods. This dissertation presents a technique that builds on an existing algorithm for reducing the number of test executions and extends it to distribute test cases over multiple heterogeneous test environments. The development, implementation and validation of the technique presented in this dissertation were conducted in the industrial context of an international software house. Real development scenarios were used to conduct experiments and validations, and the results demonstrated that the proposed technique is effective in eliminating the human effort involved in test distribution
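
    A toy sketch of the distribution idea is given below; it is not the dissertation's algorithm, only an illustration of matching test cases to compatible environments and balancing the load among them, with made-up capability and requirement labels.

```java
// Toy test-distribution sketch: assign each test to the least-loaded environment
// whose capabilities cover the test's requirements. Labels and data are invented.
import java.util.*;

public class TestDistributor {
    record Environment(String name, Set<String> capabilities) {}
    record TestCase(String name, Set<String> requirements) {}

    public static Map<String, List<String>> distribute(List<TestCase> tests,
                                                       List<Environment> envs) {
        Map<String, List<String>> plan = new HashMap<>();
        envs.forEach(e -> plan.put(e.name(), new ArrayList<>()));
        for (TestCase t : tests) {
            envs.stream()
                .filter(e -> e.capabilities().containsAll(t.requirements()))
                .min(Comparator.comparingInt((Environment e) -> plan.get(e.name()).size()))
                .ifPresentOrElse(
                    e -> plan.get(e.name()).add(t.name()),
                    () -> System.err.println("No compatible environment for " + t.name()));
        }
        return plan;
    }

    public static void main(String[] args) {
        List<Environment> envs = List.of(
                new Environment("win-chrome", Set.of("windows", "chrome")),
                new Environment("linux-firefox", Set.of("linux", "firefox")));
        List<TestCase> tests = List.of(
                new TestCase("loginTest", Set.of("chrome")),
                new TestCase("searchTest", Set.of("firefox")),
                new TestCase("checkoutTest", Set.of("chrome", "windows")));
        System.out.println(distribute(tests, envs));
    }
}
```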

    Automated Driver Management for Selenium WebDriver

    Selenium WebDriver is a framework used to control web browsers automatically. It provides a cross-browser Application Programming Interface (API) for different languages (e.g., Java, Python, or JavaScript) that allows automatic navigation, user impersonation, and verification of web applications. Internally, Selenium WebDriver makes use of the native automation support of each browser. Hence, a platform-dependent binary file (the so-called driver) must be placed between the Selenium WebDriver script and the browser to support this native communication. The management (i.e., download, setup, and maintenance) of these drivers is cumbersome for practitioners. This paper provides a complete methodology to automate this management process. In particular, we present WebDriverManager, the reference tool implementing this methodology. WebDriverManager provides different execution methods: as a Java dependency, as a Command-Line Interface (CLI) tool, as a server, as a Docker container, and as a Java agent. To provide empirical validation of the proposed approach, we surveyed WebDriverManager users. The aim of this study is twofold. First, we assessed the extent to which WebDriverManager is adopted and used. Second, we evaluated the WebDriverManager API following Clarke’s usability dimensions. A total of 148 participants worldwide completed this survey in 2020. The results show a remarkable assessment of the automation capabilities and API usability of WebDriverManager by Java users, but scarce adoption for other languages. This work has been supported in part by the "Análisis en tiempo Real de sensores sociALes y EStimación de recursos para transporte multimodal basada en aprendizaje profundo" project (MaGIST-RALES), funded by the Spanish Agencia Estatal de Investigación (AEI, doi 10.13039/501100011033) under grant PID2019-105221RB-C44. This work also received partial support from FEDER/Ministerio de Ciencia, Innovación y Universidades - Agencia Estatal de Investigación through project Smartlet (TIN2017-85179-C3-1-R), and from the eMadrid Network, which is funded by the Madrid Regional Government (Comunidad de Madrid) with grant No. S2018/TCS-4307
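
    A minimal example of the "Java dependency" execution method is sketched below, assuming the io.github.bonigarcia:webdrivermanager and selenium-java artifacts are on the classpath; the target URL is a placeholder.

```java
// WebDriverManager used as a Java dependency: it resolves, downloads (if needed) and
// caches the chromedriver binary matching the installed Chrome before the test starts.
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class WdmExample {
    public static void main(String[] args) {
        WebDriverManager.chromedriver().setup();   // automated driver management step

        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com");     // placeholder application under test
            System.out.println(driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}
```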

    AVOIDIT IRS: An Issue Resolution System To Resolve Cyber Attacks

    Cyber attacks have greatly increased over the years, and attackers have progressively improved at devising attacks against specific targets. A cyber attack is a malicious activity launched against networks to gain unauthorized access, causing modification, destruction, or even deletion of data. This dissertation highlights the need to assist defenders with identifying and defending against cyber attacks. In this dissertation an attack issue resolution system called AVOIDIT IRS (AIRS) is developed. AVOIDIT IRS is based on the attack taxonomy AVOIDIT (Attack Vector, Operational Impact, Defense, Information Impact, and Target). Attacks are collected by AIRS and classified into their respective categories using AVOIDIT. Accordingly, an organizational cyber attack ontology was developed using feedback from security professionals to improve communication and reusability amongst cyber security stakeholders. AIRS is developed as a semi-autonomous application that extracts unstructured external and internal attack data to classify attacks in sequential form. In doing so, we designed and implemented a frequent pattern and sequential classification algorithm associated with the five classifications in AVOIDIT. The issue resolution approach uses inference to educate the defender on plausible cyber attacks. AIRS can work in conjunction with an intrusion detection system (IDS) to provide a heuristic for cyber security breaches within an organization. AVOIDIT provides a framework for classifying appropriate attack information, which is fundamental in devising defense strategies against such cyber attacks. AIRS is further used as a knowledge base in a game-inspired defense architecture to promote game model selection upon attack identification. Future work will incorporate honeypot attack information to improve attack identification, classification, and defense propagation. In this dissertation, 1,025 common vulnerabilities and exposures (CVEs) and over 5,000 log file instances were captured in AIRS for analysis. Security experts were consulted to create rules to extract pertinent information and algorithms to correlate identified data for notification. AIRS was developed using the CodeIgniter [74] framework to provide a seamless visualization tool for data mining of potential cyber attacks relative to web applications. Testing of the AVOIDIT IRS revealed a recall of 88%, a precision of 93%, and a 66% correlation metric
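
    The sketch below only shows how the reported precision and recall figures are derived from a confusion matrix; the counts are made up for illustration and are not the dissertation's data.

```java
// Illustrative computation of classification metrics from a confusion matrix.
// The counts below are invented placeholders, not results from the AIRS evaluation.
public class ClassificationMetrics {
    public static void main(String[] args) {
        int truePositives = 93, falsePositives = 7, falseNegatives = 12;

        double precision = (double) truePositives / (truePositives + falsePositives);
        double recall    = (double) truePositives / (truePositives + falseNegatives);

        System.out.printf("precision = %.2f, recall = %.2f%n", precision, recall);
    }
}
```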

    The perspective of students on drivers and benefits of building information modelling incorporation into quantity surveying profession in Klang Valley Malaysia

    Building Information Modelling (BIM) is a very useful tool that facilitates architecture, engineering and construction (AEC) professionals and stakeholders in planning, designing and constructing buildings through 3D models. BIM can be extended to building operations and data storage that can be accessed by owners and others. Such data help owners and stakeholders to generate results according to the information gained through BIM models. The objectives of this study were to identify the perspective of students on the drivers of BIM incorporation into the quantity surveying profession and to identify their perspective on the benefits of such incorporation. A questionnaire survey was carried out to gain the students’ perspective on the drivers and benefits of BIM incorporation into the quantity surveying profession in Klang Valley, Malaysia. Specifically, this study investigated twelve drivers and fourteen benefits of BIM incorporation into the quantity surveying profession. The top three drivers were improving the capacity to provide whole-life value to the client, the desire for innovation to remain competitive, and strong support from university management and industry. The top three benefits were that BIM provides fast, effective and efficient quantity take-off and cost estimation, time savings in the preparation of cost estimates, and improved visualization for a better understanding of designs for measurement and to minimise omissions. For future research, it is recommended that the study be replicated in other regions so that a clearer view of this topic can be obtained. Besides, qualitative research methods could be used to identify other drivers and benefits not covered in this study. By answering the questions in the survey form, the students were able to gain some knowledge of BIM and its importance to the quantity surveying profession. Also, it would be interesting to include industrial practitioners in this kind of study, allowing comparisons of the results between academia and industry at a later stage. Nonetheless, this study benefited the undergraduate students pursuing the Bachelor of Science (Hons) Quantity Surveying programme, universities, colleges and other institutions that offer quantity surveying programmes at various levels, and quantity surveyors working in the construction industry, by exposing them to a comprehensive list of drivers and benefits of BIM incorporation into the quantity surveying profession. In a way, this study helped promote BIM and its implementation in the field of quantity surveying in Klang Valley, Malaysia

    Quality assurance for the query and distribution systems of the RCSB Protein Data Bank

    The RCSB Protein Data Bank (RCSB PDB, www.pdb.org) is a key online resource for structural biology and related scientific disciplines. The website is used on average by 165,000 unique visitors per month, and more than 2000 other websites link to it. The amount and complexity of PDB data, as well as the expectations on its usage, are growing rapidly. Therefore, ensuring the reliability and robustness of the RCSB PDB query and distribution systems is crucially important and increasingly challenging. This article describes quality assurance for the RCSB PDB website at several distinct levels, including: (i) hardware redundancy and failover, (ii) testing protocols for weekly database updates, (iii) testing and release procedures for major software updates and (iv) miscellaneous monitoring and troubleshooting tools and practices. As such, it provides suggestions for how other websites might be operated
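
    As a hedged illustration of the monitoring level (not the RCSB PDB's actual tooling), the sketch below probes the site's availability and alerts on an unexpected HTTP status; the URL is the one given in the abstract.

```java
// Simple availability probe of the kind a website-monitoring practice might include:
// request the home page and report whether it responds with HTTP 200 within a time budget.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class AvailabilityProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10))
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://www.pdb.org"))
                .timeout(Duration.ofSeconds(30))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            System.err.println("ALERT: unexpected status " + response.statusCode());
        } else {
            System.out.println("OK: site responded with 200");
        }
    }
}
```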

    From Information Overload to Knowledge Graphs: An Automatic Information Process Model

    Continuously increasing text data such as news, articles, and scientific papers on the Internet have caused the information overload problem. Collecting valuable information, as well as encoding it efficiently, from enormous amounts of unstructured textual information has become a big challenge in the information explosion age. Although many solutions and methods have been developed to reduce information overload, such as the removal of duplicated information and the adoption of personal information management strategies, most existing methods only partially solve the problem. Moreover, many existing solutions are out of date and not compatible with the rapid development of modern technology. Thus, an effective and efficient approach based on modern IT (Information Technology) techniques that can collect valuable information and extract high-quality information has become urgent and critical for many researchers in the information overload age. Based on the principles of Design Science Theory, the paper presents a novel approach to tackle information overload issues. The proposed solution is an automated information process model that employs advanced IT techniques such as web scraping, natural language processing, and knowledge graphs. The model can automatically process the full cycle of information flow, from information search to information collection, information extraction, and information visualization, making it a comprehensive and intelligent information processing tool. The paper demonstrates the model's capability to gather critical information and convert unstructured text data into a structured data model with greater efficiency and effectiveness. In addition, the paper presents multiple use cases to validate the feasibility and practicality of the model, and it reports both quantitative and qualitative evaluations to assess its effectiveness. The results indicate that the proposed model significantly reduces information overload and is valuable for both academic and real-world research
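
    A small sketch of the first stages of such a pipeline is given below, assuming the jsoup library for scraping; the "extraction" step is deliberately naive (page links become triples) and stands in for the NLP component described in the paper. The URL is a placeholder.

```java
// Web scraping with jsoup followed by a naive extraction step that turns page links
// into (page, linksTo, target) triples as a stand-in for NLP-based knowledge-graph building.
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.util.ArrayList;
import java.util.List;

public class ScrapeToTriples {
    record Triple(String subject, String predicate, String object) {}

    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("https://example.com").get();   // information collection

        List<Triple> graph = new ArrayList<>();                      // (naive) information extraction
        for (Element link : doc.select("a[href]")) {
            graph.add(new Triple(doc.title(), "linksTo", link.attr("abs:href")));
        }

        graph.forEach(t ->
                System.out.println(t.subject() + " --" + t.predicate() + "--> " + t.object()));
    }
}
```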

    A Resource Monitoring Scheme for Web Applications

    A web application is an application that uses the World Wide Web’s infrastructure to deliver its service and a web browser as a client. It is accessed by users over a network such as the Internet or an intranet. Web applications are popular due to the ubiquity of web browsers and the convenience of using a web browser as a client. They have become much more complex due to the inclusion of various other technologies such as JavaScript, Ajax and Cascading Style Sheets. HTML not only describes the structural semantics of a web page through markup tags, but also enables the inclusion of external resources into web documents, such as images, scripts, style sheets, media files and other objects, as parts of the web page. These are called web resources. An efficient resource monitoring method is necessary for the development of web applications, because the monitoring data helps in failure detection, load balancing, scheduling strategies and performance optimization. This is in response to the growing complexity of, and the different development practices adopted in, web application development, which make it difficult to maintain resources efficiently. In this work, a resource monitoring scheme has been proposed and implemented to monitor web resources such as images, scripts and style sheets. The results are presented in an interactive graph-based visualization. The graph shows the size, load time and frequency of access of these resources, and the dependencies among the links accessing a particular resource. This representation helps in the decision-making process from the organization’s point of view. For small to medium scale web applications, the scheme can also yield some speedup
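
    A toy version of the monitoring step is sketched below, assuming the jsoup library; it enumerates a page’s image, script and style-sheet resources and records each one’s size and load time, leaving the proposed graph visualization out of scope. The URL is a placeholder.

```java
// Enumerate a page's image, script and stylesheet resources and record size and load time.
// Illustrative only: the proposed scheme's frequency tracking and graph visualization are omitted.
import org.jsoup.Connection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class ResourceMonitor {
    public static void main(String[] args) throws Exception {
        Document page = Jsoup.connect("https://example.com").get();

        // img[src], script[src] and link[rel=stylesheet] cover the resource types named above
        for (Element res : page.select("img[src], script[src], link[rel=stylesheet]")) {
            String url = res.hasAttr("src") ? res.attr("abs:src") : res.attr("abs:href");
            if (url.isEmpty()) continue;

            long start = System.nanoTime();
            Connection.Response response = Jsoup.connect(url).ignoreContentType(true).execute();
            long millis = (System.nanoTime() - start) / 1_000_000;

            System.out.printf("%-8s %8d bytes %6d ms  %s%n",
                    res.tagName(), response.bodyAsBytes().length, millis, url);
        }
    }
}
```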