
    The Usefulness of GUI Test Automation for HTML5-Based Hybrid Mobile Applications: A Case Study

    While the first research papers on GUI test automation date back to the 1990s, its use in the software industry is still relatively rare. Traditionally, GUIs have been tested only manually, and test automation has focused on the lower levels of testing, such as unit testing and integration testing. The main reason for this is the complexity of GUIs compared to lower-level software components. However, manual GUI testing is a tedious and time-consuming process that involves repetitive and dull tasks, since the same tests need to be executed on every testing iteration of the software under test. The main goal of GUI test automation is to automate those steps and thereby improve the cost-efficiency of the testing process and free the testers’ time for more meaningful and useful tasks. Previous research on GUI test automation reveals contradictory results: some studies have found GUI test automation to be both beneficial and cost-efficient, while others have reached the opposite conclusion. These contradictory results, together with the lack of clarity on the benefits, challenges, limitations and impediments of GUI test automation, were the main drivers for this thesis, whose main objective was to examine whether the use of GUI test automation can be beneficial and cost-efficient. The research was conducted as a combination of a literature review on the subject and a case study of three HTML5-based hybrid mobile application projects in the mobile development unit of one of the biggest IT companies in Finland.

    Analysis and modelling of cost of quality in aircraft tailplane assembly

    With production quality playing an ever more important role in maintaining a company's competitiveness, Cost of Quality (CoQ) is receiving increasing attention in manufacturing industries. In the aircraft manufacturing industry especially, due to its stringent quality requirements, CoQ has become a serious issue for managers. However, because of the specific characteristics of the industry, such as high technology, low volume, and a low degree of automation, traditional generic CoQ models cannot be applied directly, leaving most aircraft manufacturing companies without a systematic method or efficient tool to analyse and manage CoQ. It is therefore essential to develop a CoQ model that can be used to analyse and estimate CoQ in the aircraft manufacturing industry. This research aims at developing a CoQ model for tailplane assembly that can help the quality manager collect and store quality-issue and cost information, estimate the CoQ, and analyse the benefit of the cost spent on quality. The CoQ elements are identified and defined by comparing the literature with actual operational data. A Prevention-Appraisal-Failure (P-A-F) / Activity-Based Costing (ABC) system is applied to develop the CoQ estimation system, and Cost-Benefit Analysis (CBA) is applied to analyse the benefit brought by the cost spent on quality. In order to collect enough professional data for the model, an industry survey was designed. Moreover, GUIs were designed using VBA in MS Excel to improve the model's operability and practicability. Furthermore, two different cases and expert judgements were used to validate the developed CoQ model. The validation results illustrate that the developed model can help the user estimate and analyse the CoQ in tailplane assembly, and that it supplies a method to analyse quality issues quantitatively. The overall performance of the model was approved by experts in the aircraft industry. The model is suitable for the aircraft industry and worth promoting in this field.
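A P-A-F model of the kind described groups quality costs into prevention, appraisal, and failure categories and aggregates them for analysis. The following is a minimal sketch of that aggregation; the category examples and the conformance-share metric are illustrative assumptions, not the thesis' actual model.

```python
# Minimal sketch of a Prevention-Appraisal-Failure (P-A-F) cost-of-quality
# aggregation. Category examples are illustrative, not the thesis' model.
from dataclasses import dataclass

@dataclass
class CoQLedger:
    prevention: float = 0.0        # e.g. training, process planning
    appraisal: float = 0.0         # e.g. inspection, testing
    internal_failure: float = 0.0  # e.g. rework found before delivery
    external_failure: float = 0.0  # e.g. warranty claims after delivery

    def total(self) -> float:
        """Total cost of quality across all four P-A-F categories."""
        return (self.prevention + self.appraisal
                + self.internal_failure + self.external_failure)

    def conformance_share(self) -> float:
        """Share of CoQ spent on conformance (prevention + appraisal)."""
        return (self.prevention + self.appraisal) / self.total()

ledger = CoQLedger(prevention=120.0, appraisal=80.0,
                   internal_failure=250.0, external_failure=50.0)
print(ledger.total())              # 500.0
print(ledger.conformance_share())  # 0.4
```

A cost-benefit analysis of the sort the abstract mentions would then compare how failure costs fall as prevention and appraisal spending rises.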

    BIM Assisted Design Process Automation for Pre-Engineered Buildings (PEB)

    The effective adoption and implementation of Building Information Modeling (BIM) is still challenging for the construction industry. However, studies and reports show a significant increase in the rate of BIM implementation and adoption in mainstream construction activities over the last five years. In contrast, Pre-Engineered Building (PEB) construction, a specialized construction system which provides a very efficient approach to the construction of primarily industrial buildings, has not seen the same uptake in BIM implementation and adoption. The thesis reviews the benefits and main applications of BIM for the PEB industry, as well as the challenges of its practical implementation. To facilitate the implementation of BIM in the PEB industry, a BIM framework is adapted from the Pre-fabrication (Pre-fab) industry, and new workflows, process maps, and data-exchange strategies are developed. Because the PEB industry traditionally makes significant use of automation in its design and fabrication processes, this work investigates the technical challenges of incorporating automation into the proposed BIM process. Two new BIM concepts, “Planar Concept” and “Floating LOD”, are then developed and implemented as a solution to these challenges. To define the proper input/output criteria for automated BIM design processes, a numerical study was performed to identify an “Optimum LOD”. A software implementation embodying the research outcomes was developed to illustrate the feasibility of the results. Its step-by-step deployment is analyzed and discussed using an example industry PEB design project. Further, the impact of this work is extended by integrating the developed BIM framework and automated design process with wind engineering design activities, tools, and procurement systems. The study concludes that deployment of the proposed BIM framework could significantly address existing issues, from project design through to operation processes, found in the PEB industry. The results also indicate that the developed concepts have the potential to support the application of automation in other sectors of the general construction industry. This thesis is written using the Integrated Article format and includes various complementary studies.

    Agile Testing: Improving the Process : Case Descom

    The thesis was assigned by Descom, a marketing and technology company based in Jyväskylä. The aim of the thesis was to research the current state of testing inside the organization and to improve the existing processes and practices. The thesis was carried out as design research (applied action research), because the focus was on improving already existing processes inside a company. The theory base covers a wide range of subjects, from agile development models, the testing process, and process improvement models to agile testing. Without a solid grounding in these multiple aspects it would have been impossible to understand how testing works as a process and how it could be improved. As Descom uses agile development, it was necessary to follow the same principles throughout the writing of the thesis and in its results. As a result, the company was provided with information about the current state of testing procedures at Descom and about how to improve the testing and its processes in the future. The existing testing documentation, such as the test plan and test report, was updated. New documents were also created: a process improvement plan based on Critical Testing Processes, a test strategy, and a testing policy. Figures of the testing process, and of the processes for all test types in use, were created to serve as a visual aid for understanding testing as a whole at Descom.

    Similarity-based Web Element Localization for Robust Test Automation

    Non-robust (fragile) test execution is a commonly reported challenge in GUI-based test automation, despite much research and several proposed solutions. A test script needs to be resilient to (minor) changes in the tested application but, at the same time, fail when detecting potential issues that require investigation. Test script fragility is a multi-faceted problem. However, one crucial challenge is how to reliably identify and locate the correct target web elements when the website evolves between releases or otherwise fail and report an issue. This article proposes and evaluates a novel approach called similarity-based web element localization (Similo), which leverages information from multiple web element locator parameters to identify a target element using a weighted similarity score. This experimental study compares Similo to a baseline approach for web element localization. To get an extensive empirical basis, we target 48 of the most popular websites on the Internet in our evaluation. Robustness is considered by counting the number of web elements found in a recent website version compared to how many of these existed in an older version. Results of the experiment show that Similo outperforms the baseline; it failed to locate the correct target web element in 91 out of 801 considered cases (i.e., 11%) compared to 214 failed cases (i.e., 27%) for the baseline approach. The time efficiency of Similo was also considered, where the average time to locate a web element was determined to be 4 milliseconds. However, since the cost of web interactions (e.g., a click) is typically on the order of hundreds of milliseconds, the additional computational demands of Similo can be considered negligible. This study presents evidence that quantifying the similarity between multiple attributes of web elements when trying to locate them, as in our proposed Similo approach, is beneficial. 
    With acceptable efficiency, Similo provides significantly higher effectiveness (i.e., robustness) than the baseline web element localization approach.
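The core idea of scoring candidates by a weighted sum of per-attribute similarities can be sketched as follows. The attribute set, weights, and the string-similarity rule here are invented for illustration; Similo's actual locator parameters and weighting are defined in the article itself.

```python
# Illustrative sketch of similarity-based web element localization:
# each candidate is scored by a weighted sum of per-attribute
# similarities against the target's recorded locator parameters.
# Attributes, weights, and the similarity rule are assumptions.

def attr_similarity(a, b):
    """1.0 on exact match, 0.5 if one string contains the other, else 0.0."""
    if not a or not b:
        return 0.0
    if a == b:
        return 1.0
    if a in b or b in a:
        return 0.5
    return 0.0

def locate(target, candidates, weights):
    """Return the candidate element with the highest weighted score."""
    def score(cand):
        return sum(w * attr_similarity(target.get(k), cand.get(k))
                   for k, w in weights.items())
    return max(candidates, key=score)

weights = {"tag": 0.5, "id": 1.5, "text": 1.0, "class": 0.5}
target = {"tag": "button", "id": "submit-btn", "text": "Submit"}
candidates = [
    {"tag": "button", "id": "cancel-btn", "text": "Cancel"},
    {"tag": "button", "id": "submit-btn-2", "text": "Submit"},  # renamed id
    {"tag": "a", "id": "", "text": "Submit feedback"},
]
print(locate(target, candidates, weights)["id"])  # submit-btn-2
```

The point of the multi-attribute score is robustness: even though the `id` changed between releases, the combined evidence from the other attributes still selects the right element, where a single-locator approach would fail.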

    Powertrain Assembly Lines Automatic Configuration Using a Knowledge Based Engineering Approach

    Technical knowledge and experience are intangible assets crucial for competitiveness. Knowledge is particularly important when it comes to complex design activities such as the configuration of manufacturing systems. The preliminary design of manufacturing systems relies significantly on the experience of designers and engineers, on lessons learned, and on complex sets of rules; it is subject to huge variability of inputs and outputs and involves decisions that must satisfy many competing requirements. This complicated design process is associated with high costs, long lead times, and a high probability of risks and rework. It is estimated that around 20% of a designer’s time is dedicated to searching for and analyzing past available knowledge, while 40% of the information required for design is identified through personally stored information. At a company level, the design of a new production line does not start from scratch. Based on the basic requirements of the customers, engineers use their own knowledge and try to recall past layout ideas, searching for production line designs stored locally in their CAD systems [1]. A lot of knowledge is already stored, has been used for a long time, and has evolved over time. There is a need to retrieve this knowledge and integrate it into a common and reachable framework. Knowledge Based Engineering (KBE) and knowledge representation techniques are considered a successful way to tackle this design problem at an industrial level. KBE is, in fact, a research field that studies methodologies and technologies for capturing and re-using product and process engineering knowledge to achieve automation of repetitive design tasks [2]. This study presents a methodology to support the configuration of powertrain assembly lines, reducing design times by introducing a best practice for production-system provider companies.
    The methodology is developed in a real industrial environment, within Comau S.p.A., introducing the role of a knowledge engineer. The approach includes the extraction of existing technical knowledge and its implementation in a knowledge-based software framework. The macro system design requirements (e.g. cycle time, production mix) are taken as input. A user-driven procedure guides the designer in the definition of the macro layout-related decisions and in the selection of the equipment to be allocated within the project. The framework is then integrated with other software tools, allowing the first-phase design of the line, including a technical description and 2D and 3D CAD line layouts. The KBE application is developed and tested on a specific powertrain assembly case study. Finally, a first validation among design engineers is presented, comparing the traditional and the new approach and estimating a cost-benefit analysis useful for possible future KBE implementations.
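The mapping from macro requirements to equipment choices that the framework automates can be sketched as a small rule base. The thresholds and equipment names below are invented for illustration; the actual captured rules belong to the proprietary Comau knowledge base.

```python
# Hedged sketch of a knowledge-based configuration rule: captured design
# knowledge is encoded as rules mapping macro requirements (cycle time,
# production mix) to an equipment choice. All thresholds and station
# names are illustrative assumptions, not Comau's actual rule base.

def select_station_type(cycle_time_s: float, production_mix: int) -> str:
    """Apply simple captured rules to pick an assembly station type."""
    if cycle_time_s < 60 and production_mix <= 2:
        # Short cycle, few variants: dedicated automation pays off.
        return "dedicated automatic station"
    if cycle_time_s < 120:
        # Moderate cycle or many variants: flexibility is needed.
        return "flexible robotic cell"
    # Long cycle times favour manual work with ergonomic support.
    return "manual station with lifting aids"

print(select_station_type(45, 1))   # dedicated automatic station
print(select_station_type(90, 4))   # flexible robotic cell
print(select_station_type(300, 4))  # manual station with lifting aids
```

In a real KBE framework each rule would carry provenance (who captured it, from which past project), so the knowledge engineer can review and evolve the rule base over time.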

    Computational Testing for Automated Preprocessing : a Matlab toolbox to enable large scale electroencephalography data processing

    Electroencephalography (EEG) is a rich source of information regarding brain function. However, the preprocessing of EEG data can be quite complicated, due to several factors. For example, the distinction between true neural sources and noise is indeterminate, and EEG datasets can also be very large. These factors create a large number of subjective decisions, with a consequent risk of compound error. Existing tools present the experimenter with a large choice of analysis methods, yet it remains a challenge for the researcher to integrate methods for batch processing of typically large datasets, and to compare methods in order to choose an optimal approach across the many possible parameter configurations. Additionally, many tools still require a high degree of manual decision making for, e.g., the classification of artefacts in channels, epochs or segments. This introduces extra subjectivity, is slow, and is not reproducible. Batching and well-designed automation can help to regularise EEG preprocessing, and thus reduce human effort, subjectivity and consequent error. We present the Computational Testing for Automated Preprocessing (CTAP) toolbox, which facilitates: (i) batch processing that is easy for experts and novices alike; and (ii) testing and manual comparison of preprocessing methods. CTAP extends the existing data structure and functions of the well-known Matlab-based EEGLAB toolbox and produces extensive quality-control outputs. CTAP is available under the MIT licence from https://github.com/bwrc/ctap.
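The batching idea, applying one fixed pipeline of preprocessing steps to every recording while logging quality-control records instead of asking for manual decisions, can be sketched as follows. This is a Python illustration of the concept only, not CTAP's actual Matlab API; the step functions and data shapes are invented.

```python
# Illustrative sketch (not CTAP's Matlab API) of batch preprocessing:
# a fixed pipeline of steps is applied to every recording, and each
# step appends a quality-control record for later review.

def detrend(data):
    """Remove the mean from a single-channel recording."""
    mean = sum(data) / len(data)
    return [x - mean for x in data]

def clip_artifacts(data, limit=100.0):
    """Clip extreme amplitudes that likely reflect artifacts."""
    return [max(-limit, min(limit, x)) for x in data]

def run_pipeline(recordings, steps):
    """Apply every step to every recording; log a QC record per step."""
    qc_log, results = [], {}
    for name, data in recordings.items():
        for step in steps:
            data = step(data)
            qc_log.append((name, step.__name__, len(data)))
        results[name] = data
    return results, qc_log

recordings = {"subj01": [101.0, -3.0, 7.0], "subj02": [0.0, 250.0, -4.0]}
results, qc_log = run_pipeline(recordings, [detrend, clip_artifacts])
print(len(qc_log))  # 4  (2 recordings x 2 steps)
```

The QC log is the key design point: because every automated decision leaves a record, the experimenter can audit the batch afterwards rather than intervening manually during it.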

    Improving Automated Software Testing while re-engineering legacy systems in the absence of documentation

    Legacy software systems are essential assets that contain an organization's valuable business logic. Because of the outdated technologies and methods used in these systems, they are challenging to maintain and expand. Therefore, organizations need to decide whether to redevelop or re-engineer the legacy system. Although in most cases re-engineering is the safer and less expensive choice, it has risks, such as failure to meet the expected quality and delays due to testing blockades. These risks are even more severe when the legacy system does not have adequate documentation. A comprehensive testing strategy, which includes automated tests and reliable test cases, can substantially reduce the risks. To mitigate the hazards associated with re-engineering, we have conducted three studies in this thesis to improve the testing process. Our first study introduces a new testing model for the re-engineering process and investigates test automation solutions to detect defects in the early re-engineering stages. We implemented this model on the Cold Region Hydrological Model (CRHM) application and discovered bugs that would not likely have been found manually. Although this approach helped us discover a great number of software defects, designing test cases is very time-consuming due to the lack of documentation, especially for large systems. Therefore, in our second study, we investigated an approach to generate test cases from user footprints automatically. To do this, we extended an existing tool to collect user actions and legacy system reactions, including database and file system changes. We then analyzed the data based on the order and timing of user actions and generated human-readable test cases. Our evaluation shows that this approach can detect more bugs than other existing tools. Moreover, the test cases generated using this approach contain detailed oracles that make them suitable for both black-box and white-box testing. 
    Many scientific legacy systems such as CRHM are data-driven: they take large amounts of data as input and produce massive outputs after applying mathematical models. Applying test cases and finding bugs is more demanding when dealing with large amounts of data. Hence, in our third study, we created a comparative visualization tool (ComVis) to compare a legacy system's output after each change. Visualization helps testers find data issues resulting from newly introduced bugs. Twenty participants took part in a user study in which they were asked to find data issues using ComVis and the embedded CRHM visualization tool. Our user study shows that ComVis can find 51% more data issues than the visualization tool embedded in the legacy system. Also, results from the NASA-TLX assessment and a thematic analysis of open-ended questions about each task show that users prefer ComVis over the built-in visualization tool. We believe the approaches we have introduced and the systems we have developed will significantly reduce the risks associated with the re-engineering process.
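The second study's idea of turning recorded user footprints into human-readable test cases, ordering actions by time and using observed system reactions as oracles, can be sketched as follows. The record format is invented for illustration; the thesis' tool captures richer data, including database and file-system changes.

```python
# Sketch of generating a human-readable test case from recorded user
# "footprints": actions are sorted by timestamp, and each observed
# system reaction becomes an expected-result oracle. The event format
# is an illustrative assumption, not the thesis' actual schema.

def generate_test_case(footprint):
    """Render a time-ordered footprint as numbered test steps."""
    steps = []
    ordered = sorted(footprint, key=lambda e: e["time"])
    for i, event in enumerate(ordered, 1):
        steps.append(f"{i}. {event['action']}")
        if event.get("reaction"):
            # Observed reaction becomes the oracle for this step.
            steps.append(f"   expect: {event['reaction']}")
    return "\n".join(steps)

footprint = [
    {"time": 2, "action": "click 'Run model'", "reaction": "output.csv created"},
    {"time": 1, "action": "open project 'basin.prj'", "reaction": None},
]
print(generate_test_case(footprint))
```

Because the oracle comes from the system's own recorded behaviour, such generated cases document the legacy behaviour as-is, which is exactly what a re-engineered version must preserve.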