
    A systematic approach to the Planck LFI end-to-end test and its application to the DPC Level 1 pipeline

    The Level 1 of the Planck LFI Data Processing Centre (DPC) is devoted to the handling of the scientific and housekeeping telemetry. It is a critical component of the Planck ground segment which has to adhere strictly to the project schedule to be ready for the launch and flight operations. In order to guarantee the quality necessary to achieve the objectives of the Planck mission, the design and development of the Level 1 software has followed the ESA Software Engineering Standards. A fundamental step in the software life cycle is the Verification and Validation of the software. The purpose of this work is to show an example of procedures, test development and analysis successfully applied to a key software project of an ESA mission. We present the end-to-end validation tests performed on the Level 1 of the LFI-DPC, detailing the methods used and the results obtained. Different approaches have been used to test the scientific and housekeeping data processing. Scientific data processing has been tested by injecting signals with known properties directly into the acquisition electronics, in order to generate a test dataset of real telemetry data and reproduce nominal conditions as closely as possible. For the HK telemetry processing, validation software has been developed to inject known parameter values into a set of real housekeeping packets and compare them with the corresponding timelines generated by the Level 1. With the proposed validation and verification procedure, where the on-board and ground processing are viewed as a single pipeline, we demonstrated that the scientific and housekeeping processing of the Planck-LFI raw data is correct and meets the project requirements.
    Comment: 20 pages, 7 figures; this paper is part of the Prelaunch status LFI papers published on JINST: http://www.iop.org/EJ/journal/-page=extra.proc5/jins
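    The housekeeping validation idea described above (inject known parameter values into packets, run them through the pipeline, and compare the resulting timelines against the injected reference) can be sketched as follows. This is a minimal illustration, not the actual LFI-DPC software; every function and field name here is hypothetical:

```python
# Hypothetical sketch of housekeeping-telemetry validation: known parameter
# values are injected into packets, processed into a (time, value) timeline,
# and the timeline is compared against the injected reference values.

def inject_parameters(packets, known_values):
    """Overwrite the parameter field of each packet with a known value."""
    return [dict(pkt, value=v) for pkt, v in zip(packets, known_values)]

def level1_process(packets):
    """Stand-in for the Level 1 pipeline: extract (time, value) timelines."""
    return [(pkt["time"], pkt["value"]) for pkt in packets]

def validate(timeline, known_values, tolerance=0.0):
    """Check the generated timeline against the injected reference values."""
    return all(abs(got - expected) <= tolerance
               for (_, got), expected in zip(timeline, known_values))

packets = [{"time": t, "value": 0.0} for t in range(5)]
known = [1.0, 2.0, 3.0, 4.0, 5.0]
timeline = level1_process(inject_parameters(packets, known))
assert validate(timeline, known)
```

    The key design point, as in the paper, is that the comparison is end-to-end: the reference values enter at the packet level and are only checked after the full processing chain has run.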

    Usability Measurement of the Tesadaptif.Net Application with the System Usability Scale

    The use of a software product is beneficial only if its quality has been accepted by its users: the software must be effective, efficient, and satisfying to use. These three characteristics are the usability aspects standardized by ISO/IEC 25010 and ISO 9241. This paper aims to measure the usability level of the tesadaptif.net application. Usability measurement in this study uses the System Usability Scale (SUS), with the SUS questionnaire distributed to 88 respondents consisting of students majoring in Electrical Engineering and Industrial Engineering at the Faculty of Engineering (FT-UNG). The final SUS score is 75.97. This score indicates that the tesadaptif.net application meets the usability level across four categories: acceptability range, grade scale, adjective rating, and Net Promoter Score (NPS), with results of acceptable, Grade B, Good, and Passive respectively.
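    The standard SUS scoring rule behind a study like this (the general formula, not code from the paper) is: odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is scaled by 2.5 to give a 0–100 score; the study-level score is the mean over respondents:

```python
def sus_score(responses):
    """Compute one respondent's SUS score (0-100).

    `responses` is a list of 10 Likert ratings (1-5). Odd-numbered items
    (positively worded) contribute (rating - 1); even-numbered items
    (negatively worded) contribute (5 - rating). The sum is scaled by 2.5.
    """
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # i is 0-based, so even i = odd item
                for i, r in enumerate(responses))
    return total * 2.5

def mean_sus(all_responses):
    """Study-level score: the mean SUS score over all respondents."""
    return sum(sus_score(r) for r in all_responses) / len(all_responses)

# A respondent who fully agrees with every positive item and fully
# disagrees with every negative one scores the maximum:
assert sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]) == 100.0
```

    Applied to 88 such questionnaires, this is the calculation that yields an aggregate score such as the paper's 75.97.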

    Software quality metrics in the automatic evaluation of Python introductory programming

    Numerous virtual environments with automatic program evaluation have emerged to assist the teaching-learning process, allowing timely feedback. In a review of these environments, we find few studies that focus on an approach centered on refactoring, where students are strongly encouraged to refactor, improving the submitted code to also meet quality criteria. In a traditional environment, the student submits an answer and, if it is dynamically correct, moves on to the next question. In this work, we propose a complementary approach based on software engineering metrics, which allows a finer evaluation of the code: the programmer, after arriving at a dynamically correct answer, is invited and encouraged to refactor the solution toward optimal code that also meets the software quality metrics. The work is based on source code in the Python language and shows which software quality metrics can be used to encourage students to refactor their code in programming fundamentals disciplines.
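    One concrete metric such an environment might report back to the student (a sketch of the general idea, not the paper's actual tooling) is cyclomatic complexity, approximated here by counting branch points in the abstract syntax tree of the submitted Python code:

```python
import ast

# Simplified cyclomatic-complexity estimate: 1 plus the number of branch
# points found in the AST. Each `and`/`or` chain (a single BoolOp node)
# counts once regardless of operand count -- a deliberate approximation.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source):
    """Return 1 + the number of branch points in the given source code."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

submission = """
def sign(x):
    if x > 0:
        return 1
    elif x < 0:
        return -1
    else:
        return 0
"""
# The if/elif chain parses as two nested If nodes, so complexity is 3.
assert cyclomatic_complexity(submission) == 3
```

    An environment following the paper's approach would accept this dynamically correct submission but could flag a high score and invite the student to refactor until the metric falls below a threshold.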

    Software Sustainability: The Modern Tower of Babel

    The aim of this paper is to explore the emerging definitions of software sustainability from the field of software engineering in order to contribute to the question: what is software sustainability?

    A Comparison of Reinforcement Learning Frameworks for Software Testing Tasks

    Software testing activities scrutinize the artifacts and the behavior of a software product to find possible defects and ensure that the product meets its expected requirements. Recently, Deep Reinforcement Learning (DRL) has been successfully employed in complex testing tasks such as game testing, regression testing, and test case prioritization to automate the process and provide continuous adaptation. Practitioners can employ DRL by implementing a DRL algorithm from scratch or by using a DRL framework. DRL frameworks offer well-maintained implementations of state-of-the-art DRL algorithms to facilitate and speed up the development of DRL applications. Developers have widely used these frameworks to solve problems in various domains, including software testing. However, to the best of our knowledge, no study empirically evaluates the effectiveness and performance of the algorithms implemented in DRL frameworks. Moreover, the literature lacks guidelines that would help practitioners choose one DRL framework over another. In this paper, we empirically investigate the applications of carefully selected DRL algorithms on two important software testing tasks: test case prioritization in the context of Continuous Integration (CI) and game testing. For the game testing task, we conduct experiments on a simple game and use DRL algorithms to explore the game to detect bugs. Results show that some of the selected DRL frameworks, such as Tensorforce, outperform recent approaches in the literature. To prioritize test cases, we run experiments on a CI environment where DRL algorithms from different frameworks are used to rank the test cases. Our results show that the performance difference between implemented algorithms is in some cases considerable, motivating further investigation.
    Comment: Accepted for publication at EMSE (Empirical Software Engineering journal) 202
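    The test case prioritization task can be illustrated with a deliberately simple RL-style agent (a toy sketch, not tied to any of the evaluated DRL frameworks): an epsilon-greedy learner estimates a per-test value from observed failures and ranks failure-prone tests first in the next CI cycle:

```python
import random

# Toy epsilon-greedy prioritizer: reward = 1 when a test fails, so tests
# with a history of failing are estimated as more valuable to run early.
class TestPrioritizer:
    def __init__(self, test_ids, epsilon=0.1, lr=0.5):
        self.values = {t: 0.0 for t in test_ids}  # estimated failure value
        self.epsilon = epsilon                    # exploration rate
        self.lr = lr                              # learning rate

    def rank(self):
        """Order tests by estimated value, exploring occasionally."""
        ids = list(self.values)
        if random.random() < self.epsilon:
            random.shuffle(ids)                   # explore a random order
            return ids
        return sorted(ids, key=self.values.get, reverse=True)

    def update(self, test_id, failed):
        """Move the estimate toward the observed reward (1 = failed)."""
        reward = 1.0 if failed else 0.0
        self.values[test_id] += self.lr * (reward - self.values[test_id])

random.seed(0)
agent = TestPrioritizer(["t1", "t2", "t3"], epsilon=0.0)
for _ in range(20):                # in this CI history, t2 fails every cycle
    for t in agent.rank():
        agent.update(t, failed=(t == "t2"))
assert agent.rank()[0] == "t2"     # the flaky test is now ranked first
```

    The frameworks compared in the paper replace this tabular estimate with deep networks and richer state (execution history, durations), but the reward structure, prioritizing tests likely to reveal failures early, is the same idea.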

    ICT Action Plan


    Software Engineering for the Mobile Application Market

    One of the goals of the current United States government is to lower healthcare costs. One proposed solution is to alter the behavior of the population to be more physically active and to eat healthier. This project focuses on the latter by writing applications for the Android and iOS mobile platforms that allow users to monitor their dietary intake so they can see and correct patterns in their eating behavior.

    A Review of Models for Evaluating Quality in Open Source Software

    Open source products/projects targeting the same or similar applications are common nowadays. This makes choosing among them a tricky task. Quality is one factor that can be considered when choosing among similar open source solutions. In order to measure quality in software, quality models can be used. Open source quality models emerged due to the inability of traditional quality models to measure features unique to open source software (such as community). The aim of this paper is therefore to examine the characteristic features, unique strengths, and limitations of existing open source quality models. In addition, we compare the models based on selected attributes.