    A document based traceability model for test management

    Software testing has become more complicated with the emergence of distributed networks, real-time environments, third-party software enablers, and the need to test systems at multiple integration levels. These scenarios have raised concern over the quality of software testing. Software quality has been deteriorating due to inefficient and ineffective testing activities, one of the main flaws being the ineffective use of test management to handle software documentation. Within documentation, it is difficult to detect and trace bugs across related documents, which makes traceability the major concern. Various studies have been conducted on test management, but very few have focused on document traceability in particular, especially regarding how errors propagate through documentation. The objective of this thesis is to develop a new traceability model that integrates software engineering documents to support test management. The artefacts concerned are requirements, design, source code, test descriptions, and test results. The proposed model tackles software traceability in both forward and backward propagation by implementing a multi-bidirectional pointer. This platform enables the test manager to navigate and capture a set of related artefacts to support the test management process. A new prototype was developed to facilitate observation of software traceability across all related artefacts over the entire documentation lifecycle. The model was then applied to a case study of a completed software development project with a full set of software documents, the On-Board Automobile (OBA). It was evaluated qualitatively and quantitatively using feature analysis, precision and recall, and expert validation. The evaluation results show that the proposed model and its prototype are justified and significant in supporting test management.
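
    As a rough illustration (with invented names and types; the thesis defines its own artefact and pointer structures), a multi-bidirectional pointer can be sketched in Haskell as a pair of maps, one per traversal direction, so that any artefact can be traced forward towards the test results it influences or backward towards the requirements it stems from:

        import qualified Data.Map.Strict as Map
        import           Data.Map.Strict (Map)

        -- The five artefact kinds named in the abstract.
        data ArtefactKind = Requirement | Design | SourceCode | TestDescription | TestResult
          deriving (Show, Eq, Ord)

        data Artefact = Artefact { kind :: ArtefactKind, ident :: String }
          deriving (Show, Eq, Ord)

        -- Every link is stored in both directions, so propagation can run
        -- forward (requirement -> ... -> test result) or backward.
        data TraceModel = TraceModel
          { forwardLinks  :: Map Artefact [Artefact]
          , backwardLinks :: Map Artefact [Artefact]
          }

        link :: Artefact -> Artefact -> TraceModel -> TraceModel
        link src dst (TraceModel fwd bwd) = TraceModel
          (Map.insertWith (++) src [dst] fwd)
          (Map.insertWith (++) dst [src] bwd)

        -- All artefacts downstream of a change (assumes the link graph is
        -- acyclic, as layered documentation is).
        traceForward :: TraceModel -> Artefact -> [Artefact]
        traceForward m a =
          let direct = Map.findWithDefault [] a (forwardLinks m)
          in  direct ++ concatMap (traceForward m) direct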

    HasTEE: Programming Trusted Execution Environments with Haskell

    Trusted Execution Environments (TEEs) are hardware-enforced memory isolation units, emerging as a pivotal security solution for security-critical applications. TEEs, like Intel SGX and ARM TrustZone, allow the isolation of confidential code and data within an untrusted host environment, such as the cloud and IoT. Despite strong security guarantees, TEE adoption has been hindered by an awkward programming model that requires manual application partitioning and the use of error-prone, memory-unsafe, and potentially information-leaking low-level C/C++ libraries. We address the above with HasTEE, a domain-specific language (DSL) embedded in Haskell for programming TEE applications. HasTEE includes a port of the GHC runtime for the Intel SGX TEE. HasTEE uses Haskell's type system to automatically partition an application and to enforce Information Flow Control on confidential data. The DSL, being embedded in Haskell, allows the use of higher-order functions, monads, and a restricted set of I/O operations to write any standard Haskell application. Contrary to previous work, HasTEE is lightweight and simple, and is provided as a plain security library, thus avoiding any GHC modifications. We show the applicability of HasTEE by implementing case studies on federated learning, an encrypted password wallet, and a differentially private data clean room.

    Comment: To appear in Haskell Symposium 202
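
    The following is a hypothetical sketch of the general technique only, not HasTEE's actual API (every name here is invented; see the paper for the real interface): an abstract Secret type whose constructor is hidden behind an enclave monad lets the Haskell type checker stop untrusted code from inspecting confidential data, which is the essence of enforcing information flow control from a plain library:

        module EnclaveSketch (Secret, Enclave, classify, declassify, runEnclave) where

        -- Secret's constructor is NOT exported: untrusted code can pass a
        -- Secret around but never inspect it; only enclave code may unwrap it.
        newtype Secret a  = Secret a
        newtype Enclave a = Enclave (IO a)   -- a computation meant to run in the TEE

        instance Functor Enclave where
          fmap f (Enclave io) = Enclave (fmap f io)

        instance Applicative Enclave where
          pure x = Enclave (pure x)
          Enclave f <*> Enclave x = Enclave (f <*> x)

        instance Monad Enclave where
          Enclave x >>= k = Enclave (x >>= \a -> case k a of Enclave y -> y)

        classify :: a -> Enclave (Secret a)
        classify = pure . Secret

        -- Declassification is only possible inside the Enclave monad, so every
        -- release of confidential data is explicit and localized.
        declassify :: Secret a -> Enclave a
        declassify (Secret a) = pure a

        -- In a real system this boundary would cross into isolated TEE memory.
        runEnclave :: Enclave a -> IO a
        runEnclave (Enclave io) = io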

    Activity Report: Automatic Control 2012


    A review of software change impact analysis

    Change impact analysis is required for constantly evolving systems to support the comprehension, implementation, and evaluation of changes. A great deal of research effort has been spent on this subject over the last twenty years, and many approaches have been published. However, no extensive attempt has been made to summarize and review the published approaches as a basis for further research in the area. We therefore present the results of a comprehensive investigation of software change impact analysis, based on a literature review and a taxonomy for impact analysis. The contribution of this review is threefold. First, approaches proposed for impact analysis are explained with regard to their motivation and methodology, and are further classified according to the criteria of the taxonomy to enable the comparison and evaluation of approaches proposed in the literature. Second, we evaluate our taxonomy regarding the coverage of its classification criteria in the studied literature. Last, we address and discuss as yet unsolved problems, research areas, and challenges of impact analysis discovered by our review, to illustrate possible directions for further research.
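
    As a concrete example of the classic dependency-based family that such taxonomies cover (a textbook technique, not any single approach from the survey), the impact set of a change can be computed as everything transitively reachable in the reverse dependency graph:

        import qualified Data.Map.Strict as Map
        import qualified Data.Set as Set
        import           Data.Map.Strict (Map)
        import           Data.Set (Set)

        type Entity = String

        -- dependents maps each entity to the entities that depend on it
        -- (the reverse dependency graph).
        impactSet :: Map Entity [Entity] -> Entity -> Set Entity
        impactSet dependents changed = Set.delete changed (go Set.empty [changed])
          where
            go seen []       = seen
            go seen (e:rest)
              | e `Set.member` seen = go seen rest          -- already visited
              | otherwise           = go (Set.insert e seen)
                                         (Map.findWithDefault [] e dependents ++ rest)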

    Mutation Testing Advances: An Analysis and Survey


    Cryptography for Big Data Security

    As big data collection and analysis becomes prevalent in today’s computing environments there is a growing need for techniques to ensure security of the collected data. To make matters worse, due to its large volume and velocity, big data is commonly stored on distributed or shared computing resources not fully controlled by the data owner. Thus, tools are needed to ensure both the confidentiality of the stored data and the integrity of the analytics results even in untrusted environments. In this chapter, we present several cryptographic approaches for securing big data and discuss the appropriate use scenarios for each. We begin with the problem of securing big data storage. We first address the problem of secure block storage for big data, allowing data owners to store and retrieve their data from an untrusted server. We present techniques that allow a data owner to both control access to their data and ensure that none of their data is modified or lost while in storage. However, in most big data applications, it is not sufficient to simply store and retrieve one’s data, and a search functionality is necessary to allow one to select only the relevant data. Thus, we present several techniques for searchable encryption allowing database-style queries over encrypted data. We review the performance, functionality, and security provided by each of these schemes and describe appropriate use-cases. However, the volume of big data often makes it infeasible for an analyst to retrieve all relevant data. Instead, it is desirable to be able to perform analytics directly on the stored data without compromising the confidentiality of the data or the integrity of the computation results. We describe several recent cryptographic breakthroughs that make such processing possible for varying classes of analytics. We review the performance and security characteristics of each of these schemes and summarize how they can be used to protect big data analytics, especially when deployed in a cloud setting. We hope that the exposition in this chapter will raise awareness of the latest types of tools and protections available for securing big data. We believe better understanding and closer collaboration between the data science and cryptography communities will be critical to enabling the future of big data processing.
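
    To make the searchable-encryption idea concrete, here is a structural sketch only, with a toy keyed tag standing in for a real pseudorandom function such as HMAC-SHA256 (the stand-in is NOT cryptographically secure): the untrusted server stores opaque tokens mapped to document identifiers and can answer keyword queries without learning the keywords themselves:

        import qualified Data.Map.Strict as Map
        import           Data.Map.Strict (Map)
        import           Data.Char (ord)
        import           Data.List (foldl')

        type Key   = Int
        type Token = Int
        type DocId = String
        type Index = Map Token [DocId]   -- what the untrusted server stores

        -- Toy keyed tag: an insecure placeholder for a real PRF such as HMAC.
        tag :: Key -> String -> Token
        tag key = foldl' (\h c -> h * 31 + ord c) key

        -- The data owner builds the index locally, then uploads only tokens.
        buildIndex :: Key -> [(DocId, [String])] -> Index
        buildIndex key docs =
          Map.fromListWith (++) [ (tag key w, [d]) | (d, ws) <- docs, w <- ws ]

        -- A query reveals the token, not the keyword itself.
        search :: Key -> Index -> String -> [DocId]
        search key idx w = Map.findWithDefault [] (tag key w) idx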

    Research summary, January 1989 - June 1990

    The Research Institute for Advanced Computer Science (RIACS) was established at NASA ARC in June of 1983. RIACS is privately operated by the Universities Space Research Association (USRA), a consortium of 62 universities with graduate programs in the aerospace sciences, under a Cooperative Agreement with NASA. RIACS serves as the representative of the USRA universities at ARC. This document reports our activities and accomplishments for the period 1 Jan. 1989 - 30 Jun. 1990. The following topics are covered: learning systems, networked systems, and parallel systems.

    Collecting and visualizing business metrics in a cloud-based development environment

    Monitoring cloud computing resources is a straightforward and common task for any cloud application developer. The problem with current monitoring solutions is that they focus only on infrastructure resources, while many companies also need data about the business side of their applications. This thesis extends current monitoring solutions to capture business metrics from within applications; the metrics are then visualized to allow for quicker and better analysis of the data. The tool is composed of three main components. The metrics are captured with a Node.js library that is imported into the monitored application. The library sends the captured data to an InfluxDB time-series database. The data is visualized with Grafana, using tables, graphs, and gauges. The provided command-line tool creates a file that can be imported into Grafana to create a new dashboard with graphs in it. The requirements for the tool were derived from the needs of the software developers and clients of the web and mobile developer Codemate. An architectural design was made based on the requirements and then implemented on the AWS cloud platform on top of Kubernetes. The implementation was evaluated by testing it on a real production server. The tool is functional and works as intended. The results from the evaluation show that the tool created in this thesis can help companies gain better information about their products. Future work includes adding metrics capture for other languages such as Go and Ruby, as well as integrating the tool into Codemate's new development environment. Further research is needed especially on improving the performance of the solution in large systems.
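
    The thesis's capture library is Node.js, but the capture step itself is language-independent: events are encoded in InfluxDB line protocol, which the database ingests regardless of client language. A minimal sketch of that encoding (the measurement, tag, and field names are invented for illustration):

        import Data.List (intercalate)

        -- Encode one business event as a line of InfluxDB line protocol:
        --   measurement,tag=value field=<int>i timestamp-in-nanoseconds
        lineProtocol :: String -> [(String, String)] -> [(String, Int)] -> Integer -> String
        lineProtocol measurement tags fields ts =
             measurement
          ++ concatMap (\(k, v) -> "," ++ k ++ "=" ++ v) tags
          ++ " "
          ++ intercalate "," [ k ++ "=" ++ show v ++ "i" | (k, v) <- fields ]
          ++ " "
          ++ show ts

        -- lineProtocol "signups" [("app","shop")] [("count",1)] 1700000000000000000
        --   == "signups,app=shop count=1i 1700000000000000000"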

    Annual Research Report, 2010-2011

    Annual report of collaborative research projects of Old Dominion University faculty and students in partnership with business, industry, and government.