
    Understanding the Importance and Impact of Technology in an Accounting Setting: Work Outcomes and Relationships with Clients

    This study explores how technology positively or negatively impacts the accounting profession, specifically its effect on work outcomes (i.e., the effectiveness and efficiency of work) and on relationships with clients. Three types of technology tools were featured in this study: Accounting and Analytics, Robotic Process Automation, and Communication Technology Tools and Platforms. Our research questions were (1) How much do technology tools improve the efficiency and effectiveness of the accountant? and (2) How much do technology tools affect the relationship with clients? After surveying professionals in the accounting field, we concluded that accountants believe communication tools improve their efficiency and effectiveness the most, with accounting and analytics software close behind. We can also conclude that technology has a positive, or at the very least neutral, effect on the relationship between professionals and their clients. Overall, we found that technology in the accounting field has a positive impact on both work outcomes and relationships with clients.

    Efficiency and Automation in Threat Analysis of Software Systems

    Context: Security is a growing concern in many organizations. Industries developing software systems plan for security early on to minimize expensive code refactorings after deployment. In the design phase, teams of experts routinely analyze the system architecture and design to find potential security threats and flaws. After the system is implemented, the source code is often inspected to determine its compliance with the intended functionality. Objective: The goal of this thesis is to improve the performance of security design analysis techniques (in the design and implementation phases) and to support practitioners with automation and tool support. Method: We conducted empirical studies (a systematic literature review and controlled experiments) to build an in-depth understanding of existing threat analysis techniques. We also conducted empirical case studies with industrial participants to validate our attempt at improving the performance of one technique. Further, we validated our proposal for automating the inspection of security design flaws by organizing workshops with participants (under controlled conditions) and analyzing their performance. Finally, we relied on a series of experimental evaluations to assess the quality of the proposed approach for automating security compliance checks. Findings: We found that the eSTRIDE approach can help focus the analysis and produce twice as many high-priority threats in the same time frame. We also found that reasoning about security in an automated fashion requires extending the existing notations with more precise security information. In a formal setting, the minimal model extensions for doing so include security contracts for system nodes handling sensitive information; the formally-based analysis can, to some extent, provide completeness guarantees. For a graph-based detection of flaws, the minimal required model extensions include data types and security solutions; in such a setting, the automated analysis can help reduce the number of overlooked security flaws. Finally, we suggest defining a correspondence mapping between design model elements and implemented constructs. We found that such a mapping is a key enabler for automatically checking the security compliance of the implemented system with the intended design. Achieving this rests on two points. First, a heuristics-based search is paramount to limit the manual effort required to define the mapping. Second, it is important to analyze implemented data flows and compare them to the data flows stipulated by the design.
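    The compliance-checking idea above can be made concrete with a small sketch. Assuming design-level and implemented data flows are both reduced to simple (source, target, data) tuples, comparing the two sets already separates flows that were designed but never implemented from flows that were implemented but never stipulated; the representation and names below are illustrative assumptions, not the thesis's actual notation or tooling.

```python
# Hedged sketch: relating implemented data flows to the flows stipulated by
# the design. The Flow tuple and function names are illustrative only.
from typing import NamedTuple, Set


class Flow(NamedTuple):
    source: str   # element sending the data
    target: str   # element receiving the data
    data: str     # label of the asset being transferred


def compliance_report(design: Set[Flow], implemented: Set[Flow]) -> dict:
    """Group flows by how they relate to the intended design."""
    return {
        "convergent": design & implemented,  # stipulated and implemented
        "absent": design - implemented,      # stipulated but never implemented
        "divergent": implemented - design,   # implemented but never stipulated
    }


design_flows = {Flow("WebClient", "AuthService", "credentials")}
impl_flows = {Flow("WebClient", "AuthService", "credentials"),
              Flow("AuthService", "Logger", "credentials")}  # potential flaw

print(compliance_report(design_flows, impl_flows)["divergent"])
```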

    A framework for quality assessment of ROS repositories

    Robots are increasingly being used in safety-critical contexts, such as transportation and health. The need for flexible behavior in these contexts, due to human interaction factors or unstructured operating environments, has led to a transition from hardware- to software-based safety mechanisms in robotic systems, whose reliability and quality it is imperative to guarantee. Source code static analysis is a key component of formal software verification. It consists of inspecting code, often using automated tools, to determine a set of relevant properties that are known to influence the occurrence of defects in the final product. This paper presents HAROS, a generic, plug-in-driven framework to evaluate code quality through static analysis in the context of the Robot Operating System (ROS), one of the most widely used robotics middleware frameworks. This tool (equipped with plug-ins for computing metrics and checking conformance to coding standards) was applied to several publicly available ROS repositories, whose results are also reported in the paper, thus providing a first overview of the internal quality of the software being developed in this community. This work is financed by the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme within project "POCI-01-0145-FEDER-006961", and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) as part of project UID/EEA/50014/2013.
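    As a rough illustration of the plug-in idea (and not HAROS's actual plug-in API), a metric plug-in can be thought of as a function that walks a repository and reports one number per source file; the metric below, counting TODO markers in C++ files, is a deliberately trivial stand-in.

```python
# Hedged sketch of a metric-style pass over a repository's C++ sources.
# This is not HAROS's plug-in interface, only an illustration of the kind
# of per-file metric such a plug-in might compute.
import os
import re

TODO_PATTERN = re.compile(r"\bTODO\b")


def count_todos(repo_root: str) -> dict:
    """Return a {file_path: todo_count} map for C++ sources under repo_root."""
    results = {}
    for dirpath, _, filenames in os.walk(repo_root):
        for name in filenames:
            if name.endswith((".cpp", ".cc", ".h", ".hpp")):
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as fh:
                    results[path] = sum(1 for line in fh if TODO_PATTERN.search(line))
    return results

# Usage (hypothetical path): metrics = count_todos("/path/to/ros_repository")
```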

    Using Formal Methods for Autonomous Systems: Five Recipes for Formal Verification

    Formal Methods are mathematically based techniques for software design and engineering, which enable the unambiguous description of, and reasoning about, a system's behaviour. Autonomous systems use software to make decisions without human control, are often embedded in a robotic system, are often safety-critical, and are increasingly being introduced into everyday settings. Autonomous systems need robust development and verification methods, but formal methods practitioners are often asked: why use Formal Methods for autonomous systems? To answer this question, this position paper describes five recipes for formally verifying aspects of an autonomous system, collected from the literature. The recipes are examples of how Formal Methods can be an effective tool for the development and verification of autonomous systems. During design, they enable an unambiguous description of requirements; in development, formal specifications can be verified against requirements; software components may be synthesised from verified specifications; and behaviour can be monitored at runtime and compared to its original specification. Modern Formal Methods often include highly automated tool support, which enables exhaustive checking of a system's state space. This paper argues that Formal Methods are a powerful tool for the repertoire of development techniques for safe autonomous systems, alongside other robust software engineering techniques. Accepted at the Journal of Risk and Reliability.
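    The runtime-monitoring recipe in particular is easy to sketch: a monitor observes the executing system's event trace and flags deviations from the specified behaviour. The bounded-response property and event names below are invented for illustration and are not taken from the paper.

```python
# Hedged sketch of runtime monitoring against a specification: check that
# every "request" event is followed by a "grant" within a bounded number of
# steps. Property and event names are illustrative assumptions.
def monitor_bounded_response(trace, trigger="request", response="grant", bound=3):
    """Return indices of trigger events whose response never arrives in time."""
    violations = []
    for i, event in enumerate(trace):
        if event == trigger and response not in trace[i + 1 : i + 1 + bound]:
            violations.append(i)
    return violations


trace = ["idle", "request", "grant", "request", "idle", "idle", "idle"]
print(monitor_bounded_response(trace))  # [3]: the second request is never granted
```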

    Data-flow-based evolutionary fault localization

    Fault localization is the activity of precisely indicating the faulty commands in a buggy program. It is known to be a highly costly and tedious process, and automating it has been the goal of many studies, which have shown it to be a challenging problem. Coverage-spectrum-based approaches commonly apply heuristics grounded in the execution of control-flow components to calculate the odds that each program element is the defective one. The present study investigates another source of fault information by assessing how data-flow analysis can be used to compute suspiciousness scores, and how combining scores from different sources impacts fault localization. We present an approach to calculate a suspiciousness score for each program command based on the execution of data-flow components. We then use an evolutionary algorithm to search for sets of weights to combine heuristics from distinct sources of fault data (both control-flow and data-flow, as well as a hybrid strategy). The approach was applied to programs with seeded faults and real faults, and evaluated using absolute metrics to assess its efficacy in locating faults. Furthermore, we introduce a new metric to investigate how much the ranking of suspicious commands depends on tie-break strategies. Data-flow-based methods demonstrate high effectiveness but increase the need for tie-breaks, unlike the evolutionary hybrid method, which keeps effectiveness competitive while depending less on tie-break strategies.
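    To make the scoring step concrete, the sketch below uses the standard Ochiai formula as a stand-in suspiciousness heuristic and combines a control-flow score and a data-flow score with a weighted sum; the weights are exactly the kind of values the evolutionary search described above would optimise. The formula choice and numbers are illustrative, not necessarily those used in the study.

```python
# Hedged sketch: spectrum-based suspiciousness (Ochiai) plus a weighted
# combination of control-flow and data-flow scores. Weights w_cf and w_df
# stand in for values an evolutionary algorithm would search for.
from math import sqrt


def ochiai(failed_cover: int, passed_cover: int, total_failed: int) -> float:
    """Suspiciousness of one program element from its coverage spectrum."""
    if total_failed == 0 or failed_cover + passed_cover == 0:
        return 0.0
    return failed_cover / sqrt(total_failed * (failed_cover + passed_cover))


def hybrid_score(cf: float, df: float, w_cf: float, w_df: float) -> float:
    """Combine control-flow and data-flow suspiciousness with learned weights."""
    return w_cf * cf + w_df * df


# Element covered by 4 of 5 failing tests and 1 passing test (control flow),
# and by all 5 failing tests on the data-flow spectrum.
cf = ochiai(failed_cover=4, passed_cover=1, total_failed=5)
df = ochiai(failed_cover=5, passed_cover=0, total_failed=5)
print(hybrid_score(cf, df, w_cf=0.4, w_df=0.6))  # 0.92
```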

    Towards quality programming in the automated testing of distributed applications

    PhD Thesis. Software testing is a very time-consuming and tedious activity and accounts for over 25% of the cost of software development. In addition to its high cost, manual testing is unpopular and often inconsistently executed. Software Testing Environments (STEs) overcome the deficiencies of manual testing by automating the test process and integrating testing tools to support a wide range of test capabilities. Most prior work on testing addresses single-threaded applications. This thesis is a contribution to the testing of distributed applications, which has not been well explored. To address two crucial issues in testing, when to stop testing and how good the software is after testing, a statistics-based integrated test environment is presented, which extends the testing concept of Quality Programming to distributed applications. It provides automatic support for test execution by the Test Driver, test development by the SMAD Tree Editor and the Test Data Generator, test failure analysis by the Test Results Validator and the Test Paths Tracer, test measurement by the Quality Analyst, test management by the Test Manager, and test planning by the Modeller. These tools are integrated around a public, shared data model describing the data entities and relationships manipulable by these tools. It enables the test process to enter the life cycle early, thanks to the definition of quality planning and message-flow routings during modelling. Once modelling and requirements specification are well prepared, the test process and the software design and implementation can proceed concurrently. A simple banking application written using Java Remote Method Invocation (RMI) and Java DataBase Connectivity (JDBC) illustrates how an application fits into the integrated test environment. The concept of automated test execution through mobile agents across multiple platforms is also illustrated with this three-tier client/server application. This work was funded by the National Science Council, Taiwan, and the Ministry of National Defense, Taiwan.

    Anomaly Detection Through Container Testing: A Survey of Company Practices

    Preprint of the conference paper: Anomaly Detection Through Container Testing: A Survey of Company Practices