
    Dimensional Analysis of Robot Software without Developer Annotations

    Robot software risks the hazard of dimensional inconsistencies, which occur when a program incorrectly manipulates values representing real-world quantities. Incorrect manipulation has real-world consequences that range in severity from benign to catastrophic. Previous approaches detect dimensional inconsistencies in programs, but they demand extra developer effort and impose technical complications: developers must create type annotations for every variable representing a real-world quantity that has physical units, and the toolchain acquires burdens like specialized compilers or type libraries. To overcome these limitations, this thesis presents novel methods to detect dimensional inconsistencies without developer annotations.

    We start by empirically assessing the difficulty developers have in making type annotations. In a human study of 83 subjects, we find that developers are only 51% accurate and require more than 2 minutes per annotation. We further find that type suggestions have a significant impact on annotation accuracy: when showing developers annotation suggestions, three suggestions are better than a single suggestion because they are as helpful when correct and less harmful when incorrect.

    Since developers struggle to make type annotations accurately, we present a novel method to infer physical unit types without developer annotations. It is the first method to detect dimensional inconsistencies in ROS C++ without developer annotations, which matters because robot software and ROS are increasingly used in real-world applications. Our method leverages a property of robotic middleware architecture, namely the reuse of standardized data structures, and we implement it in an open-source tool, Phriky. We evaluate our method empirically on a corpus of 5.9 M lines of code and find that it detects real inconsistencies with an 87% true-positive (TP) rate. However, it assigns physical unit types to only 25% of variables, leaving much of the annotation space unaddressed. To overcome this limitation, we extend our method to utilize uncertain evidence in identifiers using probabilistic reasoning. We implement this probabilistic method in a tool, Phys, and find that it assigns units to 75% of variables while retaining a TP rate of 82%. To our knowledge, we also present the first open dataset of dimensional inconsistencies in open-source robotics code. Lastly, we identify extensions to our work and next steps for software tool developers to build more powerful robot software development tools.

    Advisers: Sebastian Elbaum and Carrick Detweiler
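    The inference idea described above, deriving unit types from standardized ROS message fields and falling back to uncertain evidence in identifier names, can be illustrated with a small sketch. The field-to-unit table, identifier hints, and confidence weights below are hypothetical stand-ins, not the actual rules used by Phriky or Phys:

```python
# A minimal sketch of annotation-free unit inference in the spirit of the
# approach above. Tables and weights are invented for illustration.

# Units carried by standardized ROS message fields (assumed examples).
ROS_FIELD_UNITS = {
    "geometry_msgs/Twist.linear.x": "m/s",
    "geometry_msgs/Twist.angular.z": "rad/s",
    "sensor_msgs/Range.range": "m",
}

# Uncertain evidence: identifier substrings, the unit they suggest, and a
# made-up confidence weight.
IDENTIFIER_HINTS = [
    ("vel", "m/s", 0.7),
    ("dist", "m", 0.6),
    ("angle", "rad", 0.6),
]

def infer_unit(var_name: str, ros_field: str | None = None):
    """Return a (unit, confidence) guess for a variable, or None."""
    # Hard evidence: the variable was assigned from a standardized field.
    if ros_field in ROS_FIELD_UNITS:
        return ROS_FIELD_UNITS[ros_field], 1.0
    # Soft evidence: fall back to the strongest identifier hint.
    candidates = [(unit, w) for sub, unit, w in IDENTIFIER_HINTS
                  if sub in var_name.lower()]
    return max(candidates, key=lambda c: c[1]) if candidates else None

# A dimensional inconsistency: adding quantities with incompatible units.
lhs = infer_unit("forward_vel", "geometry_msgs/Twist.linear.x")  # ('m/s', 1.0)
rhs = infer_unit("obstacle_dist")                                # ('m', 0.6)
if lhs and rhs and lhs[0] != rhs[0]:
    print(f"possible inconsistency: {lhs[0]} + {rhs[0]}")
```

    Phys combines such soft evidence with probabilistic reasoning; the single best-hint lookup here is only a simplification.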

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

        Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent. (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science, and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies, and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. AKT was initially proposed in 1999; it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the emergence of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so their capabilities will vary. As well as providing useful KM services in their own right, AKT aims to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together considerable expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed, along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain, or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
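    As a toy illustration of one ontology-management problem mentioned above, the elimination of conflicts of reference, the following sketch merges two small ontologies with the rdflib library and flags a URI that ends up with incompatible labels. The mini-ontologies are invented for the example and have no connection to AKT's actual tools:

```python
# Detect a conflict of reference after naively merging two ontologies.
from rdflib import Graph
from rdflib.namespace import RDFS

ONTOLOGY_A = """
@prefix ex: <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Jaguar rdfs:label "Jaguar (animal)" .
"""

ONTOLOGY_B = """
@prefix ex: <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Jaguar rdfs:label "Jaguar (car)" .
"""

a, b = Graph(), Graph()
a.parse(data=ONTOLOGY_A, format="turtle")
b.parse(data=ONTOLOGY_B, format="turtle")
merged = a + b  # naive merge of both ontologies

# The same URI now carries two incompatible labels: a conflict that an
# ontology-mapping service would have to resolve before reasoning.
for subject in set(merged.subjects(RDFS.label, None)):
    labels = list(merged.objects(subject, RDFS.label))
    if len(labels) > 1:
        print(f"conflict for {subject}: {[str(l) for l in labels]}")
```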

    Managing Quantities and Units of Measurement in Code Bases

    Quantities in engineering and the physical sciences are expressed in units of measurement (UoM). If a software system fails to maintain the algebraic attributes of its UoM information correctly when evaluating expressions, disastrous problems can arise. However, it is perhaps the more mundane unit mismatches and lack of interoperability that incur a greater cost over time. Global and existential challenges, from infectious diseases to environmental breakdown, require high-quality data. Ensuring that software systems support quantities explicitly is becoming less of a luxury and more of a necessity. While there are technical solutions that allow units of measurement to be specified at both the model and the code level, a detailed assessment of their strengths and weaknesses has only recently been undertaken. This chapter provides both a formal introduction to managing quantities and a practical comparison of existing techniques, so that software users can judge the robustness of their systems with regard to units of measurement.
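    As a concrete taste of code-level unit support, the sketch below uses Python's pint library; pint is an illustrative choice, not necessarily one of the techniques assessed in the chapter. Units propagate algebraically through arithmetic, and dimensionally invalid operations are rejected:

```python
# Code-level units of measurement with the pint quantity library.
import pint

ureg = pint.UnitRegistry()

distance = 5.0 * ureg.meter
time = 2.0 * ureg.second

speed = distance / time                       # units propagate: 2.5 m/s
print(speed.to(ureg.kilometer / ureg.hour))   # conversion: 9.0 km/h

try:
    invalid = distance + time                 # dimensionally invalid
except pint.DimensionalityError as err:
    print(f"rejected: {err}")
```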

    Verification of system-wide safety properties of ROS applications

    Robots are currently deployed in safety-critical domains, but proper techniques to assess the functional safety of their software are yet to be adopted. This is particularly critical in ROS, where highly configurable robots are built by composing third-party modules. To promote adoption, we advocate the use of lightweight formal methods: automatic techniques with minimal user input and intuitive feedback.

    This paper proposes a technique to statically and automatically verify system-wide safety properties of ROS-based applications. It is based on the formalization of ROS architectural models and node behaviour in Electrum, over which system-wide specifications are subsequently model checked. To automate the analysis, it is deployed as a plug-in for HAROS, a framework for the assessment of ROS software quality aimed at the ROS community. The technique is evaluated on a real robot, AgRob V16, with positive results.

    POFC - Programa Operacional Temático Factores de Competitividade (POCI-01-0145-FEDER-016826)
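    The flavour of the system-wide architectural properties being checked can be conveyed with a small sketch. The toy node graph and checks below are invented for illustration; the actual technique formalizes architectural models in Electrum and model checks them through HAROS rather than using ad-hoc code like this:

```python
# Statically check two system-wide properties of a ROS-style node graph:
# every subscription has a publisher, and message types agree per topic.

# Each node declares the topics it publishes/subscribes and their types.
NODES = {
    "lidar_driver":    {"pub": {"/scan": "sensor_msgs/LaserScan"}, "sub": {}},
    "safety_monitor":  {"pub": {"/cmd_vel": "geometry_msgs/Twist"},
                        "sub": {"/scan": "sensor_msgs/LaserScan"}},
    "base_controller": {"pub": {},
                        "sub": {"/cmd_vel": "geometry_msgs/TwistStamped"}},
}

def check_architecture(nodes: dict) -> list[str]:
    """Flag dangling subscriptions and message-type mismatches."""
    issues = []
    published = {}
    for name, node in nodes.items():
        for topic, msg_type in node["pub"].items():
            published.setdefault(topic, set()).add(msg_type)
    for name, node in nodes.items():
        for topic, msg_type in node["sub"].items():
            if topic not in published:
                issues.append(f"{name} subscribes to {topic}, but nobody publishes it")
            elif msg_type not in published[topic]:
                issues.append(f"type mismatch on {topic}: {name} expects {msg_type}")
    return issues

for issue in check_architecture(NODES):
    print(issue)  # reports the Twist vs. TwistStamped mismatch on /cmd_vel
```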

    Design-time detection of physical-unit changes in product lines

    Software product lines evolve over time, both as new products are added to the product line and as existing products are updated. This evolution creates unintended as well as planned changes to systems. A persistent problem is that unintended changes are hard to detect; often they are not discovered until testing or operations. Late discovery is a problem especially in safety-critical, cyber-physical product lines such as avionics, pacemakers, and smart-braking systems, where unintended changes may lead to accidents. This thesis proposes an approach and a prototype tool to detect unintended changes earlier in the development of a new product in the product line. The capability to detect potentially risky, unintended changes at the design stage is beneficial because repair is easier, less costly, and safer in design than when detection is delayed to testing or operations.

    The Product Line Change Detector (PLCD) introduced here analyzes products' SysML block and parametric diagrams, which are typical project artifacts for cyber-physical systems, in order to detect problematic, unintended changes. PLCD automatically detects potential change-related issues, ranks them in terms of severity using the products' safety-analysis artifacts, and reports them to developers in a graphical format. Developers select and fix the reported issues with the assistance of the tool's displays, with the tool recording the fixes and updating the SysML diagrams accordingly.

    The evaluation of PLCD's performance and capabilities uses three product lines, extended from cyber-physical systems in the literature: a NASA astronaut jetpack, vehicle dynamics, and a low-earth satellite. The evaluation focuses on unintended changes that cause physical-unit inconsistencies, such as between meters and feet, since those may lead to accidents in cyber-physical product lines. The results show that PLCD successfully detects such unintended changes, both in a single product and between products in a software product line.
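    The core check, comparing physical-unit annotations between product variants and ranking differences by severity, can be sketched over flattened attribute tables. The products, attributes, and severity table below are illustrative assumptions, not PLCD's actual inputs, which are SysML block and parametric diagrams:

```python
# Detect and rank physical-unit changes between two product variants.

# Unit annotations extracted from two products' parametric diagrams.
PRODUCT_A = {"thruster.force": "N", "position.x": "m",  "fuel.mass": "kg"}
PRODUCT_B = {"thruster.force": "N", "position.x": "ft", "fuel.mass": "kg"}

# Safety-analysis artifact: how hazardous a silent change to each attribute is.
SEVERITY = {"position.x": "high", "thruster.force": "high", "fuel.mass": "low"}

def detect_unit_changes(old: dict, new: dict) -> list:
    """Report attributes whose units changed between product variants."""
    changes = []
    for attr in old.keys() & new.keys():
        if old[attr] != new[attr]:
            changes.append((SEVERITY.get(attr, "unknown"), attr, old[attr], new[attr]))
    # Rank by severity so the riskiest unintended changes surface first.
    order = {"high": 0, "low": 1, "unknown": 2}
    return sorted(changes, key=lambda c: order[c[0]])

for sev, attr, before, after in detect_unit_changes(PRODUCT_A, PRODUCT_B):
    print(f"[{sev}] {attr}: {before} -> {after}")  # [high] position.x: m -> ft
```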

    Automated Fixing of Programs with Contracts

    This paper describes AutoFix, an automatic debugging technique that can fix faults in general-purpose software. To provide high-quality fix suggestions and to enable automation of the whole debugging process, AutoFix relies on the presence of simple specification elements in the form of contracts (such as pre- and postconditions). Contracts enhance the precision of dynamic analysis techniques for fault detection and localization, and for validating fixes. The only required user input to the AutoFix tool is a faulty program annotated with contracts; the tool produces a collection of validated fixes for the fault, ranked according to an estimate of their suitability. In an extensive experimental evaluation, we applied AutoFix to over 200 faults in four code bases of different maturity and quality (of implementation and of contracts). AutoFix successfully fixed 42% of the faults, producing, in the majority of cases, corrections of quality comparable to those a competent programmer would write; the computational resources used were modest, with an average time per fix below 20 minutes on commodity hardware. These figures compare favorably to the state of the art in automated program fixing, and they demonstrate that the AutoFix approach can reduce the debugging burden in real-world scenarios.
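    The role contracts play in detecting faults and validating candidate fixes can be sketched in a few lines. AutoFix itself targets Eiffel and uses far more sophisticated fault localization and fix synthesis; the Python sketch below, with assertions standing in for contracts and a hand-written candidate list, only illustrates the validation idea:

```python
# Contracts (here, plain assertions) detect a fault and validate fixes.

def clamp(x: float, lo: float, hi: float) -> float:
    """Faulty routine: should confine x to [lo, hi]."""
    assert lo <= hi                      # precondition
    result = x if x < hi else hi         # bug: never applies the lower bound
    assert lo <= result <= hi, "postcondition violated"  # postcondition
    return result

try:
    clamp(-3, 0, 10)                     # triggers the bug
except AssertionError as err:
    print(f"fault detected by contract: {err}")

# Candidate fixes, e.g. enumerated from simple mutation schemas.
candidates = [
    lambda x, lo, hi: min(x, hi),            # still wrong for x < lo
    lambda x, lo, hi: max(lo, min(x, hi)),   # correct
]

tests = [(5, 0, 10), (-3, 0, 10), (42, 0, 10)]

# A candidate is kept only if the contract holds on every test case.
for i, fix in enumerate(candidates):
    ok = all(lo <= fix(x, lo, hi) <= hi for (x, lo, hi) in tests)
    print(f"candidate {i}: {'validated' if ok else 'rejected'}")
```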