    X-ICP: Localizability-Aware LiDAR Registration for Robust Localization in Extreme Environments

    Modern robotic systems are required to operate in challenging environments that demand reliable localization. LiDAR-based localization methods, such as the Iterative Closest Point (ICP) algorithm, can suffer in geometrically uninformative environments, which are known to deteriorate point cloud registration performance and push the optimization toward divergence along weakly constrained directions. To overcome this issue, this work proposes i) a robust, fine-grained localizability detection module, and ii) a localizability-aware constrained ICP optimization module coupled with the detection module in a unified manner. Localizability detection uses the correspondences between the scan and the map to analyze the alignment strength along the principal directions of the optimization. This fine-grained localizability analysis is then integrated into the scan-to-map point cloud registration to generate drift-free pose updates, either by enforcing controlled updates along the degenerate directions of the optimization or by leaving them unchanged. The proposed method is thoroughly evaluated and compared to state-of-the-art methods in simulated and real-world experiments, demonstrating improved performance and reliability in LiDAR-challenging environments. In all experiments, the proposed framework demonstrates accurate and generalizable localizability detection and robust pose estimation without environment-specific parameter tuning.

    Comment: 20 pages, 20 figures. Submitted to IEEE Transactions on Robotics. Supplementary video: https://youtu.be/SviLl7q69aA Project website: https://sites.google.com/leggedrobotics.com/x-ic
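    The degeneracy analysis described in this abstract can be illustrated with a short sketch. The following is a minimal, hypothetical rendering of the idea, not the authors' implementation: the 6x6 Gauss-Newton Hessian of a point-to-plane ICP problem is eigendecomposed, directions whose eigenvalues fall below a relative threshold are flagged as degenerate, and the solver's step is projected away from them. The function names and the min_strength threshold are invented for illustration.

    import numpy as np

    def localizability(points, normals, min_strength=1e-3):
        """points: (N,3) scan points; normals: (N,3) matched map surface normals."""
        # Point-to-plane Jacobian rows are [p x n, n] per correspondence.
        J = np.hstack([np.cross(points, normals), normals])   # (N, 6)
        H = J.T @ J                                           # 6x6 Hessian approximation
        w, V = np.linalg.eigh(H)
        degenerate = w < min_strength * w.max()               # relative strength test
        return H, V, degenerate

    def constrained_step(H, g, V, degenerate, damping=1e-6):
        # Solve the damped normal equations, then remove any motion along
        # weakly constrained eigendirections instead of letting them drift.
        step = np.linalg.solve(H + damping * np.eye(6), -g)
        for k in np.where(degenerate)[0]:
            step -= V[:, k] * (V[:, k] @ step)
        return step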

    Verification of Model Transformations

    MCFlow: Middleware for Mixed-Criticality Distributed Real-Time Systems

    Traditional fixed-priority scheduling analysis for periodic/sporadic task sets is based on the assumption that all tasks are equally critical to the correct operation of the system. Therefore, every task has to be schedulable under the scheduling policy, and estimates of tasks' worst-case execution times must be conservative in case a task runs longer than usual. To address the significant under-utilization of a system's resources under normal operating conditions that can arise from these assumptions, several mixed-criticality scheduling approaches have been proposed. However, to date there has been no quantitative comparison of system schedulability or run-time overhead for the different approaches. In this dissertation, we present what is, to our knowledge, the first side-by-side implementation and evaluation of those approaches, for periodic and sporadic mixed-criticality tasks on uniprocessor or distributed systems, under a mixed-criticality scheduling model that is common to all of them. To make a fair evaluation of mixed-criticality scheduling, we also address some previously open issues and propose modifications to improve the schedulability and correctness of particular approaches. To facilitate the development and evaluation of mixed-criticality applications, we have designed and developed a distributed real-time middleware, called MCFlow, for mixed-criticality end-to-end tasks running on multi-core platforms. The research presented in this dissertation provides the following contributions to the state of the art in real-time middleware: (1) an efficient component model through which dependent subtask graphs can be configured flexibly for execution within a single core, across cores of a common host, or spanning multiple hosts; (2) support for optimizations to inter-component communication that reduce data copying without sacrificing the ability to execute subtasks in parallel; (3) a strict separation of timing and functional concerns so that they can be configured independently; (4) an event dispatching architecture that uses lock-free algorithms where possible to reduce memory contention, CPU context switching, and priority inversion; and (5) empirical evaluations of MCFlow itself and of different mixed-criticality scheduling approaches, both on a single host and end-to-end across multiple hosts. Our evaluation shows that, in terms of basic distributed real-time behavior, MCFlow performs comparably to the state-of-the-art TAO real-time object request broker when only one core is used and outperforms TAO when multiple cores are involved. We also identify and categorize different use cases under which different mixed-criticality scheduling approaches are preferable.
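    As a point of reference for the scheduling model discussed above, the sketch below shows a textbook two-level (LO/HI) fixed-priority response-time test in the spirit of Vestal's model. It is not MCFlow's own analysis: it assumes implicit deadlines, priorities ordered by task index, and that LO-criticality tasks are abandoned at the mode switch, so the HI-mode test is a simplification.

    import math

    def rta(tasks, cost):
        """Classic response-time analysis; `cost` selects which WCET to use."""
        for i, task in enumerate(tasks):
            R = cost(task)
            while True:
                interference = sum(math.ceil(R / hp['T']) * cost(hp)
                                   for hp in tasks[:i])       # higher-priority tasks
                R_new = cost(task) + interference
                if R_new > task['T']:
                    return False                              # deadline (== period) missed
                if R_new == R:
                    break
                R = R_new
        return True

    def schedulable(tasks):
        lo_ok = rta(tasks, lambda t: t['C_LO'])               # LO mode: all tasks, LO budgets
        hi_tasks = [t for t in tasks if t['crit'] == 'HI']
        hi_ok = rta(hi_tasks, lambda t: t['C_HI'])            # HI mode: HI tasks, HI budgets
        return lo_ok and hi_ok

    tasks = [dict(T=10, C_LO=2, C_HI=4,  crit='HI'),
             dict(T=20, C_LO=5, C_HI=5,  crit='LO'),
             dict(T=50, C_LO=8, C_HI=12, crit='HI')]
    print(schedulable(tasks))   # True for this toy task set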

    CSR and perceived price fairness: an analysis on willingness to pay and perceived benefit

    The objective of this study is to assess the potential effect that engagement in Corporate Social Responsibility (CSR) may have on consumers' Perceived Price Fairness. Following the literature already developed on this topic, the study addresses a knowledge gap in the field by simultaneously considering Willingness to Pay, here measured using the Price Sensitivity Meter by Van Westendorp (1976), and the Perceived Benefit of active CSR engagement. The study followed an experimental approach via an online survey, covering three types of products and two social causes supported by CSR engagement. It uses the Price Sensitivity Meter framework to measure different pricing options and strategies for products from firms that actively engage in CSR. The results indicate that, for two of the three products under analysis, CSR engagement leads to increased Willingness to Pay. In all cases, respondents reported an increase in perceived benefit when faced with CSR activities. In the two cases where a positive effect was registered, the increase in consumers' Perceived Benefit outmeasured the growth in Willingness to Pay, supporting the case that CSR engagement indeed provides for an increase in Perceived Price Fairness. In one case, consumers recognized an increase in Perceived Benefit, but their Willingness to Pay moved in the opposite direction.
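    The Price Sensitivity Meter used in the study works on four price points per respondent ("too cheap", "cheap", "expensive", "too expensive"); the optimal price point (OPP) is read off where the "too cheap" and "too expensive" cumulative curves cross. A minimal sketch of that computation, with made-up numbers rather than the study's responses:

    import numpy as np

    # Each array holds one price point per respondent (illustrative data only).
    too_cheap     = np.array([2.0, 2.5, 3.0, 2.0, 1.5])
    too_expensive = np.array([8.0, 9.0, 7.5, 10.0, 8.5])

    grid = np.linspace(0, 12, 1201)
    pct_too_cheap = np.array([(too_cheap >= p).mean() for p in grid])      # falls with price
    pct_too_exp   = np.array([(too_expensive <= p).mean() for p in grid])  # rises with price

    # OPP: where the two cumulative curves intersect.
    opp = grid[np.argmin(np.abs(pct_too_cheap - pct_too_exp))]
    print(f"optimal price point ~ {opp:.2f}")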

    On the engineering of crucial software

    The various aspects of the conventional software development cycle are examined. This cycle was the basis of the augmented approach contained in the original grant proposal, but it was found inadequate for crucial software development; the justification for this opinion is presented. Several possible enhancements to the conventional software cycle are discussed. Software fault tolerance, a possible enhancement of major importance, is discussed separately, as is formal verification using mathematical proof. Automatic programming, a radical alternative to the conventional cycle, is also discussed. Recommendations for a comprehensive approach are presented, and various experiments which could be conducted in AIRLAB are described.

    Towards the First Practical Applications of Quantum Computers

    Noisy intermediate-scale quantum (NISQ) computers are coming online. The lack of error correction in these devices prevents them from realizing the full potential of fault-tolerant quantum computation, a technology that is known to have significant practical applications but which is years, if not decades, away. A major open question is whether NISQ devices will have practical applications. In this thesis, we explore and implement proposals for using NISQ devices to achieve practical applications. In particular, we develop and execute variational quantum algorithms for solving problems in combinatorial optimization and quantum chemistry. We also execute a prototype of a protocol for generating certified random numbers. We perform our experiments on a superconducting qubit processor developed at Google. While we do not perform any quantum computations that are beyond the capabilities of classical computers, we address many implementation challenges that must be overcome to succeed in such an endeavor, including optimization, efficient compilation, and error mitigation. In addressing these challenges, we push the limits of what can currently be done with NISQ technology, going beyond previous quantum computing demonstrations in terms of the scale of our experiments and the types of problems we tackle. While our experiments demonstrate progress in the utilization of quantum computers, the limits that we reached underscore the fundamental challenges in scaling up towards the classically intractable regime. Nevertheless, our results are a promising indication that NISQ devices may indeed deliver practical applications.

    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163016/1/kevjsung_1.pd
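    The variational template at the heart of the algorithms mentioned above fits in a few lines. The toy below is a classical simulation of a single qubit, nothing like the Google hardware experiments: it prepares Ry(theta)|0>, evaluates the energy <psi|Z|psi>, and descends it using gradients from the parameter-shift rule, which is the same optimize-a-parametrized-circuit loop that VQE- and QAOA-style algorithms run at scale.

    import numpy as np

    Z = np.diag([1.0, -1.0])                    # observable to minimize

    def state(theta):
        # Ry(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>
        return np.array([np.cos(theta / 2), np.sin(theta / 2)])

    def energy(theta):
        psi = state(theta)
        return float(psi @ Z @ psi)             # <psi|Z|psi> = cos(theta)

    theta, lr = 0.1, 0.4
    for step in range(50):
        # Parameter-shift rule: an exact gradient from two circuit evaluations.
        grad = 0.5 * (energy(theta + np.pi / 2) - energy(theta - np.pi / 2))
        theta -= lr * grad

    print(energy(theta))                        # approaches -1, the ground energy of Z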

    Shedding light on TQM: some research findings

    Over the last decade, the paradigm of Total Quality Management (TQM) has been successfully forged in our business world. Definitions of TQM tend to be complex and ambiguous; nevertheless, some key elements or principles common to all of them can be identified: customer satisfaction, continuous improvement, commitment and leadership on the part of top management, involvement and support on the part of employees, teamwork, and measurement via indicators and feedback. There are, in short, two main reasons for TQM having spread so widely: on the one hand, the successful diffusion of the ISO 9000 standards for the implementation and certification of quality management systems, standards that have been associated with the TQM paradigm, and, on the other, the equally successful diffusion of self-evaluation models such as the EFQM model, promoted by the European Foundation for Quality Management, and the Malcolm Baldrige National Quality Award in the USA, promoted by the Foundation for the Malcolm Baldrige National Quality Award. However, the quality movement is not without its problems as far as its medium- and long-term development is concerned. In this book, some research findings related to these issues are presented.

    Automation And Visualization Of Program Correctness For Automatically Generating Code

    Program synthesis systems can be highly advantageous in that users can automatically generate code to fit a wide variety of applications from high-level specifications, without needing any low-level programming skills or knowledge of which data structures and algorithms should be used. NASA has developed and uses two of these systems, AUTOFILTER and AUTOBAYES. Though much is gained in terms of time and cost efficiency in the use of these systems, they suffer from an issue that is inherent in all code generator systems: the verifiability of the correctness of the generated code against the input specifications. This verification process can take as long as, if not longer than, manually developing and testing the code would have. Because of this, much work has been done by NASA and others to develop methods for automatic certification that can be produced along with the program and are easy to use. However, there is still more work to be done in this area, especially in automatic visual verification (e.g., using UML diagrams to provide visual aid in the verification of the generated code). Work has been done by Grant et al., in collaboration with NASA, to develop a rigorous approach to system correctness verification that uses domain-specific graphical meta-models of the expected input/output systems, with identified constraints on the input/output and their relationships. Though this approach has been applied to AUTOFILTER, it has yet to be applied to other domains. In this work, Grant's approach is extended to the data analysis domain by applying it to AUTOBAYES. A model of the input specification for AUTOBAYES was obtained for the case in which a normal distribution of data is assumed. This model, derived from the AUTOBAYES input files, the n-dimensional Gaussian equation, and the allowed priors, is a UML class diagram (CD). Similarly, a UML CD model of the AUTOBAYES program output was derived. These CDs were then used to develop 30 constraints on the input, the output, and the relationship between them. These constraints were then expressed in the OCL formal specification language and analyzed with the USE tool, along with the derived comprehensive CD (i.e., a combination of the input CD, the output CD, and the relationships between them). These models and constraints were used to successfully check that all of the developed constraints were satisfied by the model representing AUTOBAYES. Unfortunately, a configuration for a full validation with USE was not obtained, after several iterations, due to project time restrictions. However, the results obtained adequately demonstrate that this method can be extended to the domain of AUTOBAYES. This work was motivated both by its relevance to NASA in the chosen case study of AUTOBAYES and by the goal of showing that Grant's approach can be extended to other domains beyond AUTOFILTER.
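    To make the shape of this verification concrete, the sketch below mimics in Python what the UML/OCL/USE toolchain expresses declaratively: an input model, an output model, and an invariant relating them. The classes and the single constraint are invented for illustration and are not taken from the actual AUTOBAYES meta-models, where 30 such constraints were checked.

    from dataclasses import dataclass

    @dataclass
    class InputSpec:            # hypothetical abstraction of an AUTOBAYES input model
        n_dims: int             # dimensionality of the assumed Gaussian
        priors: list[str]

    @dataclass
    class OutputModel:          # hypothetical abstraction of the generated-code model
        mean_params: int
        cov_params: int

    def invariant_holds(spec: InputSpec, out: OutputModel) -> bool:
        # OCL-style invariant: one mean per dimension, and a full symmetric
        # covariance has n*(n+1)/2 independent entries.
        return (out.mean_params == spec.n_dims and
                out.cov_params == spec.n_dims * (spec.n_dims + 1) // 2)

    print(invariant_holds(InputSpec(3, ["uniform"]), OutputModel(3, 6)))  # True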