1,841 research outputs found

    SCENEREPLICA: Benchmarking Real-World Robot Manipulation by Creating Reproducible Scenes

    We present a new reproducible benchmark for evaluating robot manipulation in the real world, specifically focusing on pick-and-place. Our benchmark uses the YCB objects, a dataset commonly used in the robotics community, to ensure that our results are comparable to other studies. The benchmark is also designed to be easily reproducible in the real world, making it accessible to researchers and practitioners. We further provide experimental results and analyses for model-based and model-free 6D robotic grasping on the benchmark, where representative algorithms are evaluated for object perception, grasp planning, and motion planning. We believe that our benchmark will be a valuable tool for advancing the field of robot manipulation: by providing a standardized evaluation framework, researchers can more easily compare techniques and algorithms, leading to faster progress in developing robot manipulation methods. Comment: 12 pages, 10 figures. Project page is available at https://irvlutd.github.io/SceneReplic

    Edge computing in autonomous and collaborative assembly lines

    © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    Industry 4.0 demands interconnected production lines that consist of modular assets. Recent advances in wireless communication technologies allow large-scale device connectivity and approach the performance of wireline communication, specifically regarding throughput, latency, and reliability. As a result, more and more time-critical connections can be performed wirelessly. These attributes foster the emergence of edge computing, a concept that can efficiently utilize distributed computational resources. This is particularly beneficial for modular assets with limited energy supply and computation hardware. Autonomous mobile robots offer high potential for object transportation, inspection, and manipulation in workspaces shared with human operators. With edge computing, heavy computations can be offloaded to more powerful computers or edge data centers to speed up decision-making and increase productivity. For an efficient orchestration strategy of computation and communication resources, various task requirements in terms of latency, bandwidth, cost, and energy must be considered. To this end, we aim to evaluate these requirements in autonomous and collaborative assembly lines, a use case that comprises diverse tasks, including latency-sensitive ones, in dynamic, uncertain, multi-agent environments. This work focuses on discussing latency requirements on the basis of a collaborative safety mode and autonomous robotic insertion. Peer Reviewed. Postprint (author's final draft)
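The orchestration trade-off described in this abstract, deciding whether a task runs on the robot or is offloaded to an edge server given latency and energy constraints, can be sketched as a simple cost comparison. The function, parameters, and default values below are illustrative assumptions for exposition, not taken from the paper.

```python
def should_offload(cycles, data_bits, deadline_s,
                   local_hz=1e9, edge_hz=10e9,
                   bandwidth_bps=100e6, rtt_s=0.005,
                   local_j_per_cycle=1e-9, tx_j_per_bit=1e-7):
    """Illustrative edge-offloading decision (all names/values are assumptions).

    Compares estimated completion time and robot-side energy for running a
    task locally versus offloading it over the wireless link.
    """
    # Local execution: compute on the robot's own CPU.
    t_local = cycles / local_hz
    e_local = cycles * local_j_per_cycle

    # Offloading: transmit the input data, compute on the faster edge
    # server, and pay one network round trip for the result.
    t_remote = data_bits / bandwidth_bps + cycles / edge_hz + rtt_s
    e_remote = data_bits * tx_j_per_bit  # robot only spends radio energy

    # Offload only if the task deadline is still met and the robot saves energy.
    if t_remote <= deadline_s and e_remote < e_local:
        return "offload", t_remote
    return "local", t_local
```

For a compute-heavy task with a small input (1 Gcycle, 1 Mbit, 200 ms deadline) the model chooses to offload; tightening the deadline below the round-trip-plus-transfer time forces local execution, which mirrors why latency-sensitive tasks in the assembly-line use case need careful orchestration.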

    Collected notes from the Benchmarks and Metrics Workshop

    In recent years there has been a proliferation of proposals in the artificial intelligence (AI) literature for integrated agent architectures. Each architecture offers an approach to the general problem of constructing an integrated agent. Unfortunately, the ways in which one architecture might be considered better than another are not always clear. There has been a growing realization that many of the positive and negative aspects of an architecture become apparent only when experimental evaluation is performed, and that to progress as a discipline, we must develop rigorous experimental methods. In addition to the intrinsic intellectual interest of experimentation, rigorous performance evaluation of systems is also a crucial practical concern to our research sponsors. DARPA, NASA, and AFOSR (among others) are actively searching for better ways of experimentally evaluating alternative approaches to building intelligent agents. One tool for experimental evaluation involves testing systems on benchmark tasks in order to assess their relative performance. As part of a joint DARPA- and NASA-funded project, NASA-Ames and Teleos Research are carrying out a research effort to establish a set of benchmark tasks and evaluation metrics by which the performance of agent architectures may be determined. As part of this project, we held a workshop on Benchmarks and Metrics at the NASA Ames Research Center on June 25, 1990. The objective of the workshop was to foster early discussion on this important topic. We did not achieve a consensus, nor did we expect to. Collected here is some of the information that was exchanged at the workshop: an outline of the workshop, a list of the participants, notes taken on the whiteboard during open discussions, position papers/notes from some participants, and copies of slides used in the presentations.

    On quantifying the value of simulation for training and evaluating robotic agents

    A recurring problem in robotics is the difficulty of reproducing results and validating the claims made by researchers. Experiments conducted in robotics laboratories typically yield results that are specific to a complex setup and difficult or costly to reproduce and validate in other contexts. For this reason, it is hard to compare the performance and robustness of different robotic controllers. Low-cost reproductions of physical environments are popular but induce a performance reduction when transferred to the target domain. This thesis presents the results of our work toward improving benchmarking in robotics, specifically for autonomous driving. We build a new platform, the Duckietown Autolabs, which allows researchers to evaluate autonomous driving algorithms on standardized tasks, hardware, and environments at low cost. The platform also offers a simulated environment for easy access to unlimited annotated data and parallel evaluation of driving solutions in customizable environments. We use the platform to analyze the discrepancy between simulation and reality with respect to the predictivity of the simulation and the quality of the generated images. We supply two metrics to quantify the usefulness of a simulation and demonstrate how they can be used to optimize the value of a proxy environment.
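The abstract does not define its two metrics. One common way to quantify a simulator's predictivity, shown here purely as an illustrative assumption rather than the thesis's actual metric, is the rank correlation between controller scores measured in simulation and in the real environment:

```python
def rank(values):
    """1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over any group of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def predictivity(sim_scores, real_scores):
    """Spearman rank correlation between simulated and real scores.

    1.0 means the simulator ranks controllers exactly as reality does;
    values near 0 mean simulation rankings say nothing about real rankings.
    """
    rs, rr = rank(sim_scores), rank(real_scores)
    n = len(rs)
    mean = (n + 1) / 2
    cov = sum((a - mean) * (b - mean) for a, b in zip(rs, rr))
    var_s = sum((a - mean) ** 2 for a in rs)
    var_r = sum((b - mean) ** 2 for b in rr)
    return cov / (var_s * var_r) ** 0.5
```

Under this kind of metric, a proxy environment is useful to the extent that ordering controllers by simulated score predicts their ordering on the real platform, even if absolute scores differ.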

    CUORE-0 results and prospects for the CUORE experiment

    With 741 kg of TeO2 crystals and an excellent energy resolution of 5 keV (0.2%) at the region of interest, the CUORE (Cryogenic Underground Observatory for Rare Events) experiment aims at searching for neutrinoless double beta decay of 130Te with unprecedented sensitivity. Expected to start data taking in 2015, CUORE is currently in an advanced construction phase at LNGS. CUORE's projected neutrinoless double beta decay half-life sensitivity is 1.6E26 y at 1 sigma (9.5E25 y at the 90% confidence level) in five years of live time, corresponding to an upper limit on the effective Majorana mass in the range 40-100 meV (50-130 meV). Further background rejection with auxiliary bolometric detectors could improve CUORE's sensitivity and the competitiveness of bolometric detectors towards a full analysis of the inverted neutrino mass hierarchy. CUORE-0 was built to test and demonstrate the performance of the upcoming CUORE experiment. It consists of a single CUORE tower (52 TeO2 bolometers of 750 g each, arranged in a 13-floor structure) constructed strictly following CUORE recipes for both materials and assembly procedures. An experiment in its own right, CUORE-0 is expected to reach a sensitivity to the neutrinoless double beta decay half-life of 130Te of around 3E24 y in one year of live time. We present an update of the data, corresponding to an exposure of 18.1 kg y. An analysis of the background indicates that the CUORE performance goal is satisfied and the sensitivity goal is within reach. Comment: 10 pages, 3 figures, to appear in the proceedings of NEUTRINO 2014, 26th International Conference on Neutrino Physics and Astrophysics, 2-7 June 2014, held at Boston, Massachusetts, US
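The step from a half-life sensitivity to an effective Majorana mass range in this abstract follows the standard light-neutrino-exchange relation (the symbols below are the conventional ones in the field, not taken from this proceedings):

```latex
\left(T_{1/2}^{0\nu}\right)^{-1}
  = G^{0\nu}\,\left|M^{0\nu}\right|^{2}\,
    \frac{\langle m_{\beta\beta}\rangle^{2}}{m_{e}^{2}}
\quad\Longrightarrow\quad
\langle m_{\beta\beta}\rangle \propto \frac{1}{\sqrt{T_{1/2}^{0\nu}}}
```

Here $G^{0\nu}$ is the phase-space factor, $M^{0\nu}$ the nuclear matrix element, and $m_{e}$ the electron mass; a single half-life limit maps to a mass *range* (e.g. 40-100 meV above) because different nuclear matrix element calculations for 130Te give different values of $M^{0\nu}$.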