
    Hibikino-Musashi@Home 2023 Team Description Paper

    This paper describes an overview of the techniques of Hibikino-Musashi@Home, which intends to participate in the domestic standard platform league. The team has developed a dataset generator for training a robot vision system and an open-source development environment that runs on a human support robot simulator. The robot system comprises self-developed libraries, including those for motion synthesis, and open-source software that runs on the Robot Operating System (ROS). The team aims to realize a home service robot that assists humans at home, and it continuously attends the competition to evaluate the developed system. A brain-inspired artificial intelligence system is also proposed for service robots, which are expected to work in real home environments.

    From Commands to Goal-based Dialogs: A Roadmap to Achieve Natural Language Interaction in RoboCup@Home

    On the one hand, speech is a key aspect of people's communication. On the other, it is widely acknowledged that language proficiency is related to intelligence. Therefore, intelligent robots should be able to understand, at least, people's orders within their application domain. These insights are not new in RoboCup@Home, but we lack a long-term plan to evaluate this approach. In this paper we conduct a brief review of the achievements in automated speech recognition and natural language understanding in RoboCup@Home. Furthermore, we discuss the main challenges to tackle in spoken human-robot interaction within the scope of this competition. Finally, we contribute by presenting a pipelined roadmap to engender research in the area of natural language understanding applied to domestic service robotics. Comment: 12 pages, 2 tables, 1 figure. Accepted and presented (poster) at the RoboCup 2018 Symposium. In press.

    World Robot Challenge 2020 -- Partner Robot: A Data-Driven Approach for Room Tidying with Mobile Manipulator

    Tidying up a household environment using a mobile manipulator poses various challenges in robotics, such as adaptation to large real-world environmental variations and safe, robust deployment in the presence of humans. The Partner Robot Challenge in the World Robot Challenge (WRC) 2020, a global competition held in September 2021, benchmarked tidying tasks in real home environments and, importantly, tested full system performance. For this challenge, we developed an entire household service robot system that leverages a data-driven approach to adapt to the numerous edge cases that occur during execution, instead of classical manually pre-programmed solutions. In this paper, we describe the core ingredients of the proposed robot system, including visual recognition, object manipulation, and motion planning. Our robot system won the second prize, verifying the effectiveness and potential of data-driven robot systems for mobile manipulation in home environments.

    Object Recognition System using Deep Learning with Depth Images for Service Robots

    In an aging society with fewer children, service robots are expected to play an increasingly important role in people’s lives. To realize a future with service robots, a generic object recognition system is necessary to recognize a wide variety of objects with a high degree of accuracy. Therefore, this study employs deep convolutional neural networks for the generic object recognition system. To improve the accuracy of object recognition, both RGB images and depth images can be used effectively. In this paper, we propose a new architecture, “Dual Stream - VGG16 (DS-VGG16),” for a deep convolutional neural network that trains on both RGB images and depth images, and we also present a new training method for the proposed architecture. The experimental results indicate that the proposed architecture and training method are effective. Finally, we develop an object recognition system based on the proposed method that has a Robot Operating System (ROS) interface for integrating the system into service robots. Presented at the 2018 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS 2018), 27-30 November 2018, Okinawa, Japan.
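    The abstract does not give the DS-VGG16 details, so the following is only a minimal sketch of the general dual-stream idea it describes: two VGG16 feature extractors, one for RGB and one for depth (assumed here to be encoded as a 3-channel image), with late fusion before a shared classifier. It assumes PyTorch and torchvision; layer sizes and the fusion point are illustrative, not the authors' design.

    ```python
    # Minimal dual-stream RGB-D classifier sketch (not the authors' DS-VGG16).
    import torch
    import torch.nn as nn
    from torchvision.models import vgg16


    class DualStreamRGBD(nn.Module):
        def __init__(self, num_classes: int):
            super().__init__()
            # Independent convolutional streams for the RGB and depth inputs.
            self.rgb_stream = vgg16(weights=None).features
            self.depth_stream = vgg16(weights=None).features
            self.pool = nn.AdaptiveAvgPool2d((7, 7))
            # Late fusion: concatenate pooled features from both streams.
            self.classifier = nn.Sequential(
                nn.Linear(2 * 512 * 7 * 7, 4096),
                nn.ReLU(inplace=True),
                nn.Dropout(),
                nn.Linear(4096, num_classes),
            )

        def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
            f_rgb = self.pool(self.rgb_stream(rgb)).flatten(1)
            f_depth = self.pool(self.depth_stream(depth)).flatten(1)
            return self.classifier(torch.cat([f_rgb, f_depth], dim=1))


    if __name__ == "__main__":
        model = DualStreamRGBD(num_classes=10)
        rgb = torch.randn(1, 3, 224, 224)
        depth = torch.randn(1, 3, 224, 224)  # depth assumed encoded as 3 channels
        print(model(rgb, depth).shape)       # torch.Size([1, 10])
    ```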

    Semi-automatic Dataset Generation for Object Detection and Recognition and its Evaluation on Domestic Service Robots

    This paper proposes a method for the semi-automatic generation of a dataset for deep neural networks to perform end-to-end object detection and classification from images, intended for application to domestic service robots. In the proposed method, a background image of the floor or furniture is first captured. Subsequently, objects are captured from various viewpoints. Then, the background image and the object images are composited by the system (software) to generate images of the virtual scenes the robot is expected to encounter. At this point, the annotation files, which will be used as teaching signals by the deep neural network, are generated automatically, because the region and category of each object composited onto the background image are known. This reduces the human workload for dataset generation. Experimental results showed that the proposed method reduced the time taken to generate a data unit from 167 s, when performed manually, to 0.58 s, i.e., by a factor of approximately 1/287. The dataset generated using the proposed method was used to train a deep neural network, which was then applied to a domestic service robot for evaluation. The robot was entered into the World Robot Challenge, in which, out of ten trials, it succeeded in touching the target object eight times and grasping it four times.
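    The key point of the method above is that the paste position and size of each composited object are known, so the bounding-box annotation comes for free. The sketch below illustrates that idea only; it assumes PNG object cutouts with an alpha channel and a simple JSON record, and the file names and annotation format are hypothetical, not the authors' tooling.

    ```python
    # Illustrative compositing step: paste an object cutout onto a background
    # and emit the bounding box as the annotation, with no manual labelling.
    import json
    import random
    from PIL import Image


    def composite(background_path: str, object_path: str, label: str,
                  out_image: str, out_annotation: str) -> None:
        background = Image.open(background_path).convert("RGB")
        obj = Image.open(object_path).convert("RGBA")

        # Assumes the cutout is smaller than the background.
        x = random.randint(0, background.width - obj.width)
        y = random.randint(0, background.height - obj.height)

        # The paste position and cutout size define the box exactly.
        background.paste(obj, (x, y), mask=obj)
        background.save(out_image)

        annotation = {"image": out_image, "label": label,
                      "bbox": [x, y, obj.width, obj.height]}
        with open(out_annotation, "w") as f:
            json.dump(annotation, f)


    if __name__ == "__main__":
        composite("floor.jpg", "cup.png", "cup",
                  "scene_0001.jpg", "scene_0001.json")
    ```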

    A Hardware Intelligent Processing Accelerator for Domestic Service Robots

    We present a method for implementing a hardware intelligent processing accelerator on domestic service robots. These domestic service robots support human life; therefore, they are required to recognize their environments using intelligent processing. Moreover, such intelligent processing requires large computational resources, and the standard personal computers (PCs) running robot middleware on these robots do not have enough resources for it. We propose a ‘connective object for middleware to an accelerator (COMTA),’ a system that integrates hardware intelligent processing accelerators and robot middleware. Here, field-programmable gate arrays (FPGAs) accelerate the intelligent processing by implementing dedicated digital circuits. In addition, the system can configure and access applications on the hardware accelerators via the robot middleware space; consequently, robot engineers do not require knowledge of FPGAs. We conducted an experiment on the proposed system using a human-following application with image processing, which is commonly used on such robots. The experimental results demonstrated that the proposed system can be automatically constructed from a single configuration file on the robot middleware and can execute the application 5.2 times more efficiently than an ordinary PC.
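    The abstract describes the design idea of hiding the FPGA behind the middleware and building the system from a single configuration file. The sketch below is a hypothetical, middleware-agnostic illustration of that pattern only: the names, the configuration keys, and the device path are invented for illustration and are not the COMTA API.

    ```python
    # Hypothetical sketch: one configuration entry selects an FPGA-backed or
    # CPU-backed processor behind a common interface, so application code
    # never touches accelerator details. Not the actual COMTA interfaces.
    from dataclasses import dataclass
    from typing import Protocol


    class Processor(Protocol):
        def process(self, frame: bytes) -> bytes: ...


    @dataclass
    class CpuHumanTracker:
        def process(self, frame: bytes) -> bytes:
            # Software fallback: run the image-processing pipeline on the PC.
            return frame  # placeholder for the real tracking result


    @dataclass
    class FpgaHumanTracker:
        device: str

        def process(self, frame: bytes) -> bytes:
            # A real system would stream the frame to the FPGA circuit via a
            # device driver and read back the result; omitted here.
            return frame  # placeholder


    def build_processor(config: dict) -> Processor:
        """Construct the processing node from one configuration entry."""
        if config.get("backend") == "fpga":
            return FpgaHumanTracker(device=config.get("device", "/dev/fpga0"))
        return CpuHumanTracker()


    if __name__ == "__main__":
        node = build_processor({"backend": "fpga", "device": "/dev/fpga0"})
        node.process(b"\x00" * 640 * 480)
    ```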

    A Hardware Accelerated Robot Middleware Package for Intelligent Processing on Robots

    Service robots require the implementation of intelligent processing, e.g., image processing. However, the computational resources of the standard PCs typically used in service robots are not sufficient for such processing. Furthermore, robot middleware is widely used in many robots because such systems facilitate integration and are suitable for rapid prototyping. We propose a "connective object for middleware to accelerator (COMTA)," a processing system that combines hardware accelerators, i.e., field-programmable gate arrays (FPGAs), with robot middleware. Users can access the FPGAs in the proposed system via middleware interfaces; thus, knowledge of the complex internal circuits is not required. For human tracking using image processing, the proposed system can be constructed automatically from a single configuration file. The proposed system performs computation 3.3 times more efficiently than the standard PCs used in robots. Presented at the IEEE International Symposium on Circuits and Systems (ISCAS 2018), May 27-30, 2018, Florence, Italy.