
    MaestROB: A Robotics Framework for Integrated Orchestration of Low-Level Control and High-Level Reasoning

    This paper describes a framework called MaestROB. It is designed to make robots perform complex tasks with high precision from simple high-level instructions given in natural language or by demonstration. To realize this, it handles a hierarchical structure, using knowledge stored in the form of ontologies and rules to bridge between the different levels of instruction. Accordingly, the framework has multiple layers of processing components: perception and actuation control at the low level; a symbolic planner and Watson APIs for cognitive capabilities and semantic understanding; and, at its core, orchestration of these components by a new open-source robot middleware called Project Intu. We show how this framework can be used in a complex scenario where multiple actors (a human, a communication robot, and an industrial robot) collaborate to perform a common industrial task. A human teaches an assembly task to Pepper (a humanoid robot from SoftBank Robotics) using natural language conversation and demonstration. Our framework helps Pepper perceive the human demonstration and generate a sequence of actions for a UR5 (a collaborative robot arm from Universal Robots), which ultimately performs the assembly (e.g. insertion) task.

    Comment: IEEE International Conference on Robotics and Automation (ICRA) 2018. Video: https://www.youtube.com/watch?v=19JsdZi0TW
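    To make the layered architecture concrete, here is a minimal Python sketch of how a high-level instruction could flow through semantic parsing, symbolic planning, and low-level actuation. All class and method names are hypothetical illustrations, not the MaestROB or Project Intu API.

        # Hypothetical sketch of a hierarchical instruction-to-action pipeline,
        # in the spirit of the layered architecture described above.

        from dataclasses import dataclass


        @dataclass
        class Action:
            """A low-level primitive the robot controller can execute."""
            name: str
            params: dict


        class SemanticParser:
            """High level: map a natural-language instruction to a symbolic goal."""

            def parse(self, utterance: str) -> str:
                # A real system would call an NLU service (e.g. Watson APIs).
                if "insert" in utterance:
                    return "inserted(peg, hole)"
                raise ValueError(f"cannot interpret: {utterance!r}")


        class SymbolicPlanner:
            """Middle level: expand a symbolic goal into primitive actions,
            standing in for the ontology- and rule-based knowledge."""

            def plan(self, goal: str) -> list:
                if goal == "inserted(peg, hole)":
                    return [
                        Action("move_to", {"target": "peg"}),
                        Action("grasp", {"object": "peg"}),
                        Action("move_to", {"target": "hole"}),
                        Action("insert", {"object": "peg", "into": "hole"}),
                    ]
                raise ValueError(f"no plan for goal: {goal}")


        class RobotController:
            """Low level: execute primitives (prints instead of driving a UR5)."""

            def execute(self, action: Action) -> None:
                print(f"executing {action.name} with {action.params}")


        def orchestrate(utterance: str) -> None:
            """Core orchestration: wire the layers together, as middleware would."""
            controller = RobotController()
            for action in SymbolicPlanner().plan(SemanticParser().parse(utterance)):
                controller.execute(action)


        orchestrate("Please insert the peg into the hole.")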

    Low-Code/No-Code Artificial Intelligence Platforms for the Health Informatics Domain

    In the contemporary health informatics space, Artificial Intelligence (AI) has become a necessity for extracting actionable knowledge in a timely manner. Low-Code/No-Code (LCNC) AI platforms enable domain experts to leverage the value that AI has to offer by lowering the technical skills overhead. We develop domain-specific, service-oriented platforms in the context of two subdomains of health informatics. In this work we address the core principles and architectures of these platforms, whose functionality we are constantly extending. Our work conforms to best practices with respect to the integration and interoperability of external services and provides process orchestration in an LCNC, model-driven fashion. We chose the CINCO product DIME and a bespoke tool developed in CINCO Cloud as the underlying infrastructure for our LCNC platforms, which address the requirements of our two application domains: public health and biomedical research. In the context of public health, we present an environment for building AI-driven web applications for the automated evaluation of Web-Based Health Information (WBHI). With respect to biomedical research, we present an AI-driven workflow environment for the computational analysis of highly-plexed tissue images. We extended both underlying application stacks to support the various AI service functionality needed to address the requirements of the two application domains. The two case studies presented outline the methodology of developing these platforms through co-design with experts in the respective domains. Moving forward, we anticipate increasing re-use of components, which will reduce the development overhead of extending our existing platforms or developing new applications in similar domains.
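    The service integration and orchestration described above can be pictured as a workflow model that is plain data, executed by a small engine over uniformly wrapped services. The following Python sketch is purely illustrative: the service names and logic are invented, and the paper's actual DIME/CINCO Cloud models are graphical rather than textual code.

        # Hypothetical sketch of model-driven orchestration: the workflow is data
        # (a list of step names) and an engine resolves each step to a registered
        # service wrapper with a uniform dict-in/dict-out signature.

        SERVICES = {}


        def service(name):
            """Decorator registering a callable as an orchestratable service."""
            def register(fn):
                SERVICES[name] = fn
                return fn
            return register


        @service("fetch_page")
        def fetch_page(ctx):
            # A real wrapper would perform an HTTP GET against ctx["url"].
            ctx["html"] = f"<html>stub content for {ctx['url']}</html>"
            return ctx


        @service("evaluate_readability")
        def evaluate_readability(ctx):
            # Stand-in for an external AI service scoring health information.
            ctx["score"] = min(1.0, len(ctx["html"]) / 100)
            return ctx


        def run_workflow(model, ctx):
            """Execute a declarative workflow model step by step."""
            for step in model:
                ctx = SERVICES[step](ctx)
            return ctx


        # The workflow itself is just data, editable without touching code,
        # which is the essence of the low-code, model-driven approach.
        wbhi_workflow = ["fetch_page", "evaluate_readability"]
        print(run_workflow(wbhi_workflow, {"url": "https://example.org/health"})["score"])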

    Hyper: Distributed Cloud Processing for Large-Scale Deep Learning Tasks

    Training and deploying deep learning models in real-world applications requires processing large amounts of data. This becomes challenging when the data grows to hundreds of terabytes, or even petabyte scale. We introduce a hybrid distributed cloud framework with a unified view over multiple clouds and an on-premise infrastructure, for processing tasks using both CPU and GPU compute instances at scale. The system implements a distributed file system and a failure-tolerant task-processing scheduler, independent of the language and deep learning framework used. It makes it possible to utilize unstable, cheap cloud resources to significantly reduce costs. We demonstrate the scalability of the framework by running pre-processing, distributed training, hyperparameter search, and large-scale inference tasks utilizing 10,000 CPU cores and 300 GPU instances, with an overall processing power of 30 petaflops.
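    The failure tolerance needed to exploit unstable, cheap instances largely comes down to tracking task state and re-queuing work lost to failed workers. Below is a minimal single-process Python sketch of that retry logic; the names and structure are illustrative assumptions, not the Hyper implementation.

        # Hypothetical sketch of a failure-tolerant task scheduler: tasks that
        # fail (e.g. because a cheap preemptible worker disappeared) are
        # re-queued and retried up to a fixed attempt budget.

        import random
        from collections import deque
        from dataclasses import dataclass


        @dataclass
        class Task:
            task_id: int
            attempts: int = 0


        def unreliable_worker(task):
            """Stand-in for running a task on an unstable cloud instance:
            it fails 40% of the time, simulating preemption."""
            if random.random() < 0.4:
                raise RuntimeError(f"worker lost while running task {task.task_id}")
            return f"result of task {task.task_id}"


        def run_scheduler(tasks, max_attempts=5):
            """Drain the queue, re-queuing failed tasks until each succeeds
            or exhausts its retry budget."""
            queue = deque(tasks)
            results = {}
            while queue:
                task = queue.popleft()
                task.attempts += 1
                try:
                    results[task.task_id] = unreliable_worker(task)
                except RuntimeError:
                    if task.attempts < max_attempts:
                        queue.append(task)  # retry, as if on another worker
                    else:
                        results[task.task_id] = "FAILED"
            return results


        random.seed(7)
        print(run_scheduler([Task(i) for i in range(5)]))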