348 research outputs found

    Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph

    Full text link
    We study the problem of safe and intention-aware robot navigation in dense and interactive crowds. Most previous reinforcement learning (RL)-based methods fail to consider different types of interactions among all agents or ignore the intentions of people, which results in performance degradation. In this paper, we propose a novel recurrent graph neural network with attention mechanisms to capture heterogeneous interactions among agents through space and time. To encourage long-sighted robot behaviors, we infer the intentions of dynamic agents by predicting their future trajectories for several timesteps. The predictions are incorporated into a model-free RL framework to prevent the robot from intruding into the intended paths of other agents. We demonstrate that our method enables the robot to achieve good navigation performance and non-invasiveness in challenging crowd navigation scenarios. We successfully transfer the policy learned in simulation to a real-world TurtleBot 2i.
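    As an illustration only (not code from the paper), here is a minimal sketch of how predicted pedestrian waypoints could be turned into an intrusion penalty added to a model-free RL reward; the function name, safety radius, and array shapes are assumptions:

```python
import numpy as np

def intrusion_penalty(robot_pos, predicted_trajs, safe_radius=0.5, weight=2.0):
    """Penalize the robot for entering the predicted future paths of pedestrians.

    robot_pos:       (2,) current robot position.
    predicted_trajs: (num_humans, horizon, 2) predicted future waypoints.
    Returns a non-positive scalar to be added to the task reward.
    """
    # Distance from the robot to every predicted waypoint of every human.
    dists = np.linalg.norm(predicted_trajs - robot_pos, axis=-1)   # (num_humans, horizon)
    # Only waypoints closer than the safety radius contribute to the penalty.
    violations = np.clip(safe_radius - dists, 0.0, None)
    return -weight * violations.sum()

# Example: one pedestrian predicted to walk straight toward the robot.
robot = np.array([0.0, 0.0])
traj = np.array([[[1.0, 0.0], [0.6, 0.0], [0.3, 0.0]]])            # (1, 3, 2)
print(intrusion_penalty(robot, traj))   # negative: the robot sits in the intended path
```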

    A Data-driven Model for Interaction-aware Pedestrian Motion Prediction in Object Cluttered Environments

    Full text link
    This paper reports on a data-driven, interaction-aware motion prediction approach for pedestrians in environments cluttered with static obstacles. When navigating in such workspaces shared with humans, robots need accurate motion predictions of the surrounding pedestrians. Human navigation behavior is mostly influenced by their surrounding pedestrians and by the static obstacles in their vicinity. In this paper we introduce a new model based on Long Short-Term Memory (LSTM) neural networks, which is able to learn human motion behavior from demonstrated data. To the best of our knowledge, this is the first LSTM-based approach that incorporates both static obstacles and surrounding pedestrians for trajectory forecasting. As part of the model, we introduce a new way of encoding surrounding pedestrians based on a 1D grid in polar angle space. We evaluate the benefit of interaction-aware motion prediction and the added value of incorporating static obstacles on both simulated and real-world datasets by comparing with state-of-the-art approaches. The results show that our new approach outperforms the other approaches while being very computationally efficient, and that taking into account static obstacles for motion prediction significantly improves the prediction accuracy, especially in cluttered environments. Comment: 8 pages, accepted for publication at the IEEE International Conference on Robotics and Automation (ICRA) 201
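    The polar-angle encoding lends itself to a short sketch. The following is a hedged illustration, not the authors' exact formulation: each angular sector around the ego pedestrian stores the normalized distance to its closest neighbor, and the resulting vector is fed to the LSTM at every timestep; the bin count and normalization are assumptions.

```python
import numpy as np

def polar_angle_grid(ego_pos, neighbor_pos, num_bins=36, max_range=10.0):
    """Encode surrounding pedestrians as a 1D grid over polar angle.

    Each bin covers 360/num_bins degrees around the ego pedestrian and stores the
    normalized distance to the closest neighbor in that sector; empty sectors read 1.0.
    """
    grid = np.ones(num_bins)
    offsets = neighbor_pos - ego_pos                       # (N, 2)
    dists = np.linalg.norm(offsets, axis=1)
    angles = np.arctan2(offsets[:, 1], offsets[:, 0])      # in [-pi, pi]
    bins = ((angles + np.pi) / (2 * np.pi) * num_bins).astype(int) % num_bins
    for b, d in zip(bins, dists):
        if d < max_range:
            grid[b] = min(grid[b], d / max_range)
    return grid                                            # (num_bins,) per-timestep feature

# Example: one neighbor ahead of the ego pedestrian and one to its left.
ego = np.array([0.0, 0.0])
neighbors = np.array([[2.0, 0.0], [0.0, 3.0]])
g = polar_angle_grid(ego, neighbors)
print(np.flatnonzero(g < 1.0), g[g < 1.0])                 # occupied sectors and their normalized distances
```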

    ์˜๋ฏธ๋ก ์  ํ™˜๊ฒฝ ์ดํ•ด ๊ธฐ๋ฐ˜ ์ธ๊ฐ„ ๋กœ๋ด‡ ํ˜‘์—…

    Get PDF
    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Department of Electrical and Information Engineering, February 2020. Advisor: ์ด๋ฒ”ํฌ. Human-robot cooperation is unavoidable in various applications ranging from manufacturing to field robotics owing to the advantages of adaptability and high flexibility. In particular, complex task planning in large, unstructured, and uncertain environments can employ the complementary capabilities of humans and diverse robots. For a team to be effective, knowledge regarding team goals and the current situation needs to be effectively shared, as it affects decision making. In this respect, semantic scene understanding in natural language is one of the most fundamental components for information sharing between humans and heterogeneous robots, as it lets robots perceive the surrounding environment in a form that both humans and other robots can understand. Moreover, natural-language-based scene understanding can reduce network congestion and improve the reliability of acquired data. In field robotics especially, transmission of raw sensor data increases network bandwidth usage and decreases quality of service. We can resolve this problem by transmitting information in the form of natural language that encodes semantic representations of the environment. In this dissertation, I introduce a human and heterogeneous robot cooperation scheme based on semantic scene understanding. I generate sentences and scene graphs, which are natural-language-grounded graphs over the detected objects and their relationships, from the graph map produced by a robot mapping algorithm. Subsequently, a framework that can utilize these results for cooperative mission planning of humans and robots is proposed. Experiments were performed to verify the effectiveness of the proposed methods. This dissertation comprises two parts: graph-based scene understanding, and cooperation between humans and heterogeneous robots based on that understanding. For the former, I introduce a novel natural language processing method using a semantic graph map. Although semantic graph maps have been widely applied to study the perceptual aspects of the environment, such maps have found little application in natural language processing tasks. In computer vision, several studies have addressed the understanding of workspace images and the automatic generation of sentences, but sequential scenes have not yet been utilized for sentence generation. A graph convolutional neural network, which comprises spectral graph convolution and graph coarsening layers, and a recurrent neural network are employed to generate sentences with attention over graphs. The proposed method outperforms the conventional methods on a publicly available dataset for single scenes and can be utilized for sequential scenes. Recently, deep learning has demonstrated impressive progress in scene understanding using natural language. However, it has not been extensively applied to high-level processes such as causal reasoning, analogical reasoning, or planning. The symbolic approach, which calculates a sequence of appropriate actions by combining the available skills of agents, excels at reasoning and planning; however, it does not fully consider semantic knowledge acquisition for human-robot information sharing.
For the latter part, an architecture that combines deep learning techniques and a symbolic planner is proposed, enabling humans and heterogeneous robots to achieve a shared goal based on semantic scene understanding. In this study, graph-based perception is used for scene understanding. A Planning Domain Definition Language (PDDL) planner and JENA-TDB are utilized for mission planning and for storage of acquired information, respectively. The effectiveness of the proposed method is verified in simulation in two situations: a mission failure, in which the dynamic environment changes, and object detection in a large and unseen environment.
์ œ์•ˆํ•œ ๋ฐฉ๋ฒ•์€ ๊ธฐ์กด์˜ ๋ฐฉ๋ฒ•๋“ค๋ณด๋‹ค ํ•œ ์žฅ๋ฉด์— ๋Œ€ํ•ด ํ–ฅ์ƒ๋œ ์„ฑ๋Šฅ์„ ๋ณด์˜€์œผ๋ฉฐ ์—ฐ์†๋œ ์žฅ๋ฉด๋“ค์— ๋Œ€ํ•ด์„œ๋„ ์„ฑ๊ณต์ ์œผ๋กœ ์ž์—ฐ์–ด ๋ฌธ์žฅ์„ ์ƒ์„ฑํ•œ๋‹ค. ์ตœ๊ทผ ๋”ฅ๋Ÿฌ๋‹์€ ์ž์—ฐ์–ด ๊ธฐ๋ฐ˜ ํ™˜๊ฒฝ ์ธ์ง€์— ์žˆ์–ด ๊ธ‰์†๋„๋กœ ํฐ ๋ฐœ์ „์„ ์ด๋ฃจ์—ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ธ๊ณผ ์ถ”๋ก , ์œ ์ถ”์  ์ถ”๋ก , ์ž„๋ฌด ๊ณ„ํš๊ณผ ๊ฐ™์€ ๋†’์€ ์ˆ˜์ค€์˜ ํ”„๋กœ์„ธ์Šค์—๋Š” ์ ์šฉ์ด ํž˜๋“ค๋‹ค. ๋ฐ˜๋ฉด ์ž„๋ฌด๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐ ์žˆ์–ด ๊ฐ ์—์ด์ „ํŠธ์˜ ๋Šฅ๋ ฅ์— ๋งž๊ฒŒ ํ–‰์œ„๋“ค์˜ ์ˆœ์„œ๋ฅผ ๊ณ„์‚ฐํ•ด์ฃผ๋Š” ์ƒ์ง•์  ์ ‘๊ทผ๋ฒ•(symbolic approach)์€ ์ถ”๋ก ๊ณผ ์ž„๋ฌด ๊ณ„ํš์— ์žˆ์–ด ๋›ฐ์–ด๋‚œ ์„ฑ๋Šฅ์„ ๋ณด์ด์ง€๋งŒ ์ธ๊ฐ„๊ณผ ๋กœ๋ด‡๋“ค ์‚ฌ์ด์˜ ์˜๋ฏธ๋ก ์  ์ •๋ณด ๊ณต์œ  ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด์„œ๋Š” ๊ฑฐ์˜ ๋‹ค๋ฃจ์ง€ ์•Š๋Š”๋‹ค. ๋”ฐ๋ผ์„œ, ์ธ๊ฐ„๊ณผ ์ด์ข… ๋กœ๋ด‡ ๊ฐ„์˜ ํ˜‘์—… ๋ฐฉ๋ฒ• ๋ถ€๋ถ„์—์„œ๋Š” ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ฒ•๋“ค๊ณผ ์ƒ์ง•์  ํ”Œ๋ž˜๋„ˆ(symbolic planner)๋ฅผ ์—ฐ๊ฒฐํ•˜๋Š” ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์ œ์•ˆํ•˜์—ฌ ์˜๋ฏธ๋ก ์  ์ดํ•ด๋ฅผ ํ†ตํ•œ ์ธ๊ฐ„ ๋ฐ ์ด์ข… ๋กœ๋ด‡ ๊ฐ„์˜ ํ˜‘์—…์„ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•œ๋‹ค. ์šฐ๋ฆฌ๋Š” ์˜๋ฏธ๋ก ์  ์ฃผ๋ณ€ ํ™˜๊ฒฝ ์ดํ•ด๋ฅผ ์œ„ํ•ด ์ด์ „ ๋ถ€๋ถ„์—์„œ ์ œ์•ˆํ•œ ๊ทธ๋ž˜ํ”„ ๊ธฐ๋ฐ˜ ์ž์—ฐ์–ด ๋ฌธ์žฅ ์ƒ์„ฑ์„ ์ˆ˜ํ–‰ํ•œ๋‹ค. PDDL ํ”Œ๋ž˜๋„ˆ์™€ JENA-TDB๋Š” ๊ฐ๊ฐ ์ž„๋ฌด ๊ณ„ํš ๋ฐ ์ •๋ณด ํš๋“ ์ €์žฅ์†Œ๋กœ ์‚ฌ์šฉํ•œ๋‹ค. ์ œ์•ˆํ•œ ๋ฐฉ๋ฒ•์˜ ํšจ์šฉ์„ฑ์€ ์‹œ๋ฎฌ๋ ˆ์ด์…˜์„ ํ†ตํ•ด ๋‘ ๊ฐ€์ง€ ์ƒํ™ฉ์— ๋Œ€ํ•ด์„œ ๊ฒ€์ฆํ•œ๋‹ค. ํ•˜๋‚˜๋Š” ๋™์  ํ™˜๊ฒฝ์—์„œ ์ž„๋ฌด ์‹คํŒจ ์ƒํ™ฉ์ด๋ฉฐ ๋‹ค๋ฅธ ํ•˜๋‚˜๋Š” ๋„“์€ ๊ณต๊ฐ„์—์„œ ๊ฐ์ฒด๋ฅผ ์ฐพ๋Š” ์ƒํ™ฉ์ด๋‹ค.1 Introduction 1 1.1 Background and Motivation 1 1.2 Literature Review 5 1.2.1 Natural Language-Based Human-Robot Cooperation 5 1.2.2 Artificial Intelligence Planning 5 1.3 The Problem Statement 10 1.4 Contributions 11 1.5 Dissertation Outline 12 2 Natural Language-Based Scene Graph Generation 14 2.1 Introduction 14 2.2 Related Work 16 2.3 Scene Graph Generation 18 2.3.1 Graph Construction 19 2.3.2 Graph Inference 19 2.4 Experiments 22 2.5 Summary 25 3 Language Description with 3D Semantic Graph 26 3.1 Introduction 26 3.2 Related Work 26 3.3 Natural Language Description 29 3.3.1 Preprocess 29 3.3.2 Graph Feature Extraction 33 3.3.3 Natural Language Description with Graph Features 34 3.4 Experiments 35 3.5 Summary 42 4 Natural Question with Semantic Graph 43 4.1 Introduction 43 4.2 Related Work 45 4.3 Natural Question Generation 47 4.3.1 Preprocess 49 4.3.2 Graph Feature Extraction 50 4.3.3 Natural Question with Graph Features 51 4.4 Experiments 52 4.5 Summary 58 5 PDDL Planning with Natural Language 59 5.1 Introduction 59 5.2 Related Work 60 5.3 PDDL Planning with Incomplete World Knowledge 61 5.3.1 Natural Language Process for PDDL Planning 63 5.3.2 PDDL Planning System 64 5.4 Experiments 65 5.5 Summary 69 6 PDDL Planning with Natural Language-Based Scene Understanding 70 6.1 Introduction 70 6.2 Related Work 74 6.3 A Framework for Heterogeneous Multi-Agent Cooperation 77 6.3.1 Natural Language-Based Cognition 78 6.3.2 Knowledge Engine 80 6.3.3 PDDL Planning Agent 81 6.4 Experiments 82 6.4.1 Experiment Setting 82 6.4.2 Scenario 84 6.4.3 Results 87 6.5 Summary 91 7 Conclusion 92Docto

    Body gestures recognition for human robot interaction

    Get PDF
    In this project, a solution for human gesture classification is proposed. The solution uses a Deep Learning model and is meant to be useful for non-verbal communication between humans and robots. The State of the Art is researched in an effort to achieve a model ready to work with natural gestures without restrictions. The research focuses on the creation of a temPoral bOdy geSTUre REcognition model (POSTURE) that can recognise continuous gestures performed in real-life situations. The suggested model takes into account spatial and temporal components so as to achieve the recognition of more natural and intuitive gestures. In a first step, a framework extracts from all the images the corresponding landmarks for each of the body joints. Next, some data filtering techniques are applied with the aim of avoiding problems related to the data. Afterwards, the filtered data is input into a state-of-the-art neural network. Finally, different neural network configurations and approaches are tested to find the optimal performance. The obtained outcome shows that the research is on the right track and that, despite the dataset problems found, even better results can be achieved. Sustainable Development Goals::9 - Industry, Innovation and Infrastructure
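    As a hedged sketch of this kind of pipeline (not the POSTURE model itself), the following PyTorch snippet classifies a sequence of per-frame body-landmark vectors with an LSTM; the joint count, hidden size, and number of gesture classes are placeholders:

```python
import torch
import torch.nn as nn

class GestureLSTM(nn.Module):
    """Toy temporal classifier over per-frame body-landmark vectors."""
    def __init__(self, num_joints=25, hidden=128, num_classes=10):
        super().__init__()
        # Each frame is a flattened set of 2D joint coordinates (num_joints * 2 values).
        self.lstm = nn.LSTM(input_size=num_joints * 2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                  # x: (batch, frames, num_joints * 2)
        _, (h_n, _) = self.lstm(x)         # final hidden state summarizes the sequence
        return self.head(h_n.squeeze(0))   # (batch, num_classes) gesture logits

# Example: a batch of 4 landmark sequences, 30 frames each.
model = GestureLSTM()
print(model(torch.randn(4, 30, 50)).shape)   # torch.Size([4, 10])
```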

    Ornithopter Trajectory Optimization with Neural Networks and Random Forest

    Get PDF
    Trajectory optimization has recently been addressed to compute energy-efficient routes for ornithopter navigation, but its online application remains a challenge. To overcome the high computation time of traditional approaches, this paper proposes algorithms that recursively generate trajectories based on the output of neural networks and random forests. To this end, we create a large data set composed of energy-efficient trajectories obtained by running a competitive planner. To the best of our knowledge, our proposed data set is the first one with a high number of pseudo-optimal paths for ornithopter trajectory optimization. We compare the performance of three methods to compute low-cost trajectories: two classification approaches that learn maneuvers and an alternative regression method that predicts new states. The algorithms are tested in several scenarios, including the landing case. The effectiveness and efficiency of the proposed algorithms are demonstrated through simulations, which show that the machine learning techniques can be used to compute the flight path of the ornithopter in real time, even under uncertainties such as wrong sensor readings or re-positioning of the target. Random Forest obtains the highest performance, with more than 99% and 97% accuracy in a landing and a mid-range scenario, respectively. Ministerio de Economía y Competitividad MTM2016-76272-R AEI/FEDER, UE. Ministerio de Ciencia e Innovación PID2020-114154RB-I00. Unión Europea, Horizon 2020, Marie Sklodowska-Curie grant agreement #73492
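    As an illustrative scikit-learn sketch of the classification variant described above, a random forest predicts the next maneuver from the current state and the trajectory is rolled out recursively. The state variables, maneuver set, labels, and transition model below are invented stand-ins for the paper's ornithopter dynamics and planner-generated data set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in: states are (x, altitude, speed); maneuvers are 0=glide, 1=flap, 2=dive.
rng = np.random.default_rng(0)
states = rng.uniform([0.0, 0.0, 5.0], [100.0, 50.0, 15.0], size=(500, 3))
maneuvers = rng.integers(0, 3, size=500)        # in practice: labels from pseudo-optimal trajectories

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(states, maneuvers)

def apply_maneuver(state, m):
    """Hypothetical transition model; the real one comes from the ornithopter dynamics."""
    dx, dz = 2.0, {0: -0.5, 1: 1.0, 2: -2.0}[m]
    return state + np.array([dx, dz, 0.0])

def rollout(state, steps=10):
    """Recursively query the forest for the next maneuver and integrate the state."""
    traj = [state]
    for _ in range(steps):
        m = int(clf.predict(state.reshape(1, -1))[0])
        state = apply_maneuver(state, m)
        traj.append(state)
    return np.array(traj)

print(rollout(np.array([0.0, 30.0, 10.0])).shape)   # (11, 3) trajectory of states
```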

    Neural network based country wise risk prediction of COVID-19

    Get PDF
    The recent worldwide outbreak of the novel coronavirus (COVID-19) has opened up new challenges to the research community. Artificial intelligence (AI)-driven methods can be useful to predict the parameters, risks, and effects of such an epidemic, and such predictions can be helpful in controlling and preventing the spread of such diseases. The main challenges of applying AI are the small volume of data and its uncertain nature. Here, we propose a shallow long short-term memory (LSTM) based neural network to predict the risk category of a country. We have used a Bayesian optimization framework to optimize and automatically design country-specific networks. The results show that the proposed pipeline outperforms state-of-the-art methods for data of 180 countries and can be a useful tool for such risk categorization. We have also experimented with the trend data and weather data combined for the prediction. The outcome shows that the weather does not have a significant role. The tool can be used to predict long-duration outbreaks of such epidemics so that we can take preventive steps earlier.
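    A hedged sketch of what a shallow LSTM risk classifier of this kind might look like; the 14-day window, hidden size, and three risk categories are assumptions for illustration, and the Bayesian hyperparameter optimization is omitted:

```python
import torch
import torch.nn as nn

class RiskLSTM(nn.Module):
    """Shallow LSTM mapping a window of normalized daily trend values to a risk category."""
    def __init__(self, hidden=32, num_classes=3):    # e.g. low / medium / high risk
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                 # x: (batch, days, 1)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n.squeeze(0))  # (batch, num_classes) logits

# Example: 180 countries, each with a 14-day window of normalized daily case counts.
model = RiskLSTM()
windows = torch.rand(180, 14, 1)
print(model(windows).argmax(dim=1)[:10])   # predicted risk category for the first 10 countries
```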