3 research outputs found

    FogROS2: An Adaptive Platform for Cloud and Fog Robotics Using ROS 2

    Mobility, power, and price points often dictate that robots do not have sufficient computing power on board to run contemporary robot algorithms at desired rates. Cloud computing providers such as AWS, GCP, and Azure offer immense computing power on demand, but tapping into that power from a robot is non-trivial. We present FogROS2, an open-source platform to facilitate cloud and fog robotics that is compatible with the emerging Robot Operating System 2 (ROS 2) standard. FogROS2 is completely redesigned and distinct from its predecessor FogROS1 in 9 ways: it has lower latency, overhead, and startup times; improved usability; and additional automation, such as region and compute-type selection. Additionally, FogROS2 was added to the official distribution of ROS 2, gaining the performance, timing, and other improvements associated with ROS 2. In examples, FogROS2 reduces SLAM latency by 50%, reduces grasp planning time from 14 s to 1.2 s, and speeds up motion planning 28x. Compared to FogROS1, FogROS2 reduces network utilization by up to 3.8x, improves startup time by 63%, and reduces network round-trip latency by 97% for images using video compression. The source code, examples, and documentation for FogROS2 are available at https://github.com/BerkeleyAutomation/FogROS2, and the package is available through the official ROS 2 repository at https://index.ros.org/p/fogros2
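As a quick sanity check on the figures quoted in the abstract, the grasp-planning improvement (14 s down to 1.2 s) works out to roughly an 11.7x speedup, and a 50% latency reduction corresponds to a 2x improvement. A minimal arithmetic sketch, using only the numbers stated above:

```python
# Speedups implied by the figures quoted in the FogROS2 abstract.
grasp_speedup = 14 / 1.2               # 14 s on-robot -> 1.2 s with cloud offloading
slam_latency_factor = 1 / (1 - 0.50)   # a 50% latency reduction halves latency

print(f"grasp planning speedup: {grasp_speedup:.1f}x")     # -> 11.7x
print(f"SLAM latency improvement: {slam_latency_factor:.1f}x")  # -> 2.0x
```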

    Approximating Grasp Sampling Distribution using Normalizing Flow

    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ์œตํ•ฉ๊ณผํ•™๊ธฐ์ˆ ๋Œ€ํ•™์› ์ง€๋Šฅ์ •๋ณด์œตํ•ฉํ•™๊ณผ, 2022. 8. ๋ฐ•์žฌํฅ.๋กœ๋ด‡์ด ๋ฌผ์ฒด๋ฅผ ์žก์œผ๋ ค๋ฉด ์–ด๋””๋ฅผ ์žก์•„์•ผ ํ•˜๋Š”์ง€ ์ธ์ง€ํ•ด์•ผ ํ•œ๋‹ค. ๋กœ๋ด‡์˜ ํŒŒ์ง€ ๊ณผ ์ •์€ ๋ฌผ์ฒด๋ฅผ ์ธ์‹ํ•˜๊ณ , ๋ชฉํ‘œ๋ฌผ์— ๋Œ€ํ•ด ํŒŒ์ง€ ํ›„๋ณด ์ƒ์„ฑ์„ ํ•œ๋‹ค. ์ƒ์„ฑ๋œ ํŒŒ์ง€ ํ›„๋ณด๋“ค ์ค‘ ์ฃผ์œ„ ํ™˜๊ฒฝ๊ณผ ํŒŒ์ง€์˜ ์•ˆ์ •์„ฑ์„ ๊ณ ๋ คํ•˜์—ฌ ํŒŒ์ง€๋ฅผ ๊ฒฐ์ •ํ•œ๋‹ค. ์ด๋•Œ, ์ƒ์„ฑ๋œ ํŒŒ์ง€๋Š” ์ •ํ™•ํ•˜๊ณ , ๋‹ค์–‘ํ• ์ˆ˜๋ก ์ข‹๋‹ค. ์™œ๋ƒํ•˜๋ฉด, ์ƒ์„ฑ๋œ ํŒŒ์ง€๊ฐ€ ์ •ํ™•ํ• ์ˆ˜๋ก ๋™์ผ ํ™˜๊ฒฝ์— ํšจ ์œจ์ ์ธ ํŒŒ์ง€ ์ƒ์„ฑ์ด ๊ฐ€๋Šฅํ•˜๋ฉฐ, ํŒŒ์ง€ ํ›„๋ณด๋“ค์ด ๋‹ค์–‘ํ• ์ˆ˜๋ก ์—ฌ๋Ÿฌ ํ™˜๊ฒฝ ํ˜น์€ ํ™˜๊ฒฝ์˜ ๋ณ€ํ™”์— ๋Œ€์ฒ˜ํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ์ตœ๊ทผ, ๋‹ค์–‘ํ•œ ํŒŒ์ง€ ํ›„๋ณด์˜ ์ƒ์„ฑ ๋ฐ ๊ฒฐ์ •ํ•˜๋Š” ์—ฐ๊ตฌ๊ฐ€ ๋งŽ์ด ์ง„ํ–‰๋˜์—ˆ๋‹ค. ๋‹ค์–‘ํ•œ ํŒŒ์ง€ ํ›„๋ณด ์ƒ์„ฑ์˜ ํ•œ ๋ฐฉ๋ฒ•์œผ๋กœ ์ƒ์„ฑ ๋ชจ๋ธ์„ ํ™œ์šฉํ•˜์—ฌ ๋ฌผ์ฒด์˜ ๊ฐ€๋Šฅํ•œ ํŒŒ์ง€์˜ ๋ถ„ํฌ๋ฅผ ํ•™์Šตํ•œ๋‹ค. ํ•˜์ง€๋งŒ, ๋ฌผ์ฒด์˜ ๊ฐ€๋Šฅํ•œ ํŒŒ์ง€ ์ž์„ธ๋“ค์€ ๋ณต์žกํ•˜๊ณ  ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌํ•œ ๋ถ„ํฌ๋ฅผ ๋„ ๋ฉฐ, ๋ฌผ์ฒด์— ๋”ฐ๋ผ ๊ฐ€๋Šฅํ•œ ํŒŒ์ง€ ์ž์„ธ๋“ค์€ ๋ณ€ํ•œ๋‹ค. ์ด๋Ÿฌํ•œ ํŠน์„ฑ์„ ๋„๋Š” ํŒŒ์ง€ ํ›„๋ณด๋“ค์„ ์‹ค์ œ๋กœ ํ•™์Šตํ•˜๋Š” ๊ฒƒ์€ ์–ด๋ ต๋‹ค. ๋ณธ ๋…ผ๋ฌธ์€ ๋ณต์žกํ•œ ํ˜•์ƒ์„ ๊ฐ€์ง„ ๋ฌผ์ฒด์˜ ํŒŒ์ง€ ๋ถ„ํฌ๋ฅผ ๋ณด๋‹ค ์ •ํ™•ํ•˜๊ฒŒ ๊ทผ์‚ฌํ•˜๊ธฐ ์œ„ํ•ด ๋…ธ๋ง๋ผ์ด์ง• ํ”Œ๋กœ์šฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ•™์Šตํ•œ๋‹ค. ๋ฌผ์ฒด์— ๋”ฐ๋ฅธ ๊ฐ€๋Šฅํ•œ ํŒŒ์ง€ ์ž์„ธ๋ฅผ ์กฐ๊ฑด ํ™•๋ฅ ๋กœ ๋ชจ๋ธ๋งํ•˜๊ณ , ์ œ์•ˆํ•œ ๋ฐฉ๋ฒ•์œผ๋กœ ํ•™์Šตํ•œ ๊ฒฐ๊ณผ ํŒŒ์ง€ ๋ถ„ํฌ์˜ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ์„ฑ ์„ ์ž˜ ๋ฐ˜์˜ํ•œ๋‹ค. ์ œ์•ˆํ•œ ๋ฐฉ๋ฒ•์„ ๊ธฐ์กด ๋ฐฉ๋ฒ•๊ณผ ๋น„๊ต ์‹คํ—˜์„ ํ†ตํ•ด ๋‹ค์–‘ํ•˜๊ณ , ์ •ํ™•ํ•œ ํŒŒ์ง€๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๊ฒƒ์„ ์‹คํ—˜์ ์œผ๋กœ ๋ณด์ธ๋‹ค.When a robot grasps an object, it needs to reason where to grasp. The robot grasping process is as follows. First, a robot recognizes the target object and generates grasp candidates for it. Among the generated grasp candidates, grasp is determined in consideration of the surrounding environment and the stability of the grasp. 
The generated grasp poses should be both accurate and diverse: accurate grasps can be generated efficiently, without repeatedly searching for a good grasp in the same environment, while diverse grasps allow the robot to handle large variations in the environment. Recently, many studies have addressed generating and selecting grasp candidates. Among the most commonly adopted approaches, a generative model is used to represent diverse grasp candidates: it approximates them as a feasible grasp distribution and draws samples from the learned distribution. However, feasible grasp poses are complex and multimodal, and they vary with object shape, so in practice it is difficult to capture the diverse feasible grasp poses in a learned grasp distribution. In this work, we accurately approximate the feasible grasp distribution using a normalizing flow. We model the feasible grasp distribution for each object as a conditional probability density function, which better captures the multi-modality of the distribution. We show experimentally that the proposed method generates more diverse and accurate grasp poses than existing methods.
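The core mechanism named in the abstract, a normalizing flow, learns a density by pushing a simple base distribution through an invertible map and applying the change-of-variables formula. The following is a generic one-dimensional sketch with a single affine transform, not the thesis's conditional model over grasp poses; the parameters `a` and `b` are hypothetical stand-ins for learned values (in the conditional setting they would be produced by a network that takes the object representation as input):

```python
import numpy as np

rng = np.random.default_rng(0)

# A single affine flow x = a*z + b applied to a standard-normal base density.
a, b = 2.0, 5.0  # hypothetical "learned" flow parameters

def sample(n):
    """Draw samples by pushing base noise z ~ N(0, 1) through the flow."""
    z = rng.standard_normal(n)
    return a * z + b

def log_prob(x):
    """Change of variables: log p(x) = log N(z; 0, 1) - log|det dx/dz|, z = (x - b)/a."""
    z = (x - b) / a
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))
    return log_base - np.log(abs(a))

xs = sample(100_000)
# The flow maps N(0, 1) to N(b, a^2), so the sample mean is near b and the std near |a|.
print(round(xs.mean(), 1), round(xs.std(), 1))  # -> 5.0 2.0
```

Stacking several such invertible transforms (with nonlinear couplings) is what lets a flow fit the complex, multimodal grasp distributions described above while keeping the density exactly computable.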