4 research outputs found

    It4innovations/hyperqueue: v0.17.0-rc1

    No full text
# HyperQueue 0.17.0-rc1

## Breaking change

### Memory resource in megabytes

- The automatically detected resource "mem", which is the size of a worker's RAM, now uses megabytes as its unit; i.e. `--resource mem=100` now asks for 100 MiB (previously 100 bytes).

## New features

### Non-integer resource requests

- You may now request a non-integer amount of a resource, e.g. 0.5 of a GPU. This enables resource sharing at the logical level of the HyperQueue scheduler and allows the remaining part of the resource to be used by other tasks.

### Job submission

- You can now specify cleanup modes when passing `stdout`/`stderr` paths to tasks. A cleanup mode decides what should happen to the file once the task has finished executing. Currently, a single cleanup mode is implemented, which removes the file if the task has finished successfully:

```bash
$ hq submit --stdout=out.txt:rm-if-finished /my-program
```

## Fixes

- Fixed a crash when a task fails during its initialization.

# Artifact summary

- **hq-v0.17.0-rc1-\***: Main HyperQueue build containing the `hq` binary. **Download this archive to use HyperQueue from the command line.**
- **hyperqueue-0.17.0-rc1-\***: Wheel containing the `hyperqueue` package with HyperQueue Python bindings.
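The non-integer resource requests described above can be exercised directly from the `hq submit` CLI; a minimal sketch, assuming a worker that exposes a resource named `gpus` (the resource name and program path are illustrative, not from the release notes):

```bash
# Request half of one GPU; the scheduler can then co-locate another
# half-GPU task on the same device. "gpus" and ./my-program are
# illustrative placeholders.
$ hq submit --resource gpus=0.5 ./my-program
```

Note that the sharing happens at the logical level of the scheduler: HyperQueue accounts for the fractional allocation, but it is up to the tasks themselves to actually share the physical device.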

    A depth image quality benchmark of three popular low-cost depth cameras

    No full text
    A depth camera outputs an image in which each pixel depicts the distance between the camera plane and the corresponding point in the scene. Low-cost depth cameras are becoming commonplace, and given their applications in machine vision, one must carefully select the right device for the environment in which the camera will be used, since the accuracy of these cameras depends on factors such as distance from the target, luminosity of the environment, etc. This paper compares three depth cameras currently available on the market: the Intel RealSense D435, which uses stereo vision to compute per-pixel depth, and the ASUS Xtion and Microsoft Kinect 2, which represent time-of-flight-based depth cameras. The comparison is based on how the cameras perform at different distances from a flat surface, and we also check whether the colour of the surface affects depth image quality. Web of Science

    Tuning perception and motion planning parameters for Moveit! framework

    No full text
    This paper benchmarks the main perception and planning parameters available in the MoveIt! motion planning framework in order to identify the parameters that most affect the overall performance of the system. The initial benchmark is performed in a virtual simulation of a UR3 robot workspace with a single obstacle. Performance is measured in terms of successful runs and path planning and execution durations. Based on the results of this benchmark, three parameters are chosen to be optimized using Particle Swarm Optimization. The parameters are optimized for the same motion planning problem as in the first benchmark. To test the performance of the system with the optimized parameters, four more benchmarks are performed using the simulated and real robot workspaces. The results of these benchmarks indicate improvements in most of the measured indicators. Web of Science