KittingBot: A Mobile Manipulation Robot for Collaborative Kitting in Automotive Logistics
Individualized manufacturing of cars requires kitting: the collection of
individual sets of part variants for each car. This challenging logistics task
is frequently performed manually by warehousemen. We propose a mobile
manipulation robotic system for autonomous kitting, building on the KUKA Miiwa
platform, which consists of an omnidirectional base, a 7 DoF collaborative iiwa
manipulator, cameras, and distance sensors. Software modules for detection and
pose estimation of transport boxes, part segmentation in these containers,
recognition of part variants, grasp generation, and arm trajectory optimization
have been developed and integrated. Our system is designed for collaborative
kitting, i.e., some parts are collected by warehousemen while other parts are
picked by the robot. To address safe human-robot collaboration, fast arm
trajectory replanning considering previously unforeseen obstacles is realized.
The developed system was evaluated in the European Robotics Challenge 2, where
the Miiwa robot demonstrated autonomous kitting, part variant recognition, and
avoidance of unforeseen obstacles.
Comment: Accepted and published at IAS-15
(http://conference.vde.com/ias/Pages/Homepage.aspx)
Suction Grasp Region Prediction using Self-supervised Learning for Object Picking in Dense Clutter
This paper focuses on robotic picking tasks in cluttered scenarios. Because of
the diversity of object poses, stacking configurations, and complicated
backgrounds in bin-picking situations, it is difficult to recognize objects and
estimate their poses before grasping them. Here, this paper combines a ResNet
backbone with a U-Net structure, a framework of Convolutional Neural Networks
(CNNs), to predict picking regions without recognition and pose estimation,
enabling the robotic picking system to learn picking skills from scratch. At
the same time, we train the network end to end with online samples. At the end
of this paper, several experiments are conducted to demonstrate the performance
of our method.
Comment: 6 pages, 7 figures, conference
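The encoder-decoder-with-skip-connections idea that the paper builds on (a ResNet-style encoder feeding a U-Net decoder) can be illustrated with a minimal NumPy sketch. This is a toy forward pass, not the authors' network: the pooling, upsampling, and channel counts are illustrative assumptions, and the skip connection is the one structural element being demonstrated.

```python
import numpy as np

def downsample(x):
    """2x2 max pooling over an (H, W, C) feature map."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample(x):
    """2x nearest-neighbour upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like_forward(img):
    """Toy U-Net-style pass: encode, decode, and concatenate the
    skip connection so fine spatial detail reaches the output."""
    skip = img                      # encoder feature kept for the skip path
    bottleneck = downsample(img)    # coarser, semantically pooled features
    decoded = upsample(bottleneck)  # back to input resolution
    return np.concatenate([decoded, skip], axis=-1)

x = np.random.rand(8, 8, 3)
y = unet_like_forward(x)
print(y.shape)  # (8, 8, 6): decoder channels + skip channels
```

In a real picking-region network the pooling and upsampling stages would be learned convolutions, but the skip concatenation shown here is what lets a per-pixel picking-region map stay spatially sharp.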
PI-Edge: A Low-Power Edge Computing System for Real-Time Autonomous Driving Services
To simultaneously enable multiple autonomous driving services on affordable
embedded systems, we designed and implemented π-Edge, a complete edge
computing framework for autonomous robots and vehicles. The contributions of
this paper are threefold: first, we developed a runtime layer to fully
utilize the heterogeneous computing resources of low-power edge computing
systems; second, we developed an extremely lightweight operating system to
manage multiple autonomous driving services and their communications; third, we
developed an edge-cloud coordinator to dynamically offload tasks to the cloud
to optimize client system energy consumption. To the best of our knowledge,
this is the first complete edge computing system of a production autonomous
vehicle. In addition, we successfully implemented π-Edge on an Nvidia Jetson
and demonstrated that we could support multiple autonomous driving services
with only 11 W of power consumption, thus proving the effectiveness of the
proposed π-Edge system.
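The edge-cloud coordinator's offloading decision can be sketched as a simple client-side energy comparison: offload when the energy to transmit the task plus the energy to idle while waiting costs less than computing locally. All parameters below (per-byte compute energy, radio power, uplink rate, cloud latency) are illustrative assumptions, not measurements from the π-Edge system.

```python
def should_offload(task_bytes, local_joules_per_byte,
                   tx_watts, uplink_bytes_per_sec,
                   cloud_latency_sec, idle_watts):
    """Offload a task to the cloud iff doing so costs the client
    less energy than computing it locally (hypothetical model)."""
    local_energy = task_bytes * local_joules_per_byte
    tx_energy = tx_watts * (task_bytes / uplink_bytes_per_sec)
    wait_energy = idle_watts * cloud_latency_sec
    return tx_energy + wait_energy < local_energy

# A large, compute-heavy task: offloading wins (4.025 J vs 10 J locally).
print(should_offload(10_000_000, 1e-6, 2.0, 5_000_000, 0.05, 0.5))  # True
# A tiny task: the radio and idle overhead dominate, so compute locally.
print(should_offload(1_000, 1e-6, 2.0, 5_000_000, 0.05, 0.5))  # False
```

A production coordinator would also account for deadlines and link variability, but the break-even structure is the same.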
Stereo Vision Based Single-Shot 6D Object Pose Estimation for Bin-Picking by a Robot Manipulator
We propose a fast and accurate method of 6D object pose estimation for
bin-picking of mechanical parts by a robot manipulator. We extend the
single-shot approach to stereo vision by application of attention architecture.
Our convolutional neural network model regresses to object locations and
rotations from either a left image or a right image without depth information.
Then, a stereo feature matching module, designated as Stereo Grid Attention,
generates stereo grid matching maps. A key point of our method is that
disparity is calculated only for the objects found by the attention mechanism
from the stereo images, instead of computing a point cloud over the entire
image. The disparity value is then used to calculate the depth to the objects
by the principle of triangulation. Owing to its single-shot architecture, our
method also achieves rapid pose estimation, processing a 1024 x 1024 pixel
image in 75 milliseconds on the Jetson AGX Xavier with a half-float model.
Weakly textured mechanical parts are used to
exemplify the method. First, we create an original synthetic dataset for
training and evaluating the proposed model. This dataset is created by
capturing and
rendering numerous 3D models of several types of mechanical parts in virtual
space. Finally, we use a robotic manipulator with an electromagnetic gripper to
pick up the mechanical parts in a cluttered state to verify the validity of our
method in an actual scene. When raw stereo images from our stereo camera are
used by the proposed method to detect black steel screws, stainless screws,
and DC motor parts, i.e., cases, rotor cores, and commutator caps, the
bin-picking tasks succeed with 76.3%, 64.0%, 50.5%, 89.1%, and 64.2%
probability, respectively.
Comment: 7 pages, 8 figures
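The triangulation step the abstract mentions is the standard pinhole-stereo relation Z = f·B/d: depth is the focal length times the baseline divided by the disparity. The camera parameters below are illustrative, not those of the authors' rig.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo triangulation: Z = f * B / d.
    focal_px is the focal length in pixels, baseline_m the
    distance between the two cameras in metres (example values)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A 20-pixel disparity with a 1000 px focal length and a 6 cm baseline:
print(depth_from_disparity(20.0, 1000.0, 0.06))  # 3.0 metres
```

Because depth varies as 1/d, computing disparity only at attended object locations, as the paper does, avoids the cost of densifying this relation over every pixel.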
Modeling 3D scanned data to visualize the built environment
Capturing and modeling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including EDM, GPS, photogrammetric, and remote sensing applications.
In this paper, we discuss 3D laser scanning technology, which can acquire high-density point data in an accurate, fast way and can therefore benefit the refurbishment process in the built environment. The scanner can digitize all the 3D information concerned with a building down to millimetre detail. A series of scans, external and internal, allows an accurate 3D model of the building to be produced. This model can be "sliced" through different planes to produce accurate 2D plans and elevations. This novel technology improves the efficiency and quality of construction projects, such as the maintenance of buildings or groups of buildings that are going to be renovated for new services. Although data capture is more efficient using a laser scanner than most other techniques, data modeling still presents significant research problems. These are addressed in this paper.
The paper describes the research undertaken in the EU funded (FP6 IP) INTELCITIES project concerning 3D laser scanner technology for CAD modeling and its integration with various systems such as 3D printing and VR projection systems. It also considers research to be undertaken in the EU funded (INTERREG) Virtual Environmental Planning Systems (VEPS) project in the next 2 years. Following this, an approach for data modeling of scanned data is introduced, through which the information belonging to existing buildings can be stored in a database to use in building, urban, and regional scale models.
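The "slicing" of a scanned model through a plane to obtain a 2D plan can be sketched as a point-cloud filter: keep the (x, y) footprint of every point within a tolerance of a horizontal cutting plane. The coordinates and tolerance below are made-up illustration values, not scanner data from the project.

```python
def slice_plan(points, z_level, tolerance=0.05):
    """Return the (x, y) footprint of all scanned points lying
    within +/- tolerance of a horizontal cutting plane, i.e. a
    crude 2D plan extracted from a 3D point cloud."""
    return [(x, y) for (x, y, z) in points
            if abs(z - z_level) <= tolerance]

# Two points near the z = 1.0 m plane survive; the roof point does not.
cloud = [(0.0, 0.0, 1.02), (1.0, 0.5, 0.99), (2.0, 2.0, 3.40)]
print(slice_plan(cloud, z_level=1.0))  # [(0.0, 0.0), (1.0, 0.5)]
```

Real slicing software would additionally fit lines or surfaces to the retained points to produce clean CAD linework, which is where the data-modeling research problems the paper addresses begin.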
Micro-Doppler Based Human-Robot Classification Using Ensemble and Deep Learning Approaches
Radar sensors can be used for analyzing the induced frequency shifts due to
micro-motions in both range and velocity dimensions, identified as
micro-Doppler (μ-D) and micro-Range (μ-R), respectively. Different moving
targets will have unique μ-D and μ-R signatures that can be used for target
classification.
Such classification can be used in numerous fields, such as gait recognition,
safety and surveillance. In this paper, a 25 GHz FMCW Single-Input
Single-Output (SISO) radar is used in industrial safety for real-time
human-robot identification. Due to the real-time constraint, joint
Range-Doppler (R-D) maps are directly analyzed for our classification problem.
Furthermore, a comparison between the conventional classical learning
approaches with handcrafted extracted features, ensemble classifiers and deep
learning approaches is presented. For ensemble classifiers, restructured range
and velocity profiles are passed directly to ensemble trees, such as gradient
boosting and random forest without feature extraction. Finally, a Deep
Convolutional Neural Network (DCNN) is used and raw R-D images are directly fed
into the constructed network. The DCNN shows a superior performance of 99%
accuracy in identifying humans from robots on a single R-D map.
Comment: 6 pages, accepted in IEEE Radar Conference 201
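The joint Range-Doppler map that the classifiers consume is obtained from an FMCW frame by a 2D FFT: one FFT over fast-time samples resolves range, one over chirps resolves Doppler. The sketch below simulates a single ideal point target with made-up bin positions and frame dimensions; it is not the 25 GHz radar's actual signal chain.

```python
import numpy as np

# Simulated FMCW frame: n_samples fast-time samples per chirp,
# n_chirps chirps. A point target produces a constant beat frequency
# in fast time and a constant phase step across chirps.
n_samples, n_chirps = 64, 32
range_bin, doppler_bin = 10, 5          # ground-truth target bins
n = np.arange(n_samples)[:, None]        # fast-time index
m = np.arange(n_chirps)[None, :]         # slow-time (chirp) index
frame = np.exp(2j * np.pi * (range_bin * n / n_samples
                             + doppler_bin * m / n_chirps))

# Range-Doppler map: FFT over both axes; the magnitude image is the
# joint R-D map fed to the classifiers.
rd_map = np.abs(np.fft.fft2(frame))
peak = tuple(int(i) for i in np.unravel_index(np.argmax(rd_map), rd_map.shape))
print(peak)  # (10, 5): the target's range and Doppler bins
```

A human target spreads energy across several Doppler bins from limb micro-motions, while a robot's signature is more concentrated; that difference in the R-D image is what the DCNN learns.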
Convolutional nets for reconstructing neural circuits from brain images acquired by serial section electron microscopy
Neural circuits can be reconstructed from brain images acquired by serial
section electron microscopy. Image analysis has been performed by manual labor
for half a century, and efforts at automation date back almost as far.
Convolutional nets were first applied to neuronal boundary detection a dozen
years ago, and have now achieved impressive accuracy on clean images. Robust
handling of image defects is a major outstanding challenge. Convolutional nets
are also being employed for other tasks in neural circuit reconstruction:
finding synapses and identifying synaptic partners, extending or pruning
neuronal reconstructions, and aligning serial section images to create a 3D
image stack. Computational systems are being engineered to handle petavoxel
images of cubic millimeter brain volumes.
CAVBench: A Benchmark Suite for Connected and Autonomous Vehicles
Connected and autonomous vehicles (CAVs) have recently attracted a
significant amount of attention both from researchers and industry. Numerous
studies targeting algorithms, software frameworks, and applications on the CAVs
scenario have emerged. Meanwhile, several pioneer efforts have focused on the
edge computing system and architecture design for the CAVs scenario and
provided various heterogeneous platform prototypes for CAVs. However, a
standard and comprehensive application benchmark for CAVs is missing, hindering
the study of these emerging computing systems. To address this challenging
problem, we present CAVBench, the first benchmark suite for the edge computing
system in the CAVs scenario. CAVBench comprises six typical applications
covering four dominant CAV scenarios and takes four datasets as standard
input. CAVBench provides quantitative evaluation results via application and
system perspective output metrics. We perform a series of experiments and
acquire three systemic characteristics of the applications in CAVBench. First,
the operation intensity of the applications is polarized, which explains why
heterogeneous hardware is important for a CAVs computing system. Second, all
applications in CAVBench consume high memory bandwidth, so the system should be
equipped with high bandwidth memory or leverage good memory bandwidth
management to avoid the performance degradation caused by memory bandwidth
competition. Third, some applications have worse data/instruction locality
based on the cache miss observation, so the computing system targeting these
applications should optimize the cache architecture. Last, we use CAVBench to
evaluate a typical edge computing platform and present a quantitative and
qualitative analysis of the benchmarking results.
Comment: 13 pages, The Third ACM/IEEE Symposium on Edge Computing 2018 SE
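The "operational intensity" behind the first finding is FLOPs per byte of memory traffic; comparing it against a platform's machine balance (peak FLOP/s over peak bandwidth) says whether a kernel is memory-bound or compute-bound, in roofline-model style. The numbers below are illustrative, not CAVBench measurements.

```python
def operational_intensity(flops, bytes_moved):
    """FLOPs per byte of memory traffic: low values indicate a
    memory-bound kernel, high values a compute-bound one."""
    return flops / bytes_moved

def is_compute_bound(flops, bytes_moved, machine_balance):
    """Roofline-style check: compute-bound iff the kernel's
    operational intensity exceeds the machine balance
    (peak FLOP/s divided by peak memory bandwidth)."""
    return operational_intensity(flops, bytes_moved) > machine_balance

# Hypothetical kernels on a platform with a balance of 10 FLOP/byte:
print(is_compute_bound(1e9, 4e9, 10.0))   # False: 0.25 FLOP/byte, memory-bound
print(is_compute_bound(1e12, 1e9, 10.0))  # True: 1000 FLOP/byte, compute-bound
```

A polarized workload mix, with kernels far below and far above the balance point, is exactly the case where pairing a CPU with accelerators pays off, which is the paper's argument for heterogeneous hardware.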
Exploiting Errors for Efficiency: A Survey from Circuits to Algorithms
When a computational task tolerates a relaxation of its specification or when
an algorithm tolerates the effects of noise in its execution, hardware,
programming languages, and system software can trade deviations from correct
behavior for lower resource usage. We present, for the first time, a synthesis
of research results on computing systems that only make as many errors as their
users can tolerate, from across the disciplines of computer aided design of
circuits, digital system design, computer architecture, programming languages,
operating systems, and information theory.
Rather than over-provisioning resources at each layer to avoid errors, it can
be more efficient to exploit the masking of errors occurring at one layer,
which can prevent them from propagating to a higher layer. We survey tradeoffs for
individual layers of computing systems from the circuit level to the operating
system level and illustrate the potential benefits of end-to-end approaches
using two illustrative examples. To tie together the survey, we present a
consistent formalization of terminology, across the layers, which does not
significantly deviate from the terminology traditionally used by research
communities in their layer of focus.
Comment: 35 pages
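A concrete instance of trading deviations from correct behavior for lower resource usage is uniform quantization: representing values with fewer bits introduces an error bounded by half a quantization step. The sketch below is a generic illustration of that tradeoff, not a technique from any specific system the survey covers.

```python
import random

def quantize(x, bits, lo=-1.0, hi=1.0):
    """Uniformly quantize x in [lo, hi] onto 2**bits levels; the
    deviation from the exact value is bounded by half a step."""
    step = (hi - lo) / (2 ** bits - 1)
    return lo + round((x - lo) / step) * step

# Empirically confirm the error bound for a 4-bit representation
# (bit width and sample count are illustrative).
random.seed(0)
bits = 4
step = 2.0 / (2 ** bits - 1)
worst = max(abs(x - quantize(x, bits))
            for x in (random.uniform(-1, 1) for _ in range(10_000)))
print(worst <= step / 2 + 1e-12)  # True: bounded error bought 4-bit storage
```

Halving the bit width roughly halves storage and memory traffic while doubling the worst-case step, which is the per-layer tradeoff curve the survey formalizes across circuits, architecture, and software.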