Multi-camera complexity assessment system for assembly line work stations
In recent years, the market has demanded an increasing number of product variants, leading to an inevitable rise in the complexity of manufacturing systems. A model to quantify the complexity of a workstation has been developed, but part of the analysis is done manually. To that end, this paper presents the results of an industrial proof-of-concept that tested the possibility of automating the complexity analysis using multi-camera video images.
Cell-based approach for 3D reconstruction from incomplete silhouettes
Shape-from-silhouettes is a widely adopted approach to computing accurate 3D reconstructions of people or objects in a multi-camera environment. However, such algorithms are traditionally very sensitive to errors in the silhouettes caused by imperfect foreground-background estimation or by occluding objects appearing in front of the object of interest. We propose a novel algorithm that can still provide high-quality reconstructions from incomplete silhouettes. At the core of the method is the partitioning of the reconstruction space into cells, i.e. regions with uniform camera and silhouette coverage properties. A set of rules is proposed to iteratively add cells to the reconstruction based on their potential to explain discrepancies between silhouettes in different cameras. Experimental analysis shows significantly improved F1-scores over standard leave-M-out reconstruction techniques.
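The leave-M-out baseline that this abstract compares against can be sketched as simple voxel carving: a voxel is kept when its projection falls inside the foreground silhouette in enough cameras. The function names, the `project` callback, and the voting threshold below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def visual_hull(silhouettes, project, grid, min_views=None):
    """Voxel-carving sketch of shape-from-silhouettes.

    silhouettes : list of 2D boolean foreground masks, one per camera
    project     : project(cam_idx, voxel) -> (row, col) pixel, or None
                  when the voxel falls outside that camera's image
    grid        : iterable of voxel coordinates
    min_views   : required foreground votes. len(silhouettes) gives the
                  classic visual hull; len(silhouettes) - M gives the
                  leave-M-out variant, which tolerates up to M corrupted
                  silhouettes per voxel.
    """
    k = len(silhouettes)
    if min_views is None:
        min_views = k
    kept = []
    for v in grid:
        votes = 0
        for i, sil in enumerate(silhouettes):
            px = project(i, v)
            if px is not None and sil[px]:
                votes += 1
        if votes >= min_views:
            kept.append(v)
    return kept
```

Lowering `min_views` recovers voxels lost to silhouette holes, at the cost of admitting phantom volume; the cell-based method above aims to resolve that trade-off more selectively than a global threshold.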
Capturing natural-colour 3D models of insects for species discovery
Collections of biological specimens are fundamental to scientific
understanding and characterization of natural diversity. This paper presents a
system for liberating useful information from physical collections by bringing
specimens into the digital domain so they can be more readily shared, analyzed,
annotated and compared. It focuses on insects and is strongly motivated by the
desire to accelerate and augment current practices in insect taxonomy which
predominantly use text, 2D diagrams and images to describe and characterize
species. While these traditional kinds of descriptions are informative and
useful, they cannot cover insect specimens "from all angles" and precious
specimens are still exchanged between researchers and collections for this
reason. Furthermore, insects can be complex in structure and pose many
challenges to computer vision systems. We present a new prototype for a
practical, cost-effective system of off-the-shelf components to acquire
natural-colour 3D models of insects from around 3mm to 30mm in length. Colour
images are captured from different angles and focal depths using a digital
single lens reflex (DSLR) camera rig and two-axis turntable. These 2D images
are processed into 3D reconstructions using software based on a visual hull
algorithm. The resulting models are compact (around 10 megabytes), afford
excellent optical resolution, and can be readily embedded into documents and
web pages, as well as viewed on mobile devices. The system is portable, safe,
relatively affordable, and complements the sort of volumetric data that can be
acquired by computed tomography. This system provides a new way to augment the
description and documentation of insect species holotypes, reducing the need to
handle or ship specimens. It opens up new opportunities to collect data for
research, education, art, entertainment, biodiversity assessment and
biosecurity control.
Comment: 24 pages, 17 figures, PLOS ONE journal
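Because each viewing angle is captured at multiple focal depths, the 2D inputs to the reconstruction are typically all-in-focus composites. A minimal sketch of that focus-stacking step, assuming a per-pixel Laplacian sharpness measure (the actual software in the paper may use a different merge rule):

```python
import numpy as np

def focus_stack(images):
    """Merge a focal sweep into one all-in-focus image by keeping, per
    pixel, the value from the frame with the highest local sharpness
    (discrete Laplacian magnitude). Edges wrap around via np.roll,
    which is acceptable for this illustration."""
    stack = np.stack(images).astype(float)          # (n, h, w)
    lap = np.abs(
        -4 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)                   # sharpest frame index per pixel
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)]
```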
SilNet : Single- and Multi-View Reconstruction by Learning from Silhouettes
The objective of this paper is 3D shape understanding from single and
multiple images. To this end, we introduce a new deep-learning architecture and
loss function, SilNet, that can handle multiple views in an order-agnostic
manner. The architecture is fully convolutional, and for training we use a
proxy task of silhouette prediction, rather than directly learning a mapping
from 2D images to 3D shape as has been the target in most recent work.
We demonstrate that with the SilNet architecture there is generalisation over
the number of views -- for example, SilNet trained on 2 views can be used with
3 or 4 views at test-time; and performance improves with more views.
We introduce two new synthetic datasets: a blobby object dataset useful for
pre-training, and a challenging and realistic sculpture dataset; and
demonstrate on these datasets that SilNet has indeed learnt 3D shape. Finally,
we show that SilNet exceeds the state of the art on the ShapeNet benchmark
dataset, and use SilNet to generate novel views of the sculpture dataset.
Comment: BMVC 2017; Best Poster
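The property that lets SilNet handle a variable number of views is that per-view features are fused with a symmetric (permutation-invariant) operation. A toy numpy sketch of that idea, with a linear-plus-ReLU encoder standing in for SilNet's convolutional encoder and element-wise max standing in for its fusion (both are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(view, w):
    """Toy per-view encoder: one linear layer + ReLU. A stand-in for a
    convolutional encoder; `w` is an arbitrary weight matrix."""
    return np.maximum(w @ view, 0.0)

def fuse(views, w):
    """Order-agnostic fusion: element-wise max over per-view codes.
    The fused code is invariant to view order and defined for any
    number of views, so a net trained on 2 views can take 3 or 4."""
    codes = [encode(v, w) for v in views]
    return np.max(codes, axis=0)

w = rng.standard_normal((8, 5))
views = [rng.standard_normal(5) for _ in range(3)]
assert np.allclose(fuse(views, w), fuse(views[::-1], w))  # order-agnostic
assert fuse(views[:2], w).shape == fuse(views, w).shape   # any view count
```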
Camera Calibration from Dynamic Silhouettes Using Motion Barcodes
Computing the epipolar geometry between cameras with very different
viewpoints is often problematic as matching points are hard to find. In these
cases, it has been proposed to use information from dynamic objects in the
scene for suggesting point and line correspondences.
We propose a speed up of about two orders of magnitude, as well as an
increase in robustness and accuracy, to methods computing epipolar geometry
from dynamic silhouettes. This improvement is based on a new temporal
signature: motion barcode for lines. Motion barcode is a binary temporal
sequence for lines, indicating for each frame the existence of at least one
foreground pixel on that line. The motion barcodes of two corresponding
epipolar lines are very similar, so the search for corresponding epipolar lines
can be limited only to lines having similar barcodes. The use of motion
barcodes leads to increased speed, accuracy, and robustness in computing the
epipolar geometry.
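The motion barcode defined above is straightforward to compute from a sequence of foreground masks. A minimal sketch, with normalised correlation assumed as the barcode-similarity measure used to prune candidate epipolar line pairs (the paper's exact scoring may differ):

```python
import numpy as np

def motion_barcode(frames, line_pixels):
    """Motion barcode of a line: one bit per frame, set when at least
    one foreground pixel lies on the line.

    frames      : list of 2D boolean foreground masks
    line_pixels : list of (row, col) pixels along the line
    """
    return np.array([any(f[r, c] for r, c in line_pixels)
                     for f in frames], dtype=int)

def barcode_similarity(a, b):
    """Normalised correlation between two barcodes; corresponding
    epipolar lines should score close to 1, so the search can be
    restricted to pairs with similar barcodes."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```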
A model-based approach to recovering the structure of a plant from images
We present a method for recovering the structure of a plant directly from a
small set of widely-spaced images. Structure recovery is more complex than
shape estimation, but the resulting structure estimate is more closely related
to phenotype than is a 3D geometric model. The method we propose is applicable
to a wide variety of plants, but is demonstrated on wheat. Wheat is made up of
thin elements with few identifiable features, making it difficult to analyse
using standard feature matching techniques. Our method instead analyses the
structure of plants using only their silhouettes. We employ a generate-and-test
method, using a database of manually modelled leaves and a model for their
composition to synthesise plausible plant structures which are evaluated
against the images. The method is capable of efficiently recovering accurate
estimates of plant structure in a wide variety of imaging scenarios, with no
manual intervention.
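The generate-and-test loop can be sketched as rendering each candidate structure into every view and scoring its silhouette overlap against the images. The IoU score and the `candidates`/`render_fn` abstractions below are illustrative assumptions standing in for the paper's leaf database and composition model:

```python
import numpy as np

def silhouette_score(render, observed):
    """Intersection-over-union between a rendered candidate silhouette
    and the observed image silhouette (both 2D boolean masks)."""
    inter = np.logical_and(render, observed).sum()
    union = np.logical_or(render, observed).sum()
    return inter / union if union else 1.0

def generate_and_test(candidates, render_fn, observed_views):
    """Generate-and-test: keep the candidate structure whose rendered
    silhouettes best match the observed views, averaged over views.

    render_fn(candidate, view_idx) -> 2D boolean silhouette mask
    """
    def score(c):
        return np.mean([silhouette_score(render_fn(c, v), obs)
                        for v, obs in enumerate(observed_views)])
    return max(candidates, key=score)
```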