Shapes-from-silhouettes based 3D reconstruction for athlete evaluation during exercising
Shape-from-silhouettes is a powerful technique for creating a 3D reconstruction of an object using a limited number of cameras that all face an overlapping area. Synchronously captured video frames make 3D reconstruction possible on a frame-by-frame basis, so movements can be watched in 3D. The resulting 3D model can be viewed from any direction and therefore provides a lot of information for both athletes and coaches.
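The core of shape-from-silhouettes is visual-hull carving: a voxel is kept only if it projects inside the silhouette seen by every camera. A minimal sketch, assuming idealized orthographic cameras aligned with the grid axes (real rigs use calibrated projection matrices):

```python
import numpy as np

def visual_hull(silhouettes, axes, n):
    """Keep voxels whose orthographic projection along each camera's
    viewing axis falls inside that camera's silhouette mask."""
    keep = np.ones((n, n, n), dtype=bool)
    idx = np.indices((n, n, n))
    for sil, ax in zip(silhouettes, axes):
        # Drop the coordinate along the viewing axis -> 2D pixel coords.
        uv = [idx[a] for a in range(3) if a != ax]
        keep &= sil[uv[0], uv[1]]
    return keep

# Synthetic object: a sphere; its orthographic silhouettes are disks.
n = 32
zz, yy, xx = np.meshgrid(*([np.arange(n)] * 3), indexing="ij")
c, r = n / 2 - 0.5, n / 3
sphere = (xx - c) ** 2 + (yy - c) ** 2 + (zz - c) ** 2 <= r ** 2
sils = [sphere.any(axis=a) for a in range(3)]  # project along each axis
hull = visual_hull(sils, axes=[0, 1, 2], n=n)
```

By construction the hull always contains the true object; adding more viewpoints tightens it toward the object's shape, which is why even a limited number of overlapping cameras suffices.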
One-shot 3d surface reconstruction from instantaneous frequencies: solutions to ambiguity problems
Phase-measuring profilometry is a well-known technique for 3D surface reconstruction based on a sinusoidal pattern projected onto a scene. If the surface is partly occluded by, for instance, other objects, the depth shows abrupt transitions at the edges of these occlusions. This causes ambiguities in the phase and, consequently, also in the reconstruction.
This paper introduces a reconstruction method based on the instantaneous frequency instead of the phase. Using these instantaneous frequencies, we present a method to recover from ambiguities caused by occlusion. The recovery works under the condition that some planar surface patches can be found. This ability is demonstrated in a simple example.
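The instantaneous frequency of a fringe signal is the derivative of its unwrapped phase, commonly obtained from the analytic signal. A minimal 1D sketch of that estimation step (the synthetic depth modulation is hypothetical, and this does not implement the paper's ambiguity recovery):

```python
import numpy as np

def instantaneous_frequency(signal):
    """Instantaneous frequency (cycles/sample) of a real fringe signal,
    via the analytic signal (FFT-based Hilbert transform)."""
    n = len(signal)
    spec = np.fft.fft(signal)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(spec * h)
    phase = np.unwrap(np.angle(analytic))
    return np.gradient(phase) / (2 * np.pi)

# Synthetic fringe: 0.05 cycles/sample carrier, slowly modulated by a
# hypothetical depth term; phase is the integral of the frequency.
x = np.arange(2000)
freq_true = 0.05 + 0.01 * np.sin(2 * np.pi * x / 2000)
sig = np.cos(2 * np.pi * np.cumsum(freq_true))
freq_est = instantaneous_frequency(sig)
```

Away from the signal borders the estimate tracks the true modulated frequency; an occlusion would show up as a jump in this frequency profile, which is what the paper exploits instead of the wrapped phase itself.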
Automated multimodal volume registration based on supervised 3D anatomical landmark detection
We propose a new method for automatic 3D multimodal registration based on anatomical landmark detection. Landmark detectors are learned independently in the two imaging modalities using Extremely Randomized Trees and multi-resolution voxel windows. A least-squares fitting algorithm is then used for rigid registration based on the landmark positions predicted by these detectors in the two modalities. Experiments are carried out on a dataset of pelvis CT and CBCT scans from 45 patients. On this dataset, our fully automatic approach yields results competitive with a manually assisted state-of-the-art rigid registration algorithm.
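Once corresponding landmark positions are available in both modalities, the least-squares rigid fit has a closed-form solution via SVD (the Kabsch/Procrustes method). A minimal sketch with synthetic landmarks standing in for the detector outputs:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping the src landmark
    set onto dst, via the SVD-based Kabsch method."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 covariance of landmarks
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Hypothetical landmark positions "detected" in modality A, and the same
# landmarks in modality B after a known rigid motion.
rng = np.random.default_rng(0)
pts = rng.normal(size=(8, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
moved = pts @ R_true.T + t_true
R, t = rigid_fit(pts, moved)
```

With noiseless correspondences the true motion is recovered exactly; with noisy detector predictions the same formula gives the least-squares optimum, which is why it pairs naturally with learned landmark detectors.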
Circle-based Eye Center Localization (CECL)
We propose an improved eye center localization method based on the Hough transform, called Circle-based Eye Center Localization (CECL), that is simple, robust, and achieves accuracy on a par with typically more complex state-of-the-art methods. The CECL method relies on color and shape cues that distinguish the iris from other facial structures. The accuracy of the CECL method is demonstrated through a comparison with 15 state-of-the-art eye center localization methods against five error thresholds, as reported in the literature. The CECL method achieved an accuracy of 80.8% to 99.4% and ranked first for 2 of the 5 thresholds. It is concluded that the CECL method offers an attractive alternative to existing methods for automatic eye center localization.

Comment: Published and presented at The 14th IAPR International Conference on Machine Vision Applications, 2015. http://www.mva-org.jp/mva2015
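A circle Hough transform of the kind CECL builds on lets each edge pixel vote for all candidate centers one radius away; the accumulator peak is the circle center. A minimal sketch on a synthetic iris-boundary edge map (the image and radii are illustrative, not the paper's pipeline):

```python
import numpy as np

def hough_circle_center(edges, radii):
    """Each edge pixel votes for candidate centers at distance r, for
    each candidate radius r; return the accumulator peak (row, col)."""
    h, w = edges.shape
    acc = np.zeros((h, w))
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0, 2 * np.pi, 72, endpoint=False)
    for r in radii:
        cy = np.rint(ys[:, None] + r * np.sin(thetas)).astype(int)
        cx = np.rint(xs[:, None] + r * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic "iris boundary": edge pixels on a circle at (40, 55), r = 12.
img = np.zeros((80, 100), dtype=bool)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
img[np.rint(40 + 12 * np.sin(t)).astype(int),
    np.rint(55 + 12 * np.cos(t)).astype(int)] = True
center = hough_circle_center(img, radii=range(10, 15))
```

Votes from all edge pixels coincide at the true center, so the peak is sharp even when the exact radius is unknown; in a full eye-center method such votes would be restricted using the color and shape cues the abstract mentions.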
Convolutional Patch Networks with Spatial Prior for Road Detection and Urban Scene Understanding
Classifying single image patches is important in many different applications, such as road detection or scene understanding. In this paper, we present convolutional patch networks, which are convolutional networks learned to distinguish different image patches and which can be used for pixel-wise labeling. We also show how to incorporate spatial information of the patch as an input to the network, which allows for learning spatial priors for certain categories jointly with an appearance model. In particular, we focus on road detection and urban scene understanding, two application areas where we achieve state-of-the-art results on the KITTI as well as on the LabelMeFacade dataset.

Furthermore, our paper offers a guideline for people working in the area and desperately wandering through all the painstaking details that render training CNs on image patches extremely difficult.

Comment: VISAPP 2015 paper
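One simple way to feed spatial information alongside a patch, in the spirit of the spatial prior described above, is to append the patch center's normalized image coordinates to the input features (this sketch uses a plain feature vector rather than the paper's network; the function and names are hypothetical):

```python
import numpy as np

def patch_features(image, patch, cy, cx):
    """Flattened patch pixels augmented with the patch center's
    normalized (x, y) image coordinates as extra inputs."""
    h, w = image.shape[:2]
    r = patch // 2
    pixels = image[cy - r:cy + r + 1, cx - r:cx + r + 1].ravel()
    spatial = np.array([cx / (w - 1), cy / (h - 1)])  # each in [0, 1]
    return np.concatenate([pixels, spatial])

img = np.arange(100.0).reshape(10, 10)
f = patch_features(img, patch=3, cy=5, cx=4)
# 9 pixel values + 2 normalized coordinates = 11 features
```

A classifier trained on such features can learn position-dependent label frequencies (e.g. road pixels clustering near the bottom of driving images) jointly with appearance, which is the intuition behind learning a spatial prior together with the appearance model.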