8,852 research outputs found
3D Scanning System for Automatic High-Resolution Plant Phenotyping
Thin leaves, fine stems, self-occlusion, non-rigid and slowly changing
structures make plants difficult for three-dimensional (3D) scanning and
reconstruction -- two critical steps in automated visual phenotyping. Many
current solutions such as laser scanning, structured light, and multiview
stereo can struggle to acquire usable 3D models because of limitations in
scanning resolution and calibration accuracy. In response, we have developed a
fast, low-cost, 3D scanning platform to image plants on a rotating stage with
two tilting DSLR cameras centred on the plant. This uses new methods of camera
calibration and background removal to achieve high-accuracy 3D reconstruction.
We assessed the system's accuracy using a 3D visual hull reconstruction
algorithm applied to 2 plastic models of dicotyledonous plants, 2 sorghum
plants and 2 wheat plants across different sets of tilt angles. Scan times
ranged from 3 minutes (to capture 72 images using 2 tilt angles), to 30 minutes
(to capture 360 images using 10 tilt angles). The leaf lengths, widths, areas
and perimeters of the plastic models were measured manually and compared to
measurements from the scanning system: results were within 3-4% of each other.
The 3D reconstructions obtained with the scanning system show excellent
geometric agreement with all six plant specimens, even plants with thin leaves
and fine stems.
Comment: 8 pages, DICTA 201
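The visual hull reconstruction mentioned in this abstract can be sketched as silhouette-based voxel carving: a voxel survives only if it projects into the foreground of every silhouette. This is a minimal illustration under assumed names (`project`, the grid layout, and the calling convention are not taken from the paper):

```python
import numpy as np

def visual_hull(silhouettes, project, grid):
    """Carve a voxel grid: keep voxels whose projection lands inside
    every binary silhouette. `project(i, pts)` maps an (N, 3) array of
    world points to (N, 2) pixel coordinates for view i; all names
    here are illustrative, not the paper's API."""
    keep = np.ones(len(grid), dtype=bool)
    for i, sil in enumerate(silhouettes):
        uv = project(i, grid).round().astype(int)
        h, w = sil.shape
        # Voxels projecting outside the image cannot be verified; drop them.
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        keep &= inside
        # Keep only voxels whose pixel is silhouette foreground.
        keep[inside] &= sil[uv[inside, 1], uv[inside, 0]]
    return grid[keep]
```

With a turntable setup like the one described, `project` would come from the calibrated camera poses at each rotation and tilt angle.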
A mask-based approach for the geometric calibration of thermal-infrared cameras
Accurate and efficient thermal-infrared (IR) camera calibration is important for advancing computer vision research within the thermal modality. This paper presents an approach for geometrically calibrating individual and multiple cameras in both the thermal and visible modalities. The proposed technique can be used to correct for lens distortion and to simultaneously reference both visible and thermal-IR cameras to a single coordinate frame. The most popular existing approach for the geometric calibration of thermal cameras uses a printed chessboard heated by a flood lamp and is comparatively inaccurate and difficult to execute. Additionally, software toolkits provided for calibration are either unsuitable for this task or require substantial manual intervention. A new geometric mask with high thermal contrast and not requiring a flood lamp is presented as an alternative calibration pattern. Calibration points on the pattern are then accurately located using a clustering-based algorithm which utilizes the maximally stable extremal region detector. This algorithm is integrated into an automatic end-to-end system for calibrating single or multiple cameras. The evaluation shows that using the proposed mask achieves a mean reprojection error up to 78% lower than that using a heated chessboard. The effectiveness of the approach is further demonstrated by using it to calibrate two multiple-camera multiple-modality setups. Source code and binaries for the developed software are provided on the project Web site.
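The clustering step described above (grouping detected extremal-region centroids into one calibration point per mask feature) might look roughly like the following sketch. The greedy merge, the `radius` parameter, and all names are assumptions for illustration; the paper's actual algorithm is not reproduced here:

```python
import numpy as np

def cluster_centroids(centroids, radius=5.0):
    """Greedy clustering of detected region centroids: centroids closer
    than `radius` pixels to a seed are merged and averaged, yielding one
    calibration point per cluster. A stand-in for the paper's
    clustering step, with an assumed pixel-distance threshold."""
    centroids = np.asarray(centroids, dtype=float)
    used = np.zeros(len(centroids), dtype=bool)
    points = []
    for i in range(len(centroids)):
        if used[i]:
            continue
        # Distances from the current seed to every remaining centroid.
        d = np.linalg.norm(centroids - centroids[i], axis=1)
        members = (d < radius) & ~used
        used |= members
        points.append(centroids[members].mean(axis=0))
    return np.array(points)
```

In practice the input centroids would come from an MSER detector run on the thermal image of the mask.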
Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard
This paper presents a novel method for fully automatic and convenient
extrinsic calibration of a 3D LiDAR and a panoramic camera with a normally
printed chessboard. The proposed method is based on the 3D corner estimation of
the chessboard from the sparse point cloud generated by one frame scan of the
LiDAR. To estimate the corners, we formulate a full-scale model of the
chessboard and fit it to the segmented 3D points of the chessboard. The model
is fitted by optimizing the cost function under constraints of correlation
between the reflectance intensity of laser and the color of the chessboard's
patterns. Powell's method is introduced for resolving the discontinuity problem
in optimization. The corners of the fitted model are considered as the 3D
corners of the chessboard. Once the corners of the chessboard in the 3D point
cloud are estimated, the extrinsic calibration of the two sensors is converted
to a 3D-2D matching problem. The corresponding 3D-2D points are used to
calculate the absolute pose of the two sensors with Unified Perspective-n-Point
(UPnP). Further, the calculated parameters are regarded as initial values and
are refined using the Levenberg-Marquardt method. The performance of the
proposed corner detection method from the 3D point cloud is evaluated using
simulations. The results of experiments, conducted on a Velodyne HDL-32e LiDAR
and a Ladybug3 camera under the proposed re-projection error metric,
qualitatively and quantitatively demonstrate the accuracy and stability of the
final extrinsic calibration parameters.
Comment: 20 pages, submitted to the journal Remote Sensing
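The final stage described above (an initial pose from PnP, refined by Levenberg-Marquardt on reprojection error) can be sketched as below. For illustration this uses a simple pinhole projection, whereas the paper works with a panoramic camera; `refine_extrinsics` and its parameters are assumed names, not the authors' code:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_extrinsics(pts3d, pts2d, K, rvec0, t0):
    """Levenberg-Marquardt refinement of an initial extrinsic estimate
    (e.g. from UPnP). Minimizes pinhole reprojection error over a
    6-DoF pose parameterized as rotation vector + translation."""
    def residual(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        # Project the 3D points with the candidate pose.
        p = (K @ (R @ pts3d.T + x[3:, None])).T
        return (p[:, :2] / p[:, 2:3] - pts2d).ravel()
    x0 = np.concatenate([rvec0, t0])
    sol = least_squares(residual, x0, method='lm')  # Levenberg-Marquardt
    return sol.x[:3], sol.x[3:]
```

The 3D points would be the estimated chessboard corners in the LiDAR frame and the 2D points their detected image locations.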
A non-invasive technique for burn area measurement
The need for a reliable and accurate method for assessing the surface area of burn wounds currently exists in the branch of medicine involved with burn care and treatment. The percentage of the body surface area burned is of critical importance in evaluating fluid replacement amounts and nutritional support during the first 24 hours of post-burn therapy. A non-invasive technique has been developed which facilitates the measurement of burn area. The method we shall describe is an inexpensive technique to measure burn areas accurately.
Our imaging system is based on a technique known as structured light. Most structured light computer imaging systems, including ours, use triangulation to determine the location of points in three dimensions as the intersection of two lines: a ray of light originating from the structured light projector and the line of sight determined by the location of the image point in the camera plane. The geometry used to determine 3D location by triangulation is identical to the geometry of other stereo-based vision systems, including the human vision system.
Our system projects a square grid pattern from a 35 mm slide onto the patient. The grid on the slide is composed of uniformly spaced orthogonal stripes which may be indexed by row and column. Each slide also has square markers placed in between the lines of the grid in both the horizontal and vertical directions in the center of the slide. Our system locates intersections of the projected grid stripes in the camera image and determines the 3D location of the corresponding points on the body by triangulation.
Four steps are necessary in order to reconstruct the 3D locations of points on the surface of the skin: camera and projector calibration; image processing to locate the grid intersections in the camera image; grid labeling to establish the correspondence between projected and imaged intersections; and triangulation to determine three-dimensional position. Three steps are required to segment the burned portion of the image: edge detection to find the strongest edges of the region; edge following to form a closed boundary; and region filling to identify the burn region. After combining the reconstructed 3D locations and the segmented image, numerical analysis and geometric modeling techniques are used to calculate the burn area. We use cubic spline interpolation, bicubic surface patches and Gaussian quadrature double integration to calculate the burn wound area.
The accuracy of this technique is demonstrated. The benefits and advantages of this technique are, first, that we don’t have to make any assumptions about the shape of the human body and, second, that there is no need for either the Rule-of-Nines or the weight and height of the patient. This technique can be used for any human body shape, regardless of weight, proportion, size, sex or skin pigmentation.
The low cost, intuitive method, and demonstrated efficiency of this computer imaging technique make it a desirable alternative to current methods and provide the burn care specialist with a sterile, safe, and effective diagnostic tool for assessing and investigating burn areas.
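The triangulation step at the core of this structured-light system, intersecting a camera ray with a projector ray, can be sketched as follows. Since two measured rays rarely intersect exactly under noise, a common choice (assumed here, not stated in the abstract) is the midpoint of the shortest segment connecting the two lines:

```python
import numpy as np

def triangulate(cam_origin, cam_dir, proj_origin, proj_dir):
    """Triangulate one grid intersection as the midpoint of the
    shortest segment between the camera ray and the projector ray.
    All quantities are in a common calibrated world frame; names are
    illustrative."""
    d1 = cam_dir / np.linalg.norm(cam_dir)
    d2 = proj_dir / np.linalg.norm(proj_dir)
    w = cam_origin - proj_origin
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # ~0 only if the rays are parallel
    s = (b * e - c * d) / denom    # parameter along the camera ray
    t = (a * e - b * d) / denom    # parameter along the projector ray
    p1 = cam_origin + s * d1
    p2 = proj_origin + t * d2
    return (p1 + p2) / 2
```

Calibration supplies the two origins; the two directions come from the imaged and projected grid intersection coordinates.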
Camera distortion self-calibration using the plumb-line constraint and minimal Hough entropy
In this paper we present a simple and robust method for self-correction of
camera distortion using single images of scenes which contain straight lines.
Since the most common distortion can be modelled as radial distortion, we
illustrate the method using the Harris radial distortion model, but the method
is applicable to any distortion model. The method is based on transforming the
edgels of the distorted image to a 1-D angular Hough space, and optimizing the
distortion correction parameters which minimize the entropy of the
corresponding normalized histogram. Properly corrected imagery will have fewer
curved lines, and therefore less spread in Hough space. Since the method does
not rely on any image structure beyond the existence of edgels sharing some
common orientations and does not use edge fitting, it is applicable to a wide
variety of image types. For instance, it can be applied equally well to images
of texture with weak but dominant orientations, or images with strong vanishing
points. Finally, the method is performed on both synthetic and real data
revealing that it is particularly robust to noise.
Comment: 9 pages, 5 figures. Corrected errors in equation 1
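The entropy-minimization idea in this abstract can be sketched as follows, assuming the Harris radial model x_u = x_d / sqrt(1 + k r^2) with coordinates centred on the image, edgels grouped into ordered chains, and a simple 1-D grid search standing in for the paper's optimizer:

```python
import numpy as np

def hough_entropy(chains, k, bins=64):
    """Entropy of the 1-D orientation histogram of edgel chains after
    undistorting them with the Harris model x_u = x_d / sqrt(1 + k*r^2)
    (image centre at the origin). Straighter lines concentrate the
    histogram, so lower entropy means better correction."""
    angles = []
    for pts in chains:                      # each chain: (N, 2) ordered edgels
        r2 = (pts ** 2).sum(axis=1, keepdims=True)
        und = pts / np.sqrt(1.0 + k * r2)   # undistorted coordinates
        seg = np.diff(und, axis=0)          # local edge segments
        angles.append(np.arctan2(seg[:, 1], seg[:, 0]) % np.pi)
    hist, _ = np.histogram(np.concatenate(angles), bins=bins, range=(0, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def correct_k(chains, ks):
    """Pick the distortion parameter minimizing Hough-space entropy."""
    return min(ks, key=lambda k: hough_entropy(chains, k))
```

The method applies equally to other distortion models by swapping the undistortion formula.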
Application for photogrammetry of organisms
Single-camera photogrammetry is a well-established procedure to retrieve quantitative
information from objects using photography. In biological sciences, photogrammetry is
often applied to aid in morphometry studies, focusing on the comparative study of shapes
and organisms. Two types of photogrammetry are used in morphometric studies: 2D
photogrammetry, where distance and angle measurements are used to quantitatively
describe attributes of an object, and 3D photogrammetry, where data on landmark
coordinates are used to reconstruct an object's true shape. Although there are excellent
software tools for 3D photogrammetry available, software specifically designed to aid in
the somewhat simpler 2D photogrammetry is lacking. Therefore, most studies applying
2D photogrammetry still rely on manual acquisition of measurements from pictures, which
must then be scaled to an appropriate measuring system. This is often a laborious multi-step process, in most cases utilizing diverse software to complete different tasks. In
addition to being time-consuming, it is also error-prone since measurement recording is
often done manually. The present work aimed to tackle these issues by implementing
a new cross-platform software able to integrate and streamline the photogrammetry
workflow usually applied in 2D photogrammetry studies. Results from a preliminary
study show a decrease of 45% in processing time when using the software developed in
the scope of this work in comparison with a competing methodology. Existing limitations
and future work towards improved versions of the software are discussed.
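The manual scale-then-measure workflow that such software streamlines reduces, in the 2D case, to two small operations: derive a scale from a reference object of known length, then apply it to pixel distances. A minimal sketch with assumed names:

```python
import numpy as np

def pixel_scale(ref_p1, ref_p2, ref_length_mm):
    """mm-per-pixel scale from a reference object of known length
    photographed in the same plane as the specimen."""
    px = np.linalg.norm(np.asarray(ref_p2, float) - np.asarray(ref_p1, float))
    return ref_length_mm / px

def measure(p1, p2, scale):
    """Scaled point-to-point distance: the basic 2D photogrammetry
    measurement."""
    return scale * np.linalg.norm(np.asarray(p2, float) - np.asarray(p1, float))
```

Angle measurements follow the same pattern but need no scaling, since angles are invariant to uniform scale.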