Structured Light-Based 3D Reconstruction System for Plants.
Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
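The leaf-detection scores quoted above follow the standard precision/recall definitions. A minimal sketch in Python; the raw detection counts used below are hypothetical, chosen only to illustrate counts consistent with scores of this magnitude:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from detection counts.

    tp: detections that match a real leaf (true positives)
    fp: detections with no matching leaf (false positives)
    fn: real leaves that were missed (false negatives)
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts: 97 leaves found, 12 spurious detections, 3 leaves missed.
p, r = precision_recall(tp=97, fp=12, fn=3)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.89, recall=0.97
```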
Where Should We Place LiDARs on the Autonomous Vehicle? - An Optimal Design Approach
Autonomous vehicle manufacturers recognize that LiDAR provides accurate 3D
views and precise distance measures under highly uncertain driving conditions.
Its practical implementation, however, remains costly. This paper investigates
the optimal LiDAR configuration problem to achieve utility maximization. We use
the perception area and non-detectable subspace to construct the design
procedure as solving a min-max optimization problem and propose a bio-inspired
measure -- volume to surface area ratio (VSR) -- as an easy-to-evaluate cost
function representing the notion of the size of the non-detectable subspaces of
a given configuration. We then adopt a cuboid-based approach to show that the
proposed VSR-based measure is a well-suited proxy for object detection rate. It
is found that the Artificial Bee Colony evolutionary algorithm yields a
tractable cost function computation. Our experiments highlight the
effectiveness of the proposed VSR measure for identifying cost-effective
configurations, as well as providing insightful analyses that can improve the
design of AV systems.
Comment: 7 pages including the references, accepted by International
Conference on Robotics and Automation (ICRA), 201
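The volume-to-surface-area ratio at the heart of this abstract is elementary to evaluate for the cuboids used in the paper's cuboid-based approach. A minimal sketch of the ratio itself, not the paper's full min-max design procedure over non-detectable subspaces:

```python
def cuboid_vsr(a, b, c):
    """Volume-to-surface-area ratio (VSR) of an a x b x c cuboid.

    In a cuboid-based approximation, each non-detectable subspace is
    represented by boxes like this one; the VSR condenses how bulky a
    blind region is into a single, easy-to-evaluate number.
    """
    volume = a * b * c
    surface_area = 2 * (a * b + b * c + c * a)
    return volume / surface_area

# A unit cube has VSR 1/6; flattening it into a thin slab shrinks the VSR,
# reflecting that thin blind regions occlude less volume per unit of boundary.
vsr_cube = cuboid_vsr(1.0, 1.0, 1.0)   # 1/6
vsr_slab = cuboid_vsr(1.0, 1.0, 0.1)
```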
Time-of-flight imaging of invisibility cloaks
As invisibility cloaking has recently become experimental reality, it is
interesting to explore ways to reveal remaining imperfections. In essence, the
idea of most invisibility cloaks is to recover the optical path lengths without
an object (to be made invisible) by a suitable arrangement around that object.
Optical path length is proportional to the time of flight of a light ray or to
the optical phase accumulated by a light wave. Thus, time-of-flight images
provide a direct and intuitive tool for probing imperfections. Indeed, recent
phase-sensitive experiments on the carpet cloak have already made early steps
in this direction. In the macroscopic world, time-of-flight images could be
measured directly by light detection and ranging (LIDAR). Here, we show
calculated time-of-flight images of the conformal Gaussian carpet cloak, the
conformal grating cloak, the cylindrical free-space cloak, and of the invisible
sphere. All results are obtained by using a ray-velocity equation of motion
derived from Fermat's principle.
Comment: 11 pages, 6 figures, journal paper
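The relation the abstract relies on is that time of flight equals optical path length divided by c, i.e. t = (1/c) ∫ n ds along the ray. A minimal numerical sketch for a ray path given as a polyline (the ray tracing through a cloak's index profile is not reproduced here):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def time_of_flight(points, refractive_index):
    """Time of flight along a polyline ray path.

    points: list of (x, y) vertices of the ray path, in metres.
    refractive_index: function (x, y) -> n, sampled at segment midpoints.
    Optical path length is the sum of n_i * ds_i over segments; t = OPL / c.
    """
    opl = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        ds = math.hypot(x1 - x0, y1 - y0)
        n = refractive_index((x0 + x1) / 2.0, (y0 + y1) / 2.0)
        opl += n * ds
    return opl / C

# A 1 m straight path in vacuum (n = 1) takes exactly 1/c seconds.
t = time_of_flight([(0.0, 0.0), (1.0, 0.0)], lambda x, y: 1.0)
```

A cloak is perfect exactly when every such time-of-flight value matches the empty-scene value; residual differences are the imperfections the images reveal.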
Virtual viewpoint three-dimensional panorama
Conventional panoramic images are known to provide an enhanced field of view in which the scene
always has a fixed appearance. The idea presented in this paper focuses on using virtual
viewpoint creation to generate different panoramic images of the same scene with a three-dimensional
component. The three-dimensional effect in the resultant panorama is realized by superimposing a stereo pair of
panoramic images.
Reactive direction control for a mobile robot: A locust-like control of escape direction emerges when a bilateral pair of model locust visual neurons are integrated
Locusts possess a bilateral pair of uniquely identifiable visual neurons that respond vigorously to
the image of an approaching object. These neurons are called the lobula giant movement
detectors (LGMDs). The locust LGMDs have been extensively studied, and this has led to the
development of an LGMD model for use as an artificial collision detector in robotic applications.
To date, robots have been equipped with only a single, central artificial LGMD sensor, and this
triggers a non-directional stop or rotation when a potentially colliding object is detected. Clearly,
for a robot to behave autonomously, it must react differently to stimuli approaching from
different directions. In this study, we implement a bilateral pair of LGMD models in Khepera
robots equipped with normal and panoramic cameras. We integrate the responses of these LGMD
models using methodologies inspired by research on escape direction control in cockroaches.
Using ‘randomised winner-take-all’ or ‘steering wheel’ algorithms for LGMD model integration,
the Khepera robots could escape an approaching threat in real time, with a
distribution of escape directions similar to that of real locusts. We also found that by optimising these
algorithms, we could use them to integrate the left and right DCMD responses of real jumping
locusts offline and reproduce the actual escape directions that the locusts took in a particular
trial. Our results significantly advance the development of an artificial collision detection and
evasion system based on the locust LGMD by giving it reactive control over robot behaviour.
The success of this approach may also indicate some important areas to be pursued in future
biological research.
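A ‘randomised winner-take-all’ over a bilateral pair of LGMD outputs can be sketched as follows. The exact weighting used in the study is not given in the abstract, so the proportional rule below is an illustrative assumption:

```python
import random

def escape_direction(left_lgmd, right_lgmd, rng=random):
    """Randomised winner-take-all over a bilateral pair of LGMD responses.

    A looming threat excites the LGMD on its own side more strongly, and
    the robot should tend to escape toward the opposite side. The choice
    is stochastic, weighted by relative excitation, which spreads escape
    directions rather than producing a single deterministic turn.
    (Proportional weighting is an assumption for illustration.)
    """
    total = left_lgmd + right_lgmd
    if total == 0:
        return rng.choice(["left", "right"])  # no threat signal: pick at random
    # Escape away from the more strongly stimulated side.
    p_escape_left = right_lgmd / total
    return "left" if rng.random() < p_escape_left else "right"

# A strong looming stimulus on the right usually drives a leftward escape,
# but not always -- mimicking the spread of real locust escape directions.
rng = random.Random(0)
choices = [escape_direction(0.1, 0.9, rng) for _ in range(1000)]
```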
3D Scanning System for Automatic High-Resolution Plant Phenotyping
Thin leaves, fine stems, self-occlusion, and non-rigid, slowly changing
structures make plants difficult subjects for three-dimensional (3D) scanning and
reconstruction -- two critical steps in automated visual phenotyping. Many
current solutions such as laser scanning, structured light, and multiview
stereo can struggle to acquire usable 3D models because of limitations in
scanning resolution and calibration accuracy. In response, we have developed a
fast, low-cost, 3D scanning platform to image plants on a rotating stage with
two tilting DSLR cameras centred on the plant. This uses new methods of camera
calibration and background removal to achieve high-accuracy 3D reconstruction.
We assessed the system's accuracy using a 3D visual hull reconstruction
algorithm applied to 2 plastic models of dicotyledonous plants, 2 sorghum
plants and 2 wheat plants across different sets of tilt angles. Scan times
ranged from 3 minutes (to capture 72 images using 2 tilt angles), to 30 minutes
(to capture 360 images using 10 tilt angles). The leaf lengths, widths, areas
and perimeters of the plastic models were measured manually and compared to
measurements from the scanning system: results were within 3-4% of each other.
The 3D reconstructions obtained with the scanning system show excellent
geometric agreement with all six plant specimens, even plants with thin leaves
and fine stems.
Comment: 8 pages, DICTA 201
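The visual hull idea used for the accuracy assessment keeps a voxel only if it projects inside the object silhouette in every view. A generic sketch of that carving step, not the paper's calibrated pipeline (the orthographic projector and circular silhouettes below are illustrative assumptions):

```python
import numpy as np

def visual_hull(silhouettes, project, n=32):
    """Voxel-carving visual hull on an n x n x n grid over [-1, 1]^3.

    silhouettes: list of 2-D boolean masks, one per view.
    project: function (view_index, (N, 3) voxel centres) -> (rows, cols)
             integer pixel indices into that view's mask.
    A voxel survives only if every view sees it inside the silhouette.
    """
    axis = np.linspace(-1.0, 1.0, n)
    xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
    voxels = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    keep = np.ones(len(voxels), dtype=bool)
    for i, mask in enumerate(silhouettes):
        rows, cols = project(i, voxels)
        inside = np.zeros(len(voxels), dtype=bool)
        valid = (rows >= 0) & (rows < mask.shape[0]) \
              & (cols >= 0) & (cols < mask.shape[1])
        inside[valid] = mask[rows[valid], cols[valid]]
        keep &= inside  # carve away voxels outside this silhouette
    return voxels[keep]

H = 64  # silhouette resolution (illustrative)

def ortho(i, v):
    # Orthographic projection: view 0 drops z, view 1 drops y (illustrative).
    uv = v[:, [0, 1]] if i == 0 else v[:, [0, 2]]
    px = ((uv + 1.0) / 2.0 * (H - 1)).round().astype(int)
    return px[:, 0], px[:, 1]

# Two circular silhouettes carve the intersection of two cylinders.
rr, cc = np.meshgrid(np.arange(H), np.arange(H), indexing="ij")
circle = ((rr / (H - 1) * 2 - 1) ** 2 + (cc / (H - 1) * 2 - 1) ** 2) <= 0.25
hull = visual_hull([circle, circle], ortho)
```

With real plants, the limiting factor is exactly what the abstract stresses: thin leaves and fine stems demand accurate silhouettes and calibration, or the carving erodes them away.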
The Multi-Object, Fiber-Fed Spectrographs for SDSS and the Baryon Oscillation Spectroscopic Survey
We present the design and performance of the multi-object fiber spectrographs
for the Sloan Digital Sky Survey (SDSS) and their upgrade for the Baryon
Oscillation Spectroscopic Survey (BOSS). Originally commissioned in Fall 1999
on the 2.5-m aperture Sloan Telescope at Apache Point Observatory, the
spectrographs produced more than 1.5 million spectra for the SDSS and SDSS-II
surveys, enabling a wide variety of Galactic and extra-galactic science
including the first observation of baryon acoustic oscillations in 2005. The
spectrographs were upgraded in 2009 and are currently in use for BOSS, the
flagship survey of the third-generation SDSS-III project. BOSS will measure
redshifts of 1.35 million massive galaxies to redshift 0.7 and Lyman-alpha
absorption of 160,000 high redshift quasars over 10,000 square degrees of sky,
making percent level measurements of the absolute cosmic distance scale of the
Universe and placing tight constraints on the equation of state of dark energy.
The twin multi-object fiber spectrographs utilize a simple optical layout
with reflective collimators, gratings, all-refractive cameras, and
state-of-the-art CCD detectors to produce hundreds of spectra simultaneously in
two channels over a bandpass covering the near ultraviolet to the near
infrared, with a resolving power R = \lambda/FWHM ~ 2000. Building on proven
heritage, the spectrographs were upgraded for BOSS with volume-phase
holographic gratings and modern CCD detectors, improving the peak throughput by
nearly a factor of two, extending the bandpass to cover 360 < \lambda < 1000
nm, and increasing the number of fibers from 640 to 1000 per exposure. In this
paper we describe the original SDSS spectrograph design and the upgrades
implemented for BOSS, and document the predicted and measured performances.Comment: 43 pages, 42 figures, revised according to referee report and
accepted by AJ. Provides background for the instrument responsible for SDSS
and BOSS spectra. 4th in a series of survey technical papers released in
Summer 2012, including arXiv:1207.7137 (DR9), arXiv:1207.7326 (Spectral
Classification), and arXiv:1208.0022 (BOSS Overview
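The quoted resolving power R = λ/FWHM ~ 2000 directly fixes the size of a resolution element across the bandpass; a minimal check:

```python
def fwhm_nm(wavelength_nm, resolving_power=2000.0):
    """Spectral resolution element implied by R = lambda / FWHM."""
    return wavelength_nm / resolving_power

# Across the upgraded 360-1000 nm BOSS bandpass, R ~ 2000 corresponds to
# resolution elements of about 0.18 nm (blue end) to 0.5 nm (red end).
blue = fwhm_nm(360.0)   # 0.18
red = fwhm_nm(1000.0)   # 0.5
```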
Robotics Platforms Incorporating Manipulators Having Common Joint Designs
Manipulators in accordance with various embodiments of the invention can be utilized to implement statically stable robots capable of both dexterous manipulation and versatile mobility. Manipulators in accordance with one embodiment of the invention include: an azimuth actuator; three elbow joints that each include two actuators that are offset to allow greater than 360 degrees of rotation of each joint; a first connecting structure that connects the azimuth actuator and a first of the three elbow joints; a second connecting structure that connects the first elbow joint and a second of the three elbow joints; a third connecting structure that connects the second elbow joint to a third of the three elbow joints; and an end-effector interface connected to the third of the three elbow joints.