Structured Light-Based 3D Reconstruction System for Plants
Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). It demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection, and less than a 13-mm error for plant size, leaf size and internode distance.
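The leaf-detection figures quoted above follow the standard precision/recall definitions; a minimal sketch is shown below. The counts are hypothetical, chosen only to roughly reproduce the reported values, and are not from the paper.

```python
def precision_recall(tp, fp, fn):
    """Precision and recall, the two figures reported for leaf detection.

    tp: true positives (correctly detected leaves)
    fp: false positives (spurious detections)
    fn: false negatives (missed leaves)
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts for illustration: 89 correct detections,
# 11 spurious ones, 3 missed leaves.
p, r = precision_recall(tp=89, fp=11, fn=3)
```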
Coded aperture and coded exposure photography: an investigation into applications and methods
This dissertation presents an introduction to the field of computational photography, and provides a survey of recent research. Specific attention is given to coded aperture and coded exposure theory and methods, as these form the basis for the experiments performed.
Learning Lens Blur Fields
Optical blur is an inherent property of any lens system and is challenging to
model in modern cameras because of their complex optical elements. To tackle
this challenge, we introduce a high-dimensional neural representation of
blur and a practical method for acquiring
it. The lens blur field is a multilayer perceptron (MLP) designed to (1)
accurately capture variations of the lens 2D point spread function over image
plane location, focus setting and, optionally, depth, and (2) represent these
variations parametrically as a single, sensor-specific function. The
representation models the combined effects of defocus, diffraction, and aberration,
and accounts for sensor features such as pixel color filters and pixel-specific
micro-lenses. To learn the real-world blur field of a given device, we
formulate a generalized non-blind deconvolution problem that directly optimizes
the MLP weights using a small set of focal stacks as the only input. We also
provide a first-of-its-kind dataset of 5D blur fields for smartphone cameras,
camera bodies equipped with a variety of lenses, etc. Lastly, we show that
acquired 5D blur fields are expressive and accurate enough to reveal, for the
first time, differences in optical behavior of smartphone devices of the same
make and model.
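As a rough sketch of what such a representation looks like, the toy MLP below maps an (x, y, focus) query to a normalized point spread function. The layer sizes, the softmax output, and the untrained random weights are illustrative assumptions only; the actual model is larger and is fitted via the non-blind deconvolution on focal stacks described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_psf(query, weights, psf_size=5):
    """Evaluate a toy blur-field MLP: (x, y, focus) -> psf_size x psf_size PSF.

    The softmax guarantees a non-negative PSF that sums to one, a common
    parameterization choice (an assumption here, not taken from the paper).
    """
    W1, b1, W2, b2 = weights
    h = np.tanh(query @ W1 + b1)        # hidden layer
    logits = h @ W2 + b2                # one logit per PSF tap
    e = np.exp(logits - logits.max())
    psf = e / e.sum()                   # softmax -> valid PSF
    return psf.reshape(psf_size, psf_size)

# Randomly initialized (untrained) weights, just to run the forward pass.
weights = (rng.normal(size=(3, 16)), np.zeros(16),
           rng.normal(size=(16, 25)), np.zeros(25))
psf = mlp_psf(np.array([0.1, -0.2, 0.5]), weights)
```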
Natural images from the birthplace of the human eye
Here we introduce a database of calibrated natural images publicly available
through an easy-to-use web interface. Using a Nikon D70 digital SLR camera, we
acquired about 5,000 six-megapixel images of the Okavango Delta in Botswana, a
tropical savanna habitat similar to where the human eye is thought to have
evolved. Some sequences of images were captured unsystematically while
following a baboon troop; others were designed to vary a single parameter
such as aperture, object distance, time of day or position on the horizon.
Images are available in the raw RGB format and in grayscale. Images are also
available in units relevant to the physiology of human cone photoreceptors,
where pixel values represent the expected number of photoisomerizations per
second for cones sensitive to long (L), medium (M) and short (S) wavelengths.
This database is distributed under a Creative Commons Attribution-Noncommercial
Unported license to facilitate research in computer vision, psychophysics of
perception, and visual neuroscience.
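Converting camera values to cone photoisomerization rates, as the database does, amounts to a linear calibration from linear RGB to the three cone classes. The sketch below only illustrates the shape of that mapping; the matrix coefficients and the function name are placeholders of ours, whereas the database itself ships measured, device-specific calibration data.

```python
import numpy as np

# Hypothetical calibration matrix mapping linear camera RGB to L-, M- and
# S-cone photoisomerization rates (isomerizations per second). These
# coefficients are placeholders for illustration only.
RGB_TO_LMS = np.array([
    [3.2, 1.1, 0.2],   # L-cone weights for R, G, B
    [1.4, 2.9, 0.3],   # M-cone weights
    [0.1, 0.5, 4.0],   # S-cone weights
])

def rgb_to_cone_rates(linear_rgb):
    """Map linear RGB values (..., 3) to per-cone-class rates (..., 3)."""
    return np.asarray(linear_rgb) @ RGB_TO_LMS.T

rates = rgb_to_cone_rates([1.0, 0.0, 0.0])  # a pure-red stimulus
```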
Integrative IRT for documentation and interpretation of archaeological structures
The documentation of built heritage involves tangible and intangible features. Several morphological and metric aspects of architectural structures are acquired through massive data-capture systems, such as the Terrestrial Laser Scanner (TLS) and the Structure from Motion (SfM) technique. They produce models that give information about the skin of the architectural organism. Infrared Thermography (IRT) is one of the techniques used to investigate what lies beyond the external layer. This technology is particularly significant in the diagnostics and conservation of built heritage. In archaeology, the integration of data acquired through different sensors improves the analysis and interpretation of findings that are incomplete or transformed.
Starting from a topographic and photogrammetric survey, the procedure proposed here aims to combine the two-dimensional IRT data with the 3D point cloud. This system helps to overcome the limited field of view (FoV) of each IRT image and provides a three-dimensional reading of the thermal behaviour of the object. The approach is based on the geometric constraints of the pair of RGB-IR images coming from two different sensors mounted inside a commercial bi-camera device. Knowing the approximate distance between the two sensors, and making the necessary simplifications allowed by the low resolution of the thermal sensor, we projected the colour of the IR images onto the RGB point cloud. The procedure was applied to the so-called Nymphaeum of Egeria, an archaeological structure in the Caffarella Park (Rome, Italy), which is currently part of the Appia Antica Regional Park.
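The colour-transfer step above reduces to projecting each 3D point into the offset camera and sampling its pixel. A minimal pinhole sketch of that projection follows; the intrinsic matrix `K`, the pure-translation baseline, and the function names are our simplifying assumptions (the abstract itself notes that the low IR resolution justifies such simplifications).

```python
import numpy as np

def project_points(points, K, baseline=0.0):
    """Project 3D points (in the RGB camera frame) into an image whose
    camera is offset by `baseline` metres along the x axis.

    points: (N, 3) array; K: 3x3 pinhole intrinsic matrix (assumed here,
    not taken from the paper). Returns (N, 2) pixel coordinates, at which
    the IR image could then be sampled to colour the point cloud.
    """
    shifted = points - np.array([baseline, 0.0, 0.0])
    uvw = shifted @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# Example: a point 2 m in front of the RGB camera, seen by an IR camera
# 5 cm to the side (all numbers illustrative).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0]])
uv = project_points(pts, K, baseline=0.05)
```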
The stare and chase observation strategy at the Swiss Optical Ground Station and Geodynamics Observatory Zimmerwald: From concept to implementation
Sustainable use of outer space is imperative for preserving current operational missions and
enabling the safe placement of new space-based technology. The uncontrolled growth in the number
of resident space objects (RSOs) increases the likelihood of close conjunctions, and therefore of collisions,
which would populate the space environment even further. To prevent such situations, orbit catalogues of RSOs
are built and maintained; these are used to assess the collision risk between RSOs. To keep the catalogues
up-to-date, a worldwide ground-based infrastructure is used to collect observations coming from different
observation techniques.
The current study focuses on the so-called stare and chase observation strategy using a combined
active- and passive-optical system. The final aim is to correct the pointing of the telescope so that the
target falls within the field of view of the laser beam, thus enabling the acquisition of laser ranges. In this
way, objects with poor ephemerides, available e.g. from Two-Line Elements (TLE), no longer pose a
problem for the rather narrow field of view of the laser beam. The system gathers both angular and range measurements, which can be
used for an immediate orbit determination or improvement; this enhances the accuracy of the
predictions, helps other stations to acquire the target faster, and allows the station to repeat the
procedure once more.
The development of the observation strategy is tailored to the Zimmerwald Laser and Astrometry
Telescope (ZIMLAT), located at the Swiss Optical Ground Station and Geodynamics Observatory Zimmerwald
(SwissOGS), Switzerland. All implemented algorithms were tested using real measurements from
ZIMLAT and its tracking camera.
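One ingredient of such a strategy is turning the target's offset on the tracking camera into a pointing correction. The sketch below is our own illustration of that step, not the station's implementation; the plate scale, pixel coordinates, and function name are all assumed.

```python
def pointing_offsets(centroid_px, center_px, arcsec_per_px):
    """Angular offsets needed to centre a detected target on the boresight.

    centroid_px: (x, y) pixel position of the object on the tracking camera.
    center_px:   pixel position of the optical axis (boresight).
    arcsec_per_px: plate scale of the camera (illustrative parameter).
    Returns (d_az, d_el) corrections in arcseconds, ignoring field
    rotation and distortion for simplicity.
    """
    dx = (centroid_px[0] - center_px[0]) * arcsec_per_px
    dy = (centroid_px[1] - center_px[1]) * arcsec_per_px
    return dx, dy

# Target detected 8 px right and 32 px above the boresight,
# with a made-up plate scale of 1.5 arcsec per pixel.
d_az, d_el = pointing_offsets((520, 480), (512, 512), arcsec_per_px=1.5)
```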
Assessment of RGB vegetation indices to estimate chlorophyll content in sugar beet leaves in the final cultivation stage
Estimation of chlorophyll content with portable meters is an easy way to quantify crop nitrogen status in sugar beet leaves. In this work, an alternative for chlorophyll content estimation using RGB-only vegetation indices has been explored. As a first step, pictures of spring-sown ‘Fernanda KWS’ variety sugar beet leaves taken with a commercial camera were used to calculate 25 RGB indices reported in the literature and to obtain 9 new indices through principal component analysis (PCA) and stepwise linear regression (SLR). The performance of the 34 indices was examined to evaluate their ability to estimate chlorophyll content and chlorophyll degradation in the leaves under different natural light conditions over 4 days of the canopy senescence period. Two of the newly proposed RGB indices were found to improve on the already good performance of the indices reported in the literature, particularly for leaves featuring low chlorophyll contents. The 4 best indices were finally tested under field conditions, using photographs of a sugar beet plot taken with an unmanned aerial vehicle (UAV); a reasonably good agreement with chlorophyll-meter data was found for all indices, in particular for I2 and (R−B)/(R+G+B). Consequently, the suggested RGB indices may hold promise for inexpensive chlorophyll estimation in sugar beet leaves at harvest time, although a direct relationship with nitrogen status still needs to be validated.
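The (R−B)/(R+G+B) index named above is straightforward to compute per pixel; a minimal sketch follows. The function name and the zero-denominator handling are our own choices, and the sample pixel values are purely illustrative.

```python
import numpy as np

def rgb_index(img):
    """Per-pixel (R - B) / (R + G + B) over an (..., 3) RGB array.

    Pure-black pixels (zero denominator) are mapped to 0 by convention.
    """
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    denom = r + g + b
    safe = np.where(denom > 0, denom, 1.0)  # avoid division by zero
    return np.where(denom > 0, (r - b) / safe, 0.0)

# A leafy green pixel (low blue) scores higher than a bluish one.
leafy = rgb_index(np.array([[[80, 160, 40]]]))
bluish = rgb_index(np.array([[[40, 80, 160]]]))
```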
Range Finding with a Plenoptic Camera
The plenoptic camera enables simultaneous collection of imagery and depth information by sampling the 4D light field. The light field is distinguished from data sets collected by stereoscopic systems because it contains images obtained by an N-by-N grid of apertures, rather than just the two apertures of a stereoscopic system. By adjusting parameters of the camera construction, it is possible to alter the number of these 'subaperture images', often at the cost of spatial resolution within each. This research examines a variety of methods for estimating depth by determining correspondences between subaperture images. A major finding is that the additional 'apertures' provided by the plenoptic camera do not greatly improve the accuracy of depth estimation. Thus, the best overall performance is achieved by a design that maximizes spatial resolution at the cost of angular samples. For this reason, it is not surprising that the performance of the plenoptic camera is comparable to that of a stereoscopic system of similar scale and specifications. As with stereoscopic systems, the plenoptic camera has its most immediate, realistic applications in the domains of robotic navigation and 3D video collection.
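The correspondence search between subaperture images is, at its core, the same disparity estimation used in stereo. The toy 1D sum-of-absolute-differences matcher below illustrates the idea; the window size, search range, and signals are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def disparity_1d(left, right, patch=3, max_disp=5):
    """Brute-force SAD block matching along one image row.

    For each position x in `left`, find the shift d (0..max_disp) that
    best aligns a small patch with `right`. A toy stand-in for the
    subaperture correspondence search described above.
    """
    h = patch // 2
    disp = np.zeros(left.shape[0], dtype=int)
    for x in range(h, left.shape[0] - h):
        ref = left[x - h:x + h + 1]
        best, best_cost = 0, np.inf
        for d in range(0, max_disp + 1):
            if x - h - d < 0:
                break  # candidate patch would fall off the image
            cost = np.abs(ref - right[x - h - d:x + h + 1 - d]).sum()
            if cost < best_cost:
                best, best_cost = d, cost
        disp[x] = best
    return disp

# One bright feature, shifted left by 2 pixels between the two views.
row = np.zeros(20); row[10] = 1.0
shifted = np.zeros(20); shifted[8] = 1.0
d = disparity_1d(row, shifted)
```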