The Atmospheric Monitoring System of the JEM-EUSO Space Mission
An Atmospheric Monitoring System (AMS) is a mandatory and key device of a space-based mission that aims to detect Ultra-High Energy Cosmic Rays (UHECR) and Extremely-High Energy Cosmic Rays (EHECR) from space. JEM-EUSO has a dedicated atmospheric monitoring system that plays a fundamental role in our understanding of the atmospheric conditions in the Field of View (FoV) of the telescope. Our AMS consists of a very challenging space infrared camera and a LIDAR device, both being designed to space qualification to fulfil the scientific requirements of this space mission. The AMS will provide information on the cloud cover in the FoV of JEM-EUSO, measurements of the cloud top altitudes with an accuracy of 500 m, and the optical depth profile of the atmospheric transmittance in the direction of each air shower with an accuracy of 0.15 and a resolution of 500 m. This will ensure that the energy of the primary UHECR and the depth of maximum development of the Extensive Air Shower (EAS) are measured with an accuracy better than 30% in primary energy and 120 g/cm² in depth of maximum development, for EAS occurring either in clear sky or with the EAS depth of maximum development above optically thick cloud layers. Moreover, a novel radiometric retrieval technique that uses the LIDAR shots as calibration points, which appears to be the most promising retrieval algorithm, is under development to infer the Cloud Top Height (CTH) of all kinds of clouds, both thick and thin, in the FoV of the JEM-EUSO space telescope.
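The LIDAR-calibrated radiometric retrieval mentioned in the abstract can be illustrated with a minimal numerical toy, not the mission's actual algorithm: an infrared brightness temperature is turned into a first-guess cloud top height via a constant lapse rate, and a linear correction is then fitted to the sparse LIDAR shots used as calibration points. All temperatures, heights, and the 6.5 K/km lapse rate below are hypothetical assumptions.

```python
import numpy as np

def cth_from_brightness_temp(t_cloud_k, t_surface_k, lapse_rate_k_per_km=6.5):
    """First-guess cloud top height (km) from an IR brightness temperature,
    assuming a constant tropospheric lapse rate (an illustrative assumption)."""
    return (t_surface_k - t_cloud_k) / lapse_rate_k_per_km

def calibrate_with_lidar(cth_guess, cth_lidar):
    """Fit a linear correction CTH_true ~ a * CTH_guess + b using the sparse
    LIDAR shots as calibration points; return the correction function."""
    a, b = np.polyfit(cth_guess, cth_lidar, 1)
    return lambda h: a * h + b

# Hypothetical scene: three LIDAR calibration shots over clouds of
# different brightness temperature, with a 288 K surface.
guess = cth_from_brightness_temp(np.array([250.0, 260.0, 270.0]), 288.0)
correct = calibrate_with_lidar(guess, np.array([6.1, 4.5, 2.9]))
```

The corrected estimator can then be applied to every cloudy pixel in the FoV, with the LIDAR shots anchoring the otherwise assumption-heavy radiometric guess.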
A Comparative Analysis of Phytovolume Estimation Methods Based on UAV-Photogrammetry and Multispectral Imagery in a Mediterranean Forest
Management and control operations are crucial for preventing forest fires, especially in Mediterranean forest areas with dry climatic periods. One of them is prescribed fires, in which the biomass fuel present in the controlled plot area must be accurately estimated. The most widely used methods for estimating biomass are time-consuming and demand too much manpower. Unmanned aerial vehicles (UAVs) carrying multispectral sensors can be used to carry out accurate indirect measurements of terrain and vegetation morphology and their radiometric characteristics. Based on the products of the UAV-photogrammetric project, four estimators of phytovolume were compared in a Mediterranean forest area, all obtained using the difference between a digital surface model (DSM) and a digital terrain model (DTM). The DSM was derived from a UAV-photogrammetric project based on the structure from motion (SfM) algorithm. Four different methods for obtaining a DTM were used, based on an unclassified dense point cloud produced through a UAV-photogrammetric project (FFU), an unsupervised classified dense point cloud (FFC), a multispectral vegetation index (FMI), and a cloth simulation filter (FCS). Qualitative and quantitative comparisons determined the ability of the phytovolume estimators to detect vegetation and its occupied volume. The results show that there are no significant differences in surface vegetation detection between all the pairwise possible comparisons of the four estimators at a 95% confidence level, but FMI presented the best kappa value (0.678) in an error matrix analysis with reference data obtained from photointerpretation and supervised classification. Concerning the accuracy of phytovolume estimation, only FFU and FFC presented differences higher than two standard deviations in a pairwise comparison, and FMI presented the best RMSE (12.3 m³) when the estimators were compared to 768 observed data points grouped in four 500 m² sample plots.
The FMI was the best phytovolume estimator of the four compared for low vegetation height in a Mediterranean forest. The use of FMI based on UAV data provides accurate phytovolume estimations that can be applied in several environmental management activities, including wildfire prevention. Multitemporal phytovolume estimations based on FMI could help model the evolution of forest resources in a very realistic way.
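All four estimators share the same core operation described in the abstract: phytovolume as the per-pixel DSM minus DTM height, clipped at zero, integrated over the pixel area. That step can be sketched as follows; the rasters, pixel size, and threshold are illustrative values, not the study's data.

```python
import numpy as np

def phytovolume(dsm, dtm, pixel_size_m=0.1, min_height_m=0.0):
    """Estimate phytovolume (m^3) as the per-pixel difference between a
    digital surface model and a digital terrain model, clipped at a minimum
    height, times the pixel ground area."""
    heights = np.clip(dsm - dtm, min_height_m, None)
    return float(heights.sum() * pixel_size_m ** 2)

# Toy 2x2 rasters in metres; one pixel sits below the terrain model and
# is clipped to zero rather than subtracting volume.
dsm = np.array([[2.0, 1.5], [0.8, 0.3]])
dtm = np.array([[0.5, 0.5], [0.5, 0.5]])
print(phytovolume(dsm, dtm))  # (1.5 + 1.0 + 0.3 + 0.0) * 0.01 = 0.028
```

The four estimators in the study differ only in how the DTM raster is produced (FFU, FFC, FMI, FCS); the volume integration itself is common to all of them.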
SR-4000 and CamCube3.0 Time of Flight (ToF) Cameras: Tests and Comparison
In this paper, experimental comparisons between two Time-of-Flight (ToF) cameras are reported in order to test their performance and to give some procedures for testing data delivered by this kind of technology. In particular, the SR-4000 camera by Mesa Imaging AG and the CamCube3.0 by PMD Technologies have been evaluated, since they perform well and are well known to researchers dealing with Time-of-Flight (ToF) cameras. After a brief overview of commercial ToF cameras available on the market and the main specifications of the tested devices, two topics are presented in this paper. First, the influence of camera warm-up on distance measurement is analyzed: a warm-up of 40 minutes is suggested to obtain measurement stability, especially in the case of the CamCube3.0 camera, which exhibits distance measurement variations of several centimeters. Secondly, the variation of distance measurement precision over integration time is presented: distance measurement precisions of some millimeters are obtained in both cases. Finally, a comparison between the two cameras based on the experiments and some information about future work on the evaluation of sunlight influence on distance measurements are reported.
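The precision test described above reduces to a simple per-pixel statistic: the standard deviation of repeated distance frames of a static target, averaged over the image. The sketch below uses simulated frames; the 1.5 m target and 3 mm noise level are hypothetical stand-ins, not the paper's measurements.

```python
import numpy as np

def distance_precision(frames):
    """Precision of repeated ToF distance frames of a static scene:
    the per-pixel standard deviation over time, averaged over the image.
    `frames` has shape (n_frames, height, width), in metres."""
    return float(frames.std(axis=0, ddof=1).mean())

# Simulated acquisition: 100 frames of a flat target at 1.5 m with
# 3 mm Gaussian noise, mimicking millimetre-level ToF precision.
rng = np.random.default_rng(0)
frames = 1.5 + rng.normal(0.0, 0.003, size=(100, 8, 8))
print(distance_precision(frames))  # roughly 0.003 m, i.e. a few millimetres
```

Repeating this statistic for each integration-time setting (and at intervals during warm-up) yields the precision-versus-integration-time and stability curves the paper reports.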
Camera-Based Distance Sensor
While working on a robotics project at the electrical contracting company for which we work, we discovered a gap in the electronic distance sensor market in terms of range, accuracy, precision, and cost. We designed and constructed a prototype for an electronic distance sensing component which utilizes a camera, laser, and image processor to measure distances. The laser is pointed at a surface and an image of the laser dot is captured. An image processing algorithm determines the pixel position of the dot in the image, and this position is compared to a lookup table of known values to determine the distance to the dot.
In measuring our prototype’s performance, we found that it was capable of measuring distances up to 5 meters with greater than 90% accuracy. We also discuss some possible ways to improve the viability of the technology, including ways to improve the refresh rate as well as the reliability.
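The pixel-to-distance lookup described above can be sketched as a 1-D interpolation over a calibration table. Because the laser is offset from the camera, the dot's pixel position shifts monotonically with distance, so intermediate distances can be interpolated between calibrated points. The pixel/distance pairs below are hypothetical, not the prototype's actual calibration.

```python
import numpy as np

# Hypothetical calibration table: detected laser-dot pixel column versus
# tape-measured distance. The mapping is monotonically decreasing here.
CAL_PIXELS = np.array([620.0, 540.0, 500.0, 478.0, 465.0])  # px
CAL_DISTANCES = np.array([0.5, 1.0, 2.0, 3.5, 5.0])         # m

def distance_from_pixel(px):
    """Interpolate distance from the dot's detected pixel position.
    np.interp requires increasing x, so the decreasing table is flipped."""
    return float(np.interp(px, CAL_PIXELS[::-1], CAL_DISTANCES[::-1]))

print(distance_from_pixel(540.0))  # 1.0 m, an exact calibration point
```

In the real sensor the pixel position comes from the image-processing step that locates the laser dot; denser calibration tables and sub-pixel dot localization both tighten the interpolation.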
Local Motion Planner for Autonomous Navigation in Vineyards with a RGB-D Camera-Based Algorithm and Deep Learning Synergy
With the advent of agriculture 3.0 and 4.0, researchers are increasingly
focusing on the development of innovative smart farming and precision
agriculture technologies by introducing automation and robotics into the
agricultural processes. Autonomous agricultural field machines have been
gaining significant attention from farmers and industries to reduce costs,
human workload, and required resources. Nevertheless, achieving sufficient
autonomous navigation capabilities requires the simultaneous cooperation of
different processes: localization, mapping, and path planning are just some of
the steps that aim at providing the machine with the right set of skills to
operate in semi-structured and unstructured environments. In this context, this
study presents a low-cost local motion planner for autonomous navigation in
vineyards based only on an RGB-D camera, low-range hardware, and a dual-layer
control algorithm. The first algorithm exploits the disparity map and its depth
representation to generate a proportional control for the robotic platform.
Concurrently, a second back-up algorithm, based on representation learning and
resilient to illumination variations, can take control of the machine in case
of a momentary failure of the first block. Moreover, due to the double
nature of the system, after initial training of the deep learning model on an
initial dataset, the strict synergy between the two algorithms opens the
possibility of exploiting new automatically labelled data, coming from the
field, to extend the existing model knowledge. The machine learning algorithm
has been trained and tested, using transfer learning, on images acquired
during several field surveys in the north of Italy, and then optimized
for on-device inference with model pruning and quantization. Finally, the
overall system has been validated with a customized robot platform in the
relevant environment.
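One plausible reading of the first, disparity-based control layer is a proportional steering command driven by the free space on either side of the camera's view. The sketch below is an illustrative stand-in, not the authors' controller; the gain, clipping limit, and depth values are made up.

```python
import numpy as np

def proportional_steering(depth_map, gain=0.5, max_cmd=1.0):
    """Toy proportional controller for row navigation: compare mean depth
    (free space) in the left and right halves of a depth map derived from
    the disparity image, and steer toward the side with more room."""
    h, w = depth_map.shape
    left = np.nanmean(depth_map[:, : w // 2])
    right = np.nanmean(depth_map[:, w // 2 :])
    error = right - left  # positive -> more free space on the right
    return float(np.clip(gain * error, -max_cmd, max_cmd))

# Toy depth map in metres: the vine row is closer on the left half,
# so the command steers right (positive) and saturates at max_cmd.
depth = np.full((4, 6), 3.0)
depth[:, :3] = 1.0
print(proportional_steering(depth))  # 0.5 * (3.0 - 1.0) clipped to 1.0
```

In the dual-layer scheme described above, a learned, illumination-robust classifier would take over steering whenever a controller of this kind loses a usable disparity signal.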