DODA: Diffusion for Object-detection Domain Adaptation in Agriculture
The diverse and high-quality content generated by recent generative models demonstrates the great potential of using synthetic data to train downstream models. However, in vision, and especially in object detection, this direction is not fully explored: synthetic images are merely used to balance the long tails of existing datasets, the accuracy of the generated labels is low, and the full potential of generative models has not been exploited. In this paper, we propose DODA, a data synthesizer that can generate high-quality object detection data for new domains in agriculture. Specifically, we improve the controllability of layout-to-image generation by encoding the layout as an image, thereby improving label quality, and we use a visual encoder to provide visual clues for the diffusion model, decoupling visual features from the diffusion model and giving it the ability to generate data in new domains. On the Global Wheat Head Detection (GWHD) dataset, the largest dataset in agriculture and one containing diverse domains, training with data synthesized by DODA improves the performance of the object detector by 12.74-17.76 AP in the domain most significantly shifted from the training data.
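The abstract only sketches how a layout is "encoded as an image". As a minimal, hypothetical illustration of that idea (the function name and encoding choices below are not from the DODA code), bounding boxes can be rasterized into a conditioning image that a layout-to-image diffusion model could consume alongside its noisy input:

```python
import numpy as np

def rasterize_layout(boxes, height, width):
    """Render bounding boxes into a single-channel 'layout image'.

    boxes: iterable of (x_min, y_min, x_max, y_max) in pixel coordinates.
    Returns a float array in [0, 1] where box interiors are marked, i.e.
    the layout is encoded as an image rather than as a list of coordinates.
    """
    layout = np.zeros((height, width), dtype=np.float32)
    for x_min, y_min, x_max, y_max in boxes:
        x0, y0 = max(int(x_min), 0), max(int(y_min), 0)
        x1, y1 = min(int(x_max), width), min(int(y_max), height)
        if x1 > x0 and y1 > y0:
            layout[y0:y1, x0:x1] = 1.0  # could also encode class id or instance count
    return layout

# Example: two wheat-head boxes rendered into a 512x512 conditioning image.
cond = rasterize_layout([(30, 40, 90, 110), (200, 250, 260, 320)], 512, 512)
# In a layout-to-image diffusion model, `cond` would typically be stacked with
# (or injected alongside) the noisy latent at every denoising step.
print(cond.shape, cond.sum())
```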
Global Wheat Head Detection 2021: an improved dataset for benchmarking wheat head detection methods
The Global Wheat Head Detection (GWHD) dataset was created in 2020 and assembled 193,634 labelled wheat heads from 4,700 RGB images acquired from various acquisition platforms and 7 countries/institutions. With an associated competition hosted on Kaggle, GWHD_2020 successfully attracted attention from both the computer vision and agricultural science communities. From this first experience, a few avenues for improvement were identified regarding data size, head diversity, and label reliability. To address these issues, the 2020 dataset has been re-examined, relabelled, and complemented by 1,722 images from 5 additional countries, adding 81,553 wheat heads. We now release the 2021 version of the Global Wheat Head Detection dataset, which is bigger, more diverse, and less noisy than the GWHD_2020 version.
Use of scanning devices for object 3D reconstruction by photogrammetry and visualization in virtual reality
This article compares two different scanning devices (a 360 camera and a digital single-lens reflex (DSLR) camera) and their properties in the three-dimensional (3D) reconstruction of an object by the photogrammetry method. The article first describes the stages of the process of 3D modelling and reconstruction of the object. A point cloud is generated and, in the following steps, a 3D model of the object, including textures, is created. The scanning devices are compared under the same conditions and over the same time span, from capturing the image of a real object to its 3D reconstruction. The attributes of the scanned image of the reconstructed 3D model, which is a mandarin tree in a citrus greenhouse in a daylight environment, are also compared. Both created models are compared visually; this visual comparison reveals the possibilities for applying both scanning devices in the process of 3D reconstruction of an object by photogrammetry. The results of this research can be applied in the field of 3D modelling of a real object using 3D models in virtual reality, 3D printing, 3D visualization, image analysis, and 3D online presentation.
Optimization of UWB indoor positioning based on hardware accelerated Fuzzy ISODATA
With the development of wireless communication technology, Ultra-Wideband (UWB) has become an important solution for indoor positioning. In complex indoor environments, non-line-of-sight (NLOS) factors lead to increased positioning errors. To improve positioning accuracy, the fuzzy iterative self-organizing data analysis clustering algorithm (fuzzy ISODATA) is introduced to process a large amount of UWB data, reducing the influence of NLOS factors and stabilizing the positioning error within 2 cm, which enhances the accuracy of the positioning system. To further improve the running efficiency of the algorithm, an FPGA is used to accelerate its key computational part; compared with running on the MATLAB platform, this improves the speed by about 100 times.
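Fuzzy ISODATA builds on the fuzzy c-means membership and centroid updates, adding cluster splitting and merging on top. The sketch below shows only those core updates on 1-D range measurements, as one plausible way to separate LOS-like from NLOS-biased readings; it is a simplified illustration with assumed parameter values, not the paper's accelerated implementation:

```python
import numpy as np

def fuzzy_cmeans(x, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means on 1-D data (e.g., UWB range measurements in metres).

    Fuzzy ISODATA adds cluster splitting/merging on top of these membership
    and centroid updates; only the core updates are sketched here.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(-1, 1)           # (n_samples, 1)
    centers = rng.choice(x.ravel(), size=n_clusters).reshape(-1, 1)
    u = np.full((len(x), n_clusters), 1.0 / n_clusters)
    for _ in range(n_iter):
        d = np.abs(x - centers.T) + 1e-12                   # (n_samples, n_clusters)
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)                   # fuzzy memberships
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0).reshape(-1, 1)  # weighted centroids
    return centers.ravel(), u

# Toy example: LOS ranges around 3.0 m plus a few NLOS-biased readings near 3.6 m.
ranges = np.r_[np.random.default_rng(1).normal(3.0, 0.01, 80),
               np.random.default_rng(2).normal(3.6, 0.05, 10)]
centers, memberships = fuzzy_cmeans(ranges)
los_cluster = np.argmin(centers)             # the smaller-range cluster is likely LOS
clean = ranges[memberships[:, los_cluster] > 0.5]
print(centers, clean.mean())
```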
Tree Branch Skeleton Extraction from Drone-Based Photogrammetric Point Cloud
Calculating complex 3D traits of trees, such as branch structure, using drones/unmanned aerial vehicles (UAVs) with onboard RGB cameras is challenging because extracting branch skeletons from such image-generated sparse point clouds remains difficult. This paper proposes a skeleton extraction algorithm for the sparse point cloud generated from UAV RGB images with photogrammetry. We conducted a comparison experiment by flying a UAV at two altitudes (50 m and 20 m) above a university orchard with several fruit tree species and developed three metrics, namely the F1-score of bifurcation points (FBP), the F1-score of end points (FEP), and the Hausdorff distance (HD), to evaluate the performance of the proposed algorithm. The results show that the average values of FBP, FEP, and HD for the point cloud of fruit tree branches collected at 50 m altitude were 64.15%, 69.94%, and 0.0699, respectively, and those at 20 m were 83.24%, 84.66%, and 0.0474, respectively. This paper provides a branch skeleton extraction method for low-cost 3D digital management of orchards; it can effectively extract the main skeleton from a sparse fruit tree branch point cloud, can assist in analyzing the growth state of different types of fruit trees, and has practical application value in orchard management.
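The three metrics can be made concrete with a short sketch. The snippet below is an assumed, simplified reading of them: keypoint F1 is computed with a greedy distance tolerance (the tolerance value is illustrative, and the paper may use a stricter one-to-one matching), and HD is the symmetric Hausdorff distance from SciPy:

```python
import numpy as np
from scipy.spatial.distance import cdist, directed_hausdorff

def keypoint_f1(pred, gt, tol=0.05):
    """F1-score for skeleton keypoints (e.g., bifurcation or end points).

    A predicted point counts as matched if some ground-truth point lies
    within `tol` (same units as the point cloud). Simplified: no one-to-one
    assignment is enforced.
    """
    if len(pred) == 0 or len(gt) == 0:
        return 0.0
    d = cdist(pred, gt)                               # pairwise distances
    precision = np.sum(d.min(axis=1) <= tol) / len(pred)
    recall = np.sum(d.min(axis=0) <= tol) / len(gt)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance between two skeleton point sets."""
    return max(directed_hausdorff(pred, gt)[0], directed_hausdorff(gt, pred)[0])

# Toy 3D skeleton keypoints (normalised coordinates).
gt = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
pred = np.array([[0.11, 0.21, 0.29], [0.7, 0.7, 0.7]])
print(keypoint_f1(pred, gt), hausdorff(pred, gt))
```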
3D reconstruction of a group of plants by the ground multi-image photogrammetry method
This paper discusses the use of the terrestrial multi-frame photogrammetry method. The method is applied to a group of plants in the natural environment of a greenhouse. Image data are captured by two sensing devices with different types of lenses and used for realistic 3D reconstruction of the objects into a 3D model. In this experiment, we work primarily with the created point cloud.
Drone-Based Harvest Data Prediction Can Reduce On-Farm Food Loss and Improve Farmer Income
On-farm food loss (i.e., grade-out vegetables) is a difficult challenge in sustainable agricultural systems. The simplest way to reduce the number of grade-out vegetables is to monitor and predict the size of every individual in the vegetable field and determine the optimal harvest date with the smallest grade-out number and highest profit, which is not cost-effective with conventional methods. Here, we developed a full pipeline to accurately estimate and predict every broccoli head size (n > 3,000) automatically and nondestructively using drone remote sensing and image analysis. The individual sizes were fed to a temperature-based growth model to predict the optimal harvesting date. Two years of field experiments revealed that our pipeline successfully estimated and predicted the head size of all broccolis with high accuracy. We also found that a deviation of only 1 to 2 days from the optimal date can considerably increase grade-out and reduce farmers' profits. This is an unequivocal demonstration of the utility of these approaches for economic crop optimization and minimization of food losses.
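The abstract mentions a "temperature-based growth model" without detail. A common choice for such models is growing degree days (GDD); the sketch below assumes a simple GDD-driven growth rate purely for illustration, with a base temperature and per-GDD growth that are hypothetical rather than taken from the paper:

```python
import numpy as np

def predict_harvest_day(current_diameter_cm, target_diameter_cm, daily_mean_temp_c,
                        base_temp_c=4.0, growth_per_gdd_cm=0.02):
    """Predict the day index on which a head reaches the target size.

    Assumes growth proportional to accumulated growing degree days (GDD),
    a simplification of a temperature-based growth model; the base
    temperature and growth rate are illustrative, not from the paper.
    """
    diameter = current_diameter_cm
    for day, temp in enumerate(daily_mean_temp_c):
        gdd = max(temp - base_temp_c, 0.0)    # degree days above the base temperature
        diameter += growth_per_gdd_cm * gdd
        if diameter >= target_diameter_cm:
            return day
    return None  # target size not reached within the forecast horizon

# Example: a 9 cm head, a 12 cm market target, and a 14-day temperature forecast.
forecast = np.full(14, 18.0)                  # constant 18 degC daily means
print(predict_harvest_day(9.0, 12.0, forecast))
```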
High-dimensional topographic organization of visual features in the primate temporal lobe
The inferotemporal cortex supports our supreme object recognition ability. Numerous studies have been conducted to elucidate the functional organization of this brain area, but important questions remain unanswered, including how this organization differs between humans and non-human primates. Here, we use deep neural networks trained on object categorization to construct a 25-dimensional space of visual features, and systematically measure the spatial organization of feature preference in both male monkey brains and human brains using fMRI. These feature maps allow us to predict the selectivity of a previously unknown region in monkey brains, which is corroborated by additional fMRI and electrophysiology experiments. The maps also enable quantitative analyses of the topographic organization of the temporal lobe, demonstrating the existence of a pair of orthogonal gradients that differ in spatial scale and revealing significant differences in the functional organization of high-level visual areas between monkey and human brains.
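As a rough sketch of how a 25-dimensional feature space and per-voxel feature preferences could be derived (the data shapes, PCA choice, and linear fit below are assumptions for illustration, not the study's actual analysis pipeline), one can reduce DNN unit activations to 25 dimensions and regress each voxel's responses on them:

```python
import numpy as np
from sklearn.decomposition import PCA

# Assumed precomputed inputs (illustrative shapes):
#   activations: DNN unit responses to the stimulus images, (n_images, n_units)
#   voxel_resp:  fMRI responses of each voxel to the same images, (n_images, n_voxels)
rng = np.random.default_rng(0)
activations = rng.normal(size=(200, 4096))
voxel_resp = rng.normal(size=(200, 500))

# 1. Build a 25-dimensional visual feature space from the DNN activations.
features = PCA(n_components=25).fit_transform(activations)    # (n_images, 25)

# 2. Fit each voxel's response as a linear combination of the 25 features.
X = np.column_stack([features, np.ones(len(features))])        # add an intercept column
betas, *_ = np.linalg.lstsq(X, voxel_resp, rcond=None)         # (26, n_voxels)

# 3. Take each voxel's strongest feature weight as its 'preferred feature';
#    projecting these preferences onto the cortical surface yields a feature map.
preferred_feature = np.argmax(np.abs(betas[:25]), axis=0)       # (n_voxels,)
print(preferred_feature[:10])
```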