
    An economic analysis of a robotic harvest technology in New Zealand fresh apple industry : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Agribusiness, Massey University School of Agriculture and Environment, Manawatu, New Zealand

    The New Zealand apple industry is predominantly export-oriented and relies on manual labour throughout the year. In recent years, however, labour shortages at harvest have been jeopardising its competitiveness and profitability. Temporary immigration labour programmes, such as the Recognised Seasonal Employer (RSE) scheme, have not resolved these shortages, prompting the industry to consider harvesting automation, i.e. robotic technology, as a solution. Harvesting robots are still at the commercial trial stage, and no studies have assessed the economic feasibility of such technology. The present study develops, for the first time, a bio-economic model to analyse the investment decision of adopting harvesting robots compared with the available alternatives, i.e. platform and manual harvesting systems, using net present value (NPV) as the method of analysis. The analysis covers newly established single-, bi-, and multi-varietal orchards across different orchard sizes and three apple varieties (Envy, Jazz, and Royal Gala), and considers the implications of orchard canopy transition and the associated sensitivities. The results identified fruit value and yield as the key drivers for the adoption of harvesting automation. For relatively low-value and/or low-yielding varieties such as Jazz or Royal Gala, robots are less profitable in a single-varietal orchard than in a bi-varietal orchard planted with such varieties. In a multi-varietal orchard, a relatively high-value, high-yield variety such as Envy is crucial to compensate for the costs incurred in harvesting the other varieties with robots or platforms.
The greatest potential benefit of harvesting robots was a reduction in the number of pickers required: on average 54% for Envy and 48% for each of Jazz and Royal Gala across all orchard sizes compared with manual harvesting, and 7% on average for each of Envy, Jazz, and Royal Gala across all orchard sizes compared with a platform harvesting system. This study also identified the break-even price for a robotic harvester in a single-varietal orchard, showed that the break-even prices exceeded the assumed price of the robot, and found them to be highly variable with varietal value and yield: Envy, a relatively high-value and high-yielding variety, returns a break-even price of $2.92 million, compared with $674,895 for Jazz and $689,608 for Royal Gala, relatively lower-value and lower-yielding varieties. Sensitivity analyses showed that harvesting speed and efficiency are both key parameters in the modelled orchard and positively affect the net returns of the investment, so they must be considered by researchers and manufacturers. For developers and potential adopters of robots, however, it is more important that robots operate faster, rather than more efficiently, in order to generate a high return while substituting the largest number of pickers and leaving less unharvested fruit on the trees within the limited harvesting window. Reducing the robot price by 12% or 42% would generate a level of profit equivalent to manual or platform harvesting, respectively. Increases in labour wages and decreases in labour availability and efficiency adversely affected the NPV and profitability outlook of the investment, but NPV was more sensitive to decreases in labour efficiency and availability than to wage increases. This research has important science and policy implications for policy makers, academics, growers, engineers, and manufacturers. 
From an economic perspective, late adopters or growers who may not be financially able to invest in robots, or who are uncertain about their performance, can use a platform harvesting system as a commercially available alternative until robotic harvesting technology improves, becomes more affordable, and reaches the market. Alternatively, such orchardists may benefit from robotic harvesters through a co-operative or contract-harvesting business model, avoiding the capital costs associated with purchasing and operating the robots. Beyond the economic factors, robotic harvesters can also address non-economic concerns such as food safety. This is more apparent in the post-COVID-19 era, in which border closures have made it more difficult for growers to source the workers they require, and consumers have become more cautious about food safety when making purchase decisions, preferring their fresh fruit to be untouched from farm to plate. This may not be a problem for packhouses, as most are automated, but it is an issue for harvesting operations, because pickers must pick apples by hand. Even though robots cannot be the sole option for growers in the foreseeable future, as they are not yet commercially available, in the current situation robotic harvesting may be the most promising solution.
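The NPV and break-even logic described above can be illustrated with a minimal sketch. All figures below (robot price, annual labour savings, horizon, discount rate) are hypothetical placeholders, not values from the study:

```python
# Minimal NPV / break-even sketch for a harvest-technology investment.
# Every number here is a hypothetical placeholder, not a value from the study.

def npv(rate, cash_flows):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def break_even_price(rate, annual_saving, years):
    """Largest up-front robot price that still gives NPV >= 0,
    assuming equal yearly labour savings over the horizon."""
    # Present value of an annuity of `annual_saving` over `years`.
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

# Purchase in year 0, then 10 years of labour savings, 7% discount rate.
flows = [-500_000] + [90_000] * 10
print(round(npv(0.07, flows)))
print(round(break_even_price(0.07, 90_000, 10)))
```

A higher-value, higher-yielding variety raises `annual_saving` and therefore the break-even price, which is the pattern the study reports across varieties.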

    RegBN: Batch Normalization of Multimodal Data with Regularization

    Recent years have witnessed a surge of interest in integrating high-dimensional data captured by multisource sensors, driven by the impressive success of neural networks in the integration of multimodal data. However, the integration of heterogeneous multimodal data poses a significant challenge, as confounding effects and dependencies among such heterogeneous data sources introduce unwanted variability and bias, leading to suboptimal performance of multimodal models. Therefore, it becomes crucial to normalize the low- or high-level features extracted from data modalities before their fusion takes place. This paper introduces a novel approach for the normalization of multimodal data, called RegBN, that incorporates regularization. RegBN uses the Frobenius norm as a regularizer term to address the side effects of confounders and underlying dependencies among different data sources. The proposed method generalizes well across multiple modalities and eliminates the need for learnable parameters, simplifying training and inference. We validate the effectiveness of RegBN on eight databases from five research areas, encompassing diverse modalities such as language, audio, image, video, depth, tabular, and 3D MRI. The proposed method demonstrates broad applicability across different architectures such as multilayer perceptrons, convolutional neural networks, and vision transformers, enabling effective normalization of both low- and high-level features in multimodal neural networks. RegBN is available at https://github.com/mogvision/regbn
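The stated idea, removing confounding dependencies between modality features with a Frobenius-norm regularizer, can be read as a regularized linear decorrelation step. The sketch below is only that illustrative reading, not RegBN's actual algorithm; the `decorrelate` function and its closed-form ridge solve are assumptions for demonstration:

```python
import numpy as np

# Illustrative sketch, not the paper's method: remove the component of one
# modality's features that is linearly predictable from another, using a
# Frobenius-norm (ridge-style) regularizer on the projection weights.

def decorrelate(f_a, f_b, lam=1e-2):
    """Return f_a with its f_b-predictable part removed.

    Solves  min_W ||f_a - f_b @ W||_F^2 + lam * ||W||_F^2  in closed form,
    then subtracts the prediction f_b @ W from f_a.
    """
    d = f_b.shape[1]
    w = np.linalg.solve(f_b.T @ f_b + lam * np.eye(d), f_b.T @ f_a)
    return f_a - f_b @ w

rng = np.random.default_rng(0)
f_b = rng.normal(size=(256, 16))                        # modality B features
f_a = f_b @ rng.normal(size=(16, 8)) + 0.1 * rng.normal(size=(256, 8))
residual = decorrelate(f_a, f_b)
# The residual is nearly uncorrelated with f_b, unlike the raw f_a.
print(np.abs(f_b.T @ residual).max(), np.abs(f_b.T @ f_a).max())
```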

    FFD: Fast Feature Detector

    Scale-invariance, good localization and robustness to noise and distortions are the main properties that a local feature detector should possess. Most existing local feature detectors find excessive unstable feature points that increase the number of keypoints to be matched and the computational time of the matching step. In this paper, we show that robust and accurate keypoints exist in the specific scale-space domain. To this end, we first formulate the superimposition problem into a mathematical model and then derive a closed-form solution for multiscale analysis. The model is formulated via difference-of-Gaussian (DoG) kernels in the continuous scale-space domain, and it is proved that setting the scale-space pyramid's blurring ratio and smoothness to 2 and 0.627, respectively, facilitates the detection of reliable keypoints. For the applicability of the proposed model to discrete images, we discretize it using the undecimated wavelet transform and the cubic spline function. Theoretically, the complexity of our method is less than 5% of that of the popular baseline Scale Invariant Feature Transform (SIFT). Extensive experimental results show the superiority of the proposed feature detector over the existing representative hand-crafted and learning-based techniques in accuracy and computational time. The code and supplementary materials can be found at https://github.com/mogvision/FFD
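The generic DoG mechanism underlying the abstract can be sketched in a few lines, using the blurring ratio 2 and smoothness 0.627 quoted above. Note that FFD itself works with undecimated wavelets and cubic splines; this is only the plain DoG-extrema idea, not the paper's implementation:

```python
import numpy as np

# Generic DoG keypoint sketch using the abstract's blurring ratio (2) and
# base smoothness (0.627). Illustrative only; FFD's actual detector uses an
# undecimated wavelet transform and cubic splines, not this direct filtering.

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with reflected borders."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def dog_keypoints(img, sigma=0.627, ratio=2.0, thresh=0.05):
    """Keypoints = strong local extrema of D = G(ratio*sigma) - G(sigma)."""
    d = gaussian_blur(img, ratio * sigma) - gaussian_blur(img, sigma)
    pts = []
    for i in range(1, d.shape[0] - 1):
        for j in range(1, d.shape[1] - 1):
            patch = d[i - 1:i + 2, j - 1:j + 2]
            if abs(d[i, j]) > thresh and d[i, j] in (patch.max(), patch.min()):
                pts.append((i, j))
    return pts

img = np.zeros((32, 32))
img[16, 16] = 1.0                 # a single bright blob
print(dog_keypoints(img))         # the blob centre is the only strong extremum
```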

    Adversarial Distortion Learning for Medical Image Denoising

    We present a novel adversarial distortion learning (ADL) method for denoising two- and three-dimensional (2D/3D) biomedical image data. The proposed ADL consists of two auto-encoders: a denoiser and a discriminator. The denoiser removes noise from input data and the discriminator compares the denoised result to its noise-free counterpart. This process is repeated until the discriminator cannot differentiate the denoised data from the reference. Both the denoiser and the discriminator are built upon a proposed auto-encoder called Efficient-Unet. Efficient-Unet has a light architecture that uses residual blocks and a novel pyramidal approach in the backbone to efficiently extract and re-use feature maps. During training, the textural information and contrast are controlled by two novel loss functions. The architecture of Efficient-Unet allows the proposed method to generalize to any sort of biomedical data. The 2D version of our network was trained on ImageNet and tested on biomedical datasets whose distribution is completely different from ImageNet, so there is no need for re-training. Experimental results on magnetic resonance imaging (MRI), dermatoscopy, electron microscopy and X-ray datasets show that the proposed method achieved the best results on each benchmark. Our implementation and pre-trained models are available at https://github.com/mogvision/ADL

    No-Clean-Reference Image Super-Resolution: Application to Electron Microscopy

    The inability to acquire clean high-resolution (HR) electron microscopy (EM) images over a large brain tissue volume hampers many neuroscience studies. To address this challenge, we propose a deep-learning-based image super-resolution (SR) approach to computationally reconstruct clean HR 3D-EM with a large field of view (FoV) from noisy low-resolution (LR) acquisition. Our contributions are I) Investigating training with no-clean references for ℓ2 and ℓ1 loss functions; II) Introducing a novel network architecture, named EMSR, for enhancing the resolution of LR EM images while reducing inherent noise; and III) Comparing different training strategies, including acquired LR and HR image pairs, i.e., real pairs with no-clean references contaminated with real corruptions, pairs of synthetic LR and acquired HR, and acquired LR and denoised HR pairs. Experiments with nine brain datasets showed that training with real pairs can produce high-quality super-resolved results, demonstrating the feasibility of training with non-clean references for both loss functions. Additionally, comparable results were observed, both visually and numerically, when employing denoised and noisy references for training. Moreover, the network trained with synthetically generated LR images from HR counterparts proved effective in yielding satisfactory SR results, in certain cases even outperforming training with real pairs. The proposed SR network was compared quantitatively and qualitatively with several established SR techniques, showcasing either the superiority or competitiveness of the proposed method in mitigating noise while recovering fine details.
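Why training against noisy references can still work has a simple one-dimensional intuition, in the spirit of the Noise2Noise argument: with zero-mean noise on the targets, the ℓ2-optimal estimate converges to the mean of the references (i.e. toward the clean value) and the ℓ1-optimal estimate to their median. The toy sketch below shows only this intuition; it is not the EMSR model:

```python
import numpy as np

# Toy 1-D illustration of training with no-clean references: the l2 optimum
# over many noisy targets is their mean, the l1 optimum is their median, and
# with zero-mean noise both converge toward the clean value. This mirrors the
# feasibility result reported above; it is not the paper's network.

rng = np.random.default_rng(1)
clean = 0.7                                        # unknown clean pixel value
noisy_refs = clean + rng.normal(0, 0.3, 10_000)    # many noisy references

l2_estimate = noisy_refs.mean()        # argmin of sum (x - ref)^2
l1_estimate = np.median(noisy_refs)    # argmin of sum |x - ref|
print(l2_estimate, l1_estimate)        # both close to the clean value 0.7
```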

    Self-Supervised Super-Resolution Approach for Isotropic Reconstruction of 3D Electron Microscopy Images from Anisotropic Acquisition

    Three-dimensional electron microscopy (3DEM) is an essential technique to investigate volumetric tissue ultrastructure. Due to technical limitations and high imaging costs, samples are often imaged anisotropically, where resolution in the axial direction (z) is lower than in the lateral directions (x, y). This anisotropy in 3DEM can hamper subsequent analysis and visualization tasks. To overcome this limitation, we propose a novel deep-learning (DL)-based self-supervised super-resolution approach that computationally reconstructs isotropic 3DEM from the anisotropic acquisition. The proposed DL-based framework is built upon a U-shape architecture incorporating vision-transformer (ViT) blocks, enabling high-capability learning of local and global multi-scale image dependencies. To train the tailored network, we employ a self-supervised approach. Specifically, we generate pairs of anisotropic and isotropic training datasets from the given anisotropic 3DEM data. By feeding the given anisotropic 3DEM dataset into the trained network through our proposed framework, the isotropic 3DEM is obtained. Importantly, this isotropic reconstruction approach relies solely on the given anisotropic 3DEM dataset and does not require pairs of co-registered anisotropic and isotropic 3DEM training datasets. To evaluate the effectiveness of the proposed method, we conducted experiments using three 3DEM datasets acquired from brain tissue. The experimental results demonstrated that our proposed framework could successfully reconstruct isotropic 3DEM from the anisotropic acquisition.
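The self-supervised pair-generation step described above can be sketched as follows: laterally-fine x-y slices of the anisotropic stack are subsampled along one axis to mimic the axial degradation, yielding (low-res, high-res) pairs without any isotropic ground truth. The `z_factor` and the simple decimation operator here are illustrative assumptions, not the paper's exact degradation model:

```python
import numpy as np

# Sketch of self-supervised pair generation from an anisotropic volume
# (coarse z, fine x/y). Decimation as the degradation model and the z_factor
# value are illustrative assumptions, not the paper's exact operator.

def make_training_pairs(volume, z_factor):
    """volume: (z, y, x) anisotropic stack; returns (lr, hr) slice pairs."""
    pairs = []
    for z in range(volume.shape[0]):
        hr = volume[z]           # an x-y slice at full lateral resolution
        lr = hr[::z_factor, :]   # decimate one axis to mimic coarse z sampling
        pairs.append((lr, hr))
    return pairs

vol = np.random.default_rng(2).normal(size=(8, 64, 64))  # synthetic 3DEM stack
pairs = make_training_pairs(vol, z_factor=4)
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)
```

A network trained to map `lr` back to `hr` can then be applied along the coarse axis of the original stack to produce the isotropic reconstruction.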

    Orderly Disorder in Point Cloud Domain

    In the real world, out-of-distribution samples, noise and distortions exist in test data. Existing deep networks developed for point cloud data analysis are prone to overfitting, and a partial change in test data leads to unpredictable behaviour of the networks. In this paper, we propose a smart yet simple deep network for the analysis of 3D models using `orderly disorder' theory. Orderly disorder is a way of describing the complex structure of disorders within complex systems. Our method extracts the deep patterns inside a 3D object by creating a dynamic link that seeks the most stable patterns while discarding unstable ones. Patterns are more robust to changes in data distribution, especially those that appear in the top layers. Features are extracted via an innovative cloning decomposition technique and then linked to each other to form stable complex patterns. Our model alleviates the vanishing-gradient problem, strengthens dynamic link propagation and substantially reduces the number of parameters. Extensive experiments on challenging benchmark datasets verify the superiority of our light network on segmentation and classification tasks, especially in the presence of noise, where our network's performance drops by less than 10% while the state-of-the-art networks fail to work.

    Remote sensing image fusion via compressive sensing

    In this paper, we propose a compressive sensing-based method to pan-sharpen low-resolution multispectral (LRM) data with the help of high-resolution panchromatic (HRP) data. In order to successfully apply compressive sensing theory to pan-sharpening, two requirements should be satisfied: (i) forming a comprehensive dictionary in which the estimated coefficient vectors are sparse; and (ii) ensuring there is no correlation between the constructed dictionary and the measurement matrix. To fulfil these requirements, we propose two novel strategies. The first is to construct a dictionary trained with patches across different image scales. Patches at different scales, or equivalently multiscale patches, provide texture atoms without requiring any external database or prior atoms. The redundancy of the dictionary is removed through K-singular value decomposition (K-SVD). Second, we design an iterative l1-l2 minimization algorithm based on the alternating direction method of multipliers (ADMM) to seek the sparse coefficient vectors. The proposed algorithm stacks the missing high-resolution multispectral (HRM) data with the captured LRM data, so that the latter is used as a constraint for the estimation of the former during the process of seeking the representation coefficients. Three datasets are used to test the performance of the proposed method. A comparative study between the proposed method and several state-of-the-art ones shows its effectiveness in dealing with complex structures of remote sensing imagery.
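The iterative l1-l2 minimization via ADMM mentioned above can be sketched with the standard lasso-ADMM iteration: an l2 least-squares step, an l1 soft-thresholding step, and a dual update. This is the generic sparse-coding building block only, under an assumed random dictionary; it omits the paper's multiscale K-SVD dictionary and the HRM/LRM stacking constraint:

```python
import numpy as np

# Generic lasso-ADMM sketch for recovering a sparse coefficient vector a
# minimizing ||D a - y||_2^2 + lam * ||a||_1 over a dictionary D. Illustrative
# only; the paper's pipeline adds a multiscale K-SVD dictionary and stacks
# HRM/LRM data, which are omitted here.

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_admm(D, y, lam=0.1, rho=1.0, iters=200):
    n = D.shape[1]
    a = z = u = np.zeros(n)
    M = D.T @ D + rho * np.eye(n)      # cached for the quadratic subproblem
    Dty = D.T @ y
    for _ in range(iters):
        a = np.linalg.solve(M, Dty + rho * (z - u))   # l2 step
        z = soft_threshold(a + u, lam / rho)          # l1 step (shrinkage)
        u = u + a - z                                 # dual update
    return z

rng = np.random.default_rng(3)
D = rng.normal(size=(60, 100))                 # random measurement dictionary
a_true = np.zeros(100)
a_true[[5, 42, 77]] = [1.5, -2.0, 1.0]         # 3-sparse ground truth
y = D @ a_true + 0.01 * rng.normal(size=60)
a_hat = lasso_admm(D, y, lam=0.1)
print(np.flatnonzero(np.abs(a_hat) > 0.5))     # support of the recovery
```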