
    Flexible system of multiple RGB-D sensors for measuring and classifying fruits in agri-food Industry

    The productivity of the agri-food sector faces continuous and growing challenges, making the use of innovative technologies a priority for maintaining and even improving competitiveness. In this context, this paper presents the foundations and validation of a flexible, portable system capable of obtaining 3D measurements and classifying objects based on color and depth images taken from multiple Kinect v1 sensors. The system is applied to the selection and classification of fruits, a common activity in the agri-food industry. Because it integrates depth information from multiple sensors, it obtains complete and accurate information about the environment, and it is capable of self-location and self-calibration of the sensors before detecting, classifying and measuring fruits in real time. Unlike other systems that use a specific set-up or need a prior calibration, it does not require a predetermined positioning of the sensors, so it can be adapted to different scenarios. The characterization process considers classification of fruits, estimation of their volume and the count of items of each kind of fruit. The system requires that each sensor partially share its field of view with at least one other sensor. The sensors localize themselves by estimating the rotation and translation matrices that transform the coordinate system of one sensor into that of another. To achieve this, the Iterative Closest Point (ICP) algorithm is used and subsequently validated with a six-degree-of-freedom KUKA robotic arm. A method based on the Kalman filter is also implemented to estimate the movement of objects. A relevant contribution of this work is the detailed analysis and propagation of the errors that affect both the proposed methods and the hardware.
    To determine the performance of the proposed system, the passage of different types of fruits on a conveyor belt is emulated by a mobile robot carrying a surface on which the fruits were placed. Both the perimeter and volume are measured, and the fruits are classified according to type. The system was able to distinguish and classify 95% of the fruits and to estimate their volume with 85% accuracy in the worst cases (fruits with asymmetrical shapes) and 94% accuracy in the best cases (fruits with more symmetrical shapes), showing that the proposed approach can become a useful tool in the agri-food industry. This project has been supported by the National Commission for Science and Technology Research of Chile (Conicyt) under FONDECYT grant 1140575 and the Advanced Center of Electrical and Electronic Engineering - AC3E (CONICYT/FB0008).
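
    The sensor self-localization step described in the abstract relies on ICP, whose least-squares core recovers a rigid transform between two corresponding point sets. A minimal sketch of that core (the Kabsch algorithm via SVD), assuming NumPy; in a full ICP the correspondences would be re-estimated by nearest-neighbour search on each iteration, which is omitted here:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Return rotation R and translation t minimizing ||R p_i + t - q_i||
    for corresponding point sets P, Q of shape (N, 3).
    This is the closed-form least-squares step inside each ICP iteration."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Example: recover a known 30-degree rotation about z plus a translation
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
P = np.random.rand(100, 3)
Q = P @ R_true.T + t_true                        # noiseless transformed copy
R, t = best_rigid_transform(P, Q)
```

In the noiseless case the transform is recovered to machine precision; with real Kinect point clouds the residual after convergence is what the paper's error-propagation analysis characterizes.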

    Fuji-SfM dataset: A collection of annotated images and point clouds for Fuji apple detection and location using structure-from-motion photogrammetry

    The present dataset contains colour images acquired in a commercial Fuji apple orchard (Malus domestica Borkh. cv. Fuji) to reconstruct the 3D model of 11 trees by using structure-from-motion (SfM) photogrammetry. The data provided in this article are related to the research article entitled “Fruit detection and 3D location using instance segmentation neural networks and structure-from-motion photogrammetry” [1]. The Fuji-SfM dataset includes: (1) a set of 288 colour images and the corresponding annotations (apple segmentation masks) for training instance segmentation neural networks such as Mask R-CNN; (2) a set of 582 images defining a motion sequence of the scene, used to generate the 3D model of 11 Fuji apple trees containing 1455 apples by SfM; (3) the 3D point cloud of the scanned scene with the corresponding apple-position ground truth in global coordinates. This is therefore the first fruit-detection dataset containing images acquired in a motion sequence to build the 3D model of the scanned trees with SfM, together with the corresponding 2D and 3D apple location annotations. These data allow the development, training, and testing of fruit detection algorithms based on RGB images, on coloured point clouds, or on a combination of both types of data. Primary data associated with the article http://hdl.handle.net/10459.1/68505. This work was partly funded by the Secretaria d'Universitats i Recerca del Departament d'Empresa i Coneixement de la Generalitat de Catalunya (grant 2017 SGR 646), the Spanish Ministry of Economy and Competitiveness (project AGL2013-48297-C2-2-R) and the Spanish Ministry of Science, Innovation and Universities (project RTI2018-094222-B-I00). Part of the work was also developed within the framework of the project TEC2016-75976-R, financed by the Spanish Ministry of Economy, Industry and Competitiveness and the European Regional Development Fund (ERDF). The Spanish Ministry of Education is thanked for Mr. J. Gené’s pre-doctoral fellowship (FPU15/03355).

    Special issue on 'Terrestrial laser scanning': editors' notes

    In this editorial, we provide an overview of the content of the special issue on 'Terrestrial Laser Scanning'. The aim of this Special Issue is to bring together innovative developments and applications of terrestrial laser scanning (TLS), understood in a broad sense. Thus, although most contributions mainly involve the use of laser-based systems, other technologies that also yield 3D point clouds for the measurement and 3D characterization of terrestrial targets, such as photogrammetry, are also considered. The 15 published contributions focus mainly on three topics: TLS performance and point cloud processing, applications in civil engineering, and applications in plant characterization.

    KFuji RGB-DS database: Fuji apple multi-modal images for fruit detection with color, depth and range-corrected IR data

    This article contains data related to the research article entitled 'Multi-modal Deep Learning for Fruit Detection Using RGB-D Cameras and their Radiometric Capabilities' [1]. The development of reliable fruit detection and localization systems is essential for the future sustainable agronomic management of high-value crops. RGB-D sensors have shown potential for fruit detection and localization since they provide 3D information along with color data. However, the lack of substantial datasets is a barrier to exploiting these sensors. This article presents the KFuji RGB-DS database, which is composed of 967 multi-modal images of Fuji apples on trees captured using a Microsoft Kinect v2 (Microsoft, Redmond, WA, USA). Each image contains information from three different modalities: color (RGB), depth (D) and range-corrected IR intensity (S). Ground-truth fruit locations were manually annotated, labeling a total of 12,839 apples across the dataset. The dataset is publicly available at http://www.grap.udl.cat/publicacions/datasets.html. This work was partly funded by the Secretaria d’Universitats i Recerca del Departament d’Empresa i Coneixement de la Generalitat de Catalunya, the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (ERDF) under Grants 2017 SGR 646, AGL2013-48297-C2-2-R and MALEGRA, TEC2016-75976-R. The Spanish Ministry of Education is thanked for Mr. J. Gené’s pre-doctoral fellowship (FPU15/03355). We would also like to thank Nufri and Vicens Maquinària Agrícola S.A. for their support during data acquisition.

    Multi-modal deep learning for Fuji apple detection using RGB-D cameras and their radiometric capabilities

    Fruit detection and localization will be essential for future agronomic management of fruit crops, with applications in yield prediction, yield mapping and automated harvesting. RGB-D cameras are promising sensors for fruit detection because they provide geometrical information along with color data. Some of these sensors work on the time-of-flight (ToF) principle and, besides color and depth, provide the backscatter signal intensity. However, this radiometric capability has not been exploited for fruit detection applications. This work presents the KFuji RGB-DS database, composed of 967 multi-modal images containing a total of 12,839 Fuji apples. Compiling the database allowed a study of the usefulness of fusing RGB-D and radiometric information obtained with the Kinect v2 for fruit detection. To do so, the signal intensity was range corrected to overcome signal attenuation, yielding an image proportional to the reflectance of the scene. A registration between RGB, depth and intensity images was then carried out. The Faster R-CNN model was adapted to five-channel input images: color (RGB), depth (D) and range-corrected intensity signal (S). Results show an improvement of 4.46% in F1-score when adding the depth and range-corrected intensity channels, reaching an F1-score of 0.898 and an AP of 94.8% when all channels are used. From our experimental results, it can be concluded that the radiometric capabilities of ToF sensors provide valuable information for fruit detection. This work was partly funded by the Secretaria d’Universitats i Recerca del Departament d’Empresa i Coneixement de la Generalitat de Catalunya, the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (ERDF) under Grants 2017 SGR 646, AGL2013-48297-C2-2-R and MALEGRA, TEC2016-75976-R. The Spanish Ministry of Education is thanked for Mr. J. Gené’s pre-doctoral fellowship (FPU15/03355). We would also like to thank Nufri and Vicens Maquinària Agrícola S.A. for their support during data acquisition, and Adria Carbó for his assistance with the Faster R-CNN implementation.
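
    The range-correction idea described in the abstract can be sketched under a simple inverse-square attenuation assumption (the paper's exact correction model is not reproduced here; the function name `range_correct` and the reference range `r0` are illustrative):

```python
import numpy as np

def range_correct(intensity, depth, r0=1.0):
    """Compensate inverse-square attenuation of ToF backscatter intensity.
    Assumes intensity ~ reflectance / depth**2, so multiplying by
    (depth / r0)**2 gives a signal proportional to scene reflectance.
    r0 is a reference range that keeps values in a convenient scale."""
    depth = np.asarray(depth, dtype=float)
    return np.asarray(intensity, dtype=float) * (depth / r0) ** 2

# Two equally reflective targets at 1 m and 2 m: the raw intensities
# differ by a factor of 4, but the range-corrected values coincide.
raw = np.array([0.80, 0.20])
depth = np.array([1.0, 2.0])
S = range_correct(raw, depth)   # → [0.8, 0.8]
```

Making the S channel range-independent in this way is what lets it act as a reflectance cue for the detector rather than a proxy for distance.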

    Beam formulation and FE framework for architected structures under finite deformations

    The breakthrough in additive manufacturing (AM) techniques is opening new routes into the conceptualisation of novel architected materials. However, important roadblocks still impede the full implementation of these technologies in application fields such as soft robotics and bioengineering. One of the main bottlenecks is the difficulty of performing topological optimisation of the structures and their functional design. Computational models are essential to this endeavour. Although 3D formulations provide the most reliable tools, they usually carry very high computational costs. Beam models based on 1D formulations may overcome this limitation, but they need to incorporate all the relevant mechanical features of the 3D problem. Here, we propose a mixed formulation for Timoshenko-type beams that consistently accounts for axial, shear and bending contributions under finite deformation theory. The framework is formulated on general bases and is suitable for most types of materials, allowing a straightforward particularisation of the constitutive description. To prove the validity of the model, we provide original experimental data on a 3D printed elastomeric material. We first validate the computational framework using a benchmark problem, comparing the beam formulation predictions with numerical results from an equivalent 3D model. Then, we further validate the framework and illustrate its flexibility by predicting the mechanical response of beam-based structures. To this end, we perform original experiments and numerical simulations on two relevant structures: a rhomboid lattice and a bi-stable beam structure. In both cases, the numerical results show very good quantitative and qualitative agreement with the experiments. Overall, the proposed formulation provides a useful tool to help design new architected materials and metamaterial structures based on beam components. The framework presented may open new opportunities to guide AM techniques by feeding machine-learning optimisation algorithms. The authors acknowledge support from Ministerio de Ciencia e Innovación MCIN/AEI/10.13039/501100011033 under Grant number PID2020-117894GA-I00, and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 947723, project: 4D-BIOMAP). DGG acknowledges support from the Talent Attraction grant (CM 2018 - 2018-T2/IND-9992) from the Comunidad de Madrid. JAR acknowledges support from the Programa de Apoyo a la Realización de Proyectos Interdisciplinares de I+D para Jóvenes Investigadores de la Universidad Carlos III de Madrid and Comunidad de Madrid, Spain (project: OPTIMUM).
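
    The shear contribution that distinguishes Timoshenko-type beams from Euler-Bernoulli theory can be illustrated with the classical small-strain cantilever formula. This is a linear illustration only (the paper's formulation works under finite deformations), and all material and geometric values below are assumed for illustration:

```python
def cantilever_tip_deflection(P, L, E, G, I, A, kappa=5.0 / 6.0):
    """Tip deflection of an end-loaded cantilever in linear Timoshenko
    theory: the Euler-Bernoulli bending term P*L**3/(3*E*I) plus the
    shear term P*L/(kappa*G*A); kappa = 5/6 for rectangular sections."""
    return P * L**3 / (3.0 * E * I), P * L / (kappa * G * A)

# Illustrative (assumed) elastomer-like properties and a 1 cm square section
E, nu = 3.0e6, 0.49
G = E / (2.0 * (1.0 + nu))       # isotropic shear modulus
b = h = 0.01
I, A = b * h**3 / 12.0, b * h

wb, ws = cantilever_tip_deflection(P=0.01, L=0.1, E=E, G=G, I=I, A=A)
# For this beam (L/h = 10) shear adds roughly 1% to the deflection;
# the ratio grows quadratically as the beam gets shorter relative to
# its depth, which is why 1D models of architected lattices must keep
# the shear term.
```

The shear-to-bending ratio here is E*h**2/(4*kappa*G*L**2), so for the stubby struts typical of architected lattices the shear term is far from negligible.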