
    Utilizing Real-Time Image Processing for Monitoring Bacterial Cellulose Formation During Fermentation

    In general, nata is a bacterial cellulose produced by fermentation with Gluconacetobacter xylinus. During fermentation, the bacterial cellulose accumulates on the surface of the medium until it becomes visible. The end of the fermentation process is indicated by the formation of bacterial cellulose sheets of a certain thickness. To date, the success of the fermentation has been judged by direct observation of the thickness formed, so a failed fermentation cannot be detected early. Real-time monitoring during the fermentation period would be very helpful for tracking the speed of bacterial cellulose formation and for early detection of fermentation failure. Image processing is now widely used for many purposes, and this study describes how to use it to monitor bacterial cellulose formation during fermentation. To this end, the fermentor is modified with a viewing area from which each increase in bacterial cellulose thickness can be photographed, and it is painted a dark color to heighten the contrast with the bacterial cellulose. The device used is a Raspberry Pi connected to a web camera. Once an image has been captured, it is processed to calculate the thickness, and the thickness data are then sent to a database.
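    The thickness-measurement step could be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual code: it assumes a grayscale frame in which the dark-painted fermentor interior contrasts with the bright cellulose band, and counts the image rows whose mean brightness exceeds a threshold.

```python
def layer_thickness_px(frame, threshold=128):
    """Return the thickness (in pixel rows) of the bright cellulose band.

    frame: 2D list of grayscale values (0-255), rows ordered top to bottom.
    A row counts toward the layer if its mean brightness exceeds threshold.
    """
    thickness = 0
    for row in frame:
        if sum(row) / len(row) > threshold:
            thickness += 1
    return thickness

# Toy frame: a 3-row bright band (the cellulose) on a dark background.
frame = [[20] * 8] * 4 + [[200] * 8] * 3 + [[15] * 8] * 5
print(layer_thickness_px(frame))  # 3
```

    A per-setup calibration factor (millimetres per pixel row) would convert this count into a physical thickness before it is written to the database.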

    Biologically inspired robotic perception-action for soft fruit harvesting in vertical growing environments

    Multiple interlinked factors like demographics, migration patterns, and economics are presently leading to a critical shortage of labour available for low-skilled, physically demanding tasks like soft fruit harvesting. This paper presents a biomimetic robotic solution covering the full ‘Perception-Action’ loop, targeting harvesting of strawberries in a state-of-the-art vertical growing environment. The novelty emerges both from dealing with crop/environment variance and from configuring the robot action system to handle a range of runtime task constraints. Unlike the commonly used deep neural networks, the proposed perception system uses conditional Generative Adversarial Networks to identify ripe fruit using synthetic data. The network can be trained effectively on synthetic data using the image-to-image translation concept, thereby avoiding the tedious work of collecting and labelling a real dataset. Once the harvest-ready fruit is localised using point cloud data generated by a stereo camera, the platform's action system coordinates the arm to reach and cut the stem using the Passive Motion Paradigm framework, inspired by studies on neural control of movement in the brain. Results from field trials for strawberry detection, reaching/cutting the stem of the fruit, and extensions to analysing complex canopy structures and bimanual coordination (searching/picking) are presented. While this article focuses on strawberry harvesting, ongoing research towards adapting the architecture to other crops such as tomatoes and sweet peppers is briefly described.
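    The localisation step described above can be sketched in miniature. This is an illustrative assumption, not the paper's implementation: once the perception network flags the ripe-fruit pixels, their stereo-depth points are averaged into a single 3D target that the arm's motion framework can reach toward.

```python
def fruit_target(points):
    """Return the centroid (x, y, z) of a fruit's point-cloud points.

    points: list of (x, y, z) tuples in metres, camera frame; assumed to be
    the depth points belonging to one detected ripe fruit.
    """
    n = len(points)
    xs, ys, zs = zip(*points)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

# Toy cloud: three depth points on one strawberry, ~0.5 m from the camera.
cloud = [(0.10, 0.20, 0.50), (0.12, 0.22, 0.52), (0.11, 0.21, 0.51)]
print(fruit_target(cloud))
```

    In practice the stem, not the fruit centroid, is the cutting target, so a real pipeline would offset this point along the detected stem direction before passing it to the motion controller.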

    Implementation of a computer-vision detection system in the harvesting stage of a strawberry crop

    Objective: to implement a computer-vision detection and categorization system in the harvesting stage of a strawberry crop in the northern region of Ecuador. This work implements such a system at a local farm in northern Ecuador, specifically in a community of the Andrade Marín parish. Photos of strawberries in the field were compiled into a dataset, on which neural networks were trained using the "MobileNetV2" model; image processing was then applied to determine the centroid of each fruit through graphical programming. The computer-vision system starts by detecting each strawberry in the image, obtaining its centroid, and sending the coordinates of every strawberry whose ripeness exceeds 60%. The first coordinate is defined as the one closest to the left edge of the image and is sent over serial communication to an embedded device that requires this information; the system then waits for the device to report that the strawberry has been harvested, and this procedure repeats until no strawberries remain in the image frame. The resulting algorithm achieved a minimum loss of 1.7705599 and a detection time of 0.123 seconds, with an accuracy of 82.91%, precision of 80%, sensitivity of 87%, and specificity of 79%, which qualifies it as a reliable computer-vision algorithm that will contribute to real-time strawberry detection.
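    The coordinate-dispatch logic described above (filter by the 60% ripeness cutoff, then pick the detection closest to the left edge of the image) could be sketched as follows. Names and the tuple layout are illustrative assumptions, not the thesis's actual code.

```python
def next_strawberry(detections, min_ripeness=0.60):
    """Pick the next harvest target from a frame's detections.

    detections: list of (x_px, y_px, ripeness) tuples, where x_px is the
    centroid's horizontal pixel coordinate and ripeness is in [0, 1].
    Returns the leftmost detection above the ripeness cutoff, or None.
    """
    ripe = [d for d in detections if d[2] > min_ripeness]
    if not ripe:
        return None
    return min(ripe, key=lambda d: d[0])  # smallest x = closest to left edge

# Toy frame: three detections; only two pass the 60% cutoff.
dets = [(320, 100, 0.85), (50, 120, 0.40), (120, 90, 0.72)]
print(next_strawberry(dets))  # (120, 90, 0.72)
```

    In the described system, the chosen coordinate would then be written to the serial port, and the loop would block until the embedded device acknowledges the harvest before re-running detection on a fresh frame.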