
    Improving Robot Perception Skills Using a Fast Image-Labelling Method with Minimal Human Intervention

    Featured Application: a natural interface to enhance human-robot interactions. The aim is to improve robot perception skills. Robot perception skills contribute to natural interfaces that enhance human-robot interactions, and they can be notably improved by using convolutional neural networks. To train a convolutional neural network, the labelling process is the crucial first stage, in which image objects are marked with rectangles or masks. There are many image-labelling tools, but all require human interaction to achieve good results. Manual image labelling with rectangles or masks is labour-intensive and unappealing work that can take months to complete, making the labelling task tedious and lengthy. This paper proposes a fast method to create labelled images with minimal human intervention, which is tested with a robot perception task. Images of objects taken against specific backgrounds are quickly and accurately labelled with rectangles or masks. In a second step, detected objects can be synthesized with different backgrounds to improve the training capabilities of the image set. Experimental results show the effectiveness of this method with an example of human-robot interaction using hand fingers. The labelling method generates a database to train convolutional networks to detect hand fingers easily with minimal labelling work. It can be applied to new image sets or used to add new samples to existing labelled image sets of any application. The proposed method noticeably improves the labelling process and reduces the time required to start training a convolutional neural network model. The Universitat Politecnica de Valencia financed the open access fees of this paper through project number 20200676 (Microinspeccion de superficies). Ricolfe Viala, C.; Blanes Campos, C. (2022). Improving Robot Perception Skills Using a Fast Image-Labelling Method with Minimal Human Intervention. Applied Sciences. 12(3):1-14. https://doi.org/10.3390/app12031557
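
    As a rough illustration of the kind of background-based auto-labelling the abstract describes, the sketch below (Python with OpenCV and NumPy) segments an object photographed against a near-uniform background, derives a rectangle label from the mask, and composites the object onto a different background to create a synthetic training sample. The file names, threshold value, and helper functions are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' exact pipeline): auto-label an object
# photographed against a plain background, then composite it onto a new scene.
import cv2
import numpy as np

def auto_label(image_bgr, bg_threshold=40):
    """Return a binary mask and bounding rectangle for the foreground object,
    assuming the object is photographed against a near-uniform background."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Estimate the background intensity from the image border pixels.
    border = np.concatenate([gray[0, :], gray[-1, :], gray[:, 0], gray[:, -1]])
    bg_level = int(np.median(border))
    # Pixels that differ strongly from the background level are foreground.
    mask = (np.abs(gray.astype(int) - bg_level) > bg_threshold).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    x, y, w, h = cv2.boundingRect(mask)          # rectangle label
    return mask, (x, y, w, h)

def composite(object_bgr, mask, background_bgr):
    """Paste the segmented object onto a different background image."""
    bg = cv2.resize(background_bgr, (object_bgr.shape[1], object_bgr.shape[0]))
    mask3 = cv2.merge([mask, mask, mask]) // 255   # 0/1 per channel
    return object_bgr * mask3 + bg * (1 - mask3)

if __name__ == "__main__":
    img = cv2.imread("hand.png")                  # hypothetical file names
    new_bg = cv2.imread("office.png")
    mask, rect = auto_label(img)
    synthetic = composite(img, mask, new_bg)
    cv2.imwrite("synthetic_sample.png", synthetic)
    print("bounding box:", rect)
```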

    Imaging White Blood Cells using a Snapshot Hyper-Spectral Imaging System

    Automated white blood cell (WBC) counting systems process an extracted whole blood sample and provide a cell count, a step that is not ideal for onsite screening of individuals in triage or at a security gate. Snapshot hyper-spectral imaging systems are capable of capturing several spectral bands simultaneously, offering co-registered images of a target. With appropriate optics, these systems are potentially able to image blood cells in vivo as they flow through a vessel, eliminating the need for a blood draw and sample staining. Our group has evaluated the capability of a commercial snapshot hyper-spectral imaging system, specifically the Arrow system from Rebellion Photonics, in differentiating between white and red blood cells on unstained and sealed blood smear slides. We evaluated the imaging capabilities of this hyperspectral camera as a platform on which to build an automated blood cell counting system. Hyperspectral data consisting of 25 bands of 443x313 pixels with ~3 nm spacing were captured over the range of 419 to 494 nm. Open-source hyperspectral datacube analysis tools, used primarily in Geographic Information Systems (GIS) applications, indicate that white blood cells' features are most prominent in the 428-442 nm band for blood samples viewed under 20x and 50x magnification over a varying range of illumination intensities. The system has been shown to successfully segment blood cells based on their spectral-spatial information. These images could potentially be used in subsequent automated white blood cell segmentation and counting algorithms for performing in vivo white blood cell counting.
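
    A minimal sketch of the band-selection idea reported in the abstract is shown below: it averages the datacube's bands in the 428-442 nm range, where WBC features are reported to be most prominent, and thresholds the result. The cube layout, threshold rule, and function name are assumptions for illustration, not the Arrow system's actual processing chain.

```python
# Illustrative sketch: select the ~428-442 nm bands from a hyperspectral
# datacube and segment bright cell regions by thresholding.
import numpy as np

def segment_wbc(cube, wavelengths, lo=428.0, hi=442.0):
    """cube: (bands, height, width) float array; wavelengths: (bands,) in nm."""
    band_idx = np.where((wavelengths >= lo) & (wavelengths <= hi))[0]
    # Average the selected bands to build a single contrast image.
    contrast = cube[band_idx].mean(axis=0)
    # Simple global threshold (Otsu or adaptive methods could replace this).
    thresh = contrast.mean() + 2.0 * contrast.std()
    return contrast > thresh                      # boolean WBC candidate mask

if __name__ == "__main__":
    # 25 bands of 443x313 pixels, ~3 nm apart over 419-494 nm, as in the abstract.
    wavelengths = np.linspace(419, 494, 25)
    cube = np.random.rand(25, 313, 443)           # placeholder for real data
    mask = segment_wbc(cube, wavelengths)
    print("candidate WBC pixels:", int(mask.sum()))
```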

    An Approach for the Customized High-Dimensional Segmentation of Remote Sensing Hyperspectral Images

    This paper addresses three problems in the field of hyperspectral image segmentation: the fact that the way an image must be segmented depends on what the user requires and on the application; the lack and cost of appropriately labeled reference images; and, finally, the information loss that arises in many algorithms when high-dimensional images are projected onto lower-dimensional spaces before the segmentation process starts. To address these issues, the Multi-Gradient based Cellular Automaton (MGCA) structure is proposed to segment multidimensional images without projecting them onto lower-dimensional spaces. The MGCA structure is coupled with an evolutionary algorithm (ECAS-II) in order to produce the transition rule sets required by MGCA segmenters. These sets are customized to specific segmentation needs as a function of a set of low-dimensional training images in which the user expresses their segmentation requirements. Constructing high-dimensional image segmenters from low-dimensional training sets alleviates the problem of the lack of labeled training images: these can be generated online based on a parametrization of the desired segmentation extracted from a set of examples. The strategy has been tested in experiments carried out using synthetic and real hyperspectral images, and it has been compared to state-of-the-art segmentation approaches over benchmark images in the area of remote sensing hyperspectral imaging. Ministerio de Economía y Competitividad: TIN2015-63646-C5-1-R; Ministerio de Economía y Competitividad: RTI2018-101114-B-I00; Xunta de Galicia: ED431C 2017/1.
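
    The paper's MGCA/ECAS-II machinery is not reproduced here, but the toy sketch below illustrates the general cellular-automaton idea of segmenting a hyperspectral cube without dimensionality reduction: each pixel iteratively adopts the label of its most spectrally similar neighbour when the spectral gradient between them is below a threshold. The function names, the 4-neighbourhood, and the threshold are assumptions made purely for illustration.

```python
# Illustrative toy sketch, not the paper's MGCA/ECAS-II: a cellular-automaton
# step in which each pixel joins the segment of its most spectrally similar
# neighbour when the spectral gradient is below an (assumed) threshold.
import numpy as np

def ca_step(labels, cube, threshold=0.5):
    """labels: (H, W) int array of current segment ids; cube: (H, W, B) spectra."""
    h, w, _ = cube.shape
    new_labels = labels.copy()
    for y in range(h):
        for x in range(w):
            best_d, best_lab = np.inf, labels[y, x]
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d = np.linalg.norm(cube[y, x] - cube[ny, nx])  # spectral gradient
                    if d < best_d:
                        best_d, best_lab = d, labels[ny, nx]
            if best_d < threshold:                # merge only across weak gradients
                new_labels[y, x] = best_lab
    return new_labels

if __name__ == "__main__":
    cube = np.random.rand(32, 32, 100)            # toy 100-band image
    labels = np.arange(32 * 32).reshape(32, 32)   # start with one segment per pixel
    for _ in range(5):                            # iterate the automaton
        labels = ca_step(labels, cube)
    print("segments remaining:", len(np.unique(labels)))
```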

    Deep learning for remote sensing image classification: A survey

    Remote sensing (RS) image classification plays an important role in earth observation technology using RS data and has been widely exploited in both military and civil fields. However, due to characteristics of RS data such as high dimensionality and the relatively small number of labeled samples available, RS image classification faces great scientific and practical challenges. In recent years, as new deep learning (DL) techniques have emerged, approaches to RS image classification with DL have achieved significant breakthroughs, offering novel opportunities for the research and development of RS image classification. In this paper, a brief overview of typical DL models is presented first. This is followed by a systematic review of pixel-wise and scene-wise RS image classification approaches that are based on the use of DL. A comparative analysis of the performances of typical DL-based RS methods is also provided. Finally, the challenges and potential directions for further research are discussed.
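
    To make the pixel-wise classification setting mentioned in the abstract concrete, the sketch below shows a small patch-based CNN in PyTorch that classifies the centre pixel of a hyperspectral patch. The band count, patch size, class count, and layer sizes are arbitrary assumptions and the model is not one of the architectures covered by the survey.

```python
# Illustrative sketch of patch-based, pixel-wise RS classification with a small CNN.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Classifies the centre pixel of a small hyperspectral patch."""
    def __init__(self, bands=100, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # pool spatial dims to 1x1
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                         # x: (batch, bands, 9, 9) patches
        f = self.features(x).flatten(1)
        return self.classifier(f)

if __name__ == "__main__":
    model = PatchCNN()
    patches = torch.randn(8, 100, 9, 9)           # 8 toy patches around labelled pixels
    logits = model(patches)
    print(logits.shape)                           # torch.Size([8, 9])
```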

    A Review on Tomato Leaf Disease Detection using Deep Learning Approaches

    Agriculture is one of the major sectors that influence the Indian economy due to the huge population and ever-growing food demand. Identification of the diseases responsible for low yield in food crops plays a major role in improving the yield of a crop. India holds the world's second-largest share of tomato production. Unfortunately, tomato plants are vulnerable to various diseases due to factors such as climate change, heavy rainfall, soil conditions, pesticides, and animals. A significant number of studies have examined the potential of deep learning techniques to combat leaf disease in tomatoes in the last decade. However, despite the range of applications, several gaps within tomato leaf disease detection are yet to be addressed to support tomato leaf disease diagnosis. Thus, there is a need to create an information base of existing approaches and to identify the challenges and opportunities that can help advance the development of tools addressing the needs of tomato farmers. The review provides a detailed assessment of, and considerations for, developing deep-learning-based Convolutional Neural Network (CNN) architectures such as DenseNet, ResNet, VGGNet, GoogLeNet, AlexNet, and LeNet that are applied to detect disease in tomato leaves, identifying 10 classes of diseases affecting tomato plant leaves with distinct trained disease datasets. The performance of these architectures is studied using data from the PlantVillage dataset, which includes healthy and diseased classes, across several different architectural designs. This paper helps to address the existing research gaps by guiding further development and application of tools to support tomato leaf disease diagnosis and provide disease management support to farmers in improving their crops.
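
    As an illustration of the transfer-learning workflow commonly applied to PlantVillage-style data, the sketch below fine-tunes a pre-trained ResNet-18 on tomato leaf images arranged in ten class folders. The dataset path, hyperparameters, and choice of backbone are assumptions, not details taken from the reviewed studies.

```python
# Illustrative sketch: fine-tune a pre-trained ResNet-18 on tomato leaf images
# arranged in one folder per class (10 disease/healthy classes assumed).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

def build_loader(root="plantvillage/tomato", batch_size=32):
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    ds = datasets.ImageFolder(root, transform=tfm)   # hypothetical dataset path
    return torch.utils.data.DataLoader(ds, batch_size=batch_size, shuffle=True)

def build_model(n_classes=10):
    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, n_classes)  # replace the head
    return model

if __name__ == "__main__":
    model = build_model()
    loader = build_loader()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:                 # one illustrative epoch
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```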