607 research outputs found

    A hybrid method for accurate iris segmentation on at-a-distance visible-wavelength images

    This work describes a new hybrid method for accurate iris segmentation from full-face images, independently of the ethnicity of the subject. It is based on a combination of three methods: facial key-point detection, the integro-differential operator (IDO), and mathematical morphology. First, facial landmarks are extracted by means of the Chehra algorithm in order to obtain the eye location. Then, the IDO is applied to the extracted sub-image containing only the eye in order to locate the iris. Once the iris is located, a series of mathematical morphological operations is performed to segment it accurately. Results are obtained and compared across four ethnicities (Asian, Black, Latino, and White), as well as against two other iris segmentation algorithms. In addition, robustness against rotation, blurring, and noise is assessed. Our method achieves state-of-the-art performance and remains robust under small amounts of blur, noise, and/or rotation. Furthermore, it is fast and accurate, and its code is publicly available.

    Fuentes-Hurtado, FJ.; Naranjo Ornedo, V.; Diego-Mas, JA.; Alcañiz Raya, ML. (2019). A hybrid method for accurate iris segmentation on at-a-distance visible-wavelength images. EURASIP Journal on Image and Video Processing (Online). 2019(1):1-14. https://doi.org/10.1186/s13640-019-0473-0
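    The IDO step is the only non-standard operator in this pipeline, so a compact sketch may help. Below is a minimal Python illustration of Daugman's integro-differential operator as described here: it searches candidate circle centers and radii for the circle whose blurred radial derivative of the mean contour intensity is largest. The function name, candidate grids, and smoothing scale are assumptions made for illustration; the authors' own implementation is the publicly available code mentioned in the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def integro_differential_operator(img, centers, radii, n_angles=64, sigma=1.0):
    """Daugman-style circle search: maximize the Gaussian-blurred radial
    derivative of the mean intensity along circular contours.

    img:     2-D grayscale eye image (float array).
    centers: iterable of candidate (x0, y0) circle centers.
    radii:   1-D array of candidate radii, in increasing order.
    Returns the best-scoring (x0, y0, r).
    """
    h, w = img.shape
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    best_score, best_circle = -np.inf, None
    for x0, y0 in centers:
        # Mean intensity along each candidate circle.
        means = np.empty(len(radii))
        for i, r in enumerate(radii):
            xs = np.clip(np.round(x0 + r * np.cos(thetas)).astype(int), 0, w - 1)
            ys = np.clip(np.round(y0 + r * np.sin(thetas)).astype(int), 0, h - 1)
            means[i] = img[ys, xs].mean()
        # Blurred derivative with respect to the radius; the iris/sclera
        # boundary shows up as a large jump in mean intensity.
        deriv = gaussian_filter1d(np.gradient(means), sigma)
        k = int(np.argmax(np.abs(deriv)))
        if abs(deriv[k]) > best_score:
            best_score, best_circle = abs(deriv[k]), (x0, y0, radii[k])
    return best_circle
```

    In practice, the candidate centers would be a coarse grid inside the eye sub-image returned by the landmark step, which keeps the exhaustive search cheap.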

    Deep Semantic Segmentation of Natural and Medical Images: A Review

    The semantic image segmentation task consists of classifying each pixel of an image into an instance, where each instance corresponds to a class. This task is part of scene understanding, i.e., explaining the global context of an image. In the medical image analysis domain, image segmentation can be used for image-guided interventions, radiotherapy, or improved radiological diagnostics. In this review, we categorize the leading deep learning-based medical and non-medical image segmentation solutions into six main groups: deep architectural, data synthesis-based, loss function-based, sequenced models, weakly supervised, and multi-task methods, and provide a comprehensive review of the contributions in each of these groups. Further, for each group we analyze its variants, discuss the limitations of the current approaches, and present potential future research directions for semantic image segmentation.

    Comment: 45 pages, 16 figures. Accepted for publication in Springer Artificial Intelligence Review.
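    As a concrete instance of the loss function-based group discussed in the review, the sketch below shows the soft Dice loss, a widely used overlap-based segmentation loss. It is a generic NumPy illustration, not code from the review; the function name and the epsilon smoothing term are assumptions.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P * T| / (|P| + |T|).

    pred:   per-pixel foreground probabilities in [0, 1].
    target: binary ground-truth mask, same shape as pred.
    eps:    smoothing term that keeps the loss defined on empty masks.
    """
    pred, target = np.ravel(pred), np.ravel(target)
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
```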

    QUIS-CAMPI: Biometric Recognition in Surveillance Scenarios

    Concerns about individuals' security have justified the increasing number of surveillance cameras deployed in both private and public spaces. However, contrary to popular belief, these devices are in most cases used solely for recording, instead of feeding intelligent analysis processes capable of extracting information about the observed individuals. Thus, even though video surveillance has already proved essential for solving multiple crimes, obtaining relevant details about the subjects that took part in a crime depends on the manual inspection of recordings. As such, the current goal of the research community is the development of automated surveillance systems capable of monitoring and identifying subjects in surveillance scenarios. Accordingly, the main goal of this thesis is to improve the performance of biometric recognition algorithms on data acquired in surveillance scenarios. In particular, we aim at designing a visual surveillance system capable of acquiring biometric data at a distance (e.g., face, iris, or gait) without requiring human intervention in the process, as well as devising biometric recognition methods robust to the degradation factors resulting from the unconstrained acquisition process. Regarding the first goal, the analysis of the data acquired by typical surveillance systems shows that large acquisition distances significantly decrease the resolution of biometric samples, so that their discriminability is not sufficient for recognition purposes. In the literature, diverse works point to Pan-Tilt-Zoom (PTZ) cameras as the most practical way of acquiring high-resolution imagery at a distance, particularly when used in a master-slave configuration. In the master-slave configuration, the video acquired by a typical surveillance camera is analyzed to obtain regions of interest (e.g., car, person), and these regions are subsequently imaged at high resolution by the PTZ camera. Several methods have already shown that this configuration can be used for acquiring biometric data at a distance. Nevertheless, these methods failed to provide effective solutions to the typical challenges of this strategy, restraining its use in surveillance scenarios. Accordingly, this thesis proposes two methods to support the development of a biometric data acquisition system based on the cooperation of a PTZ camera with a typical surveillance camera. The first proposal is a camera calibration method capable of accurately mapping the coordinates of the master camera to the pan/tilt angles of the PTZ camera, without the aid of additional optical devices. The second proposal is a camera scheduling method for determining, in real time, the sequence of acquisitions that maximizes the number of different targets observed while minimizing the cumulative transition time. To achieve the first goal of this thesis, both methods were combined with state-of-the-art approaches from the human monitoring field to develop a fully automated surveillance system capable of acquiring biometric data at a distance without human cooperation, designated the QUIS-CAMPI system. The QUIS-CAMPI system is the basis for pursuing the second goal of this thesis. The analysis of the performance of state-of-the-art biometric recognition approaches shows that they attain almost ideal recognition rates on unconstrained data (e.g., recognition rates above 99% on the LFW dataset). However, this performance is incongruous with the recognition rates observed in surveillance scenarios, which suggests that current datasets do not truly contain the degradation factors typical of such environments.
    Taking into account the drawbacks of current biometric datasets, this thesis introduces a novel dataset comprising biometric samples (face images and gait videos) acquired by the QUIS-CAMPI system at distances ranging from 5 to 40 meters and without human intervention in the acquisition process. This set allows an objective assessment of the performance of state-of-the-art biometric recognition methods on data that truly encompass the covariates of surveillance scenarios. As such, this set was used to promote the first international challenge on biometric recognition in the wild. This thesis describes the evaluation protocols adopted, along with the results obtained by the nine methods specially designed for this competition. In addition, the data acquired by the QUIS-CAMPI system were crucial for accomplishing the second goal of this thesis, i.e., the development of methods robust to the covariates of surveillance scenarios. The first proposal is a method for detecting corrupted features in biometric signatures by analyzing the redundancy among subsets of features. The second proposal is a caricature-based face recognition approach capable of enhancing recognition performance by automatically generating a caricature from a single 2D photo. The experimental evaluation of these methods shows that both approaches contribute to improving recognition performance on unconstrained data.
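    The thesis abstract does not reproduce its scheduling algorithm, but the trade-off it describes (maximizing the number of distinct targets observed while minimizing cumulative transition time) can be illustrated with a simple greedy heuristic: always steer the PTZ camera to the pending target with the cheapest pan/tilt transition from its current pose. The sketch below is a hypothetical baseline under an assumed constant-axis-speed motion model, not the method proposed in the thesis.

```python
def greedy_schedule(targets, start=(0.0, 0.0), axis_speed=90.0):
    """Order PTZ acquisitions greedily: always move to the pending target
    with the cheapest pan/tilt transition from the current pose.

    targets:    list of (pan_deg, tilt_deg) poses, one per detected subject.
    start:      initial (pan, tilt) pose of the PTZ camera.
    axis_speed: pan/tilt speed in degrees per second (both axes move at
                once, so the slower axis dominates the transition time).
    Returns (visit order as indices into targets, total transition seconds).
    """
    pose, order, total = start, [], 0.0
    remaining = list(enumerate(targets))
    while remaining:
        def transition(item):
            _, (pan, tilt) = item
            return max(abs(pan - pose[0]), abs(tilt - pose[1])) / axis_speed
        idx, nxt = min(remaining, key=transition)
        total += transition((idx, nxt))
        pose = nxt
        order.append(idx)
        remaining.remove((idx, nxt))
    return order, total
```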

    Using Prior Knowledge for Verification and Elimination of Stationary and Variable Objects in Real-time Images

    With the evolving technologies in the autonomous vehicle industry, it has now become possible for automobile occupants to sit back and relax instead of driving the car. Technologies like object detection, object identification, and image segmentation have enabled an autonomous car to identify and detect objects on the road in order to drive safely. While an autonomous car drives by itself, the objects surrounding it can be dynamic (e.g., cars and pedestrians), stationary (e.g., buildings and benches), or variable (e.g., trees), depending on whether the location or shape of an object changes. Different from existing image-based approaches to detecting and recognizing objects in the scene, this research employs a 3D virtual world to verify and eliminate stationary and variable objects, allowing the autonomous car to focus on the dynamic objects that may endanger its driving. The methodology takes advantage of prior knowledge of the stationary and variable objects present in a virtual city and verifies their existence in a real-time scene by matching keypoints between the virtual and real objects. When a stationary or variable object does not exist in the virtual world due to incomplete pre-existing information, the method falls back on machine learning for object detection. Verified objects are then removed from the real-time image with a combined algorithm using contour detection and class activation maps (CAM), which enhances the efficiency and accuracy of recognizing moving objects.
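    The verification step, matching keypoints between an object rendered from the virtual city and its counterpart in the live frame, can be sketched with standard OpenCV primitives. The snippet below uses ORB features with Lowe's ratio test as an illustrative stand-in: the detector choice, the ratio threshold, and the minimum-match count are assumptions, since the abstract does not name them.

```python
import cv2

def verify_static_object(virtual_img, real_img, min_matches=15):
    """Check whether a stationary object rendered from the virtual city
    reappears in the real-time frame by matching binary ORB keypoints."""
    orb = cv2.ORB_create(nfeatures=1000)
    _, desc_virtual = orb.detectAndCompute(virtual_img, None)
    _, desc_real = orb.detectAndCompute(real_img, None)
    if desc_virtual is None or desc_real is None:
        return False  # one of the patches has no texture to match
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(desc_virtual, desc_real, k=2)
    # Lowe's ratio test keeps only unambiguous correspondences.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    # A verified object can then be removed from the real-time image.
    return len(good) >= min_matches
```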

    Multitemporal Very High Resolution from Space: Outcome of the 2016 IEEE GRSS Data Fusion Contest

    In this paper, the scientific outcomes of the 2016 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society are discussed. The 2016 Contest was an open-topic competition based on a multitemporal and multimodal dataset, which included a temporal pair of very high resolution panchromatic and multispectral Deimos-2 images and a video captured by the Iris camera on board the International Space Station. The problems addressed and the techniques proposed by the participants spanned a rather broad range of topics, mixing ideas and methodologies from remote sensing, video processing, and computer vision. In particular, the winning team developed a deep learning method to jointly address spatial scene labeling and temporal activity modeling using the available image and video data. The second-place team proposed a random field model to simultaneously perform coregistration of multitemporal data, semantic segmentation, and change detection. The methodological key ideas of both approaches and the main results of the corresponding experimental validation are discussed in this paper.

    A deep semantic network-based image segmentation of soybean rust pathogens

    Introduction: Asian soybean rust is a highly aggressive leaf disease caused by the obligate biotrophic fungus Phakopsora pachyrhizi, which can cause up to 80% yield loss in soybean. Precise image segmentation of the fungus can characterize fungal phenotype transitions during growth and help discover new medicines and agricultural biocides through large-scale phenotypic screens.

    Methods: An improved Mask R-CNN method is proposed to segment densely distributed, overlapping, and intersecting microimages. First, Res2Net, which layers the residual connections within a single residual block, replaces the backbone of the original Mask R-CNN and is combined with FPG to enhance the feature extraction capability of the network. Second, the loss function is optimized: the CIoU loss is adopted for bounding box regression, which accelerates the convergence of the model and supports accurate classification of high-density spore images.

    Results: The experimental results show that the detection mAP, segmentation mAP, and accuracy of the improved algorithm exceed those of the original Mask R-CNN algorithm by 6.4%, 12.3%, and 2.2%, respectively.

    Discussion: This method is better suited to the segmentation of fungal images and provides an effective tool for large-scale phenotypic screens of plant fungal pathogens.
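    The CIoU loss adopted for bounding box regression has a simple closed form: one minus the IoU, plus a normalized center-distance penalty and an aspect-ratio consistency penalty. The sketch below follows the published CIoU definition for axis-aligned boxes; it is an illustration rather than this paper's code, and the epsilon guards are assumptions added for numerical safety.

```python
import math

def ciou_loss(box_a, box_b, eps=1e-9):
    """Complete-IoU loss for two (x1, y1, x2, y2) boxes (Zheng et al., 2020).

    loss = 1 - IoU + rho^2 / c^2 + alpha * v, where rho is the distance
    between box centers, c is the diagonal of the smallest enclosing box,
    and v penalizes aspect-ratio mismatch. Assumes non-degenerate boxes.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection over union.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / (union + eps)
    # Squared center distance over squared enclosing-box diagonal.
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 \
         + (max(ay2, by2) - min(ay1, by1)) ** 2 + eps
    # Aspect-ratio consistency term and its trade-off weight.
    v = (4.0 / math.pi ** 2) * (math.atan((bx2 - bx1) / (by2 - by1))
                                - math.atan((ax2 - ax1) / (ay2 - ay1))) ** 2
    alpha = v / (1.0 - iou + v + eps)
    return 1.0 - iou + rho2 / c2 + alpha * v
```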