3 research outputs found

    A Novel Low Processing Time System for Criminal Activities Detection Applied to Command and Control Citizen Security Centers

    [EN] This paper presents a novel low processing time system for criminal activity detection, based on real-time video analysis and applied to Command and Control Citizen Security Centers. The system was applied to the detection and classification of criminal events in a real-time video surveillance subsystem of the Command and Control Citizen Security Center of the Colombian National Police. It was developed using a novel application of Deep Learning, specifically a Faster Region-Based Convolutional Network (Faster R-CNN), with criminal activities treated as "objects" to be detected in real-time video. In order to maximize system efficiency and reduce the processing time of each video frame, the pretrained CNN (Convolutional Neural Network) model AlexNet was used, and fine-tuning was carried out with a dataset built for this project, formed by objects commonly used in criminal activities such as short firearms and bladed weapons. In addition, the system was trained for street theft detection. The system can generate alarms when detecting street theft, short firearms, and bladed weapons, improving situational awareness and facilitating strategic decision making in the Command and Control Citizen Security Center of the Colombian National Police.

    This work was co-funded by the European Commission as part of the H2020 call SEC-12-FCT-2016-Subtopic3 under the project VICTORIA (No. 740754). This publication reflects the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

    Suarez-Paez, J.; Salcedo-Gonzalez, M.; Climente, A.; Esteve Domingo, M.; Gomez, J.; Palau Salvador, C. E.; Pérez Llopis, I. (2019). A Novel Low Processing Time System for Criminal Activities Detection Applied to Command and Control Citizen Security Centers. Information, 10(12), 1-19. https://doi.org/10.3390/info10120365
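    As a rough illustration of the detector described above, the sketch below builds a Faster R-CNN with a pretrained AlexNet feature extractor in PyTorch/torchvision and runs one fine-tuning step on a dummy batch. It is a minimal sketch under assumed settings (anchor sizes, a three-class label set of short firearm, bladed weapon, and street theft, and placeholder data), not the authors' implementation.

    ```python
    # Minimal sketch (assumptions noted above): Faster R-CNN with a pretrained
    # AlexNet backbone, following the generic torchvision custom-backbone recipe.
    import torch
    import torchvision
    from torchvision.models.detection import FasterRCNN
    from torchvision.models.detection.anchor_utils import AnchorGenerator
    from torchvision.ops import MultiScaleRoIAlign

    # Pretrained AlexNet convolutional layers as the feature extractor (256 output channels).
    backbone = torchvision.models.alexnet(weights="DEFAULT").features
    backbone.out_channels = 256

    # Anchors for the Region Proposal Network and RoI pooling over the single feature map.
    anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256),),
                                       aspect_ratios=((0.5, 1.0, 2.0),))
    roi_pooler = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)

    # 3 assumed object classes (short firearm, bladed weapon, street theft) + background.
    model = FasterRCNN(backbone, num_classes=4,
                       rpn_anchor_generator=anchor_generator,
                       box_roi_pool=roi_pooler)

    # One fine-tuning step on a dummy image/target pair to show the expected format.
    images = [torch.rand(3, 480, 640)]
    targets = [{"boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),
                "labels": torch.tensor([1])}]
    model.train()
    losses = model(images, targets)   # dict of RPN and detection-head losses
    sum(losses.values()).backward()   # backpropagate the combined loss
    ```

    In a deployment like the one described, the trained model would run frame by frame on the surveillance feed and raise an alarm whenever the detection score for one of these classes exceeds a chosen threshold.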

    Object detection algorithms to identify skeletal components in carbonate cores

    Identification of constituent grains in carbonate rocks requires specialist experience. A carbonate sedimentologist must be able to distinguish between skeletal grains that change through geological ages, are preserved in differing alteration stages, and are cut in random orientations across core sections. Recent studies have demonstrated the effectiveness of machine learning in classifying lithofacies from thin section, core, and seismic images, with faster analysis times and a reduction of natural biases. In this study, we explore the application and limitations of convolutional neural network (CNN) based object detection frameworks to identify and quantify multiple types of carbonate grains within close-up core images of carbonate lithologies. We compiled nearly 400 high-resolution core images from three ODP and IODP expeditions. Over 9000 individual carbonate components of 11 different classes were manually labelled from this dataset. Using pre-trained weights, a transfer learning approach was applied to evaluate one-stage (YOLO v5) and two-stage (Faster R-CNN) detectors with different feature extractors (CSP-Darknet53 and ResNet50-FPN, respectively). Despite the current popularity of one-stage detectors, our results show that Faster R-CNN with a ResNet50-FPN backbone provides the most robust performance, achieving 0.73 mean average precision (mAP). Furthermore, we extend the approach by deploying the trained model to two ODP sites from Leg 194 that were not part of the training set (ODP Sites 1196 and 1199), providing a performance comparison with benchmark human interpretation.
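    The two-stage setup that performed best in this study corresponds closely to the standard torchvision transfer-learning recipe. The sketch below loads a COCO-pretrained Faster R-CNN with a ResNet50-FPN backbone and swaps its box predictor for the 11 grain classes plus background; the data pipeline, training schedule, and evaluation are omitted, and the snippet is illustrative rather than the study's code.

    ```python
    # Illustrative transfer-learning setup (not the study's code): COCO-pretrained
    # Faster R-CNN + ResNet50-FPN, re-headed for 11 carbonate grain classes.
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    num_classes = 11 + 1  # 11 skeletal-grain classes + background
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the classification/regression head so only the new head starts from scratch.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Optionally freeze the backbone for an initial training stage (a common
    # transfer-learning choice; whether the study did this is not stated).
    for p in model.backbone.parameters():
        p.requires_grad = False
    ```

    After fine-tuning, a COCO-style evaluator (for example, torchmetrics' MeanAveragePrecision) would produce a mean average precision figure comparable to the 0.73 mAP reported here.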

    A Systematic Review on Object Localisation Methods in Images

    [EN] Currently, many applications require precise localization of the objects that appear in an image so that they can be processed further. This is the case for visual inspection in industry, computer-aided clinical diagnostic systems, and obstacle detection in vehicles or robots, among others. However, several factors, such as image quality and the appearance of the objects to be detected, make this automatic localization difficult. In this article, we carry out a systematic review of the main methods used to localize objects, ranging from methods based on sliding windows, such as the detector proposed by Viola and Jones, to current methods that use deep learning networks, such as Faster R-CNN or Mask R-CNN. For each proposal, we describe the relevant details, considering its advantages and disadvantages, as well as the main applications of these methods in various areas. This paper aims to provide an organized and condensed review of the state of the art of these techniques, their usefulness, and their implementations, in order to facilitate their knowledge and use by any researcher who needs to locate objects in digital images. We conclude this work by summarizing the main ideas presented and discussing future trends of these methods.

    This work was partially funded by different institutions. Deisy Chaves holds an "Estudios de Doctorado en Colombia 2013" scholarship from COLCIENCIAS. Surajit Saikia holds a scholarship from the Junta de Castilla y León, reference EDU/529/2017. We also wish to acknowledge the support of INCIBE (Instituto Nacional de Ciberseguridad) through Addendum 22 to the agreement with the Universidad de León.

    Chaves, D.; Saikia, S.; Fernández-Robles, L.; Alegre, E.; Trujillo, M. (2018). Una Revisión Sistemática de Métodos para Localizar Automáticamente Objetos en Imágenes. Revista Iberoamericana de Automática e Informática Industrial, 15(3), 231-242. https://doi.org/10.4995/riai.2018.10229
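    To make the classical end of the surveyed spectrum concrete, the snippet below runs OpenCV's Viola-Jones cascade detector, the sliding-window approach the review starts from; the Haar cascade file ships with opencv-python, and "input.jpg" is a placeholder image. The deep-learning detectors at the other end of the spectrum (Faster R-CNN, Mask R-CNN) can be exercised with the torchvision sketches shown for the first two records.

    ```python
    # Viola-Jones cascade detection with OpenCV (sliding-window family).
    # Uses the frontal-face Haar cascade bundled with opencv-python;
    # "input.jpg" is a placeholder path.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    image = cv2.imread("input.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Scan the image at multiple scales; each detection is an (x, y, w, h) box.
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in boxes:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.jpg", image)
    ```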