8 research outputs found

    Improving handgun detectors with human pose classification

    [Abstract] Unfortunately, attacks with firearms such as handguns have become too common. CCTV surveillance systems can potentially help to prevent this kind of incident, but they require continuous human supervision, which is not feasible in practice. Image-based handgun detectors allow these weapons to be located automatically so that alerts can be sent to the security staff. Deep learning has recently been used for this purpose. However, the precision and sensitivity of these systems are generally unsatisfactory, producing both false alarms and undetected handguns, particularly when the firearm is far from the camera. This paper proposes using information about the subject's pose to improve the performance of current handgun detectors. More concretely, a human full-body pose classifier has been developed that separates shooting poses from other, non-dangerous poses. The classified pose is then used to reduce both the number of false positives (FP) and false negatives (FN). The proposed method has been tested with several datasets and handgun detectors, showing an improvement under various metrics. This work was partially funded by projects PDC2021-121197-C22 (funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR) and SBPLY/21/180501/000025 (funded by the Autonomous Government of Castilla-La Mancha and the European Regional Development Fund, ERDF). The first author is supported by Postgraduate Grant PRE2018-083772 from the Spanish Ministry of Science, Innovation, and Universities.
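    As one illustration of how a pose prior can cut false alarms and recover missed detections, the sketch below re-scores detector outputs with the pose classifier's shooting-pose probability. All names, weights, and the fusion rule itself are hypothetical; the paper's exact combination method is not reproduced here.

        # Hypothetical post-processing: fuse detector confidence with a
        # full-body pose classifier's "shooting pose" probability.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Detection:
            box: tuple          # (x1, y1, x2, y2) in pixels
            score: float        # detector confidence in [0, 1]

        def fuse_with_pose(dets: List[Detection],
                           p_shooting: float,
                           alpha: float = 0.5,
                           threshold: float = 0.5) -> List[Detection]:
            """Re-score detections with the pose prior, then re-threshold.

            p_shooting: pose classifier's probability of a shooting pose.
            alpha: how strongly the pose prior moves the detector score.
            """
            fused = []
            for d in dets:
                # Convex combination: a confident shooting pose boosts weak
                # detections (fewer FN); a clearly non-dangerous pose
                # suppresses spurious ones (fewer FP).
                score = (1 - alpha) * d.score + alpha * p_shooting
                if score >= threshold:
                    fused.append(Detection(d.box, score))
            return fused

        # A weak detection (0.35) is kept because the pose is clearly a
        # shooting pose (0.9): fused score 0.625 passes the 0.5 threshold.
        kept = fuse_with_pose([Detection((10, 20, 60, 80), 0.35)], p_shooting=0.9)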

    Handgun detection using combined human pose and weapon appearance

    Closed-circuit television (CCTV) systems are essential nowadays to prevent security threats and dangerous situations, in which early detection is crucial. Novel deep learning-based methods have made it possible to develop automatic weapon detectors with promising results. However, these approaches are mainly based on visual weapon appearance alone. For handguns, body pose may be a useful cue, especially when the gun is barely visible. In this work, a novel method is proposed that combines, in a single architecture, both weapon appearance and human pose information. First, pose keypoints are estimated to extract hand regions and generate binary pose images, which are the model inputs. Then, each input is processed in a different subnetwork, and the results are combined to produce the handgun bounding box. The results show that the combined model improves the state of the art in handgun detection, achieving from 4.23 to 18.9 AP points more than the best previous approach. Comment: 17 pages, 18 figures
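    A minimal PyTorch sketch of the two-stream idea described above: one subnetwork for the hand-region appearance crop, one for the binary pose image, fused to regress a handgun box. Layer sizes, the toy head, and input resolutions are all assumptions; the paper's actual subnetworks are not specified here.

        import torch
        import torch.nn as nn

        def conv_branch(in_ch):
            # Small convolutional feature extractor ending in a 32-d vector.
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        class PoseAppearanceNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.appearance = conv_branch(3)   # RGB hand-region crop
                self.pose = conv_branch(1)         # binary pose image
                self.head = nn.Sequential(
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 5),  # (x, y, w, h, objectness)
                )

            def forward(self, crop, pose_img):
                # Concatenate both 32-d streams, then regress the box.
                feats = torch.cat([self.appearance(crop),
                                   self.pose(pose_img)], dim=1)
                return self.head(feats)

        net = PoseAppearanceNet()
        out = net(torch.randn(2, 3, 96, 96), torch.randn(2, 1, 96, 96))  # (2, 5)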

    A review on low-cost microscopes for Open Science

    14 pages, 8 figures, 1 table. This article presents a review, based on an exhaustive search, of 23 works from the last decade that provide open-hardware optical microscopes as a low-cost alternative to commercial systems. These works were developed to cover needs in several areas: Bio Sciences research in institutions with limited resources, diagnosis of diseases and health screenings of large populations in developing countries, and training in educational contexts that require highly available equipment with a low replacement cost. The analysis of the selected works allows the solutions to be classified into two main categories, whose essential characteristics are enumerated: portable field microscopes and multipurpose automated microscopes. Moreover, this work includes a discussion of the maturity of the solutions in terms of their adoption of practices aligned with the development of Open Science. This work was supported in part by Junta de Comunidades de Castilla-La Mancha under project HIPERDEEP (Ref. SBPLY/19/180501/000273). Peer reviewed

    Detection and classification of parasitic egg in microscopy images

    [Abstract] Intestinal parasitic infections are a healthcare problem with a high impact in some areas. While the assessment is nowadays mostly performed manually by experts, machine learning techniques can be introduced to help automate this task, or at least reduce the workload. This can lead to shorter detection times and faster application of an adequate treatment. In the context of deep learning, several object detection techniques have been proposed and validated on general-purpose datasets such as ImageNet or COCO. In this work, an ensemble of these, including recent Transformer-based techniques, is proposed for this particular task. The merged detections of TOOD, Cascade-RCNN (Swin-Transformer), Cascade-RCNN (ConvNeXt) and YOLOX applied to this problem achieved 0.915 on the Intersection over Union (IoU) metric, which is higher than the result obtained by each method independently. This work has been partially funded by project TIN2017-82113-C2-2-R of the Spanish Ministry of Economy and Competitiveness; the DISARM project (PDC2021-121197), funded by MCIN/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR; projects SBPLY/17/180501/000543 and SBPLY/21/180501/000025 of the Junta de Comunidades de Castilla-La Mancha; and by predoctoral grants FPU17/04758 and PRE2018-083772 from the Spanish Ministry of Science, Innovation, and Universities.
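    For reference, the Intersection over Union metric reported above, together with one naive way of merging boxes from several detectors, can be sketched as follows. The IoU definition is standard; the merging function is only illustrative, and the paper's actual ensembling strategy may differ.

        def iou(a, b):
            """IoU of two boxes given as (x1, y1, x2, y2)."""
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            union = area_a + area_b - inter
            return inter / union if union > 0 else 0.0

        def merge_detections(per_model_boxes, iou_thr=0.5):
            """Average boxes that overlap across models; keep the rest as-is."""
            merged = []  # list of (running-average box, count)
            for boxes in per_model_boxes:
                for box in boxes:
                    for i, m in enumerate(merged):
                        if iou(box, m[0]) >= iou_thr:
                            # Incremental average of matching boxes.
                            merged[i] = ([(mv * m[1] + bv) / (m[1] + 1)
                                          for mv, bv in zip(m[0], box)],
                                         m[1] + 1)
                            break
                    else:
                        merged.append((list(box), 1))
            return [m[0] for m in merged]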

    Firearm-related action recognition and object detection dataset for video surveillance systems

    The proposed dataset comprises 398 videos, each featuring an individual performing specific video surveillance actions. The ground truth was expertly curated and is provided in JSON format (standard COCO), giving information about the dataset, the video frames, and the annotations, including precise bounding boxes around detected objects. The dataset covers three object detection categories, depending on the video's content: "Handgun", "Machine_Gun", and "No_Gun". It serves as a resource for research in firearm-related action recognition, firearm detection, security, and surveillance applications, enabling researchers and practitioners to develop and evaluate machine learning models for the detection of handguns and rifles across various scenarios. The meticulous ground truth annotations facilitate precise model evaluation and performance analysis, making this dataset an asset in the fields of computer vision and public safety.
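    A minimal sketch of what an annotation file in the standard COCO layout described above could look like for this dataset. The three category names come from the description; every other field value is illustrative, not taken from the actual ground truth.

        import json

        coco = {
            "images": [
                {"id": 1, "file_name": "video_001_frame_0042.jpg",  # hypothetical name
                 "width": 1920, "height": 1080},
            ],
            "annotations": [
                {"id": 1, "image_id": 1, "category_id": 1,
                 "bbox": [512.0, 300.0, 48.0, 32.0],  # COCO convention: [x, y, width, height]
                 "area": 48.0 * 32.0, "iscrowd": 0},
            ],
            "categories": [
                {"id": 1, "name": "Handgun"},
                {"id": 2, "name": "Machine_Gun"},
                {"id": 3, "name": "No_Gun"},
            ],
        }

        with open("annotations.json", "w") as f:
            json.dump(coco, f, indent=2)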

    Artificial intelligence methods for the prediction of chemical components from hyperspectral images

    [Abstract] Automating the analysis of soil parameters can optimize the fertilization process, saving time and reducing the costs of food production, leading to a more sustainable agriculture. The work presented in this paper is part of the HYPERVIEW Challenge: Seeing Beyond the Visible, which provides hyperspectral images of various terrains and aims to estimate several soil parameters from them. Several methods are proposed, based on both traditional approaches, such as SVR and k-NN, and modern neural networks; a combination of the two is considered as well, to exploit the advantages of each. In addition, a parameterized preprocessing stage is proposed to deal with the varying size of the input data, since the terrain images do not have the same dimensions in all cases. The best estimate has been obtained with a model based on k-NN. Funded by the Junta de Comunidades de Castilla-La Mancha under project SBPLY/19/180501/000273.
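    A minimal scikit-learn sketch of a k-NN regressor of this kind. Reducing each patch to its mean spectrum is one simple way to handle the varying input sizes; the paper's parameterized preprocessing may differ, and all data here is synthetic.

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor

        def mean_spectrum(patch):
            """(H, W, B) hyperspectral patch -> (B,) average spectrum."""
            return patch.reshape(-1, patch.shape[-1]).mean(axis=0)

        # Toy data: patches of different spatial sizes, 150 spectral bands,
        # 4 target soil parameters per patch (all values synthetic).
        rng = np.random.default_rng(0)
        patches = [rng.random((h, w, 150)) for h, w in [(8, 8), (12, 6), (5, 9)]]
        targets = rng.random((3, 4))

        # Fixed-length features make patches of any size comparable.
        X = np.stack([mean_spectrum(p) for p in patches])
        model = KNeighborsRegressor(n_neighbors=1).fit(X, targets)
        pred = model.predict(mean_spectrum(rng.random((7, 7, 150)))[None, :])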