
    The detection of handguns from live-video in real-time based on deep learning

    Many people have been killed indiscriminately by the use of handguns in different countries. Terrorist acts, online fighting games, and mentally disturbed people are considered common causes of these crimes. A real-time handgun detection surveillance system based on convolutional neural networks (CNNs) is built to counter these acts. The method focuses on detecting different weapons, such as handguns and rifles. Identifying handguns from surveillance cameras and images normally requires monitoring by a human supervisor, which can introduce errors. To overcome this issue, the designed detection system sends an alert message to the supervisor when a weapon is detected. In the proposed system, the pre-trained deep learning model MobileNetV3-SSDLite performs the handgun detection. This model was selected because it is fast and accurate at inference, integrating the detection and classification of weapons in images into a single network. Experimental results on global handgun datasets containing various weapons showed that combining MobileNetV3 with SSDLite enhances the accuracy of real-time handgun detection
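    As a rough illustration of the kind of pipeline the abstract describes, the sketch below loads the COCO-pretrained MobileNetV3-SSDLite detector from torchvision and filters detections on a single frame. This is a minimal sketch, not the authors' code: the stock weights contain no handgun class, so a deployed system would first be fine-tuned on a weapons dataset.

        import torch
        import torchvision
        from torchvision.transforms.functional import to_tensor

        # COCO-pretrained MobileNetV3 backbone with an SSDLite detection head
        model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(
            weights="DEFAULT")
        model.eval()

        def detect(frame_rgb, score_threshold=0.5):
            # Run one RGB frame through the detector and keep confident boxes;
            # an alert to the supervisor would be triggered on weapon labels.
            with torch.no_grad():
                out = model([to_tensor(frame_rgb)])[0]
            keep = out["scores"] > score_threshold
            return out["boxes"][keep], out["labels"][keep], out["scores"][keep]

    In practice the frames would come from the live video stream, and the per-frame latency of this model is what makes it suitable for real-time use.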

    CNN-based Gender Prediction in Uncontrolled Environments

    With the increasing amount of data produced and collected, the use of artificial intelligence technologies has become inevitable. Among these technologies, deep learning techniques achieve high performance in tasks such as classification and face analysis in the fields of image processing and computer vision. In this study, a Convolutional Neural Network (CNN), one of the deep learning algorithms, was used. The model built with this algorithm was trained on facial images to predict gender. In the experiments, a 93.71% success rate was achieved on the VGGFace2 dataset and an 85.52% success rate on the Adience dataset. The aim of the study is to classify low-resolution images with high accuracy
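    The abstract does not give the network architecture, but a minimal PyTorch sketch of a CNN of this kind, taking low-resolution 64x64 face crops and producing a two-class gender prediction, might look as follows; all layer sizes are illustrative assumptions, not the paper's design.

        import torch.nn as nn

        class GenderCNN(nn.Module):
            # Three conv/pool stages followed by a small classifier head;
            # input is a (N, 3, 64, 64) batch of face crops.
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(), nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
                    nn.Dropout(0.5), nn.Linear(256, 2),  # two gender classes
                )

            def forward(self, x):
                return self.classifier(self.features(x))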

    IoT-Based Solution for Paraplegic Sufferer to Send Signals to Physician via Internet

    Hospitals and non-profit organizations care for people with paralysis whose bodies have been wholly or partially incapacitated by a paralytic attack. Because of impaired motor coordination, these patients are typically unable to communicate their needs: they can neither speak clearly nor use sign language. For such cases, we propose a system that enables a disabled person to use any body part still capable of movement to display a text message on an LCD. The system also addresses situations in which the patient cannot be attended to in person, sending an SMS message via GSM instead. The proposed system operates by detecting the tilt direction of the chosen body part, so patients can communicate with physicians, therapists, or their loved ones at home or at work over the web. Case-specific data, such as heart rate, must be continuously reported to health centers; the proposed method therefore also tracks the patient's pulse rate and similar vital signs, using photoplethysmography to measure heart rate. The decoded periodic data is transmitted continuously by a microcontroller coupled to a transmitting module. The physician's cabin contains a receiver device that obtains, decodes, and continuously displays the data on a graphical interface viewable on a laptop. As a result, a single physician can monitor and handle multiple cases at once
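    The SMS leg of such a system is typically a serial-attached GSM modem driven by standard AT commands. The sketch below is a hypothetical illustration of that step only; the serial port, baud rate, and phone number are placeholders, not details from the paper.

        import time
        import serial  # pyserial

        def send_sms(port, number, text):
            # Standard GSM AT-command sequence: switch the modem to text
            # mode, address the recipient, then terminate with Ctrl+Z.
            with serial.Serial(port, 9600, timeout=2) as gsm:
                gsm.write(b"AT+CMGF=1\r")
                time.sleep(0.5)
                gsm.write(f'AT+CMGS="{number}"\r'.encode())
                time.sleep(0.5)
                gsm.write(text.encode() + b"\x1a")

        # e.g. send_sms("/dev/ttyUSB0", "+15550100", "Patient requests water")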

    Proposed Face Detection Classification Model Based on Amazon Web Services Cloud (AWS)

    One of the most important features of the Amazon Web Services (AWS) cloud is that a program can be run and accessed from any location: results can be monitored remotely, many images can be stored, and computation is faster. This work proposes a face detection classification model based on the AWS cloud that classifies faces into two classes, a non-permission class and a permission class, by training on a real dataset collected from our cameras. The proposed cloud-based Convolutional Neural Network (CNN) system shares computational resources across Artificial Neural Networks (ANNs) to reduce redundant computation. The test system uses Internet of Things (IoT) services: our camera system captures the images and uploads them to the Amazon Simple Storage Service (AWS S3) cloud. Two detectors, a Haar cascade and multitask cascaded convolutional neural networks (MTCNN), are then run on the Amazon Elastic Compute Cloud (AWS EC2), and their outputs are compared on accuracy and execution time. The classified non-permission images are then uploaded back to the AWS S3 cloud. The validation accuracy of the offline-augmentation face detection classification model reached 98.81%, while the loss and mean square error decreased to 0.0176 and 0.0064, respectively. The end-to-end execution time of the AWS cloud system for one image reached three seconds with the Haar cascade detector and seven seconds with MTCNN
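    Two of the steps above map onto well-known APIs: uploading a captured frame to S3 with boto3, and Haar-cascade face detection with OpenCV on the EC2 side. The sketch below shows those two steps only; bucket, key, and file names are placeholders.

        import boto3
        import cv2

        # Camera host: upload the captured frame to S3
        s3 = boto3.client("s3")
        s3.upload_file("frame.jpg", "camera-bucket", "incoming/frame.jpg")

        # EC2 host: detect faces with OpenCV's bundled Haar cascade
        img = cv2.imread("frame.jpg")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    The MTCNN branch can be run analogously (for instance with the mtcnn Python package) and the two detectors compared on accuracy and per-image latency as the abstract describes.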

    Unveiling the frontiers of deep learning: innovations shaping diverse domains

    Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To explore the current state of deep learning, it is necessary to investigate its latest developments and applications in these disciplines. However, the literature is lacking in exploring the applications of deep learning across all potential sectors. This paper therefore extensively investigates the potential applications of deep learning across all major fields of study, together with the associated benefits and challenges. As evidenced in the literature, DL exhibits accuracy in prediction and analysis, which makes it a powerful computational tool, and it has the ability to articulate and optimize itself, making it effective at processing data with little prior training. At the same time, deep learning necessitates massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures such as LSTMs and GRUs can be utilized. For multimodal learning, shared neurons in the neural network for all activities and specialized neurons for particular tasks are necessary.
    Comment: 64 pages, 3 figures, 3 tables
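    The closing remark about shared and task-specific neurons corresponds to the usual hard-parameter-sharing design for multitask learning. A minimal PyTorch sketch of that idea, with illustrative sizes and names, could be:

        import torch.nn as nn

        class MultiTaskNet(nn.Module):
            # A shared trunk ("shared neurons for all activities") feeding
            # one specialised linear head per task.
            def __init__(self, in_dim, hidden, task_dims):
                super().__init__()
                self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
                self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in task_dims)

            def forward(self, x):
                z = self.trunk(x)              # shared representation
                return [head(z) for head in self.heads]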

    What you see is what you feel: Top-down emotional effects in face detection

    Face detection is an initial step of many social interactions involving a comparison between a visual input and a mental representation of faces, built from previous experience. Furthermore, whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. In four studies and a computational model, we investigated how emotions affect mental representations of faces and how facial representations could be used to transmit and communicate people’s emotional states. To this end, we used an adapted reverse correlation technique suggested by Gill et al. (2019), which was based on an earlier idea, the ‘Superstitious Approach’ (Gosselin & Schyns, 2003). In Experiment 1 we measured how naturally occurring anxiety and depression, caused by external factors, affected people’s mental representations of faces. In two sessions, on separate days, participants (coders) were presented with ‘colourful’ visual noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments that the coders identified as a face, we reconstructed the pictorial mental representation utilised by each participant in the identification process. Across coders, we found significant correlations between changes in the size of the mental representation of faces and changes in their level of depression. Our findings provide a preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants’ mental representations can reflect their emotional state, we conducted a validation study (Experiment 2) with a group of naïve participants (verifiers) who were asked to classify the reconstructed mental representations of faces by emotion. Thus, we assessed whether the mental representations communicate coders’ emotional states to others. The analysis showed no significant correlation between coders’ emotional states, depicted in their mental representations of faces, and verifiers’ evaluation scores. In Experiment 3, we investigated how different induced moods, negative and positive, affected mental representations of faces. Coders underwent two different mood induction conditions during two separate sessions. They were presented with the same ‘colourful’ noise stimuli used in Experiment 1 and asked to detect faces. We were able to reconstruct pictorial mental representations of faces based on the identified fragments. The analysis showed a significant negative correlation between changes in coders’ mood along the dimension of arousal and changes in the size of their mental representation of faces. Similar to Experiment 2, we conducted a validation study (Experiment 4) to investigate whether coders’ mood could have been communicated to others through their mental representations of faces. As in Experiment 2, we found no correlation between coders’ mood, depicted in their mental representations of faces, and verifiers’ evaluation of the intensity of the transmitted emotional expression. Lastly, we tested a preliminary computational model (Experiment 5) to classify and predict coders’ emotional states based on their reconstructed mental representations of faces. In spite of the small number of training examples and the high dimensionality of the input, the model performed just above chance level. Future studies should look at improving the computational model by using a larger training set and testing other classifiers. Overall, the present work confirmed the presence of facial templates used during face detection. It provides an adapted version of a reverse correlation technique that can be used to access mental representations of faces with a significantly reduced number of trials. Lastly, it provides evidence on how emotions can influence the size of mental representations of faces
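    The core of the reverse-correlation analysis described above reduces to averaging the noise stimuli a coder classified as containing a face, which yields that coder’s pictorial template (a classification image). A minimal sketch, with assumed array layouts:

        import numpy as np

        def reconstruct_template(noise_stimuli, responses):
            # noise_stimuli: (n_trials, H, W) or (n_trials, H, W, 3) noise images
            # responses: one boolean per trial, True where the coder reported a face
            face_trials = noise_stimuli[np.asarray(responses, dtype=bool)]
            return face_trials.mean(axis=0)  # the reconstructed mental representation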