Crowd Size Estimation and Detecting Social Distancing Using Raspberry Pi and OpenCV
During the COVID-19 pandemic, the number of people gathering at public places and festivals has been restricted, and social distancing has been practised throughout the world. Managing crowds is always a challenging task and requires some form of monitoring technology. In this paper, we develop a device that detects people, provides a human count, and identifies people who are not maintaining social distancing. The work was implemented on a Raspberry Pi 3 board with OpenCV-Python. This method can effectively manage crowds.
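The abstract does not give implementation details, but the counting and distance-violation logic it describes can be sketched in plain Python. The box format, threshold, and function names below are illustrative assumptions; in practice the boxes would come from an OpenCV person detector (e.g. `cv2.HOGDescriptor`) running on each frame.

```python
from itertools import combinations
import math

def centroid(box):
    # box = (x, y, w, h) in pixels
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def social_distance_violations(boxes, min_dist_px=100):
    """Return index pairs of person detections whose centroids are
    closer than min_dist_px (a crude pixel-space proxy for distance)."""
    violations = []
    for (i, a), (j, b) in combinations(enumerate(boxes), 2):
        (ax, ay), (bx, by) = centroid(a), centroid(b)
        if math.hypot(ax - bx, ay - by) < min_dist_px:
            violations.append((i, j))
    return violations

# Crowd size is simply the detection count per frame:
boxes = [(10, 10, 40, 80), (60, 12, 40, 80), (400, 10, 40, 80)]
print(len(boxes))                         # crowd size: 3
print(social_distance_violations(boxes))  # [(0, 1)]
```

A real deployment would calibrate the pixel threshold against the camera geometry, since raw pixel distance ignores perspective.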
Neurocomputing for internet of things: object recognition and detection strategy
Modern integrated technologies have changed traditional systems through more advanced machine learning and artificial intelligence methods, new-generation standards, and smart, intelligent devices. New integrated networks such as the Internet of Things (IoT) and 5G standards offer various benefits and services. However, these networks suffer from object detection, localization, and classification issues. Convolutional Neural Networks (CNNs) and their variants have been adopted for object detection, classification, and localization in IoT networks to create autonomous devices that make decisions and perform tasks without human intervention, and that can learn in-depth features. Motivated by these facts, this paper investigates existing object detection and recognition techniques based on CNN models used in IoT networks. After a detailed comparison, it presents a Convolutional Neural Networks for 5G-Enabled Internet of Things Network (CNN-5GIoT) model for moving and static objects in IoT networks. The proposed model is evaluated against existing models to check the accuracy of real-time tracking, and it is more efficient for real-time object detection and recognition than conventional methods.
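The convolution operation at the heart of the CNN models this abstract surveys can be illustrated with a minimal NumPy sketch. This is illustrative only, not the CNN-5GIoT architecture; the kernel and image are made up for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core CNN feature-map operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Standard CNN nonlinearity."""
    return np.maximum(x, 0)

# A vertical-edge kernel responds at the boundary of a synthetic frame
# whose right half is bright:
frame = np.zeros((6, 6))
frame[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])
features = relu(conv2d(frame, edge_kernel))
print(features.shape)  # (5, 5)
```

Stacks of such learned kernels, followed by pooling and fully connected layers, are what detection networks in IoT deployments build on.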
Simplified Video Surveillance Framework for Dynamic Object Detection under Challenging Environment
An effective video surveillance system is essential for constructing better video analytics. Existing literature on video analytics tends to apply algorithms directly on top of the video file without much emphasis on the following problems: i) dynamic orientation of the subject, ii) poor illumination conditions, iii) identification and classification of subjects, and iv) the need for faster response times. The proposed system therefore implements an analytical concept that uses the depth image of the video feed along with the original coloured video feed to extract significant information about the motion blobs of dynamic subjects. Implemented in MATLAB, the study outcome shows that the system addresses all of the above problems in existing video-analytics research using a very simple, non-iterative implementation, thereby demonstrating its applicability in the real world.
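The motion-blob idea from the depth channel can be sketched with simple frame differencing. This is a stand-in for the paper's MATLAB pipeline, assuming depth frames as 2-D arrays; thresholds and function names are illustrative.

```python
import numpy as np

def motion_blob_mask(prev_depth, curr_depth, thresh=0.1):
    """Binary mask of pixels whose depth changed more than `thresh`
    between consecutive frames -- a crude motion-blob segmentation."""
    return (np.abs(curr_depth - prev_depth) > thresh).astype(np.uint8)

def blob_bounding_box(mask):
    """Bounding box (x0, y0, x1, y1) of foreground pixels, or None."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())

prev = np.ones((8, 8))
curr = prev.copy()
curr[2:5, 3:6] = 0.5            # a subject moved closer to the sensor
mask = motion_blob_mask(prev, curr)
print(blob_bounding_box(mask))  # (3, 2, 5, 4)
```

Because depth is illumination-invariant, a mask like this survives the poor-lighting conditions the abstract highlights, while the colour feed is kept for classification.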
Human Action Recognition and Monitoring in Ambient Assisted Living Environments
Population ageing is set to become one of the most significant challenges of the 21st century, with implications for almost all sectors of society. Especially in developed countries, governments should immediately implement policies and solutions to facilitate the needs of an increasingly older population. Ambient Intelligence (AmI) and in particular the area of Ambient Assisted Living (AAL) offer a feasible response, allowing the creation of human-centric smart environments that are sensitive and responsive to the needs and behaviours of the user.
In such a scenario, understanding what a human being is doing, whether and how he/she is interacting with specific objects, or whether abnormal situations are occurring is critical.
This thesis focuses on two related research areas of AAL: the development of innovative vision-based techniques for human action recognition and the remote monitoring of users' behaviour in smart environments.
The former topic is addressed through different approaches based on data extracted from RGB-D sensors.
A first algorithm exploiting skeleton joint orientations is proposed. This approach is then extended through a multi-modal strategy that includes the RGB channel to define a number of temporal images capable of describing the time evolution of actions.
Finally, the concept of template co-updating for action recognition is introduced: exploiting different data categories (e.g., skeleton and RGB information) improves the effectiveness of template updating through co-updating techniques.
The action recognition algorithms have been evaluated on CAD-60 and CAD-120, achieving results comparable with the state of the art. Moreover, due to the lack of datasets including skeleton joint orientations, a new benchmark named Office Activity Dataset has been internally acquired and released.
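The skeleton-orientation feature underlying the first algorithm can be sketched as unit direction vectors along skeleton edges. The sketch below is a simplified 2-D illustration under assumed joint names; real RGB-D skeletons (e.g. in CAD-60) are 3-D, and the thesis's actual descriptor may differ.

```python
import math

# Hypothetical skeleton edges: (parent joint, child joint)
SKELETON_EDGES = [("shoulder", "elbow"), ("elbow", "hand")]

def orientation(a, b):
    """Unit direction vector from joint a to joint b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)

def orientation_features(joints):
    """Concatenate per-edge orientations into one pose descriptor.
    A sequence of these descriptors describes an action over time."""
    feats = []
    for parent, child in SKELETON_EDGES:
        feats.extend(orientation(joints[parent], joints[child]))
    return feats

pose = {"shoulder": (0.0, 0.0), "elbow": (0.0, 1.0), "hand": (1.0, 1.0)}
print(orientation_features(pose))  # [0.0, 1.0, 1.0, 0.0]
```

Orientations are invariant to the subject's position and size in the frame, which is why they make convenient action-recognition features.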
Regarding the second topic, the goal is to provide a detailed implementation strategy for a generic Internet of Things monitoring platform that could be used for checking users' behaviour in AmI/AAL contexts.
Internet of Things with Deep Learning-Based Face Recognition Approach for Authentication in Control Medical Systems
Internet of Things (IoT) with deep learning (DL) is growing drastically and plays a significant role in many applications, including medical and healthcare systems. It can give users in this field an advantage in terms of enhanced touchless authentication, especially during the spread of infectious diseases like coronavirus disease 2019 (COVID-19). Even though a number of security systems are available, they suffer from one or more issues, such as identity fraud, loss of keys and passwords, or spreading diseases through touch-based authentication tools. To overcome these issues, IoT-based intelligent medical authentication and control systems using DL models are proposed to effectively enhance the security of medical and healthcare places. This work applies IoT with DL models to recognize human faces for authentication in smart control medical systems. We use the Raspberry Pi (RPi) because of its low cost; it acts as the main controller in this system. The installation of a smart control system using the general-purpose input/output (GPIO) pins of the RPi also enhances anti-theft protection for smart locks, and the RPi is connected to smart doors. For user authentication, a camera module captures the face image and compares it with database images to grant access. The proposed approach performs face detection using Haar cascade techniques, while face recognition comprises the following steps. The first step is facial feature extraction, done using pretrained CNN models (ResNet-50 and VGG-16) along with the local binary pattern histogram (LBPH) algorithm. The second step is classification, using a support vector machine (SVM) classifier. Only a face classified as genuine unlocks the door; otherwise, the door remains locked, the system sends a notification email to the home/medical place with the detected face images, and it stores the detected person's name and the time information in the SQL database.
The comparative study in this work shows that the approach achieved 99.56% accuracy compared with several related methods.
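The LBPH descriptor used in the feature-extraction step can be sketched in a few lines of NumPy. This is a simplified single-region version for illustration; OpenCV's `LBPHFaceRecognizer` additionally divides the face into a grid and concatenates per-cell histograms.

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour local binary pattern code for a 3x3 patch: each
    neighbour >= centre contributes one bit."""
    center = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, v in enumerate(neighbours) if v >= center)

def lbp_histogram(gray, bins=256):
    """Normalized histogram of LBP codes over a grayscale image --
    the face descriptor a downstream SVM would classify."""
    h, w = gray.shape
    codes = [lbp_code(gray[i - 1:i + 2, j - 1:j + 2])
             for i in range(1, h - 1) for j in range(1, w - 1)]
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

gray = np.ones((5, 5))          # a flat patch: every code is 255
hist = lbp_histogram(gray)
print(hist[255])                # 1.0
```

Because LBP codes depend only on local intensity ordering, the descriptor is fairly robust to the lighting variation typical of door-mounted cameras.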
DeePLT: Personalized Lighting Facilitates by Trajectory Prediction of Recognized Residents in the Smart Home
In recent years, making various parts of the home intelligent has become one of the essential features of any modern home. One of these parts is the intelligent lighting system that personalizes the light for each person. This paper proposes an intelligent system based on machine learning that personalizes lighting at the instant future location of a recognized user, inferred by trajectory prediction. Our proposed system consists of the following modules: (I) human detection, to detect and localize the person in each given video frame; (II) face recognition, to identify the detected person; (III) human tracking, to track the person across the sequence of video frames; and (IV) trajectory prediction, to forecast the future location of the user in the environment using Inverse Reinforcement Learning. The proposed method provides a unique profile for each person, including specifications, face images, and custom lighting settings; this profile is used in the lighting adjustment process. Unlike other methods that apply constant lighting for every person, our system can apply each person's desired lighting in terms of color and light intensity without direct user intervention, so the lighting is adjusted with higher speed and better efficiency. In addition, the predicted trajectory path lets the proposed system apply the desired lighting in advance, creating more pleasant and comfortable conditions for the home residents. In the experimental results, the system applied the desired lighting in an average time of 1.4 seconds from the moment of entry, with a performance of 22.1 mAP in human detection, 95.12% accuracy in face recognition, 93.3% MDP in human tracking, and 10.80 MinADE20, 18.55 MinFDE20, 15.8 MinADE5, and 30.50 MinFDE5 in trajectory prediction.
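The last two stages of the pipeline, predicting where the recognized user will be and applying that user's lighting profile there, can be sketched as follows. A constant-velocity extrapolation stands in for the paper's Inverse Reinforcement Learning predictor, and the profile fields and names are illustrative, not the DeePLT schema.

```python
def predict_next(track, horizon=1):
    """Constant-velocity extrapolation from the last two positions
    (a simple baseline in place of the IRL trajectory predictor)."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (x1 + horizon * (x1 - x0), y1 + horizon * (y1 - y0))

# Hypothetical per-resident lighting profiles (color + intensity)
PROFILES = {"alice": {"color": "warm", "intensity": 0.6}}

def lighting_for(user, track):
    """Return the zone to light next and the user's preferred settings,
    falling back to a neutral default for unrecognized residents."""
    target_zone = predict_next(track)
    settings = PROFILES.get(user, {"color": "neutral", "intensity": 0.5})
    return target_zone, settings

zone, settings = lighting_for("alice", [(0, 0), (1, 1)])
print(zone, settings)  # (2, 2) {'color': 'warm', 'intensity': 0.6}
```

Lighting the predicted zone rather than the current one is what lets the system have the preferred light ready before the resident arrives.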
Optimization for Deep Learning Systems Applied to Computer Vision
Since the deep learning revolution, and especially over recent years (2010-2022), DNNs have become an essential part of the computer vision field. They are present in all of its sub-fields (video surveillance, industrial manufacturing, autonomous driving, ...) and in almost every new state-of-the-art application that is developed. However, DNNs are very complex, and their architecture needs to be carefully selected and adapted in order to maximize efficiency. In many cases, networks are not specifically designed for the considered use case; they are simply recycled from other applications and slightly adapted, without taking into account the particularities of the use case or the interaction with the rest of the system components, which usually results in a performance drop. This research work aims to provide knowledge and tools for the optimization of systems based on Deep Learning applied to different real use cases within the field of Computer Vision, in order to maximize their effectiveness and efficiency.