Face Recognition Door Lock
The goal of this project was to build a modern, easy-to-use smart door lock that enables accessible unlocking and adds convenience, utility, and security to the home. It allows users to open their door remotely via the accompanying Smart Lock mobile app, or hands-free through face recognition via a camera mounted on the door. The system is made up of three major components: a cloud back-end, an on-board logic unit, and a mobile application.
Design and prototyping of a face recognition system on smart camera networks
The aim of this work is to design and develop a face recognition system running on smart camera networks. In many systems, such cameras are used passively to send video to a recording server, and processing of the acquired data is executed mainly on remote, more powerful computers (or clusters of computers). In this thesis, a distributed architecture was developed in which computer vision algorithms are executed on the smart cameras themselves, which can exchange information to balance resources. A smart camera network has been defined, specifying the roles of the client nodes and the server, and how nodes cooperate and communicate with one another and with the server. The smart cameras initially look for changes in the environment. When motion is detected, they perform face detection. Once a face is found, the camera itself processes it and tries to assess to whom it belongs, using a local cache of recognizers. This cache stores a portion of the information present on the server side and can be used to perform recognition tasks on the smart cameras. If a node is not able to identify a face, it sends a query to the server. Finally, if the person's ID can be determined, either by the server or by the client itself, the occurrence of the corresponding recognizer is announced to the nearest nodes. Human faces that were not recognized are stored on the remote server and can be manually annotated. Clustering algorithms have been tested to automatically group faces belonging to unknown people on the server side and make manual annotation easier. Extensive experiments have been performed on a freely available dataset to assess both the recognition performance and the benefits of collaboration among cameras. Raspberry Pi devices were used as camera network nodes, and various tests were performed to verify the efficiency of the face recognition approach on such devices.
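The cache-then-server lookup the thesis describes can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the `LocalCache` contents, `server.query`, and `add_recognizer` names are placeholders.

```python
# Sketch of a camera node's recognition flow: try the local cache of
# recognizers first, fall back to the server, then announce the match
# to nearby nodes. All names here are illustrative placeholders.

def recognize(face_embedding, cache, server, neighbors, threshold=0.6):
    # 1. Try the local cache of recognizers (person_id -> scoring fn).
    best_id, best_score = None, 0.0
    for person_id, recognizer in cache.items():
        score = recognizer(face_embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score >= threshold:
        for node in neighbors:            # announce the match nearby
            node.add_recognizer(best_id)
        return best_id
    # 2. Local miss: query the server's full recognizer set.
    person_id = server.query(face_embedding)
    if person_id is not None:
        for node in neighbors:
            node.add_recognizer(person_id)
    else:
        server.store_unknown(face_embedding)  # kept for manual annotation
    return person_id
```

The point of the neighbor notification is resource balance: a person recognized by one camera is likely to appear on adjacent cameras next, so their recognizer is pre-warmed in those nodes' caches.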
Visual intent recognition in a multiple camera environment
Activity recognition is an active field of research with many applications for both industrial and home use. Industry might use it as part of a security surveillance system, while home uses include applications such as smart rooms and aids for the disabled. This thesis develops one component of a “smart system” that can recognize certain activities related to the subject’s intent, i.e., where subjects concentrate their attention. A visual intent activity recognition system that operates in near real time is created, based on multiple cameras. To accomplish this, a combination of face detection, facial feature detection, and pose estimation is used to estimate each subject’s gaze direction. To allow better detection of the subject’s facial features, and thus more robust pose estimation, a multiple-camera system is used: a zoomed-out wide-view camera finds the subject, while a narrow-view camera zooms in to capture more detail on the face. Neural networks are then used to locate the mouth and eyes. A triangle template is matched to these features and used to estimate the subject’s pose in real time. This method is used to determine where the subjects are looking and to detect the activity of looking intently at a given location. A four-camera system recognizes the activity as occurring when at least one of two subjects is looking at the other. Testing showed that, on average, the pose estimate was accurate to within 5.08 degrees. The visual intent activity recognition system was able to correctly determine when one subject was looking at the other over 95% of the time.
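The idea of recovering head pose from an eye–mouth triangle can be sketched with simple projective geometry. This is a deliberately simplified illustration, not the thesis's method: the interocular normalization and the yaw-only estimate are assumptions made for the sketch.

```python
import math

# Simplified pose-from-facial-triangle sketch: estimate head yaw from
# the horizontal offset of the mouth relative to the midpoint between
# the eyes. The real system matches a triangle template to features
# located by neural networks; this geometry is illustrative only.

def estimate_yaw_deg(left_eye, right_eye, mouth):
    """left_eye, right_eye, mouth: (x, y) image coordinates.
    Returns an approximate yaw angle in degrees (0 = frontal)."""
    eye_mid_x = (left_eye[0] + right_eye[0]) / 2.0
    interocular = right_eye[0] - left_eye[0]
    if interocular == 0:
        return 0.0
    # When the head turns, the projected mouth shifts sideways relative
    # to the eye midpoint; map the normalized shift to an angle.
    offset = (mouth[0] - eye_mid_x) / interocular
    return math.degrees(math.asin(max(-1.0, min(1.0, offset))))
```

A frontal face (mouth centered under the eyes) yields 0 degrees; a mouth shifted toward one eye yields a proportionally larger yaw estimate.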
A study of children facial recognition for privacy in smart TV
© Springer International Publishing AG 2017. Nowadays, Smart TV is becoming very popular in many families. Smart TV provides computing and connectivity capabilities with access to online services such as video on demand, online games, and even sports and healthcare activities. For example, Google Smart TV, which is based on Google Android, integrates into users’ daily physical activities through its ability to extract and access context information dependent on the surrounding environment and to react accordingly via built-in camera and sensors. Without a viable privacy protection system in place, however, the expanding use of Smart TV can lead to privacy violations through tracking and user profiling by broadcasters and others. This becomes of particular concern when underage users, such as children who may not fully understand the concept of privacy, are involved in using Smart TV services. In this study, we consider digital imaging and ways to identify and properly tag pictures of children in order to prevent unwanted disclosure of personal information. We have conducted a preliminary experiment on the effectiveness of facial recognition technology in Smart TV, in which recognition of child face presence in feedback image streams is performed through Microsoft’s Face Application Programming Interface (API).
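The child-tagging step can be sketched as a filter over an attribute-style face-detection response. The JSON shape below only mimics responses of services like Microsoft's Face API; field names vary by service and version, and the age cutoff is an assumption (the study does not fix one).

```python
import json

# Sketch: flag faces whose estimated age falls below a child threshold,
# so their images can be tagged before any disclosure. The response
# shape and field names are illustrative, not a specific API contract.

CHILD_AGE_THRESHOLD = 13  # assumed cutoff for this sketch

def child_face_ids(detection_json, threshold=CHILD_AGE_THRESHOLD):
    """Return the IDs of detected faces with estimated age < threshold."""
    faces = json.loads(detection_json)
    return [f["faceId"] for f in faces
            if f["faceAttributes"]["age"] < threshold]

sample = json.dumps([
    {"faceId": "f1", "faceAttributes": {"age": 8.5}},
    {"faceId": "f2", "faceAttributes": {"age": 34.0}},
])
```

In a privacy pipeline, the returned IDs would drive tagging or blurring of the corresponding image regions before the stream leaves the device.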
Real time facial expression recognition App development on mobile phones
Facial expression recognition has made significant progress in recent years, with many commercial systems available for real-world applications. There is strong interest in implementing a facial expression system on a portable device such as a tablet or smartphone, using the camera already integrated in the device. Face recognition phone-unlocking apps are now common in new smartphones and have proven to be a hassle-free way to unlock a phone. Implementing a facial expression system on a smartphone would enable fun applications that measure the mood of the user in daily life, or serve as a tool for daily monitoring of emotion in psychology studies.
However, traditional facial expression algorithms are normally computationally intensive and can only be run offline on a computer. In this paper, a novel automatic system is proposed to recognize emotions from face images on a smartphone in real time. In our system, the camera of the smartphone is used to capture the face image, BRIEF features are extracted, and a k-nearest-neighbor algorithm is used for classification. The experimental results demonstrate that the proposed facial expression recognition on a mobile phone is successful, giving up to 89.5% recognition accuracy. The work of Hongying Meng was supported by the Brunel Research Initiative & Enterprise Fund under the project “Automatic Emotional State Detection and Analysis on Embedded Devices”. This research is also partially supported by the 973 project on Network Big Data Analytics funded by the Ministry of Science and Technology, China, No. 2014CB340404.
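The BRIEF-plus-k-NN pipeline in the paper above can be sketched in a few lines. BRIEF builds a binary descriptor from intensity comparisons between randomly sampled pixel pairs, and classification then reduces to Hamming-distance nearest neighbors. The patch size, number of bits, sampling pattern, and k below are illustrative choices, not the paper's configuration.

```python
import numpy as np

# BRIEF-style binary descriptor + k-NN classifier sketch.
# Each descriptor bit records whether one random pixel is darker than
# another; Hamming distance between descriptors drives classification.

rng = np.random.default_rng(0)
PATCH, N_BITS = 32, 256
PAIRS = rng.integers(0, PATCH, size=(N_BITS, 4))  # rows: (y1, x1, y2, x2)

def brief_descriptor(patch):
    """patch: (PATCH, PATCH) grayscale array -> (N_BITS,) binary vector."""
    y1, x1, y2, x2 = PAIRS.T
    return (patch[y1, x1] < patch[y2, x2]).astype(np.uint8)

def knn_predict(desc, train_descs, train_labels, k=3):
    """Majority vote among the k nearest training descriptors
    under Hamming distance."""
    dists = np.count_nonzero(train_descs != desc, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

The appeal on a phone is that both steps are cheap: descriptor extraction is a few hundred pixel comparisons, and Hamming distances are bitwise operations.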
WSD: Wild Selfie Dataset for Face Recognition in Selfie Images
With the rise of handy smartphones in recent years, a trend of capturing selfie images has emerged, and efficient approaches are needed for recognizing faces in selfie images. Due to the short distance between the camera and the face in selfie images, and the various visual effects offered by selfie apps, face recognition becomes more challenging for existing approaches. A dataset is needed to encourage the study of face recognition in selfie images. To alleviate this problem and facilitate research on selfie face images, we develop a challenging Wild Selfie Dataset (WSD), in which the images are captured with the selfie cameras of different smartphones, unlike existing datasets where most images are captured in a controlled environment. The WSD dataset contains 45,424 images from 42 individuals (24 female and 18 male subjects), divided into 40,862 training and 4,562 test images. The average number of images per subject is 1,082, with the minimum and maximum number of images for any subject being 518 and 2,634, respectively. The proposed dataset covers several challenges, including but not limited to augmented reality filtering, mirrored images, occlusion, illumination, scale, expressions, viewpoint, aspect ratio, blur, partial faces, rotation, and alignment. We compare the proposed dataset with existing benchmark datasets in terms of different characteristics. The complexity of the WSD dataset is also observed experimentally: the performance of existing state-of-the-art face recognition methods is poor on the WSD dataset compared to existing datasets. Hence, the proposed WSD dataset opens up new challenges in the area of face recognition and can help the community study the specific challenges of selfie images and develop improved face recognition methods for them.
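The aggregate WSD figures quoted above are internally consistent, which a quick check confirms (per-subject counts other than the minimum, maximum, and average are not published, so only the aggregates can be verified):

```python
# Consistency check of the WSD statistics quoted in the abstract.
total, train, test = 45_424, 40_862, 4_562
subjects = 42

assert train + test == total      # the split covers the whole dataset
avg = total / subjects            # 1081.52..., reported rounded as 1,082
assert round(avg) == 1_082
assert 518 <= avg <= 2_634        # min/max must bracket the average
```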
The Privacy Leakage of IP Camera Systems
For in-home security, intelligent operations such as recognizing key individuals and minimizing losses due to home break-ins, emergencies, and fraud are keys to success. This application integrates closed-circuit television (CCTV) cameras with the deep learning algorithms used to process their images. Automated intrusion detection alerts, real-time fire alerts, smart checkout, and detection of potentially fraudulent point-of-sale (POS) transactions are its main features. Dynamic pricing with machine learning is a software component in which the price of certain products changes over time through an algorithm that considers a variety of pricing variables. The face locator is the part of the algorithm that locates faces and detects motion using the image search function. The system collects all available product locations from the live video of multiple cameras, a helpful feature for finding misplaced products and detecting POS user fraud. This intrusion detection system (IDS) records POS transaction details on the screen as an overlay on video images to reduce home break-ins. To improve the ease and speed of transaction searches, the faces of individuals are used to search for disputed cases. The Smart Checkout System (SCS) utilizes a self-service kiosk where users can generate bills by showing products to the linked camera; SCS uses Google vision technology to identify products. Motion and queue detection spot long queues at the checkout counter in real time and open new lanes to speed up transactions, improve the experience, and reduce the number of abandoned purchases. Face recognition premium features and alerts can also be provided.
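The motion detection that several of these systems rely on (to trigger intrusion alerts or downstream face detection) is commonly built on frame differencing. A minimal sketch, with the pixel threshold and change-ratio values chosen purely for illustration:

```python
import numpy as np

# Minimal frame-differencing motion detector: report motion when the
# fraction of pixels whose intensity changed beyond a threshold exceeds
# a minimum ratio. Threshold and ratio are illustrative values.

def motion_detected(prev_frame, frame, pixel_threshold=25, min_ratio=0.01):
    """prev_frame, frame: 2-D uint8 grayscale arrays of equal shape."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_threshold)
    return changed / diff.size >= min_ratio
```

Real deployments typically add background modeling and morphological filtering to suppress noise, but the trigger logic reduces to this comparison.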