Holoscopic 3D perception for autonomous vehicles
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Autonomous mobile platforms are set to be a major part of future transportation, and autonomous navigation is the critical component of such platforms. An autonomous mobile platform navigates by perceiving the environment through sensors mounted on the vehicle and acting on the data it receives from these sensors, making sense of the environment and its surroundings. As a result, autonomous navigation consists of localisation (positioning) and path planning, both of which require very accurate sensor measurements. In terms of accuracy, sensors can broadly be divided into two groups: (a) high-accuracy sensors, such as state-of-the-art LiDAR and vision sensors (e.g. the Mobileye sensor), which are expensive to deploy because they rely on offline map creation; and (b) low-accuracy sensors, such as GPS (accurate to within 2-10 metres) and IMUs (which suffer from drift), which can be fused to improve positioning. To cope with low-accuracy sensors, researchers typically use very complex models, which in turn run into performance, reliability and consistency issues. Furthermore, it is commonly believed that perception and situational awareness are essential to navigating safely, and there has been extensive research on AI-enabled perception, such as Mobileye and Tesla vehicles, which use 2D cameras for perception. In this research, an innovative method is proposed that uses a rich vision sensor, the holoscopic 3D camera, for environment perception, combined with artificial intelligence algorithms to observe road objects and learn their 3D behaviour for reliable detection and recognition. The sensor provides rich information: 3D cubic visual information about the environment, including the very valuable depth information that imitates the third coordinate of the real world. To learn the objects, different AI algorithms are studied; in particular, a deep learning model is proposed that provides reasonably good results. To evaluate the innovative holoscopic 3D sensor, it was applied to a face recognition challenge under different facial expressions, where 2D images are considered to fail. The holoscopic 3D sensor outperformed them, delivering outstanding performance by recognising faces under different expressions after training only on neutral faces with a simple AI algorithm. A holoscopic perception database of 200,000 frames was then designed and developed for autonomous cars. The experimental results are promising: AI algorithms, and deep learning algorithms in particular, learn more effectively from holoscopic 3D content than from traditional 2D images, even with DL models designed for visual features, and holoscopic 3D images additionally contain motion data that remains to be exploited.
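The 3D cubic structure described above comes from the holoscopic camera's micro-lens (fly's-eye) array, which records a grid of small elemental images; a standard way to expose that structure to a learning algorithm is to regroup pixels of equal intra-lens offset into viewpoint images. A minimal idealised NumPy sketch (the function name and the square, distortion-free lens-array assumption are illustrative, not taken from the thesis):

```python
import numpy as np

def extract_viewpoints(h3d_image: np.ndarray, lens_size: int) -> np.ndarray:
    """Regroup a holoscopic image into viewpoint images.

    h3d_image : (H, W) array whose H and W are multiples of lens_size,
                i.e. a grid of lens_size x lens_size elemental images.
    Returns an array of shape (lens_size, lens_size, H//lens_size, W//lens_size):
    viewpoint (u, v) collects pixel (u, v) from every elemental image.
    """
    H, W = h3d_image.shape
    assert H % lens_size == 0 and W % lens_size == 0
    # Split into elemental images, then swap axes so that the intra-lens
    # coordinates (u, v) index whole viewpoint images.
    eis = h3d_image.reshape(H // lens_size, lens_size, W // lens_size, lens_size)
    return eis.transpose(1, 3, 0, 2)
```

The resulting stack of viewpoint images (each a slightly shifted perspective of the scene) is what carries the implicit depth information that a deep model can exploit.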
Hand gesture recognition using deep learning neural networks
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Human Computer Interaction (HCI) is a broad field involving different types of interactions, including gestures. Gesture recognition concerns non-verbal motions used as a means of communication in HCI. A system may be utilised to identify human gestures and convey information for device control. This represents a significant field within HCI involving device interfaces and users. The aim of gesture recognition is to record gestures that are formed in a certain way and then detected by a device such as a camera. Hand gestures can be used as a form of communication for many different applications. They may be used by people with different disabilities, including those with hearing impairments, speech impairments and stroke patients, to communicate and fulfil their basic needs.
Various studies have previously been conducted on hand gestures, proposing different techniques for hand gesture experiments. For image processing there are multiple tools to extract image features, and Artificial Intelligence offers a variety of classifiers for different types of data. 2D and 3D hand gestures require effective algorithms to extract images and classify various mini-gestures and movements. This research addresses the issue using different algorithms. To detect 2D or 3D hand gestures, this research uses image processing tools such as Wavelet Transforms (WT) and Empirical Mode Decomposition (EMD) to extract image features. An Artificial Neural Network (ANN) classifier is used to train and classify the data, alongside Convolutional Neural Networks (CNN). These methods were examined in terms of multiple parameters, such as execution time, accuracy, sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood, negative likelihood, receiver operating characteristic (ROC), area under the ROC curve and root mean square. This research presents four original contributions in the field of hand gestures. The first is an implementation of two experiments using 2D hand gesture video, where ten different gestures are detected at short and long distances using an iPhone 6 Plus with 4K resolution; the experiments use WT and EMD for feature extraction and ANN and CNN for classification. The second comprises 3D hand gesture video experiments in which twelve gestures are recorded using a holoscopic imaging system camera. The third pertains to experimental work carried out to detect seven common hand gestures. Finally, disparity experiments were performed using the left and right 3D hand gesture videos to discover disparities. The comparison shows the accuracy of CNN reaching 100% compared to the other techniques.
CNN is clearly the most appropriate method to be used in a hand gesture system.
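For reference, the binary-classification metrics listed above all follow directly from confusion-matrix counts. A minimal sketch (the counts in the usage example are illustrative, not results from the thesis):

```python
def binary_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true positive rate (recall)
    specificity = tn / (tn + fp)          # true negative rate
    return {
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": tp / (tp + fp),            # positive predictive value (precision)
        "npv": tn / (tn + fn),            # negative predictive value
        "lr_pos": sensitivity / (1 - specificity),   # positive likelihood ratio
        "lr_neg": (1 - sensitivity) / specificity,   # negative likelihood ratio
    }

# Illustrative counts: 45 true positives, 5 false negatives,
# 10 false positives, 40 true negatives.
m = binary_metrics(tp=45, fn=5, fp=10, tn=40)
```

With these counts, sensitivity is 45/50 = 0.9 and specificity is 40/50 = 0.8; the ROC curve and its area are built by sweeping the decision threshold and re-computing these two rates at each point.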
Fingers micro-gesture recognition based on holoscopic 3D imaging system
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Micro-gesture recognition has been widely researched in recent years; in particular, there has been a great focus on 3D micro-gesture recognition, which consists of classifying the micro-gesture movements of the fingers for touch-less control applications. The holoscopic 3D imaging system mimics the fly's-eye technique to capture a true 3D scene that is rich in both texture and motion information. As a result, the holoscopic 3D imaging system is a suitable approach for robust recognition applications. This PhD research focuses on innovative 3D micro-gesture recognition based on the holoscopic 3D system, delivering robust, reliable and precise performance for 3D micro-gestures. This can also be applied to a wide range of other applications such as the Internet of Things (IoT), AR/VR, robotics and other touch-less interaction.
Due to the lack of a holoscopic 3D dataset, a comprehensive 3D micro-gesture dataset (HoMG), including both holoscopic 3D images and videos, was prepared. It is a reasonably sized holoscopic 3D dataset, captured under different camera settings and conditions from 40 participants. An innovative 3D micro-gesture recognition approach is proposed based on 2D feature extraction methods with basic classification methods, reaching a recognition accuracy of around 50.9%. For video-based data, 3D feature extraction methods achieved 66.7% recognition accuracy, compared with the 50.9% accuracy for micro-gesture images, as an initial investigation. The HoMG database was the basis of a challenge at the IEEE International Conference on Automatic Face and Gesture Recognition 2018, in which four groups from international research institutes participated and contributed many new methods as further developments, and where the proposed method was published.
Building on the further enriched holoscopic 3D dataset, an innovative 3D micro-gesture recognition system is proposed and its performance evaluated by carrying out a like-for-like comparison with state-of-the-art methods. In addition, a fast and efficient pre-processing algorithm extracts the elemental images from H3D images, and a simplified viewpoint image extraction method is presented. A pre-trained CNN model with an attention mechanism is implemented on the viewpoint (VP) images to predict gesture probabilities. The proposed approach is further improved using a voting strategy and achieves 87% accuracy, outperforming all existing state-of-the-art methods on the image-based database. Advanced 3D micro-gesture recognition is investigated on the sequence (video) database,
where an end-to-end model is used for an effective H3D-based micro-gesture recognition system. For the front-end network, two methods, traditional viewpoint image extraction and a novel pseudo viewpoint image extraction, are used and evaluated. The pseudo viewpoint (PVP) front-end was created to help deep learning networks understand the 3D information implied by the H3D imaging system, while the viewpoint (VP) front-end follows the traditional H3D image method of extracting and reconstructing multi-viewpoint images. Both front-ends are fed into four popular advanced deep networks for learning and classification. These experiments evaluated the performance of 2D convolutional, 3D convolutional, mixed 2D/3D convolutional and LSTM networks on the HoMG video database, which is beneficial for applying deep learning networks to the H3D imaging system. Finally, to obtain higher accuracies, majority voting is applied for further improvement. The final results show that the performance is not only better than the traditional methods but also superior to the existing deep learning based approaches, which clearly demonstrates the effectiveness of the proposed approach.
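The voting step used throughout this pipeline can be sketched simply: each viewpoint (or clip) yields a predicted label, and the final decision is the most common vote; when per-class probabilities are available (as from the attention CNN), averaging them is a common soft-voting alternative. A minimal sketch under those assumptions (function names are illustrative, not the thesis's):

```python
from collections import Counter

import numpy as np

def majority_vote(predictions: list) -> int:
    """Fuse per-viewpoint (or per-clip) class labels into one final label.

    Ties go to the label encountered first, following
    collections.Counter.most_common ordering.
    """
    return Counter(predictions).most_common(1)[0][0]

def soft_vote(probs: np.ndarray) -> int:
    """Average predicted class probabilities across viewpoints, then argmax.

    probs : (n_viewpoints, n_classes) array of per-viewpoint probabilities.
    """
    return int(np.argmax(probs.mean(axis=0)))
```

Hard voting corresponds to the majority-voting step described above; soft voting exploits the confidence of each per-viewpoint prediction rather than only its argmax.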
ReSCon '12, Research Student Conference: Book of Abstracts
The fifth SED Research Student Conference (ReSCon2012) was hosted over three days, 18-20 June 2012, in the Hamilton Centre at Brunel University. The conference consisted of 130 oral and 70 poster presentations, showcasing the high-quality and diverse research being conducted within the School of Engineering and Design by postgraduate research students. The conference is held annually, and ReSCon plays a key role in contributing to research and innovation within the School.