98 research outputs found

    Video-Based Environment Perception for Automated Driving using Deep Neural Networks

    Get PDF
    Automated vehicles require highly accurate environment perception in order to drive safely and comfortably. At the same time, the perception algorithms must meet the real-time requirements of the application with the available computing power. Camera images are a very important source of information for automated vehicles. They contain more detail than data from other sensors such as lidar or radar and are often comparatively inexpensive. This makes it possible to equip an automated vehicle with a surround-view sensor setup without increasing the overall cost too much. In this work, we present an efficient and accurate approach to video-based environment perception for automated vehicles. It is based on deep learning and addresses object detection, object tracking, and semantic segmentation of camera images. We first propose a fast CNN architecture for simultaneous object detection and semantic segmentation. This architecture is scalable, so that accuracy can easily be traded for computation time by changing a single scaling factor. We then modify this architecture to predict an embedding vector for each detected object. These embedding vectors are used as an association metric for object tracking. They are also used in a novel non-maximum suppression algorithm, which we call FeatureNMS. FeatureNMS can achieve higher recall in crowded scenes where the assumptions of the classical NMS algorithm do not hold. We then extend our single-frame CNN architecture to a multi-frame architecture that takes two consecutive video frames as input. The multi-frame architecture estimates the optical flow between the two frames within the neural network. This makes it possible to estimate a displacement vector between the frames for each detected object. These displacement vectors are likewise used as an association metric for object tracking. Finally, we present a simple tracking-by-detection approach with low computational requirements. It relies on a strong object detector and on the embedding and displacement vectors estimated by our CNN architecture. The high recall of the object detector means that tracked objects are detected frequently. Our discriminative association metrics, based on the embedding and displacement vectors, allow new detections to be reliably assigned to existing tracks. Together, these two components make it possible to use a simple constant-velocity motion model with a Kalman filter. The proposed methods for video-based environment perception achieve good results on the challenging Cityscapes and BDD100K datasets. At the same time, they are computationally efficient and meet the real-time requirements of the application. We successfully use the proposed architecture in the perception module of an automated test vehicle, where it has proven itself in practice
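
    As a rough illustration of the FeatureNMS idea mentioned above, the Python/NumPy sketch below keeps a heavily overlapping detection when its embedding differs sufficiently from an already accepted one; the distance measure and both thresholds are assumptions chosen for illustration, not the thesis's exact formulation.

        import numpy as np

        def iou(a, b):
            """Intersection over union of two [x1, y1, x2, y2] boxes."""
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            return inter / (area_a + area_b - inter + 1e-9)

        def feature_nms(boxes, scores, embeddings, iou_thresh=0.5, emb_thresh=0.5):
            """Greedy NMS that suppresses an overlapping box only if its embedding
            is close to that of an already kept box (thresholds are illustrative)."""
            keep = []
            for i in np.argsort(scores)[::-1]:          # highest score first
                suppressed = False
                for j in keep:
                    if iou(boxes[i], boxes[j]) > iou_thresh:
                        # Similar embeddings suggest the same object -> suppress;
                        # dissimilar embeddings are kept, raising recall in crowds.
                        if np.linalg.norm(embeddings[i] - embeddings[j]) < emb_thresh:
                            suppressed = True
                            break
                if not suppressed:
                    keep.append(i)
            return keep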

    Efficient Semantic Segmentation for Resource-Constrained Applications with Lightweight Neural Networks

    Get PDF
    This thesis focuses on developing lightweight semantic segmentation models tailored for resource-constrained applications, effectively balancing accuracy and computational efficiency. It introduces several novel concepts, including knowledge sharing, dense bottleneck, and feature re-usability, which enhance the feature hierarchy by capturing fine-grained details, long-range dependencies, and diverse geometrical objects within the scene. To achieve precise object localization and improved semantic representations in real-time environments, the thesis introduces multi-stage feature aggregation, feature scaling, and hybrid-path attention methods
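
    As a generic, self-contained illustration of the multi-stage feature aggregation idea mentioned above, the PyTorch sketch below fuses a shallow, high-resolution feature map with a deep, low-resolution one before a lightweight segmentation head; the module name, channel sizes and fusion operations are placeholder assumptions and do not reproduce the architectures proposed in the thesis.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class FeatureAggregation(nn.Module):
            """Fuses an early (shallow) and a late (deep) feature map for segmentation."""
            def __init__(self, shallow_ch=64, deep_ch=256, out_ch=128, num_classes=19):
                super().__init__()
                self.reduce = nn.Conv2d(deep_ch, out_ch, kernel_size=1)
                self.fuse = nn.Conv2d(shallow_ch + out_ch, out_ch, kernel_size=3, padding=1)
                self.head = nn.Conv2d(out_ch, num_classes, kernel_size=1)

            def forward(self, shallow, deep):
                # Upsample the semantically rich, low-resolution features to the
                # resolution of the detail-preserving shallow features, then fuse.
                deep = F.interpolate(self.reduce(deep), size=shallow.shape[2:],
                                     mode="bilinear", align_corners=False)
                fused = F.relu(self.fuse(torch.cat([shallow, deep], dim=1)))
                return self.head(fused)                 # per-pixel class logits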

    Solar-Powered Deep Learning-Based Recognition System of Daily Used Objects and Human Faces for Assistance of the Visually Impaired

    Get PDF
    This paper introduces a novel low-cost solar-powered wearable assistive technology (AT) device, whose aim is to provide continuous, real-time object recognition to make it easier for visually impaired (VI) people to find objects in daily life. The system consists of three major components: a miniature low-cost camera, a system on module (SoM) computing unit, and an ultrasonic sensor. The first is worn on the user's eyeglasses and acquires real-time video of the nearby space. The second is worn as a belt and runs deep learning-based methods and spatial algorithms that process the video coming from the camera, performing object detection and recognition. The third assists in positioning the objects found in the surrounding space. The developed device provides audible descriptive sentences as feedback to the user, covering the objects recognized and their position relative to the user's gaze. After a power consumption analysis, a wearable solar harvesting system, integrated with the developed AT device, was designed and tested to extend the energy autonomy in the different operating modes and scenarios. Experimental results obtained with the developed low-cost AT device demonstrate accurate and reliable real-time object identification, with an 86% correct recognition rate and a 215 ms average image-processing time (in the high-speed SoM operating mode). The proposed system is capable of recognizing the 91 objects offered by the Microsoft Common Objects in Context (COCO) dataset plus several custom objects and human faces. In addition, a simple and scalable methodology for using image datasets and training Convolutional Neural Networks (CNNs) is introduced to add objects to the system and extend its repertoire. It is also demonstrated that comprehensive trainings involving 100 images per targeted object achieve 89% recognition rates, while fast trainings with only 12 images achieve acceptable recognition rates of 55%
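
    A minimal sketch of the feedback step described above, assuming hypothetical detector output and an ultrasonic distance reading; the function name, positional wording and thresholds are illustrative only and are not taken from the paper.

        def describe_detections(detections, frame_width, distance_cm):
            """Turn (label, score, box) detections into an audible descriptive sentence."""
            sentences = []
            for label, _score, (x1, _y1, x2, _y2) in detections:
                center = (x1 + x2) / 2.0
                if center < frame_width / 3:
                    position = "to your left"
                elif center > 2 * frame_width / 3:
                    position = "to your right"
                else:
                    position = "in front of you"
                sentences.append(f"{label} {position}, about {distance_cm} centimeters away")
            return ". ".join(sentences)

        # Example: a cup roughly centered in a 640-pixel-wide frame, 75 cm away.
        print(describe_detections([("cup", 0.92, (280, 200, 360, 320))], 640, 75))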

    Learning Attention Mechanisms and Context: An Investigation into Vision and Emotion

    Get PDF
    Attention mechanisms for context modelling are becoming ubiquitous in neural architectures in machine learning. An attention mechanism is a technique that filters out information irrelevant to a given task and focuses on learning task-dependent fixation points or regions. Attention mechanisms thus pose a question about a given task, namely 'what' to learn and 'where/how' to learn it for task-specific context modelling. The context is the set of conditional variables instrumental in deciding the categorical distribution for the given data. Why is learning task-specific context necessary? To answer these questions, context modelling with attention in the vision and emotion domains is explored in this thesis using attention mechanisms with different hierarchical structures. The three main goals of this thesis are building superior classifiers using attention-based deep neural networks (DNNs), investigating the role of context modelling in the given tasks, and developing a framework for interpreting hierarchies and attention in deep attention networks. In the vision domain, gesture and posture recognition tasks in diverse environments are chosen; in the emotion domain, visual and speech emotion recognition tasks are chosen. These tasks are selected for their sequential properties, which call for modelling a spatiotemporal context. One of the key challenges from a machine learning standpoint is to extract patterns that bear maximum correlation with the task-relevant information encoded in the signal while being as insensitive as possible to other types of information carried by the signal. A possible way to overcome this problem is to learn task-dependent representations. To achieve that, novel spatiotemporal context modelling networks and mixture of multi-view attention (MOMA) networks are proposed, built from bidirectional long short-term memory networks (BLSTM), convolutional neural networks (CNN), Capsule networks, and attention networks. A framework is also proposed to interpret the internal attention states with respect to the given task. The results of the classifiers on the assigned tasks are compared with state-of-the-art DNNs, and the proposed classifiers achieve superior results. The context in speech emotion recognition is explored in depth with the attention interpretation framework, which shows that the proposed model can assign word importance based on acoustic context. Furthermore, the internal states of the attention are observed to correlate with human perception of acoustic cues for speech emotion recognition. Overall, the results demonstrate superior classifiers and context learning models with interpretable frameworks. The findings are very important for speech emotion recognition systems. In this thesis, not only are better models produced, but their interpretability is also explored and their internal states are analysed. The phones and words are aligned with the attention vectors, showing that vowel sounds are more important than consonants for defining emotional acoustic cues and that the model can assign word importance based on acoustic context. The use of word importance for predicting emotions is also demonstrated by visualising the attention weights over the words. From a broader perspective, the findings of the thesis on gesture, posture and emotion recognition may be helpful in tasks such as human-robot interaction (HRI) and conversational artificial agents (such as Siri, Alexa). Communication is grounded in symbolic and sub-symbolic cues of intent, whether visual, auditory or haptic, and understanding intent depends heavily on reasoning about the situational context. Emotion, i.e. speech and visual emotion, provides context to a situation and is a deciding factor in response generation. Emotional intelligence and information from vision, audio and other modalities are essential for making human-human and human-robot communication more natural and feedback-driven
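
    As a generic example of the kind of attention-based context modelling discussed above, the PyTorch sketch below applies additive attention over a recurrent (e.g. BLSTM) output sequence and returns per-frame weights that can be visualised and aligned with phones or words; it is not the proposed MOMA architecture, and all dimensions are placeholder assumptions.

        import torch
        import torch.nn as nn

        class AttentivePooling(nn.Module):
            """Additive attention over a sequence of frame-level features."""
            def __init__(self, feat_dim=256, attn_dim=128, num_classes=4):
                super().__init__()
                self.proj = nn.Linear(feat_dim, attn_dim)
                self.score = nn.Linear(attn_dim, 1, bias=False)
                self.classify = nn.Linear(feat_dim, num_classes)

            def forward(self, sequence):                 # (batch, time, feat_dim)
                # One scalar score per time step, normalised over the sequence.
                weights = torch.softmax(self.score(torch.tanh(self.proj(sequence))), dim=1)
                context = (weights * sequence).sum(dim=1)    # attention-weighted pooling
                return self.classify(context), weights       # weights can be visualised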

    Localizing spatially and temporally objects and actions in videos

    Get PDF
    The rise of deep learning has facilitated remarkable progress in video understanding. This thesis addresses three important tasks of video understanding: video object detection, joint object and action detection, and spatio-temporal action localization. Object class detection is one of the most important challenges in computer vision. Object detectors are usually trained on bounding boxes from still images. Recently, video has been used as an alternative source of data. Yet, training an object detector on one domain (either still images or videos) and testing on the other results in a significant performance gap compared to training and testing on the same domain. In the first part of this thesis, we examine the reasons behind this performance gap. We define and evaluate several domain shift factors: spatial location accuracy, appearance diversity, image quality, aspect distribution, and object size and camera framing. We examine the impact of these factors by comparing detection performance before and after cancelling them out. The results show that all five factors affect the performance of the detectors and that their combined effect explains the performance gap. While most existing approaches for detection in videos focus on objects or human actions separately, in the second part of this thesis we aim at detecting non-human-centric actions, i.e., objects performing actions, such as a cat eating or a dog jumping. We introduce an end-to-end multitask objective that jointly learns object-action relationships. We compare it with different training objectives, validate its effectiveness for detecting object-action pairs in videos, and show that both object and action detection benefit from this joint learning. In experiments on the A2D dataset [Xu et al., 2015], we obtain state-of-the-art results on segmentation of object-action pairs. In the third part, we are the first to propose an action tubelet detector that leverages the temporal continuity of videos instead of operating at the frame level, as state-of-the-art approaches do. Just as modern detectors rely on anchor boxes, our tubelet detector is based on anchor cuboids: it takes a sequence of frames as input and outputs tubelets, i.e., sequences of bounding boxes with associated scores. Our tubelet detector outperforms all state-of-the-art methods on the UCF-Sports [Rodriguez et al., 2008], J-HMDB [Jhuang et al., 2013a], and UCF-101 [Soomro et al., 2012] action localization datasets, especially at high overlap thresholds. The improvement in detection performance is explained by both more accurate scores and more precise localization
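
    A tubelet, as described above, is simply a sequence of per-frame bounding boxes with associated scores. One common way to compare two such tubelets at a given overlap threshold is a spatio-temporal IoU, sketched below in Python/NumPy as the mean per-frame IoU; this is an illustrative metric, not necessarily the exact evaluation protocol of the cited benchmarks.

        import numpy as np

        def spatio_temporal_iou(tubelet_a, tubelet_b):
            """Mean per-frame IoU between two tubelets of shape (num_frames, 4),
            where each row holds an [x1, y1, x2, y2] box for one frame."""
            ious = []
            for a, b in zip(tubelet_a, tubelet_b):
                x1, y1 = max(a[0], b[0]), max(a[1], b[1])
                x2, y2 = min(a[2], b[2]), min(a[3], b[3])
                inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
                union = ((a[2] - a[0]) * (a[3] - a[1])
                         + (b[2] - b[0]) * (b[3] - b[1]) - inter)
                ious.append(inter / (union + 1e-9))
            return float(np.mean(ious))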

    Irish Machine Vision and Image Processing Conference Proceedings 2017

    Get PDF

    Advances in Image Processing, Analysis and Recognition Technology

    Get PDF
    For many decades, researchers have been trying to make computer analysis of images as effective as human vision. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools, which are sometimes used only for entertainment but quite often significantly increase our safety. Indeed, the range of practical applications of image processing algorithms is particularly wide. Moreover, the rapid growth of computing power and efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need for novel approaches

    Emotion and Stress Recognition Related Sensors and Machine Learning Technologies

    Get PDF
    This book includes impactful chapters which present scientific concepts, frameworks, architectures and ideas on sensing technologies and machine learning techniques. These are relevant in tackling the following challenges: (i) the field readiness and use of intrusive sensor systems and devices for capturing biosignals, including EEG sensor systems, ECG sensor systems and electrodermal activity sensor systems; (ii) the quality assessment and management of sensor data; (iii) data preprocessing, noise filtering and calibration concepts for biosignals; (iv) the field readiness and use of nonintrusive sensor technologies, including visual sensors, acoustic sensors, vibration sensors and piezoelectric sensors; (v) emotion recognition using mobile phones and smartwatches; (vi) body area sensor networks for emotion and stress studies; (vii) the use of experimental datasets in emotion recognition, including dataset generation principles and concepts, quality assurance, and emotion elicitation material and concepts; (viii) machine learning techniques for robust emotion recognition, including graphical models, neural network methods, deep learning methods, statistical learning and multivariate empirical mode decomposition; (ix) subject-independent emotion and stress recognition concepts and systems, including facial expression-based systems, speech-based systems, EEG-based systems, ECG-based systems, electrodermal activity-based systems, multimodal recognition systems and sensor fusion concepts; and (x) emotion and stress estimation and forecasting from a nonlinear dynamical system perspective

    Computational Intelligence and Human–Computer Interaction: Modern Methods and Applications

    Get PDF
    The present book contains all of the articles that were accepted and published in the Special Issue of MDPI’s journal Mathematics titled "Computational Intelligence and Human–Computer Interaction: Modern Methods and Applications". This Special Issue covered a wide range of topics connected to the theory and application of different computational intelligence techniques to the domain of human–computer interaction, such as automatic speech recognition, speech processing and analysis, virtual reality, emotion-aware applications, digital storytelling, natural language processing, smart cars and devices, and online learning. We hope that this book will be interesting and useful for those working in various areas of artificial intelligence, human–computer interaction, and software engineering as well as for those who are interested in how these domains are connected in real-life situations

    Sensor Independent Deep Learning for Detection Tasks with Optical Satellites

    Get PDF
    The design of optical satellite sensors varies widely, and this variety is mirrored in the data they produce. Deep learning has become a popular method for automating tasks in remote sensing, but it is currently ill-equipped to deal with this diversity of satellite data. In this work, sensor independent deep learning models are proposed, which are able to ingest data from multiple satellites without retraining. This strategy is applied to two tasks in remote sensing: cloud masking and crater detection. For cloud masking, a new dataset, the largest to date with respect to the number of scenes, is created for Sentinel-2. Combining this with other datasets from the Landsat missions results in a state-of-the-art deep learning model capable of masking clouds on a wide array of satellites, including ones it was not trained on. For small crater detection on Mars, a dataset is also produced, and state-of-the-art deep learning approaches are compared. By combining datasets from sensors with different resolutions, a highly accurate sensor independent model is trained. This is used to produce the largest database of crater detections for any solar system body to date, comprising 5.5 million craters across Isidis Planitia, Mars, detected using CTX imagery. Novel geospatial statistical techniques are used to explore this database of small craters, finding evidence for large populations of distant secondary impacts. Across these problems, sensor independence is shown to offer unique benefits, both for model performance and for scientific outcomes, and in the future it can aid in many problems relating to data fusion, time series analysis, and on-board applications. Further work on a wider range of problems is needed to determine the generalisability of the proposed strategies for sensor independence, and extension from optical sensors to other kinds of remote sensing instruments could expand the possible applications of this new technique