5 research outputs found

    Önvezető funkciók megvalósítására alkalmas jármű Unreal Engine 4 alapú szimulációja: Self-driving vehicle simulation based on Unreal Engine 4

    The development of self-driving, autonomous vehicles is among the fastest-growing fields. One of the most important elements of development in this area is the processing and application of vision sensor data. To use vision sensor data for environmental perception, neural networks and self-learning algorithms are applied. Calibrating visual sensors and training neural networks requires measurements and visual sensor data, and the process of sensor data acquisition and training image set creation is time-consuming and not cost-effective. The aim of this paper is to present the creation of a computer simulation of a vehicle equipped with visual sensors. The result of the simulation process is presented through examples and use cases for a specific passenger car and visual sensor; the parameterization of environmental conditions within the simulator is treated as a separate case. Using the presented computer simulation method, the time-consuming and expensive measurement and data acquisition processes can be replaced by a simulated vehicle and sensor model: both visual sensor calibration and image data collection become possible through simulation.
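The environmental parameterization described above amounts to sweeping a grid of simulator settings and recording one capture session per combination. A minimal sketch of that idea follows; the parameter names and value ranges are illustrative assumptions, not the paper's actual Unreal Engine 4 settings.

```python
# Hypothetical sketch: enumerate environmental-parameter combinations for a
# simulated camera-sensor run. All names and ranges are illustrative only.
from itertools import product

WEATHER = ["clear", "rain", "fog"]
TIME_OF_DAY = [8, 12, 18, 22]        # hour of the simulated day
CAMERA_FOV_DEG = [60, 90, 120]       # horizontal field of view of the sensor

def build_runs():
    """Return one capture-session descriptor per parameter combination."""
    return [
        {"weather": w, "hour": h, "fov": f}
        for w, h, f in product(WEATHER, TIME_OF_DAY, CAMERA_FOV_DEG)
    ]

runs = build_runs()
print(len(runs))  # 3 * 4 * 3 = 36 simulated capture sessions
```

Each descriptor would then drive one rendering pass in the simulator, replacing one real-world measurement drive.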

    Capsule Networks for Object Segmentation Using Virtual World Dataset

    Classical convolutional neural networks perform exceptionally well when the test dataset is very close to the training dataset; when that is not the case, their accuracy may drop considerably. Capsule networks, a new type of artificial neural network introduced by Geoffrey Hinton and his research team, try to solve these problems of classical neural networks. In this work we train capsule-based neural networks for segmentation tasks in which the training set and test set are very different: for training we use only computer-generated virtual data, and we test the networks on real-world data. We created three capsule-based architectures modelled on classical neural network architectures, namely U-Net, PSP Net and ResNet. Experiments show how efficient capsule networks are in this special case.
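The defining nonlinearity of the capsule networks mentioned above is the "squash" function from Sabour, Frosst and Hinton's 2017 paper: it keeps a capsule's output vector pointing in the same direction while compressing its length into [0, 1), so vector length can encode detection probability. A minimal pure-Python sketch (the paper's actual layer implementations are not reproduced here):

```python
import math

def squash(v, eps=1e-8):
    """Capsule 'squash' nonlinearity: output keeps the direction of v, but
    its length becomes ||v||^2 / (1 + ||v||^2), so long vectors approach
    unit length and short vectors shrink toward zero."""
    norm_sq = sum(x * x for x in v)
    norm = math.sqrt(norm_sq) + eps   # eps avoids division by zero
    scale = norm_sq / (1.0 + norm_sq)
    return [scale * x / norm for x in v]
```

For example, a vector of length 5 is squashed to length 25/26 ≈ 0.96, while a vector of length 0.1 shrinks to ≈ 0.01, which is what lets the segmentation capsules express "present" versus "absent".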

    Using Prior Knowledge for Verification and Elimination of Stationary and Variable Objects in Real-time Images

    With the evolving technologies in the autonomous vehicle industry, it has become possible for automobile passengers to sit relaxed instead of driving the car. Technologies such as object detection, object identification, and image segmentation enable an autonomous car to identify and detect objects on the road in order to drive safely. While an autonomous car drives itself on the road, the objects surrounding it can be dynamic (e.g., cars and pedestrians), stationary (e.g., buildings and benches), or variable (e.g., trees), depending on whether the location or shape of an object changes. In contrast to existing image-based approaches to detecting and recognizing objects in the scene, this research employs a 3D virtual world to verify and eliminate stationary and variable objects, allowing the autonomous car to focus on the dynamic objects that may endanger its driving. The methodology takes advantage of prior knowledge of the stationary and variable objects present in a virtual city and verifies their existence in a real-time scene by matching keypoints between the virtual and real objects. When a stationary or variable object does not exist in the virtual world due to incomplete pre-existing information, the method uses machine learning for object detection. Verified objects are then removed from the real-time image with a combined algorithm using contour detection and class activation maps (CAM), which enhances the efficiency and accuracy of recognizing moving objects.
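The verification step above hinges on matching keypoints between a virtual rendering and the live camera frame. One standard way to do this is Lowe-style ratio-test matching over descriptor vectors; the sketch below shows that matching logic in pure Python under the assumption of generic float descriptors (the paper does not specify its descriptor type or matcher here).

```python
import math

def match_keypoints(desc_virtual, desc_real, ratio=0.75):
    """Lowe-style ratio-test matching between two descriptor lists.
    A virtual keypoint counts as verified in the real frame only when its
    nearest real descriptor is clearly closer than the second nearest.
    Sketch only: descriptors are plain lists of floats."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    matches = []
    for i, dv in enumerate(desc_virtual):
        # Sort all real descriptors by distance to this virtual one.
        dists = sorted((dist(dv, dr), j) for j, dr in enumerate(desc_real))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))   # (virtual idx, real idx)
    return matches
```

Keypoints that pass the ratio test confirm the virtual object's presence in the scene, so its pixels can be handed to the contour/CAM removal stage; ambiguous matches are rejected rather than risking the elimination of a dynamic object.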

    An Automated Training of Deep Learning Networks by 3D Virtual Models for Object Recognition

    Small-series production with a high level of variability is not suitable for full automation, so a manual assembly process must be used, which can be improved by cooperative robots and assisted by augmented reality devices. The assisted assembly process needs a reliable object recognition implementation. Currently used marker-based technologies do not work reliably with objects lacking a distinctive texture, for example screws, nuts, and washers (single-colored parts). The methodology presented in the paper introduces a new approach to object detection using deep learning networks trained remotely on 3D virtual models: a remote web application generates the training input datasets from the virtual 3D models. This new approach was evaluated with two different neural network models (Faster RCNN Inception v2 with SSD, MobileNet V2 with SSD). Its main advantage is the very fast preparation of the 2D sample training dataset from virtual 3D models, and the whole process can run in the cloud. The experiments were conducted with standard parts (nuts, screws, washers), and the recognition precision achieved was comparable with training on real samples. The learned models were tested on two different embedded devices running the Android operating system: Virtual Reality (VR) Cardboard glasses (Samsung S7) and Augmented Reality (AR) smart glasses (Epson Moverio M350). The recognition processing delays of the learned models running on ARM-based embedded devices and on a standard x86 processing unit were also tested for performance comparison.
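The dataset-generation step described above, rendering each virtual 3D part from many viewpoints to produce labelled 2D training samples, can be sketched as a pose-sampling loop. The rendering itself is stubbed out here, and the class names, view counts, and pose ranges are illustrative assumptions rather than the paper's actual configuration.

```python
import random

# Hypothetical sketch: for each virtual 3D part, sample random camera poses
# and record one labelled training entry per rendered view.
PARTS = ["nut", "screw", "washer"]

def generate_dataset(views_per_part=100, seed=42):
    """Return one metadata entry per synthetic training image."""
    rng = random.Random(seed)            # fixed seed -> reproducible dataset
    entries = []
    for part in PARTS:
        for _ in range(views_per_part):
            entries.append({
                "label": part,
                "yaw_deg": rng.uniform(0.0, 360.0),    # camera orbit angle
                "pitch_deg": rng.uniform(-30.0, 60.0), # camera elevation
                "distance_m": rng.uniform(0.2, 0.6),   # camera-to-part range
            })
    return entries

dataset = generate_dataset()
print(len(dataset))  # 3 parts * 100 views = 300 labelled samples
```

In the cloud workflow described by the paper, each such entry would correspond to one rendered 2D image fed to the detector training pipeline, which is what makes dataset preparation so much faster than photographing real parts.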