To Drive or to Be Driven? The Impact of Autopilot, Navigation System, and Printed Maps on Driver’s Cognitive Workload and Spatial Knowledge
The technical advances in navigation systems should enhance the driving experience,
supporting drivers’ spatial decision making and learning in less familiar or unfamiliar environments.
Furthermore, autonomous driving systems are expected to take over navigation and driving in the
near future. Yet, previous studies have pointed to a still-unresolved gap between environmental exploration
using topographical maps and technical navigation aids. Less is known about the impact of autonomous
systems on the driver's spatial learning. The present study investigates the development
of spatial knowledge and cognitive workload by comparing printed maps, navigation systems, and
autopilot in an unfamiliar virtual environment. Learning of a new route with printed maps was
associated with a higher cognitive demand compared to the navigation system and autopilot. In
contrast, driving a route by memory resulted in an increased level of cognitive workload if the route
had been previously learned with the navigation system or autopilot. Way-finding performance
was found to be less prone to errors when learning a route from a printed map. The exploration
of the environment with the autopilot was not found to provide any compelling advantages for
landmark knowledge. Our findings suggest long-term disadvantages of self-driving vehicles for
spatial memory representations.
Perception of the urban environment and navigation using robotic vision: design and implementation applied to an autonomous vehicle
Advisors: Janito Vaqueiro Ferreira, Alessandro Corrêa Victorino. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica.
Abstract: The development of autonomous vehicles capable of getting around on urban roads can provide important benefits by reducing accidents, increasing comfort and saving costs. Intelligent vehicles, for example, often base their decisions on observations obtained from various sensors such as LIDAR, GPS and cameras. Camera sensors have received particular attention because they are cheap, easy to deploy and provide rich data. Inner-city environments represent an interesting but very challenging scenario in this context, where the road layout may be very complex, the presence of objects such as trees, bicycles and cars may generate partial observations, and these observations are often noisy or even missing due to heavy occlusion. The perception process therefore needs, by its nature, to deal with uncertainty in the knowledge of the world around the car.
While highway navigation and autonomous driving using prior knowledge of the environment have been demonstrated successfully, understanding and navigating general inner-city scenarios with little prior knowledge remains an unsolved problem. In this thesis, this perception problem is analyzed for driving in inner-city environments, together with the capacity to perform a safe displacement based on the decision-making process of autonomous navigation. A perception system is designed that allows robotic cars to drive autonomously on roads, without the need to adapt the infrastructure, without requiring prior knowledge of the environment, and considering the presence of dynamic objects such as cars. A novel method based on machine learning is proposed to extract the semantic context from a pair of stereo images, which is merged into an evidential grid that models the uncertainties of an unknown urban environment by applying Dempster-Shafer theory. For decision making in path planning, the virtual-tentacle approach is applied to generate candidate paths starting from the ego-referenced car, and based on it two new strategies are proposed: first, a strategy to select the correct path so as to better avoid obstacles and follow the local task in the context of hybrid navigation; and second, a new closed-loop control based on visual odometry and virtual tentacles, modeled for path-following execution. Finally, a complete automotive system integrating the perception, path-planning and control modules is implemented and experimentally validated in real situations using an experimental autonomous car, where the results show that the developed approach successfully performs safe local navigation based on camera sensors. Doctorate in Solid Mechanics and Mechanical Design; Doctor in Mechanical Engineering.
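The evidential-grid fusion described above rests on Dempster's rule of combination. The following toy sketch (illustrative only, not the thesis code; the function name and mass layout are assumptions) fuses two belief-mass assignments for a single grid cell over the frame of discernment {Free, Occupied}, with residual mass on the ignorance set Omega = {Free, Occupied}:

```python
# Dempster's rule of combination for one occupancy-grid cell.
# Masses are triples (m_Free, m_Occupied, m_Omega) summing to 1,
# where m_Omega is the mass assigned to total ignorance.

def combine(m1, m2):
    """Fuse two mass assignments with Dempster's rule."""
    f1, o1, u1 = m1
    f2, o2, u2 = m2
    conflict = f1 * o2 + o1 * f2      # Free and Occupied are disjoint
    k = 1.0 - conflict                # normalization factor
    f = (f1 * f2 + f1 * u2 + u1 * f2) / k
    o = (o1 * o2 + o1 * u2 + u1 * o2) / k
    u = (u1 * u2) / k
    return (f, o, u)

# E.g. camera semantics suggest "free"; a second, vaguer source agrees weakly:
fused = combine((0.7, 0.1, 0.2), (0.3, 0.2, 0.5))
```

Fusing the two sources both sharpens the belief in "free" and shrinks the ignorance mass, which is the behavior that lets an evidential grid distinguish "unknown" from "conflicting" cells.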
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international - Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 has gathered about
70 international participants and has featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Deep learning based 3D object detection for automotive radar and camera fusion
Perception in the domain of autonomous vehicles is a key discipline for achieving the
automation of Intelligent Transport Systems. This Master's thesis therefore aims to develop a
sensor fusion technique for RADAR and camera that creates an enriched representation of the
environment for 3D object detection using deep learning algorithms. To this end, the idea
of PointPainting [1] is used as a starting point and adapted to a growing sensor, the 3+1D
RADAR, in which the radar point cloud is aggregated with the semantic information from the
camera.
Máster Universitario en Ingeniería Industrial (M141
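The PointPainting-style decoration described above can be sketched as follows. This is an illustrative toy, not the thesis implementation: the function name, array shapes, and pinhole projection model are all assumptions. Each radar point is projected into the camera image, and the per-pixel semantic class scores are appended ("painted") onto the point's features:

```python
import numpy as np

def paint_points(points, scores, P):
    """points: (N, 4) radar points [x, y, z, doppler];
    scores: (H, W, C) per-pixel semantic class scores from the camera;
    P: (3, 4) camera projection matrix. Returns (N, 4 + C) painted points."""
    n = points.shape[0]
    homo = np.hstack([points[:, :3], np.ones((n, 1))])  # homogeneous coords
    uvw = (P @ homo.T).T                                # project to image plane
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    h, w, c = scores.shape
    painted = np.zeros((n, c))
    # Only points in front of the camera and inside the image get scores;
    # the rest keep zero scores.
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (uvw[:, 2] > 0)
    painted[inside] = scores[v[inside], u[inside]]      # gather class scores
    return np.hstack([points, painted])
```

The painted point cloud can then be fed to any point-cloud 3D detector, which is the appeal of the approach: the fusion happens at the input representation, not inside the network.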
Enabling Remote Responder Bio-Signal Monitoring in a Cooperative Human–Robot Architecture for Search and Rescue
The roles of emergency responders are challenging and often physically demanding, so it is essential that their duties are performed safely and effectively. In this article, we address real-time bio-signal sensor monitoring for responders in disaster scenarios. In particular, we propose the integration of a set of health-monitoring sensors suitable for detecting stress, anxiety and physical fatigue into an Internet of Cooperative Agents architecture for search and rescue (SAR) missions (SAR-IoCA), which allows remote control and communication between human and robotic agents and the mission control center. With this purpose, we performed proof-of-concept experiments with a bio-signal sensor suite worn by firefighters in two high-fidelity SAR exercises. Moreover, we conducted a survey, distributed to end-users through the Fire Brigade consortium of the Provincial Council of Málaga, in order to analyze the firefighters' opinion of biological-signal monitoring while on duty. As a result of this methodology, we propose a wearable sensor suite design aimed at providing easy-to-wear integrated-sensor garments suitable for emergency worker activity. The article offers a discussion of user acceptance, performance results and lessons learned. This work has been partially funded by the Ministerio de Ciencia, Innovación y Universidades, Gobierno de España, projects RTI2018-093421-B-I00 and PID2021-122944OB-I00. Partial funding for open access charge: Universidad de Málaga.
Machine learning-based motion type classification from 5G data
Abstract. To improve the quality of their services and products, nowadays nearly every industry uses artificial intelligence and machine learning. Machine learning is a powerful tool that can be applied in many areas, including wireless communications. One way to improve the reliability of wireless connections is to classify the motion type of the user and couple it with beamforming and beam steering: with the ability to classify the user equipment's motion type, the base station can allocate proper beamforming to the given class of users. With this motivation, this thesis studies ML algorithms for motion classification. In this work, supervised learning is used to predict and classify motion types from 5G data collected in four different scenarios, or classes: (i) walking, (ii) standing, (iii) driving, and (iv) drone. The data is cleaned and feature-engineered, then fed into several classification algorithms, including Logistic Regression with Cross-Validation (LRCV), Support Vector Classifier (SVC), k-nearest neighbors (KNN), Linear Discriminant Analysis (LDA), AdaBoost, and Extra Trees Classifier. Upon analyzing the evaluation metrics for these algorithms, we found that the Extra Trees Classifier performed best, with an accuracy of ~99% and a log-loss of 0.044. With such promising results, the output of the classification process can be used in a further pipeline for resource optimization, or coupled with hardware for beamforming and beam steering. It can also be used as an input to a digital twin of a radio, dynamically changing its variables, which are then reflected in the physical copy of that radio.
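The classification workflow above can be sketched with scikit-learn's `ExtraTreesClassifier`. This is a minimal sketch on synthetic stand-in features, not the thesis pipeline: the real work used engineered features from 5G measurements, which are not reproduced here, so each class is faked with a distinct mean signature:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score, log_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
classes = ["walking", "standing", "driving", "drone"]

# Synthetic stand-in for engineered 5G features: each class gets 200 samples
# of 5 features drawn around a class-specific mean.
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(200, 5)) for i in range(4)])
y = np.repeat(classes, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
nll = log_loss(y_te, clf.predict_proba(X_te), labels=clf.classes_)
```

On real 5G features the class overlap is of course much larger; the point of the sketch is only the shape of the pipeline (split, fit, accuracy, log-loss), which matches the evaluation metrics reported in the abstract.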
Operator State Estimation for Adaptive Aiding in Uninhabited Combat Air Vehicles
This research demonstrated the first closed-loop implementation of adaptive automation using operator functional state in an operationally relevant environment. In the Uninhabited Combat Air Vehicle (UCAV) environment, operators can become cognitively overloaded and their performance may decrease during mission-critical events. This research demonstrates an unprecedented closed-loop system, one that adaptively aids UCAV operators based on their cognitive functional state. A series of experiments was conducted to 1) determine the best classifiers for estimating operator functional state, 2) determine whether physiological measures can be used to develop multiple cognitive models based on information-processing demands and task type, 3) determine the salient psychophysiological measures of operator functional state, and 4) demonstrate the benefits of intelligent adaptive aiding using operator functional state. Aiding the operator improved performance and increased mission effectiveness by 67%.