62 research outputs found

    Predictive Techniques for Scene Understanding by using Deep Learning in Autonomous Driving

    Autonomous driving is considered one of the greatest technological challenges of our time. When autonomous cars conquer our roads, accidents will be drastically reduced, almost to the point of disappearing, since the technology will be thoroughly tested and will not violate traffic rules, among other social and economic benefits. One of the most critical aspects of developing an autonomous vehicle is perceiving and understanding the surrounding scene. This task must be as accurate and efficient as possible in order to subsequently predict the future of the scene and support decision-making. In this way, the actions taken by the vehicle will guarantee both the safety of the vehicle itself and its occupants, and that of surrounding obstacles such as pedestrians, other vehicles, or road infrastructure. In this regard, this doctoral thesis focuses on the study and development of different predictive techniques for scene understanding in the context of autonomous driving. Throughout the thesis, deep learning techniques are progressively incorporated into the proposed algorithms to improve reasoning about what is happening in the traffic scenario, as well as to model the complex interactions between the social information (the different participants or agents in the scenario, such as vehicles, cyclists, or pedestrians) and the physical information (i.e., the geometric, semantic, and topological information of the high-definition map) present in the scene.
The perception layer of an autonomous vehicle is modularly divided into three stages: Detection, Tracking, and Prediction. To begin the study of the tracking and prediction stages, a Multi-Object Tracking algorithm based on classical motion estimation and association techniques is proposed and validated on the KITTI dataset, achieving state-of-the-art metrics. In addition, a smart filter based on contextual map information is proposed, whose goal is to monitor the most relevant agents in the scene over time; these filtered agents serve as the preliminary input for computing unimodal predictions based on a kinematic model. To validate this smart-filter proposal we use CARLA (CAR Learning to Act), currently one of the most promising hyper-realistic simulators for autonomous driving, showing how contextual map information can notably reduce the inference time of a tracking and prediction algorithm based on physical methods by attending only to the truly relevant agents in the traffic scenario. After observing the limitations of a kinematics-based model for long-term prediction of an agent, the remaining algorithms of the thesis focus on the prediction module, using the Argoverse 1 and Argoverse 2 datasets, where the agents provided in each traffic scenario are assumed to be already tracked over a certain number of observations.
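
As a rough illustration of the classical tracking-by-detection pipeline mentioned above (motion estimation plus association), the following minimal Python sketch pairs a constant-velocity Kalman filter with Hungarian assignment. The filter parameters, the distance gate, and the per-frame loop are illustrative assumptions, not the exact design validated on KITTI.

```python
# Minimal sketch of classical tracking-by-detection: a constant-velocity
# Kalman filter per track plus Hungarian data association. These specific
# choices are illustrative stand-ins for the "classical motion estimation
# and association" techniques named in the abstract.
import numpy as np
from scipy.optimize import linear_sum_assignment

class Track:
    def __init__(self, xy):
        # State: [x, y, vx, vy]; start with zero velocity.
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4)                   # state covariance

    def predict(self, dt=0.1):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt               # constant-velocity motion model
        Q = 0.01 * np.eye(4)                 # process noise (assumed)
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, z):
        H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # observe position only
        R = 0.1 * np.eye(2)                  # measurement noise (assumed)
        y = z - H @ self.x                   # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

def associate(tracks, detections, gate=2.0):
    """Hungarian assignment on Euclidean distance between predicted track
    positions and detections; pairs beyond `gate` are rejected."""
    cost = np.array([[np.linalg.norm(t.x[:2] - d) for d in detections]
                     for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]

# One tracking step: predict all tracks, associate, update matched pairs.
tracks = [Track([0.0, 0.0]), Track([5.0, 5.0])]
detections = np.array([[0.2, 0.1], [5.1, 4.9]])
for t in tracks:
    t.predict()
for r, c in associate(tracks, detections):
    tracks[r].update(detections[c])
```
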
First, a model is introduced based on recurrent neural networks (in particular LSTM, Long Short-Term Memory, networks) and an attention mechanism to encode the agents' past trajectories, together with a simplified map representation in the form of potential goal positions on the road, in order to compute unimodal future trajectories, all wrapped in a GAN (Generative Adversarial Network) framework and achieving metrics comparable to the state of the art in the unimodal case. Once this model is validated on Argoverse 1, several accurate and efficient base models built on it are proposed (social-only, map-enhanced, and a final improvement based on a Transformer encoder, 1D convolutional networks, and a cross-attention mechanism for feature fusion), introducing two new concepts. On the one hand, graph neural networks (in particular GCN, Graph Convolutional Networks) are used to encode agent interactions in a powerful way. On the other hand, preliminary trajectories are preprocessed from the map with a heuristic method. Thanks to these inputs and a more powerful encoding architecture, the base models are able to predict multimodal future trajectories, i.e., covering several possible futures for the agent of interest. The proposed base models achieve state-of-the-art regression metrics in both the multimodal and unimodal cases while keeping a clear efficiency advantage over other proposals. The final model of the thesis, inspired by the previous models and validated on the most recent dataset for prediction algorithms in autonomous driving (Argoverse 2), introduces several improvements to better understand the traffic scenario and decode the information accurately and efficiently. It incorporates topological and semantic information of the preliminary future lanes obtained with the aforementioned heuristic method, deep-learning-based map encoding with GCN networks, a fusion cycle of physical and social features, estimation of goal positions on the road with deep-learning-based aggregation of their surroundings, and finally a refinement module to improve the quality of the final multimodal predictions in an elegant and efficient way. Compared with the state of the art, our method achieves prediction metrics on par with the best-ranked methods on the Argoverse 2 leaderboard while notably reducing the number of parameters and floating-point operations (FLOPs). Finally, the final model of the thesis has been validated in simulation in several autonomous driving applications. First, the model is integrated to provide predictions to a reinforcement-learning-based decision-making algorithm in the SMARTS (Scalable Multi-Agent Reinforcement Learning Training School) simulator, where the studies show that the vehicle is able to make better decisions when it knows the future behavior of the scene rather than only its current or past state. Second, a successful domain-adaptation study was carried out in the hyper-realistic CARLA simulator in several challenging scenarios where scene understanding and prediction of the environment are essential, such as a highway or a roundabout with dense traffic, or the sudden appearance of a vulnerable road user.
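
To make the encoding stack above concrete, here is a minimal PyTorch sketch combining an LSTM over past trajectories, a single GCN layer over an agent-interaction graph, and cross-attention fusion with per-agent lane features, ending in a multimodal head. All layer sizes, the fully connected interaction graph, and the six prediction modes are assumptions for illustration, not the thesis's exact architecture.

```python
# Minimal sketch of the encoding pipeline: LSTM history encoder, one GCN
# layer for agent interactions, cross-attention fusion with map features,
# and a multimodal trajectory head. Sizes and graph structure are assumed.
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    def __init__(self, d=64, num_modes=6, horizon=30):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=d, batch_first=True)
        self.gcn_w = nn.Linear(d, d)            # one GCN layer: A_hat @ X @ W
        self.cross_attn = nn.MultiheadAttention(d, num_heads=4,
                                                batch_first=True)
        self.head = nn.Linear(d, num_modes * horizon * 2)
        self.num_modes, self.horizon = num_modes, horizon

    def forward(self, past_xy, adj, map_feats):
        # past_xy: (N_agents, T_obs, 2); adj: (N, N) normalized adjacency;
        # map_feats: (N, L_lanes, d) per-agent encoded lane features.
        _, (h, _) = self.lstm(past_xy)           # h: (1, N, d)
        x = h.squeeze(0)                         # social feature per agent
        x = torch.relu(adj @ self.gcn_w(x))      # message passing over agents
        q = x.unsqueeze(1)                       # (N, 1, d) queries
        fused, _ = self.cross_attn(q, map_feats, map_feats)  # fuse with map
        out = self.head(fused.squeeze(1))        # (N, K*H*2)
        return out.view(-1, self.num_modes, self.horizon, 2)

# Toy forward pass: 3 agents, 20 observed steps, 8 candidate lane segments.
model = TrajectoryEncoder()
past = torch.randn(3, 20, 2)
adj = torch.ones(3, 3) / 3.0                     # fully connected (assumed)
lanes = torch.randn(3, 8, 64)
preds = model(past, adj, lanes)                  # (3, 6, 30, 2) trajectories
```
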
In this regard, the prediction model has been integrated with the remaining layers of the autonomous navigation architecture of the research group where this thesis was developed, as a preliminary step toward its deployment in a real autonomous vehicle.

    Future Transportation

    Greenhouse gas (GHG) emissions associated with transportation activities account for approximately 20 percent of all carbon dioxide (CO2) emissions globally, making the transportation sector a major contributor to current global warming. This book focuses on the latest advances in technologies aimed at the sustainable future transportation of people and goods. Reducing the burning of fossil fuels and enabling technological transitions are the main approaches toward sustainable future transportation. Particular attention is given to automobile technological transitions, bike-sharing systems, supply chain digitalization, and transport performance monitoring and optimization, among others.

    Generating Privacy-Compliant, Utility-Preserving Synthetic Tabular and Relational Datasets Through Deep Learning

    Two trends have rapidly been redefining the artificial intelligence (AI) landscape over the past several decades. The first of these is the rapid technological developments that make increasingly sophisticated AI feasible. From a hardware point of view, this includes increased computational power and efficient data storage.
From a conceptual and algorithmic viewpoint, fields such as machine learning have undergone a surge, and synergies between AI and other disciplines have resulted in considerable developments. The second trend is the growing societal awareness around AI. While institutions are becoming increasingly aware that they have to adopt AI technology to stay competitive, issues such as data privacy and explainability have become part of public discourse. Combined, these developments result in a conundrum: AI can improve all aspects of our lives, from healthcare to environmental policy to business opportunities, but invoking it requires the use of sensitive data. Unfortunately, traditional anonymization techniques do not provide a reliable solution to this conundrum. Not only are they insufficient in protecting personal data, they also reduce the analytic value of data through distortion. However, the emerging study of deep-learning generative models (DLGM) may form a more refined alternative to traditional anonymization. Originally conceived for image processing, these models capture the probability distributions underlying datasets. Such distributions can subsequently be sampled, giving new data points not present in the original dataset, while the overall distribution of synthetic datasets built from such samples remains equivalent to that of the original dataset. In our research activity, we study the use of DLGM as an enabling technology for wider AI adoption. To do so, we first study legislation around data privacy with an emphasis on the European Union. In doing so, we also provide an outline of traditional data anonymization technology. We then provide an introduction to AI and deep learning. Two case studies are discussed to illustrate the field's merits, namely image segmentation and cancer diagnosis. We then introduce DLGM, with an emphasis on variational autoencoders. The application of such methods to tabular and relational data is novel and involves innovative preprocessing techniques. Finally, we assess the developed methodology in reproducible experiments, evaluating both the analytic utility and the degree of privacy protection through statistical metrics.
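
As a sketch of the generative machinery described above, the following minimal PyTorch variational autoencoder handles a purely numeric table: it encodes rows to a latent Gaussian, decodes reconstructions, and then samples the prior to produce synthetic rows. The layer sizes, the all-continuous-columns assumption, and the random stand-in data are illustrative; the thesis's preprocessing for categorical and relational data is more involved.

```python
# Minimal VAE for numeric tabular data, assuming standardized continuous
# columns. Encode, reparameterize, decode; then sample the prior to
# generate synthetic rows not present in the original table.
import torch
import torch.nn as nn

class TabularVAE(nn.Module):
    def __init__(self, n_cols, latent=8, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_cols, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_cols))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparam.
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error + KL divergence to the standard normal prior.
    rec = nn.functional.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Train briefly on random stand-in data, then sample synthetic rows.
model = TabularVAE(n_cols=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(256, 10)                  # placeholder for a real table
for _ in range(100):
    recon, mu, logvar = model(data)
    loss = vae_loss(recon, data, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()
with torch.no_grad():
    synthetic = model.dec(torch.randn(100, 8))   # 100 synthetic rows
```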

    Managing The Gig Economy Via Behavioral And Operational Lenses

    This dissertation combines tools from operations management, econometrics, machine learning, and behavioral sciences to (i) study how on-demand workers learn and make decisions in complex environments, (ii) develop tools to help improve their decision-making, and (iii) inform the design of better policies to manage human-centered operations. Recent technologies create and accelerate new work arrangements that provide workers with flexibility in their schedule and choice of service. At the same time, the decisions a worker faces have become more complex. Platforms offer competing dynamic incentives, and the independent nature of gig work means that workers do not experience the benefits of learning from colleagues. The following three chapters investigate how behavioral operations management can be utilized to better manage the gig economy. (i) Behavioral and Economic Drivers of Decisions. We empirically investigate how on-demand workers decide when to work and for how long, depending on varying financial incentives and personal goals. Using comprehensive data from a ride-hailing industry partner, we develop an econometric framework that addresses empirical challenges such as sample selection bias and endogeneity. Our results demonstrate that, while workers exhibit positive income elasticity as predicted by standard income theory, their decisions are significantly influenced by their cumulative earnings (they are more likely to stop working when reaching their income goal) and recent work duration (they tend to stay working after long hours, i.e., exhibit inertia), more akin to the behavioral theory of labor supply. Inertia captures both the formation of work habits and the tendency to stay with the focal platform, suggesting that, amidst intensifying competition among platforms, platform loyalty could be induced through optimal incentive design. Thus, we propose a heuristic to optimize incentive allocation and demonstrate through counterfactual simulations the monetary and capacity benefits of accounting for our behavioral insights. (ii) Dynamic Decisions and Multihoming Behavior. We leverage proprietary data from our ride-hailing industry partner and publicly available trip record data to develop and estimate a structural behavioral model of gig workers' sequential dynamic decisions of when and where to work in the presence of alternative work opportunities. Our major contributions are in the modeling and estimation of dynamic decisions with temporal and spatial components and dynamic outside options, and the development of an efficient simulation-assisted machine-learning-based estimation framework. Our results characterize gig workers' forward-looking behavior and heterogeneous cost of working. We find that workers are strategic in their choice of initial service location to ensure high utilization and are prone to multihoming behavior when facing longer idle times. We then study how the firm can influence multihoming behavior among workers. Our counterfactual analyses demonstrate the effectiveness of strategies commonly used in practice and offer insights that can help retain workers during high demand or nudge them to quit during low demand. (iii) Improving Human Decision-Making with Machine Learning. We propose a novel machine-learning algorithm to automatically extract best practices from trace data and infer simple tips that can help workers learn to make better decisions.
We use an approach based on imitation learning and interpretable reinforcement learning and consider simple if-then-else rules that modify workers' strategy in the way that most improves their performance, capture useful insights that are challenging for workers to learn by themselves, and are simple enough for workers to understand. To validate our approach and test the performance of our algorithm, we design a virtual kitchen-management game and conduct large-scale pre-registered behavioral studies on Amazon Mechanical Turk. Our experiments show that rules inferred from our algorithm are effective and significantly outperform rules from other sources at improving performance and speeding up learning among workers. In particular, we help workers identify optimal early actions that help them improve in the long term and discover additional optimal strategies beyond what is stated by our algorithm.
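
One simple way to picture the rule-inference idea, under the assumption that best practices can be imitated from trace data, is to fit a shallow decision tree to the state-action pairs of top performers and read its branches as if-then-else tips. The features, labels, and depth limit below are hypothetical stand-ins, not the dissertation's actual algorithm.

```python
# Minimal sketch of distilling "if-then-else" tips from trace data by
# imitating high performers with a shallow decision tree. Features, data,
# and the depth-2 limit are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Stand-in trace data: state features observed by a worker at each step.
X = rng.uniform(size=(1000, 2))          # e.g. [queue_length, idle_time]
# Stand-in "best practice" labels: the action top performers took.
y = (X[:, 0] > 0.6).astype(int)          # 1 = hypothetical "start cooking"

# A depth-limited tree keeps the learned policy human-readable.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["queue_length", "idle_time"]))
# The printed branches read directly as tips, e.g.
# "if queue_length > 0.6 then take action 1".
```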

    Operational Design Domain Monitoring and Augmentation for Autonomous Driving

    Recent technological advances in Autonomous Driving Systems (ADS) show promise in increasing traffic safety. One of the critical challenges in developing ADS with higher levels of driving automation is to derive safety requirements for their components and to monitor the system's performance to ensure safe operation. The Operational Design Domain (ODD) for ADS confines ADS safety to the context of its function: the ODD represents the operating environment within which an ADS operates and satisfies the safety requirements. To reach a state of "informed safety", the system's ODD must be explored and well-tested in the development phase. At the same time, the ADS must monitor the operating conditions and corresponding risks in real time. Existing research and technologies neither express the ODD quantitatively nor provide a general monitoring strategy designed to handle the learning-based components heavily used in recent ADS technologies. The safety-critical nature of the ADS requires thorough validation, continual improvement, and safety monitoring of these data-driven modules. In this dissertation, ODD extraction, augmentation, and real-time monitoring of ADS with machine learning components are investigated, organized around three major parts. In the first part, we propose a framework to systematically specify and extract the ODD, including environment modeling and formal, quantitative safety specifications for models with machine learning parts. An empirical demonstration of the ODD extraction process based on predefined specifications is presented with the proposed environment model. In the second part, ODD augmentation in the development phase is modelled as an iterative engineering problem solved by robust learning to handle unseen future natural variations. The vision tasks in ADS are the major focus here, and the effectiveness of model-based robustness training is demonstrated: it can improve model performance and be used to extract edge cases during the iterative process. Furthermore, the testing procedure provides valuable priors on the probability of failures in the known testing environment, which can be further utilized in the real-time monitoring procedure. Finally, we provide a solution for online ODD monitoring that encodes knowledge from the offline validation process as Bayesian graphical models to improve safety-warning accuracy. While the algorithms and techniques proposed in this dissertation can be applied to many safety-critical robotic systems with machine learning components, the main focus lies on the implications for autonomous driving.
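
As a toy illustration of folding offline validation knowledge into an online safety warning, the sketch below treats the monitor as a two-node Bayesian model (failure causes alarm) and updates the failure posterior per operating condition. All probabilities are assumed placeholders for the offline-estimated priors described above, and the real Bayesian graphical models are richer than this two-node case.

```python
# Minimal sketch: offline test statistics as priors for online safety
# warnings, via a two-node Bayesian model (failure -> alarm). All numbers
# are illustrative assumptions.

# Offline validation priors: failure rate per operating condition.
P_FAIL = {"clear": 0.01, "rain": 0.05, "night": 0.08}
# Runtime monitor characteristics, also estimated offline.
P_ALARM_GIVEN_FAIL = 0.9      # true positive rate of the monitor
P_ALARM_GIVEN_OK = 0.1        # false positive rate of the monitor

def posterior_failure(condition: str, alarm: bool) -> float:
    """P(failure | alarm observation, condition) by Bayes' rule."""
    prior = P_FAIL[condition]
    if alarm:
        num = P_ALARM_GIVEN_FAIL * prior
        den = num + P_ALARM_GIVEN_OK * (1 - prior)
    else:
        num = (1 - P_ALARM_GIVEN_FAIL) * prior
        den = num + (1 - P_ALARM_GIVEN_OK) * (1 - prior)
    return num / den

# The same alarm is weighted differently depending on the offline prior:
print(posterior_failure("clear", alarm=True))    # ~0.083
print(posterior_failure("night", alarm=True))    # ~0.439
```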

    Text Similarity Between Concepts Extracted from Source Code and Documentation

    Context: Constant evolution in software systems often results in their documentation losing sync with the content of the source code. The traceability research field has long aimed to recover links between code and documentation when the two fall out of sync. Objective: The aim of this paper is to compare the concepts contained within the source code of a system with those extracted from its documentation, in order to detect how similar these two sets are. If they are vastly different, the gap might indicate considerable ageing of the documentation and a need to update it. Methods: In this paper we reduce the source code of 50 software systems to a set of key terms, each containing the concepts of one of the sampled systems. At the same time, we reduce the documentation of each system to another set of key terms. We then use four different approaches for set comparison to detect how similar the sets are. Results: Using the well-known Jaccard index as the benchmark for the comparisons, we discovered that the cosine distance has excellent comparative power, depending on the pre-training of the machine learning model. In particular, the SpaCy and FastText embeddings offer up to 80% and 90% similarity scores, respectively. Conclusion: For most of the sampled systems, the source code and the documentation tend to contain very similar concepts. Given the accuracy of one pre-trained model (e.g., FastText), it also becomes evident that a few systems show a measurable drift between the concepts contained in the documentation and in the source code.
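
For a concrete picture of the two comparison flavors, the sketch below computes the Jaccard index on exact key-term sets and a cosine similarity on averaged embeddings. The tiny hand-made vectors stand in for SpaCy or FastText embeddings, which would come from a pretrained model in practice.

```python
# Minimal sketch of both set comparisons: exact Jaccard overlap on key
# terms, and cosine similarity on averaged word embeddings. The toy 3-d
# vectors are assumed stand-ins for pretrained SpaCy/FastText embeddings.
import numpy as np

code_terms = {"parser", "token", "buffer", "stream"}
doc_terms  = {"parser", "token", "lexer", "stream"}

# Jaccard index: |A & B| / |A | B| -- the benchmark comparison.
jaccard = len(code_terms & doc_terms) / len(code_terms | doc_terms)

# Embedding-based cosine: near-synonyms ("buffer"/"lexer" here) can still
# score high even when the exact-match Jaccard misses them.
embed = {"parser": [0.9, 0.1, 0.0], "token": [0.8, 0.2, 0.1],
         "buffer": [0.1, 0.9, 0.2], "stream": [0.2, 0.8, 0.3],
         "lexer":  [0.2, 0.8, 0.1]}

def centroid(terms):
    # Average the term vectors into one document-level vector.
    return np.mean([embed[t] for t in terms], axis=0)

a, b = centroid(code_terms), centroid(doc_terms)
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"Jaccard: {jaccard:.2f}, cosine: {cosine:.2f}")  # 0.60 vs ~1.00
```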